So you want to unify your dev, beta and live environments with a single script? Good, you are reading the right article. It might turn out to be two or three scripts, but I can promise you that everything will be done with the click of a button. But first things first, let’s install a few things! Oh yeah, and you can find the final demo files here.
Install Ansible:
sudo apt-get install software-properties-common -y
sudo apt-add-repository ppa:ansible/ansible
sudo apt-get update
sudo apt-get install ansible -y
Install docker-py:
pip install docker-py
docker-py will not be used to build any containers, but it is needed for launching them with Ansible. In fact we will not build any container at all; instead we will use Ansible to provision and configure empty containers!
Ansible-Docker communication without SSH
So we want to connect to Docker containers and do all sorts of things with Ansible. The main issue is that Ansible uses SSH by default to connect to a host, and a container is not supposed to be running any SSH daemon. The reasons a container should not run an SSH server have been discussed a lot, but it boils down to keeping a container to a single process (i.e. no extra daemons running inside it). So how do we get Ansible to connect to Docker without SSH?
Thanks to Ansible’s extensibility you can use different means of communication. SSH is the default, but nothing locks you down to that. As of Ansible 2.0 you can use a docker connection type (essentially docker exec is used under the hood).
But let’s demonstrate.
Run a simple web server in a container:
docker run -it --name myapp -p 8000:8000 centos:7 python -m "SimpleHTTPServer"
Nothing special here, just a normal container running a web server. Visiting http://localhost:8000 you should see some files being served. The important bit of the command is the --name flag. We need it so we can refer to the running container from Ansible (as seen below).
Let’s run ls -l in the container via Ansible:
ansible all -i 'myapp,' --connection docker --module-name 'command' --args 'ls -l'
And the output should look something like this:
myapp | SUCCESS | rc=0 >>
total 68
-rw-r--r-- 1 root root 18293 Dec 23 18:12 anaconda-post.log
lrwxrwxrwx 1 root root 7 Dec 23 18:06 bin -> usr/bin
drwxr-xr-x 5 root root 380 Feb 3 10:06 dev
drwxr-xr-x 48 root root 4096 Feb 3 10:06 etc
...
By setting --connection docker we are able to talk to a Docker container directly, without SSH! The ansible command might look quite long; the main reason is that Ansible normally works with files. I squeezed everything into a single line just for a quick demonstration. From now on we will be doing Ansible the proper way, with files.
Inventories and playbooks
Ansible scripts essentially decompose into three things:
- An inventory file - a file that lists all the hosts we’re interested in.
- Playbooks - YAML files with instructions on what to do (tasks) on a host.
- ansible.cfg - an optional configuration file (e.g. telling Ansible which inventory file to use).
Let’s transform the previous example into files then.
hosts
[local]
127.0.0.1 ansible_connection=local
[docker]
myapp ansible_connection=docker
ansible_connection defines how Ansible will try to communicate with a given host. By setting it to docker we can enter a running container without SSH.
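Other connection types plug into the same inventory mechanism. A hedged sketch of what that could look like (the beta hostname and user below are hypothetical, purely for illustration):

```ini
[local]
127.0.0.1 ansible_connection=local

[docker]
myapp ansible_connection=docker

; a remote machine reached over plain SSH
[beta]
beta.example.com ansible_connection=ssh ansible_user=deploy
```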
ansible.cfg
[defaults]
inventory = hosts
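ansible.cfg can hold more defaults than just the inventory path. A couple of common ones, optional and not needed for this demo:

```ini
[defaults]
inventory = hosts
; skip interactive SSH host key prompts (handy for throwaway hosts)
host_key_checking = False
; number of hosts Ansible talks to in parallel
forks = 10
```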
dockerup.yml
- name: Get necessary containers up
  hosts: local
  tasks:
  - name: Get up a container for the main app
    docker:
      name: "myapp"
      image: centos:7
      state: running
      net: host
      expose: 8000
      command: python -m "SimpleHTTPServer"
The sole reason I use a centos:7 image (instead of, say, Ubuntu) is to ensure that Ansible will work (it doesn’t work with Python 3 out of the box) and that we have Python’s SimpleHTTPServer module available for our demonstration purposes.
So let’s get the container up!
ansible-playbook dockerup.yml
If you can visit http://localhost:8000 then we have succeeded!
Provisioning the container
That’s all nice. We have a container, but what about provisioning? Well, we simply need to create a new playbook file for that. The provision.yml file will just install pip inside the container.
provision.yml
- name: Provision docker container
  hosts: myapp
  tasks:
  - name: Add extra package repositories
    yum: name=epel-release state=installed
  - name: Install pip
    yum: name=python-pip state=installed
The epel-release package simply adds the pip package to yum’s repositories. Then we install it.
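As a side note, the two yum tasks could be collapsed into one by looping over the package names. A stylistic alternative, not required:

```yaml
- name: Add repos and install pip
  yum: name={{ item }} state=installed
  with_items:
    - epel-release
    - python-pip
```

Since with_items runs the module once per item, in order, epel-release gets installed before python-pip, which is exactly what we need.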
So let’s provision the container!
ansible-playbook provision.yml
Unify everything - factoring out tasks
We managed to provision the whole container with a single line.
Ideally we would like to run the provisioning step not only against the container but also against our beta or live servers. Essentially we want to be able to reuse the tasks. So let’s break them out into a task file.
Making provision.yml a task file essentially means stripping it down to its tasks.
- name: Add extra package repositories
yum: name=epel-release state=installed
- name: Install pip
yum: name=python-pip state=installed
We do the same for dockerup.yml.
- name: Get up a container for the main app
  docker:
    name: "myapp"
    image: centos:7
    state: running
    net: host
    expose: 8000
    command: python -m "SimpleHTTPServer"
So now we can use the above files as we please. To finish the puzzle we create a development.yml.
- name: Get the development environment up
  hosts: local
  tasks:
  - include: dockerup.yml
  - include: provision.yml
    delegate_to: myapp
Notice that this file includes the other two. Another key thing to notice is the delegate_to keyword. Essentially it runs the provisioning tasks against myapp instead of the local host (which is specified at the top of the file).
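If delegate_to feels too magical, an equivalent way to express the same flow (assuming the inventory above) is two separate plays, each with its own hosts:

```yaml
- name: Get the container up
  hosts: local
  tasks:
  - include: dockerup.yml

- name: Provision the freshly started container
  hosts: myapp
  tasks:
  - include: provision.yml
```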
Let’s run it all!
ansible-playbook development.yml
This single command should bring up the Docker container and then provision it! To test it you can simply run docker exec myapp pip --version.
Unify everything - reusing the tasks
Now we have our provisioning tasks in provision.yml and the container startup in dockerup.yml. Imagine now that we run CentOS on bare metal for our live servers and would like to reuse provision.yml there. Thanks to the factoring out we did earlier, this is super simple.
First we add the (fictional) production server to our hosts file.
[local]
127.0.0.1 ansible_connection=local
[docker]
myapp ansible_connection=docker
[production]
10.10.10.10 ansible_connection=ssh
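In practice a real SSH host usually needs a couple more variables. A sketch with hypothetical values for the user and key path:

```ini
[production]
10.10.10.10 ansible_connection=ssh ansible_user=deploy ansible_ssh_private_key_file=~/.ssh/id_rsa
```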
Then we simply create production.yml based on what we did earlier.
- name: Provision the production server
  hosts: production
  tasks:
  - include: provision.yml
That’s it! Running ansible-playbook production.yml will provision the 10.10.10.10 server.
Worth it?
Personally I like this workflow a lot, for three main reasons:
- We use lightweight containers without hard-coding their setup into Dockerfile instructions.
- Provisioning/deployment steps are reproducible across containers, VMs and bare metal.
- Simplicity - not having to think things through every time I develop or deploy makes me such a happy lad.
I hope I made your life a bit easier, as I made mine. If you need the source files for this post you can find them all here. For Ansible’s excellent documentation have a look here. I also recommend looking at the intro video they have here.