From zero to Continuous Delivery with Docker, day 1
Sometimes I use my blog as a place to store my notes online. Today I attended the first day of the workshop From zero to Continuous Delivery with Docker, run by Robert Firek and some of his colleagues from Codurance, a special event for LSCC. I have tried to learn Docker a few times, from an hour-long talk to a full Pluralsight course, but I still didn't feel confident enough to apply it in my job.
There is still one day left, but this two-day workshop seems to be the final push I needed with this amazing technology that is Docker. I know there are plenty of blogs out there talking about Docker, but here are my notes:
Introduction to Docker
In the early days of the Internet, web servers ran on bare-metal physical machines. Everything was fine until people started to think: why don't we run several apps on the same machine? Problems started to appear: compatibility issues between different runtimes and environments, and the cost was really high. The first solution was to run virtual machines on those bare-metal machines, but that created a new problem: it is very demanding for the physical machine, since each VM runs a whole operating system.
In Linux there is something called containers, which are a lot simpler. They are based on standard Linux technologies:
- namespaces: each application runs in its own namespace, isolated from the rest of the system.
- cgroups: they control access to hardware, giving each container a share of the machine's resources.
- unionfs: a union file system that stacks read-only layers with a writable layer on top.
All of this already exists in Linux, but Docker puts it together and hides it from us behind something called libcontainer.
A Docker image is made of layers. Each layer adds the files it needs to the unionfs; one layer can be the Linux base, another one Java, and so on. When we run a container, Docker pulls the image, creates the container, prepares the filesystem and the network, and finally runs the application.
Some things we can do with containers:
- Pull: gets an image from a registry.
- Run: creates and starts a container from an image.
- Build: builds an image from a Dockerfile.
- Push: pushes an image to a registry.
Docker commands
We need a machine running a Docker daemon; it can be Linux, Mac or Windows.
Basic commands
Get the version of Docker:
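```
docker version
```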
More info about the Docker daemon:
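```
docker info
```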
Help is really useful. For example: docker help run
Find what images we’ve got downloaded:
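```
docker images
```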
Images are stored in registries; the default registry is Docker Hub. To get an image onto our machine:
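```
docker pull ubuntu
```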
That is the same as:
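```
docker pull library/ubuntu:latest
```
Official images live under the library namespace, and latest is the default tag.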
In general the convention in the registry is [username/]repository:tag
We can navigate to Docker Hub, search for a Linux CentOS image, and pull one of them.
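We can also search and pull from the command line:
```
docker search centos
docker pull centos
```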
If we want several images of the same thing, like ubuntu, we can use tags. For example: docker pull library/ubuntu:14.04. Suggestion: always use tags.
We can remove images (a dangerous operation):
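```
docker rmi <imageId>
```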
And this one, I believe, removed every image at once:
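```
docker rmi $(docker images -q)
```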
To run a command:
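```
docker run busybox echo "Hello world"
```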
In this case Docker pulled an image called busybox from Docker Hub and ran the echo inside the container, printing Hello world.
We can also run a command specifying the image by name and tag (label):
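```
docker run busybox:latest echo "Hello world"
```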
Some commands require the imageId, but we don't need to provide the full id; the first two or three characters are enough, as long as they uniquely identify the image.
To get info about an image:
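```
docker inspect <imageId>
```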
And to see the logs:
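```
docker logs <containerId>
```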
Actually with the docker inspect command we can find where the log files of the process are and then we can see them with:
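```
docker inspect --format '{{.LogPath}}' <containerId>   # find where the json log lives
tail -f <the LogPath printed above>
```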
Let’s run a more complicated program that prints Hello world in a while loop:
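```
docker run -d busybox /bin/sh -c "while true; do echo Hello world; sleep 1; done"
```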
We can kill it:
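```
docker kill <containerId>
```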
Or more gracefully:
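```
docker stop <containerId>
```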
Let's actually run an application. We can use one of the training apps provided by Docker, in this case a Python web app:
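```
docker run -d training/webapp python app.py
```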
With -P Docker will expose the container's port externally on a random port:
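```
docker run -d -P training/webapp python app.py
```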
To map an external port to an internal one:
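```
docker run -d -p 8080:5000 training/webapp python app.py
```
Here port 5000 inside the container (where the Python app listens) becomes port 8080 on the host.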
Let’s see what containers we’ve got running:
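```
docker ps
```
Adding -a shows the stopped ones too.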
We can delete all the running containers in one go; the usual idiom is something like:
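```
docker rm -f $(docker ps -q)
```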
To run a container interactively:
For example:
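```
docker run -it ubuntu /bin/bash
```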
Now you are inside the container and can use its shell. You can echo "Hello world" > HelloWorld.txt
We can also:
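If I remember right, it was writing a minimal Dockerfile with echo, something like:
```
echo "FROM ubuntu" > Dockerfile
echo 'CMD ["echo", "Hello world"]' >> Dockerfile
```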
This way we have created a Dockerfile from inside the container. A Dockerfile can execute Linux commands for us and much more.
Given a folder with a Dockerfile, we can build an image:
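```
docker build .
```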
The image is untagged, so we can tag it (the name here is just an example):
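```
docker tag <imageId> my-image:1.0
```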
The following creates a new image based on a container, capturing any changes we made inside it (again, the image name is just an example):
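```
docker commit 42 my-image:1.1
```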
(We found 42, the id of one of our containers, with docker ps -a.)
Then we can run the container, even passing a parameter to docker run:
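```
docker run my-image:1.1 echo "A different hello"
```
The command after the image name overrides the default one baked into the image.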
Registries
We can create private registries to hold images. In the workshop we had a private one.
First we tag the image for the private registry (the registry host and username here are placeholders):
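```
docker tag my-image:1.1 <registry-host>/<username>/my-image:1.1
```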
Then we push the image, given that my machine is connected to the private registry through the VPN:
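```
docker push <registry-host>/<username>/my-image:1.1
```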
Finding all the images in a private repository:
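Assuming a v2 registry, we can use its HTTP API:
```
curl https://<registry-host>/v2/_catalog
```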
Finding all the history of an image and all the tags:
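docker history shows the layers, and the registry API lists the tags:
```
docker history <image>
curl https://<registry-host>/v2/<repository>/tags/list
```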
Something really interesting is that we can mount an external folder into the container so that we can modify the content or access it from the container:
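For example, with made-up folder names:
```
docker run -it -v $(pwd)/data:/data ubuntu /bin/bash
```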
Then inside the container we can see the files:
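```
ls /data
```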
And even modify the external file since it is mounted inside:
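```
echo "modified from inside the container" >> /data/notes.txt   # notes.txt is just an example file
```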
To mount it read-only, so it can be read but not modified:
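```
docker run -it -v $(pwd)/data:/data:ro ubuntu /bin/bash
```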
We can also create a copy of certain files:
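```
docker cp <containerId>:/data/notes.txt .
```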
Networks
To list all the networks available to our containers:
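```
docker network ls
```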
One of them is the bridge, the one we use to let containers communicate with each other.
Create new network:
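```
docker network create my-network
```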
Create a web app and add it to the network:
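```
docker run -d --name web --net my-network training/webapp python app.py
```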
Create another container with an interactive bash and add it to the network:
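```
docker run -it --name client --net my-network ubuntu /bin/bash   # the name "client" is my choice
```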
Now inside the second container, we install curl and dnsutils:
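```
apt-get update
apt-get install -y curl dnsutils
```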
With this we can do: dig web
And we can query the other container from the second one: curl web:5000
Inspect what containers any remote machine is running:
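Assuming the remote daemon is listening on TCP:
```
docker -H tcp://<remote-host>:2375 ps
```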
Managing containers
All this is nice, but we have something called Docker Compose that makes our life easier. We create a docker-compose.yml file and put this:
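A sketch of what such a file can look like (the images and names follow the earlier examples; the exact file from the workshop may have differed):
```
version: "2"
services:
  web:
    image: training/webapp
    command: python app.py
    networks:
      - my-network
  web2:
    image: training/webapp
    command: python app.py
    networks:
      my-network:
        aliases:
          - api
  client:
    image: ubuntu
    command: sleep infinity
    networks:
      - my-network
networks:
  my-network:
```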
We are telling Docker Compose to create three containers, each based on a given image and running a command. We also specify the network, and for the second app we add a network alias. Be careful with the indentation: YAML is whitespace-sensitive and it needs to be consistent.
Now we can start all these containers at once:
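```
docker-compose up
```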
And stop them:
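```
docker-compose stop
```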
We can tell compose to create services and run them in the background:
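```
docker-compose up -d
```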
Now let’s connect inside one of the containers:
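```
docker exec -it <containerName> /bin/bash
```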
We can again install curl and dnsutils. Since the container is brand new, we first have to run apt-get update. Now we are ready to connect from one container to the others.
This is awesome: we can tell Compose to scale one of the services to more containers:
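```
docker-compose scale web=3
```
(In newer Compose versions this became docker-compose up --scale web=3.)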
Deployment of an application using Docker and TeamCity
In order to deploy an application with Docker, we need to push an image to a registry during the build. When we deploy the app, we do docker run, which pulls the image from the registry.
First let's fork this repo into our own GitHub account: https://github.com/codurance/simple_rest. It is a simple web app written in Java, built and run with Gradle.
Now let’s clone it into the machine with Docker:
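```
git clone https://github.com/javflores/simple_rest.git
```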
Let's go to TeamCity now and set up a project. This is the address: http://teamcity.training.codurance.io/project.html?projectId=Workstation8. We give the project a name. In VCS Roots we put the URL of our GitHub repo: https://github.com/javflores/simple_rest.git
1. Build
Let's create a Build configuration that will be responsible for generating the executable. In General Settings we specify the Artifact paths that this build publishes: docker/Dockerfile and build/distributions/simple_rest.tar
We set up a Build step of type Gradle and point it to the Gradle build file, in this case build.gradle.
Let's also set up a VCS trigger so that any change in the GitHub master branch triggers this build.
2. Release
This will publish a Docker image to the registry. To execute this we need a trigger, which is the previous step, 1. Build.
We need two Build Steps here.
The first one is a command line step to build a Docker image and push it to the registry; the tag is based on the build number:
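A sketch with the registry host as a placeholder (%build.number% is TeamCity's build number parameter):
```
docker build -t <registry-host>/simple_rest:%build.number% .
docker push <registry-host>/simple_rest:%build.number%
```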
The second one is another command line to publish the release version:
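Presumably re-tagging the same image as the release version, something like:
```
docker tag <registry-host>/simple_rest:%build.number% <registry-host>/simple_rest:release
docker push <registry-host>/simple_rest:release
```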
We need to set a snapshot dependency on the previous step, and it needs the artifact dependencies from the previous build (mark Clean destination paths before downloading artifacts, so we start clean): Dockerfile and simple_rest.tar.
In General Settings we specify the artifact paths that this step will publish.
3. Deploy
This will run the Docker container on our machine.
To execute this we need a trigger which is the previous step, 2. Release. Here we have only one Build step which again is a command line with the following:
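A sketch of that script (the variable names, registry host and the app's internal port are placeholders):
```
set -e                                        # stop on errors
IMAGE=<registry-host>/simple_rest:release     # a couple of variables
CONTAINER=simple_rest

# delete the previous container; the first time it won't exist, so fall back to echo
docker rm -f $CONTAINER || echo "no previous container"

# finally, simply run the container
docker run -d --name $CONTAINER -p 80:<app-port> $IMAGE
```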
First we stop on errors and create a couple of variables. Then we delete the previous container on our machine; the first time it won't exist, so we fall back to an echo. Finally we simply run the container.
Now we can make any change in GitHub or hit Run on the first step in TeamCity. We should then be able to open localhost/hello and localhost/healthcheck in our browser.
Also on our machine we can check that the container was created and whether it is running:
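```
docker ps
```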
We can also do:
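```
curl localhost/hello
```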
and we should see Hello world.
Using Docker Compose in TeamCity
The last thing of the day was to use Docker Compose with the TeamCity setup. Everything is the same, but…
In step 1, we publish another artifact, the docker-compose.yml:
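Presumably sitting next to the Dockerfile, something like:
```
docker/docker-compose.yml
```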
In step 2 we publish that artifact again, the same way.
In step 3, in the Build step add the following:
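```
docker-compose stop
docker-compose up -d
```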
Really simple: it just stops the containers defined in the Docker Compose file and brings them up again.
That was it for the day. Quite a day, too much to process. Much more tomorrow. Thanks to Robert and Codurance for running this great workshop. I'm enjoying it and finally getting on better with Docker.