Docker Container Virtualization
Docker is a tool that can package an application and its dependencies in a virtual container. This container can then be run on any Linux server that has Docker installed.
This has benefits for several groups of people:
Docker includes the libcontainer library as a reference implementation for containers, and builds on top of libvirt, LXC (Linux containers) and systemd-nspawn, which provide interfaces to the facilities provided by the Linux kernel.
Containers offer several advantages over virtual machines, e.g. lower overhead, faster startup, and higher density.
Organizations using Docker include Atlassian, eBay, Gilt, Groupon, RelateIQ, Spotify, and Tutum.
Docker (Host) support
Docker can be run on any 64-bit host (x86_64/amd64 only – 32-bit is not supported) running a modern Linux kernel (>= 3.8). The kernel must support an appropriate storage driver, e.g. Device Mapper, AUFS, vfs, or btrfs (the default is usually Device Mapper).
At the base of a container is a boot filesystem, bootfs which resembles the typical Linux/Unix boot filesystem. When a container has booted, it is moved into memory, and the boot filesystem is unmounted to free up the RAM used by the initrd disk image. Docker next layers a root filesystem, rootfs, on top of the boot filesystem. This rootfs can be one or more operating systems (e.g., a Debian or Ubuntu filesystem).
In contrast to a traditional Linux boot, the rootfs stays in read-only mode, and Docker takes advantage of a union mount to add more read-only filesystems onto the root filesystem. A union mount allows several filesystems to be mounted at one time while appearing to be one filesystem: it overlays the filesystems on top of one another so that the resulting filesystem may contain files and subdirectories from any or all of the underlying filesystems.
Docker calls each of these filesystems images. Images can be layered on top of one another, and can be defined either by a Dockerfile or by committing a container. When you run a container, Docker automatically downloads the image you specified if it is not already present locally. This will become clearer in the examples below.
The Docker filesystem layers look like this…
When Docker first starts a container, the initial read-write layer is empty. As changes occur, they are applied to this layer; for example, if you want to change a file, then that file will be copied from the read-only layer below into the read-write layer. The read-only version of the file will still exist but it is now hidden underneath the copy.
This pattern is traditionally called “copy on write” and is one of the features that makes Docker so powerful.
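The copy-on-write behaviour can be observed with docker diff, which lists the files that ended up in a container's top read-write layer. A minimal sketch, assuming a running Docker daemon and the ubuntu:14.04 image (the container name cow-demo is illustrative):

```shell
# Run a command that modifies a file inherited from a read-only image layer
docker run --name cow-demo ubuntu:14.04 /bin/sh -c 'echo 127.0.0.2 demo >> /etc/hosts'

# Only the copied-up file (and its parent directory) shows as changed;
# the underlying image layers remain untouched
docker diff cow-demo

# Clean up the demo container
docker rm cow-demo
```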
Local images live on the local Docker host in the /var/lib/docker directory. Each image will be inside a directory named for your storage driver, for example aufs or devicemapper; running containers live in /var/lib/docker/containers.
Images live inside repositories, and repositories live on registries. There are two types of registries: public and private.
Docker, Inc. itself provides a public hub where you can upload and share your images. You get one free private repository and unlimited public repositories. You can also host your own registry.
Once the image is on a registry, you can run the same container on another location.
Containers are created from an image. These containers hold everything needed for your apps to run. These containers can be run/started/stopped/removed/…
It is possible to turn a container into an image by using the docker commit command.
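As a sketch of that workflow (assuming a running Docker daemon; the container and image names are made up for illustration):

```shell
# Start a container and change its filesystem
docker run --name commit-demo ubuntu:14.04 \
  /bin/sh -c 'apt-get -qq update && apt-get -qq install -y curl'

# Persist the container's read-write layer as a new image
docker commit commit-demo <yourname>/ubuntu-curl

# The committed image can now be run like any other
docker run --rm <yourname>/ubuntu-curl curl --version
```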
docker run ubuntu:14.04 date
docker run – means execute the run command
ubuntu:14.04 – instantiate a container of Ubuntu version 14.04
date – is the command that our container will execute
--name – Docker automatically generates a name at random for each container. If you want to specify a particular container name in place of the automatically generated name, you use the --name flag.
docker run – creates a container
docker stop <container name> – stops it
docker start <container name> – will start it again
docker restart – restarts a container
docker kill – sends a SIGKILL to a container
docker attach – will connect to a running container
docker wait – blocks until the container stops
docker rm <container name> – removes it
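Strung together, the lifecycle commands above look like this (a sketch that needs a running Docker daemon; the container name is illustrative):

```shell
# Create and start a long-running container in the background
docker run -d --name lifecycle-demo ubuntu:14.04 /bin/sh -c 'sleep 1000'

docker stop lifecycle-demo      # graceful stop
docker start lifecycle-demo     # start it again with the same settings
docker restart lifecycle-demo   # stop and start in one step
docker kill lifecycle-demo      # send SIGKILL immediately
docker rm lifecycle-demo        # remove the stopped container
```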
docker run -t -i -p 8080:80 <yourname>/apache2 /bin/bash
docker run – means execute the run command
-t – means that you want a tty to be allocated
-i – means that you want to be able to interact with your container
-p – port mapping! host_port:container_port. Your website will be available at localhost:8080
<yourname>/apache2 – is a simple Apache2 image
/bin/bash – is the command that the container will execute
docker run -p 8080:80 -v /home/apache2/var/www/ <yourname>/apache2:latest /start.sh
-v – stands for volume. In this case, a volume shared between the host and the container.
docker run --rm – will remove the container after it stops.
docker ps – shows running containers.
docker ps -a – shows running and stopped containers.
docker inspect – looks at all the info on a container (including its IP address).
docker logs – gets logs from a container.
docker events – gets events from a container.
docker port – shows the public-facing port of a container.
docker top – shows running processes in a container.
docker diff – shows changed files in the container’s filesystem.
docker cp – copies files or folders out of a container’s filesystem.
docker export – turns a container filesystem into a tarball.
nsenter – allows you to run any command (e.g. a shell) inside a container that’s already running another command (e.g. your database or web server). This allows you to see all mounted volumes, check on processes, log files etc. inside a running container.
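A common way to use nsenter with Docker is to look up the PID of the container’s main process via docker inspect and then enter its namespaces (a sketch; my-container is a placeholder name and root privileges are required):

```shell
# Find the PID of the container's main process on the host
PID=$(docker inspect --format '{{ .State.Pid }}' my-container)

# Enter the container's mount, UTS, IPC, network and PID namespaces
# and start an interactive shell inside it
sudo nsenter --target "$PID" --mount --uts --ipc --net --pid /bin/bash
```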
docker images – shows all images.
docker import – creates an image from a tarball.
docker build – creates an image from a Dockerfile.
docker commit – creates an image from a container.
docker rmi – removes an image.
docker insert – inserts a file from a URL into an image. (Kind of odd; you’d think images would be immutable after creation.)
docker load – loads an image from a tar archive on STDIN, including images and tags (as of 0.7).
docker save – saves an image to a tar archive stream on STDOUT with all parent layers, tags & versions (as of 0.7).
docker history – shows the history of an image.
docker tag – tags an image with a name (local or registry).
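docker save and docker load together make it possible to move an image between hosts without a registry (a sketch, assuming a running Docker daemon with the ubuntu:14.04 image present locally):

```shell
# Save the image with all parent layers and tags into a tarball
docker save ubuntu:14.04 > ubuntu-14.04.tar

# ...copy the tarball to another Docker host, then restore it there
docker load < ubuntu-14.04.tar

# The image is available again under its original name and tag
docker images ubuntu
```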
A repository is a hosted collection of tagged images that together create the file system for a container.
A registry is a host - a server that stores repositories and provides an HTTP API for managing the uploading and downloading of repositories.
Docker.io hosts its own index to a central registry which contains a large number of repositories.
docker login – log in to a registry.
docker search – searches a registry for an image.
docker pull – pulls an image from a registry to the local machine.
docker push – pushes an image to a registry from the local machine.
docker rm -f $(docker ps -a -q) ; docker rmi $(docker images -q -a) – removes all containers and images.
There are two ways to create a Docker image: a) via the docker commit command, or b) via the docker build command with a Dockerfile, e.g.
docker build -t="my_container_image" . – the -t flag allows us to specify a name for our new image, here my_container_image.
docker build -t="my_container_image:tag01" . – a tag can be added after a colon.
If a file named .dockerignore exists in the root of the build context, it is interpreted as a newline-separated list of exclusion patterns. Much like a .gitignore file, it excludes the listed files from being uploaded to the build context.
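A hypothetical .dockerignore could look like this, keeping version control metadata, logs and temporary files out of the uploaded build context (the patterns are purely illustrative):

```
.git
*.log
tmp/
```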
FROM guttertec/ubuntu:14.04
MAINTAINER Axel Quack "email@example.com"
ENV REFRESHED_AT 2014-09-16
RUN apt-get -qq update
The ENV instruction sets environment variables in the image. In this case, it sets an environment variable called REFRESHED_AT, showing when the template was last updated. Whenever you want to refresh the build, you just change the date in the ENV instruction.
MAINTAINER <author name> – sets an author field.
RUN <command> – executes a command in shell or exec form. The RUN instruction adds a new layer on top of the newly created image.
ADD <src> <destination> – copies files from one location to another. It takes two arguments: a source and a destination.
CMD ["executable", "param1", "param2"] or
CMD ["param1", "param2"] or
CMD command param1 param2 – provides defaults for an executing container. A Dockerfile should use the CMD instruction only once; if several CMD instructions are given, only the last one takes effect.
EXPOSE <port> – specifies the port on which the container will be listening at runtime.
ENTRYPOINT ["executable", "param1", "param2"] or
ENTRYPOINT command param1 param2 – configures a container to run as an executable, which means a specific application can be set as the default and run every time a container is created from the image. This also means that the image will be used only to run and target that specific application each time it is called.
ENV – sets one or more environment variables. These variables consist of "key=value" pairs which can be accessed within the container by scripts and applications alike. This functionality offers a great deal of flexibility for running programs.
FROM – defines the base image used to start the build process.
USER – sets the UID (or username) which is to run the container based on the image being built.
VOLUME – enables access from your container to a directory on the host machine (i.e. mounting it).
WORKDIR – sets the directory in which the command defined with CMD is to be executed.
ONBUILD – adds triggers to images, to be executed when the image is used as the base of another build.
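Putting several of these instructions together, a hypothetical Dockerfile for the Apache2 image used in the earlier run examples might look like this (base image, file paths and the start command are assumptions, not taken from the original image):

```dockerfile
FROM ubuntu:14.04
MAINTAINER Axel Quack "email@example.com"
ENV REFRESHED_AT 2014-09-16
RUN apt-get -qq update && apt-get -qq install -y apache2
ADD index.html /var/www/html/index.html
EXPOSE 80
CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]
```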
docker login – logs you in to your Docker Hub account. Your authentication credentials will be stored in the .dockercfg file in your home directory.
Dockerfile that you want to build.
Dockerfile is located. The default is
Once the Automated Build is configured it will automatically trigger a build and, in a few minutes, you should see your new Automated Build on the Docker Hub Registry. It will stay in sync with your GitHub and BitBucket repository until you deactivate the Automated Build.
If you want to see the status of your Automated Builds, you can go to your Automated Builds page on the Docker Hub, and it will show you the status of your builds and their build history.
If you are running Mac OS X you should install Homebrew for easier handling…
ruby -e "$(curl -fsSL https://raw.github.com/Homebrew/homebrew/go/install)"
With Homebrew, it’s trivial to install VirtualBox which is a prerequisite to running docker on OS X:
brew update
brew tap phinze/homebrew-cask
brew install brew-cask
brew cask install virtualbox
Boot2docker is a small script that helps download and setup a minimal Linux VM that will be in charge of running docker daemon.
brew install boot2docker
boot2docker init
boot2docker up
export DOCKER_HOST=tcp://localhost:4243
# docker will be installed automatically too, since it is a boot2docker dependency
Usually you cannot use Shared Folders with boot2docker outside of the VM, though there are still a few workarounds…
vagrant init dduportal/boot2docker && vagrant up
docker pull cpuguy83/nfs-server and then
docker run -d --name nfs --privileged cpuguy83/nfs-server /path/to/share
docker pull svendowideit/samba and then
docker run svendowideit/samba data
Sharing volumes between containers is also possible.
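One sketch of container-to-container sharing uses the --volumes-from flag (assumes a running Docker daemon; names and paths are illustrative):

```shell
# Create a container that owns a volume at /shared
docker run --name data-container -v /shared ubuntu:14.04 true

# Other containers can mount the same volume with --volumes-from
docker run --rm --volumes-from data-container ubuntu:14.04 \
  /bin/sh -c 'echo hello > /shared/greeting'
docker run --rm --volumes-from data-container ubuntu:14.04 cat /shared/greeting
```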
The Docker daemon runs with root privileges, which means some issues need extra care. Some interesting points include the following:
Some key Docker security features include the following:
Some things I found interesting but have not managed to sort out yet.
VBoxManage modifyvm "boot2docker-vm" --natpf1 "tcp-port4000,tcp,,4000,,4000"
These are still some things I have to learn about… (this is more or less a reminder to myself)