What is Docker?

Docker has been on everyone's lips for years and keeps growing in popularity. If you run your applications on one of the big cloud providers like AWS, Azure, GCE, or others, you can't avoid containers and Docker. In this article, I would like to explain why Docker is so popular and why, for many use cases, it is a better fit than classic virtualization.

But let's start with the basics. To understand how Docker works, we first need to look at how virtualization works.


Everything starts with a server. On this server runs a hypervisor; one of the best known is ESXi from VMware. The hypervisor is the basis of our virtualization: it enables us to run several virtual servers on a single piece of hardware and allocates that hardware so that each virtual machine (VM) receives its assigned resources.

The problem with virtualization is the overhead of each VM. Every VM needs its own OS and everything that goes with it. As the name says, it is a virtualized server with all the trimmings: its own kernel, boot process, volumes, and more.

But what is the difference between a VM and a Docker container? Let's first take a look at the structure of Docker.


First of all, we have our server here as well. But this time it is not running a hypervisor but a host OS. You can use almost any Linux distribution that supports Docker: a regular Ubuntu, but also a Linux that is specially designed for running containers, such as CoreOS or RancherOS. These specialized distributions ship only the bare minimum needed to operate containers; everything else runs in containers.

But what happens now? The container engine runs on the host OS. I deliberately write container engine here because Docker is not the only one. Besides Docker, there are other engines such as rkt, which comes from CoreOS (now part of Red Hat). But basically, they all do the same thing.
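Regardless of the engine, the basic workflow is the same: pull an image from a registry and start a container from it. A minimal sketch with the Docker CLI (the image and port are just examples):

```shell
# Fetch an image from the registry (Docker Hub by default)
docker pull nginx:latest

# Start a container from that image in the background,
# mapping host port 8080 to the container's port 80
docker run -d -p 8080:80 --name web nginx:latest

# List the running containers
docker ps
```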

And what is different in contrast to a VM? Very simple: we don't virtualize a complete OS but use our host OS and give the container only what it needs. That's why containers are very lightweight and contain only what is strictly necessary to run the respective application. A container therefore starts within seconds, which sets it completely apart from a VM. Furthermore, containers are "immutable": a container always has the state it had when its image was built. You can think of this much like a Git commit. For this reason, deployment processes work very well with Docker, because if there is an error in a new container, you can restore your software within seconds.
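This rollback story can be sketched with the Docker CLI; the image name, registry, and tags below are made up for illustration:

```shell
# Deploy version 2.0 of the application
docker run -d --name myapp registry.example.com/myapp:2.0

# Version 2.0 turns out to be broken — remove it and start the
# previous, unchanged image again; the rollback takes only seconds
docker stop myapp && docker rm myapp
docker run -d --name myapp registry.example.com/myapp:1.9
```

Because images are immutable, the `1.9` image is exactly the artifact that worked before; nothing has drifted in the meantime.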

Now you might think that these containers are insecure because they run directly on the host OS. But that is not the case. Each container is isolated from the host OS and from the other containers, so by default a container cannot access the host system. Of course, you can connect containers through shared networks so that they can communicate with each other. This technique is not new; it has been around for a long time. On Linux it builds on kernel features such as namespaces and cgroups, the same foundation used by LXC (Linux Containers), and on BSD there are Jails.
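Connecting containers through a shared network looks like this with the Docker CLI (the network and container names are just examples):

```shell
# Create a user-defined bridge network
docker network create app-net

# Containers attached to the same network can reach each other by name...
docker run -d --name db --network app-net mysql:8
docker run -d --name web --network app-net nginx:latest

# ...while containers on other networks stay isolated from them
```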

But what is THE killer feature?

If you want to run several applications with similar but conflicting dependencies on one server, this is difficult to realize with a VM. Let's take a concrete example.

We have one WordPress installation that requires PHP 5.x and another WordPress installation that requires PHP 7.x. What to do? There are several possibilities. The easiest one uses Docker, but we will get to that in a moment.

In the world of VMs, you could create another VM to run the second WordPress instance, or you could go the "hacky way" and do something with symlinks and the like. Neither variant is nice.

With Docker, it looks completely different. Our host system only has to start containers, and each container ships the dependencies of its application. This means we don't care which application needs which dependencies, because they are delivered with the application inside the container. This has another advantage: the excuse "works on my machine" is no longer available. Docker gives us consistent environments, so a container runs everywhere Docker runs. Fantastic, isn't it?
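The WordPress example above can be sketched as a Docker Compose file. The exact image tags are an assumption — which PHP versions are available depends on the tags published for the official `wordpress` image on Docker Hub:

```yaml
# docker-compose.yml — a sketch; image tags depend on what Docker Hub actually offers
services:
  blog-legacy:
    image: wordpress:php5.6   # WordPress bundled with PHP 5.x
    ports:
      - "8080:80"             # reachable on host port 8080
  blog-modern:
    image: wordpress:php7.4   # WordPress bundled with PHP 7.x
    ports:
      - "8081:80"             # reachable on host port 8081
```

Both installations run side by side on the same host, each with its own PHP version, started with a single `docker compose up -d`.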

I hope this gave you a little insight into the world of virtualization and containers. If you have any questions or want me to write more about virtualization or Docker, let me know via Twitter or Instagram.