
There is a lot of buzz around Docker these days, as most firms are moving towards containerizing their applications. An app working in quarantine. Don't worry, it has nothing to do with corona; containment of apps started well before 2013. The world is moving towards speeding up its software delivery cycles, and Docker is an important open-source platform for achieving containerization. Below I document the learnings I make out of the Docker platform as I use it; I hope it helps someone who is starting to learn it.
I have mostly followed the official Docker documentation and some introductory lectures by Nigel Poulton on Pluralsight (Docker Deep Dive). For the exact Docker commands, I would recommend following the official documentation, as the commands keep changing with versions (not totally, but minor optimizations keep happening). The text here is arranged to help you understand the platform rather than memorize the commands.
"Anchor ⚓ your mind on this dock as we load containers into your memory-space"
First, let's see the basic problems that Docker solves in software development [1]:
1. Environment disparity
Software gets developed in the developer's environment. The developer has to provide a set of instructions to replicate that same environment in the target environment where the software is deployed. This is a tedious task. Some companies use an IaC (Infrastructure as Code) / configuration management technology like Chef to replicate the environment. This is an added effort for the developers.
Using Docker you can just containerize the app, which makes sure all the dependencies are within the container. The container can run locally on a Docker engine on any machine, be it the developer's, QA, or prod. In the target environment, only a Docker engine is required, and a docker run command will start the container without any configuration effort.
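As a sketch of that zero-configuration workflow (assuming Docker is installed in both environments; `myapp` and its Dockerfile are hypothetical stand-ins for your own application):

```shell
# In the developer environment: package the app and its dependencies once
docker build -t myapp:1.0 .

# In any target environment (QA, prod): the only prerequisite is a Docker engine.
# One command starts the container, no environment setup needed.
docker run -d --name myapp -p 8080:80 myapp:1.0
```

In practice the image would travel between the two machines via a registry (covered below), but the point stands: the run command itself needs no per-machine configuration.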
2. New developers on board:-
A new developer only needs Docker installed to spin up the application locally, instead of spending their first days manually installing and configuring its dependencies.
3. Microservices:-
Microservices can be containerized, making them work in their own isolated workspaces. It also makes them independently deployable and scalable as per need. Kubernetes is another tool that can be used for the orchestration of such services.
4. Directly moving legacy apps to containers:-
A "lift and shift" of a legacy monolith app is possible with containers. This can be the first step towards decomposing the monolith into a microservices-based app. It improves development and testing efficiency, and deployment and disaster recovery are simplified.
Container
| Containerized Applications |
A container is a standard [2] unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. Container creation capabilities already existed in Linux; Docker simplified them. Docker provides a REST interface over the Docker daemon for the creation, maintenance, and deletion of containers.
Container Images
A container is a run-time construct, while an image is a build-time construct: an executable package that contains everything needed to run the application. Things included under "everything":
1. Application code
2. Runtime
3. System tools
4. System libraries
5. Settings [environment variables]
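As an illustration of where each of those items comes from, here is a minimal, hypothetical Dockerfile for a small Python app (the file `app.py` and the variable `APP_MODE` are made up for the example):

```dockerfile
# 2. Runtime + 3./4. system tools and libraries come from the base image
FROM python:3.9-slim

# 1. Application code is copied into the image
COPY app.py /app/app.py

# 5. Settings: environment variables baked into the image
ENV APP_MODE=production

# The main process the container will run
CMD ["python", "/app/app.py"]
```

Building this file produces the image; running the image produces the container.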
" Docker-Container-Images becomes Docker-Container when they run on Docker-Engine "
Containers vs Virtual Machines
A hypervisor carves a virtual machine out of the infrastructure: it creates virtual RAM, virtual CPUs, and virtual storage dedicated to that machine. Each machine then has its own dedicated, fully blown operating system, and the app is deployed on top of that OS. As can be seen, having a separate OS for each VM eats a lot of resources.
Containers, on the other hand, are carved out of the existing host operating system's resources. The Docker engine takes OS resources such as the process tree, filesystem, and network stack and creates a securely isolated construct called a container. For a detailed comparison, you can visit reference [3].
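One quick way to see that containers share the host kernel instead of bringing their own OS (assuming Docker on a Linux host, using the small public `alpine` image):

```shell
# The kernel version reported inside a container...
docker run --rm alpine uname -r

# ...matches the host's kernel version, because a container is just
# isolated processes on the host, not a separate guest OS.
uname -r
```

With a VM, the first command would report the guest kernel, which can differ entirely from the host's.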
Docker Platform
The Docker engine is a client-server application with 3 major components:
1. Server: the Docker daemon process
2. REST API: an interface to interact with the daemon
3. Docker CLI client: the "docker" command
The Docker CLI interacts with the Docker daemon via the REST API to manage Docker objects like images, containers, networks, and data volumes.
| Fig:- Docker Client-Server Architecture |
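You can talk to that REST API directly, without the CLI. For example, on a Linux host where the daemon listens on its default Unix socket (and assuming `curl` is available and the daemon is running), the following returns the same container list that `docker ps -a` shows:

```shell
# Query the Docker Engine REST API over the local Unix socket;
# the response is JSON describing all containers, running or not.
curl --unix-socket /var/run/docker.sock "http://localhost/containers/json?all=true"
```

This is exactly the interface the `docker` command uses under the hood.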
The Docker client interacts with the Docker daemon, which does the major work of building (docker build), running (docker run), and distributing (docker pull) containers.
A Docker registry is used to store Docker images. Docker Hub is a public registry, and by default Docker is configured to look for images on Docker Hub. A private registry can also be configured and used.
Docker objects include images, containers, and services (Swarm).
The underlying technologies used for container creation, initially developed in Linux:
1. Namespaces
Docker uses namespace technology to provide the isolated workspace called a container. Docker creates a set of namespaces for each container. [4] A namespace provides a layer of isolation: each container runs in its own set of namespaces, and its access is limited to those namespaces. In Linux, for example, we have the process ID namespace (pid) and the network namespace (net) for managing network interfaces.
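On any Linux machine you can inspect the namespaces a process belongs to, even without Docker; each entry below is one layer of isolation that a containerized process gets a fresh copy of:

```shell
# Every process exposes its namespaces under /proc/<pid>/ns.
# The listing includes entries such as pid, net, mnt, uts, and ipc;
# Docker gives each container its own set of these.
ls -l /proc/self/ns
```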
2. Cgroups
Control groups (cgroups) allow Docker to put resource constraints on a container. For example, we can specify a memory limit for a container.
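Cgroup membership is likewise visible from /proc on any Linux machine; the docker flags in the comment below (with made-up limit values) are how such constraints would be attached to a container:

```shell
# Show which cgroups the current process belongs to
cat /proc/self/cgroup

# With Docker, cgroup limits are set via run flags, e.g. (assuming a running daemon):
#   docker run -d --memory=256m --cpus=1.5 nginx
```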
3. Union file systems
Union file systems, or UnionFS, are file systems that operate by creating layers, making them very lightweight and fast. Docker Engine uses UnionFS to provide the building blocks for containers. Docker Engine can use multiple UnionFS variants, including AUFS, btrfs, vfs, and DeviceMapper. [4]
4. Container format
Docker Engine combines the namespaces, control groups, and UnionFS into a wrapper called a container format. The default container format is libcontainer. [4]
Lifecycle of a Container
| Container Lifecycle |
Things start with the docker run command, which internally uses the docker container create and docker container start commands. The engine checks if the image is present in the local repository; if not, it is pulled from the registry using the docker image pull command. The engine then creates a container out of it and starts the main process inside the container. If the main process exits after execution, the container also exits and is then in the exited state. A container can also be stopped with the docker container stop command; this sends a signal to the main process to stop, and the container goes down into the exited state.
Any new docker run command for the same image will create a new container with its own unique ID and name. An exited container can be started again with the docker container start command. It can be removed using the docker container rm command.
Commands are simple to remember if you know the life-cycle: the image is pulled (docker pull), the container is created (docker container create), started (docker container start), stopped (docker container stop), and removed (docker container rm).
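The lifecycle above can be walked through end to end (assuming Docker is installed; `hello-world` is Docker's own tiny test image, and `demo` is just a name chosen for the example):

```shell
docker image pull hello-world                    # pull the image into the local repository
docker container create --name demo hello-world  # create the container (Created state)
docker container start demo                      # start it; the main process prints and exits -> Exited state
docker container stop demo                       # stops a running container (a no-op here, as it already exited)
docker container rm demo                         # remove the exited container
docker image rm hello-world                      # with no containers left, the image can be removed too
```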
Can we remove Docker images from the local repository? The answer is yes, provided there are no containers based on that image. If there are, you first need to remove those containers and then remove the image using the docker image rm command.
In the next part, I will document Docker images: layering and image repositories.
References:-
[1]https://www.docker.com/use-cases
[2]https://www.docker.com/resources/what-container
[3]https://docs.microsoft.com/en-us/virtualization/windowscontainers/about/containers-vs-vm
[4]https://docs.docker.com/get-started/overview/#the-docker-platform
