Containers & Containerization

Rohan Rao
Apr 4, 2022 · 5 min read

As a developer, I have always been intrigued by the buzzword CONTAINER. And I used to wonder: what exactly is a container? How do we use it? Why should we use it? Does it make our life any easier?

Well, I did a bit of reading on it over the weekend and here is what I found out.

What The Heck is a Container?

(Image credits :- Docker)

In technical terms, a container image is built up from layers. Each layer packages a piece of the application, its configuration, and the start script required to run the application. At the lowermost layer we generally have a small Linux-based image, and on top we have the application layer. The image is the actual artifact that can be moved around. Whenever we download an image and start it, that creates the container: a running instance of the image.
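To sketch this layering, here is a minimal Dockerfile (the base image, file names, and start command are illustrative assumptions, not taken from the article). Each instruction adds a layer on top of the one below it:

```dockerfile
# Base layer :- a minimal Linux-based image with Python preinstalled
FROM python:3.10-slim

# Configuration layers :- working directory and dependencies
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt

# Application layer :- the source code itself
COPY . .

# Start script :- the command run when the container starts
CMD ["python", "app.py"]
```

Building this file with `docker build` produces the image; `docker run` then creates a container from it.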

If that was hard to follow, then: a container is basically a way to package an application along with all of its configuration and dependencies so that it can be easily shared and moved around.
Container images are generally stored in a container registry, which is a specialized repository for storing images. Most firms that use containers maintain their own private registry, but public registries like Docker Hub can also be used to store these packages.

How does it help Exactly?

Suppose I am developing a Python application which utilizes MySQL for storing data and Redis for its message service. Traditionally, I would have to go onto the internet, find the particular version of the binaries for each piece of software, and install and configure them on my computer. It is a tediously long process where any step can go wrong, and then we have to start from the beginning again. With containers, I just have to find an image that already provides an isolated environment with the required service and its configuration packed into it. I can start it with a single command and not worry about the tedious process of finding, installing, and configuring each service manually. This makes the entire development process far less problematic.

The same can be said for application deployment. Before containers, a traditional deployment process involved developers bundling all the required artifacts and their configurations, together with a set of guidelines on how to configure the artifacts on the servers, and sending it all to the operations team. The operations team would then configure and set up the servers as per the guidelines and run the artifacts. Sometimes dependency conflicts or version conflicts would arise, which the operations team would then have to fix. Also, misunderstandings between the development and operations teams could cause a lot of back and forth while setting up the solution on the servers. But with containers, the operations and development teams work together to package the entire application into a single container. This saves the operations team a lot of the overhead of configuring the application on the server, since the application is already encapsulated in an isolated image. They just need to set up the Docker runtime on the server and then run the container without any hassle.
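The single-command setup for a stack like this is often written down as a Compose file. Here is a minimal sketch (the service names, image versions, and password are illustrative assumptions):

```yaml
# docker-compose.yml :- a hypothetical Python app with MySQL and Redis
services:
  app:
    build: .              # the Python application image, built from a local Dockerfile
    depends_on:
      - db
      - cache
  db:
    image: mysql:8.0      # pre-built MySQL image pulled from a public registry
    environment:
      MYSQL_ROOT_PASSWORD: example
  cache:
    image: redis:7        # pre-built Redis image pulled from a public registry
```

A single `docker compose up` then downloads the pre-built images and starts all three services, instead of installing and configuring MySQL and Redis by hand.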

So, Basically a Container is a Virtual Machine?

Well, NO!

To understand why, we first have to know the main components of a machine: the hardware, the OS kernel, and the applications. The OS kernel communicates with the hardware, and the applications run on the OS kernel. A container utilizes the OS kernel of the host machine, i.e. it is platform dependent. A Windows-based Docker image cannot be run on a Linux machine, and vice versa. Containers do not need to run a full operating system in each instance — rather, they share the host's operating system kernel and gain access to hardware through the capabilities of the host operating system. This makes containers smaller, faster, and more portable.

A virtual machine, on the other hand, runs its own OS kernel on top of the OS kernel of the host, i.e. it is essentially platform independent. Each virtual machine contains a full operating system, known as a “guest” OS, along with applications and their associated libraries. There is no kernel dependency between the VM and the host operating system, so Linux VMs can run on Windows machines, and vice versa. Each VM has direct or virtualized access to CPU, memory, storage, and networking resources.

Last, but not least, Why Should I Choose a Container?

Well, there are a few reasons why containers can be useful :-

  1. Portability :- Containers can run anywhere, as long as the container engine supports the underlying operating system. Containers can run in virtual machines, on bare-metal servers, or locally on a developer’s laptop. They can easily be moved between on-premises machines and the public cloud, and continue to work consistently across all these environments.
  2. Resource Efficiency :- Containers do not require a separate operating system and therefore use fewer resources.
  3. Isolation & Resource sharing :- Multiple containers can run on the same server while remaining completely isolated from each other. When a container crashes, or an application within it fails, other containers running the same application continue to run as usual.

Conclusions

Containers are not your average transportation medium. Wait a second! Actually, YES! They are, for the applications.
