Every developer comes across the term containers once DevOps and microservices are introduced to them. Let us demystify Docker and containers.

To understand how containers work, we need to start with a brief history.

History

Did Docker introduce containers first? The answer is no. Then who did? Below are the primitives Docker is built on.

Namespaces (2002)

A namespace creates its own isolated instance of a global resource. Only members of the namespace can see changes to it; other processes cannot. One use of namespaces is to implement containers.

This works much like chroot in the Unix operating system: a process in the modified environment cannot create files outside its designated directory.

CGroups (2007)

Cgroups is short for control groups. They isolate resource usage: they allow the machine's resources to be shared between groups of processes and limit how much each group can access.

LXC (2008)

LXC (Linux Containers) is an operating-system-level virtualization method for running multiple isolated Linux systems (containers) on a control host using a single Linux kernel.

There are 7 namespaces available:

  1. Mount for filesystem mount points
  2. UTS for hostname and domain name
  3. IPC for interprocess communication resources
  4. PID for the PID number space
  5. Network for isolating network interfaces
  6. User for isolating the UID/GID number spaces
  7. Cgroup for isolating the cgroup root directory

Containers mostly use the above namespaces to provide isolation.

E.g.:

unshare is a command-line tool for creating namespaces.

➜ unshare -h

Usage:

 unshare [options] [<program> [<argument>...]]

Run a program with some namespaces unshared from the parent.

Options:

 -m, --mount[=<file>]   unshare mounts namespace

 -u, --uts[=<file>]     unshare UTS namespace (hostname etc)

 -i, --ipc[=<file>]     unshare System V IPC namespace

 -n, --net[=<file>]     unshare network namespace

 -p, --pid[=<file>]     unshare pid namespace

 -U, --user[=<file>]    unshare user namespace

 -C, --cgroup[=<file>]  unshare cgroup namespace

Enough theory; let's do some dirty work. We start with chroot to create the root directory.

Chroot:

chroot creates a jail environment so the container cannot access the host system's files.

We need a Linux filesystem to create the jail environment. The debootstrap tool is used to install a Debian base system into a subdirectory.

I used Debian.

Note: all the commands below need to run as the root user.

Step 1: Install debootstrap 

sudo apt-get install binutils debootstrap

Step 2: Create a new folder to play with chroot

I created a folder named Containers in my home directory.

cd $HOME
mkdir Containers

Step 3: Get the Linux filesystem by downloading the base system

sudo debootstrap --arch i386 stretch $HOME/Containers http://deb.debian.org/debian

Step 4: Go to the Containers directory and you can see the Linux filesystem

cd Containers

Step 5: Run the command below to make the Containers directory the root filesystem

sudo chroot $HOME/Containers /bin/bash

You will see root@hostname: in your shell prompt.

Now we have isolated folder access, but the current user is root, so it can still kill processes that do not belong to this environment.

E.g.:

Open a new terminal and run:

top

Then, inside the chroot, run the command below to mount procfs, which gives access to the processes on the system:

root@disconnect:/# mount -t proc proc /proc

Now run the below command 

root@disconnect:/# pkill top

Alas, it kills the other user's process. How can we prevent this? Here come namespaces.

Exit the chroot and create a new chroot with namespaces.

root@disconnect:/# exit

Now run the below command

sudo unshare -p -f --mount-proc=$HOME/Containers/proc chroot $HOME/Containers /bin/bash

Check the processes now by running:

ps aux

Now it displays only two processes: the bash shell running as root (PID 1) and the ps command itself.
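The output should look roughly like this (PIDs and resource columns will vary):

USER   PID  ...  COMMAND
root     1  ...  /bin/bash
root     2  ...  ps aux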

These are some of the internal parts of a container. Many other pieces are available, like cgroups to limit memory for a container, and security mechanisms like SELinux, seccomp, and capabilities.
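As a rough sketch of what cgroups give us, the commands below create a memory-limited group on a system with the cgroup v1 memory controller mounted at /sys/fs/cgroup/memory (paths differ on cgroup v2; the group name demo is arbitrary):

# Create a cgroup named demo under the memory controller
sudo mkdir /sys/fs/cgroup/memory/demo
# Cap the group at 100 MB of memory
echo 104857600 | sudo tee /sys/fs/cgroup/memory/demo/memory.limit_in_bytes
# Move the current shell into the group; its children inherit the cap
echo $$ | sudo tee /sys/fs/cgroup/memory/demo/cgroup.procs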

Docker

Instead of reinventing the wheel, Docker provides all of this and makes our lives easy.

Docker leveraged existing computing concepts around containers, specifically in the Linux world.

How does this work?

Containers provide resource isolation like virtual machines, but by virtualizing the operating system instead of the hardware. With Docker, we can manage our infrastructure in the same way we manage our applications.

Docker provides the ability to package and run an application in a loosely isolated environment called a container. 

Docker Engine:

Docker Engine is a client-server application with three components:

a server known as the daemon process, a REST API that is the interface for talking to the daemon, and a command-line interface client.
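A quick way to see both halves is docker version, which prints a Client: block (the CLI) and a Server: block (the daemon), assuming the daemon is running:

# Shows the client (CLI) and server (daemon) components and their API versions
docker version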

Benefits:

Fast and reliable for continuous integration and continuous deployment workflows.

Running more workloads on the same hardware.

Architecture:

Daemon

The daemon listens for Docker API requests and manages Docker objects such as images and containers. It can also communicate with other daemons to manage Docker services.

Client

The Docker client, the docker command-line interface, is the primary way to interact with Docker.

Registries

Registries store Docker images. A registry is sort of a central hub for Docker images; docker pull and docker push are used to pull images from and push images to these registries.
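As a sketch, assuming a Docker Hub account named yourname (a placeholder):

# Download an image from the default registry (Docker Hub)
docker pull ubuntu:18.04
# Retag it under your own account and upload it
docker tag ubuntu:18.04 yourname/ubuntu:18.04
docker push yourname/ubuntu:18.04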

Objects

While working with Docker we deal with the objects below: images, containers, networking, and storage.

Images

An image is a binary template used to build containers. An image is usually based on another image, with some additional customization.

E.g. an Ubuntu image can be installed with Apache or NGINX, depending on your application's needs.
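As a minimal sketch of that example (not part of this project's setup), a Dockerfile customizing the ubuntu parent image with NGINX might look like:

FROM ubuntu:18.04
RUN apt-get update && apt-get install -y nginx
CMD ["nginx", "-g", "daemon off;"]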

Containers:

A container is a runnable instance of an image. It can be controlled using the Docker API or CLI; we can connect it to networks, attach storage to it, or even create a new image based on its current state.
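A quick lifecycle sketch using the public nginx image (the name web is arbitrary):

# Run NGINX in the background, mapping host port 8080 to container port 80
docker run -d --name web -p 8080:80 nginx
# List running containers, then stop and remove ours
docker ps
docker stop web
docker rm web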

Networking:

Docker provides various networking options for application developers; the bridge and overlay drivers are given by default.

https://www.aquasec.com/wiki/display/containers/Docker+Networking+101
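As a small sketch of the bridge driver, containers on the same user-defined bridge network can reach each other by name (mynet, one, and two are arbitrary names):

# Create a user-defined bridge network and attach two containers to it
docker network create mynet
docker run -d --name one --network mynet alpine sleep 3600
docker run -d --name two --network mynet alpine sleep 3600
# Docker's embedded DNS resolves the container name "one" inside "two"
docker exec two ping -c 1 one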

Installing Docker and playing with its tools

The Docker introduction is over; let's get some hands-on experience.

Step - 1: Install Docker

https://docs.docker.com/get-started/

Now check the Docker version by running the below command in a terminal:

docker --version

Now run an image to verify the installation. If you get a permission issue on docker.sock, change the permissions of the file using the below command:

sudo chmod 666 /var/run/docker.sock

Then run the below command to verify the Docker installation:

docker run hello-world

Step - 2: Creating containers for Django with PostgreSQL

Dockerfile → Docker builds an image by reading the instructions in a Dockerfile.

Docker-Compose → allows us to define and run multiple containers together, e.g. Django and PostgreSQL.

Each container should have only one concern. Decoupling applications into multiple containers makes it easier to scale horizontally and reuse containers.

For the Django application I need Python, Django, and PostgreSQL, so as the base image I'm going to use Python 3 (the python:3 image). The remaining dependencies I'll keep in docker-compose as services, like the application server and the db.

Check out the Dockerfile best practices for more information:

https://docs.docker.com/develop/develop-images/dockerfile_best-practices/

Dockerfile:

We can create our own parent image, or build on the base image scratch. For a parent image we need to use the debootstrap tool; for a base image we pull scratch from Docker Hub, a minimal empty image typically used to hold a single binary.
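A minimal sketch of a scratch-based image, assuming a statically linked binary named hello sits in the build context:

FROM scratch
COPY hello /
CMD ["/hello"]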

Alternatively, we can pull an existing image like python and start building our services from it. That is what I'll do here: I'll use the python image to create the image for Django.

Step - 3: Create a Dockerfile in the project directory. The file name is Dockerfile.

Add the below lines to the file. It builds on the python:3 parent image.

FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/

Each instruction creates one layer:

  • FROM creates a layer from the python:3 Docker image.
  • COPY adds files from your Docker client’s current directory.
  • RUN executes a command in a new layer; here it installs the Python requirements with pip.
  • CMD specifies what command to run within the container (here we'll supply the run command via docker-compose instead).
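We could already build this image by hand before wiring it into docker-compose (the tag myapp is just illustrative):

# Build an image from the Dockerfile in the current directory and tag it
docker build -t myapp .
# Inspect the layers the build produced
docker history myapp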

Step - 4: Create the requirements.txt file

Add the requirements for the project:

Django>=2.0,<3.0
psycopg2>=2.7,<3.0

Step - 5: Create the docker-compose.yml file

Here we define the services for the project, like the web server and the db service that pulls the postgres image.

So add the below lines inside it.

version: '3'
services:
  db:
    image: postgres
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db

Step - 6: Install docker-compose

We need to install docker-compose on our machine to build and run the images. There are multiple ways to install docker-compose; I chose pip.

sudo pip install docker-compose

Step - 7: Run the below command to create a new Django project

sudo docker-compose run web django-admin startproject dockertest .

Change the ownership of the generated files (the container created them as root):

sudo chown -R $USER:$USER .

Step - 8: Start the Django server

docker-compose up

Note: change the database settings in dockertest/settings.py to point at the db service.
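As a sketch of those settings, assuming the postgres image's defaults and the service name db from docker-compose.yml (newer postgres images also require a POSTGRES_PASSWORD environment variable on the db service):

# dockertest/settings.py (sketch)
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'postgres',
        'USER': 'postgres',
        'HOST': 'db',  # the db service from docker-compose.yml
        'PORT': 5432,
    }
}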

The new Django project with Docker is ready. Stuck? Use the GitHub repo for reference.

GitHub link: https://github.com/armsarun/djangodockersample

References

https://docs.docker.com/engine/docker-overview/

https://www.aquasec.com/wiki/display/containers/Docker+Architecture

https://www.mongodb.com/containers-and-orchestration-explained

Docker Internals

https://ericchiang.github.io/post/containers-from-scratch/

https://medium.com/@nagarwal/understanding-the-docker-internals-7ccb052ce9fe

Docker-compose Vs DockerFile

http://v0cdocker.blogspot.com/2017/11/docker-compose-vs-dockerfile.html

https://stackoverflow.com/questions/29480099/docker-compose-vs-dockerfile-which-is-better

Namespaces

https://medium.com/@teddyking/linux-namespaces-850489d3ccf

https://www.toptal.com/linux/separation-anxiety-isolating-your-system-with-linux-namespaces

https://jvns.ca/blog/2016/10/10/what-even-is-a-container/

https://dev.to/nicolasmesa/container-creation-using-namespaces-and-bash-6g

Why Kubernetes?

https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/