Docker Tutorial | What is Docker?


Introduction to Docker

If you work in the software development industry, you have probably used, or at least heard of, Docker. If not, that's fine; we all begin somewhere. In this Docker tutorial, I will guide you through all the concepts you need to understand Docker.

Alright, so let’s start.

What is Docker?

Here is what you need to understand: Docker is an open-source tool that helps you develop, build, deploy, and run software in isolation. How does it do this, you ask? In short, by containerizing the complete software.

To understand this better, I first need to tell you why Docker is used and what kind of problem it helps us solve in software development. This will give you a better idea of Docker than just reading its definition.

Issues we faced before Docker & Containers were a thing

Right, so software development, as we all know, is the process of developing software of any type – web development, app development, desktop programs, embedded software, etc. Pretty self-explanatory, but it's always the details that people miss out on.

Let’s look at how it is developed in a standard development pipeline:

  • First, the developer writes the code. This code can be for any type of software; for the sake of comprehension, let's say the code he is writing is for a website.
  • Once he has written the code, he sends it for building. The building process is when a piece of code is made into an executable: the code is taken together with all of its dependencies (i.e. libraries/functions) and turned into a single executable file.
  • Once the code has been built, it is sent to the tester, who executes the code, runs all of his tests, and checks the code for acceptability, along with all the other things a tester does. Here our scenario branches into two possibilities.
  • If the code executes perfectly, it goes ahead for deployment (i.e. becomes the website). But if the code has issues, it cannot be sent for deployment.
  • In the second case, the tester lets the developer know about all of the errors, bugs, and issues with the code; the developer then fixes the code and the cycle repeats.

It’s a very simplified version of how software is developed but it gives you a basic idea.

Right, so, now let’s look at one of the issues plaguing this development pipeline.

The developer says, "The code runs fine on my computer," while the tester says, "This code does not run on my system."

What can be the problem here? To understand this let’s look at their environments.

You can see that their environments are almost the same, except for different IDEs and different versions of Pytest. Now, you would expect them to use the same environment, right? After all, they work in the same company on the same project. The thing is, this isn't always the case. Even people working in the same company, and even on the same project, can have different working environments. This may be because the requirements for running their code are ever-changing, and keeping up with them is tough.

In our example above, the developer is using a newer version of Pytest. His code runs fine with v5.4.3, but when the tester gets it, he executes it with Pytest 5.3.0, an older version. So the error most probably occurred due to this inconsistency in the Pytest version being used.

To resolve this issue, we will try to use something called containers. Containers are a technology that wraps the code together with all of its requirements, like dependencies, operating system, compiler, interpreter, etc. They create a layer of isolation between the code and the host system. You can think of containers as packaging boxes that we can carry from one place to another without disturbing their contents.
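To make this concrete, here is a minimal, illustrative sketch of how the developer could pin his environment inside a container. The base image and file layout here are assumptions for this example only (Dockerfiles are covered in detail later in this tutorial):

# Dockerfile (illustrative sketch)
FROM python:3.8
WORKDIR /app
COPY . /app
# pin the exact Pytest version the developer tested with
RUN pip install pytest==5.4.3
CMD ["pytest"]

Now the tester runs the very same image, with the very same Pytest version, regardless of what is installed on his own machine.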

So, let’s see what happens when we introduce these containers.


We can see here that the compatibility issue is not there anymore; the code runs fine on both the developer's and the tester's systems, because they aren't relying on their own systems to run the code – everything necessary to run it is wrapped up with it inside the container.

Instead of giving the code to the tester, the developer gives him a container.

What is a Container?

So once again, a container is a piece of software that wraps up the code, its dependencies, and the environment required to run the code in a single package. Containers are used for the development, deployment, testing, and management of software.

To get a better understanding of containers, let's study them in comparison to virtual machines (VMs). I'm sure you already know what a VM is.

Containers vs VM

I’ll be using these criteria to compare a Container and a VM:

  • Operating Systems
  • Architecture
  • Isolation
  • Efficiency
  • Portability
  • Scalability
  • Deployment

Operating system:

  • Containers contain only the bare minimum parts of the operating system required to run the software. Updates are easy and simple to do.
  • VMs contain a complete operating system of the kind normally used on general-purpose systems. Updates are time-consuming and tough.

Architecture:

  • Containers share the host system's kernel and acquire resources through it.
  • VMs are completely isolated from the host system and acquire resources through something called a hypervisor.

Isolation:

  • A container's isolation isn't as complete as a VM's, but it is adequate.
  • A VM provides complete isolation from the host system and is also more secure.

Efficiency:

  • Containers are far more efficient, as they only use the most necessary parts of the operating system. They act like any other software on the host system.
  • VMs are less efficient, as they have to manage a full-blown guest operating system and access host resources through a hypervisor.

Portability:

  • Containers are self-contained environments that can easily be used on different Operating systems.
  • VMs aren’t that easily ported with the same settings from one operating system to another.

Scalability:

  • Containers are very easy to scale; they can be added and removed quickly based on requirements because they are so lightweight.
  • VMs aren't very easily scalable as they are heavy in nature.

Deployment:

  • Containers can be deployed easily using the Docker CLI or by making use of cloud services such as AWS or Azure.
  • VMs can be deployed using PowerShell or a Virtual Machine Manager (VMM), or using cloud services such as AWS or Azure.

Why do we need Containers?

Now that we understand what containers are, let’s see why we need containers.

1. It allows us to maintain a consistent development environment. I have already talked about this when we were discussing the issues we faced before containers were a thing.

2. It allows us to deploy software as microservices. I will get into what microservices are in another blog. For now, understand that software these days is often deployed not as one single unit but as a set of smaller services; this is known as microservices. Docker helps us launch software in multiple containers as microservices.

Again, what is Docker?

With that entire context, this definition should make more sense: Docker is an open-source tool. It is a tool that helps in developing, building, deploying and executing software in isolation.

It is developed and maintained by Docker Inc. which first introduced the product in 2013. It also has a very large community that contributes to Docker and discusses new ideas.

There are two versions of the Docker tool:

  • Docker EE (Enterprise Edition)
  • Docker CE (Community Edition) – this is the one you need

Docker Installation & Setup

We can download and work with Docker on any of these platforms.

  • Linux
  • Windows
  • Mac

Linux is one of the most widely used platforms for Docker, so we will go ahead with that one. We will specifically be working with Ubuntu (as a lot of you may already have it).

If you don't have the OS on your system, you can use Ubuntu on a VM, or if you have an AWS account, launch an Ubuntu instance there.

I have launched an Ubuntu instance here. So our first step is to go ahead and type in the following command to update the Ubuntu package repositories:

$ sudo apt-get update

The following command installs Docker on Ubuntu:

$ sudo apt install docker.io -y

So this is the easy method that does not require a lot of effort. If you are here just to learn Docker, I would recommend this method. But if you want to learn how to properly install Docker, check out this link: https://docs.docker.com/engine/install/ubuntu/

Use the command below to make sure you have installed Docker properly; if it works, it will list the version of Docker that is installed:

$ sudo docker --version

We will learn all of the other commands after we get an understanding of the Docker environment.


Docker Environment

So the Docker environment is basically all of the things that make up Docker. They are:

  • Docker Engine
  • Docker Objects
  • Docker Registry
  • Docker Compose
  • Docker Swarm

Docker Engine:

The Docker Engine is, as the name suggests, the technology that allows for the creation and management of all Docker processes. It has three major parts:

  • Docker CLI (Command Line Interface) – This is what we use to give commands to Docker. E.g. docker pull or docker run.
  • Docker API – This is what communicates the requests the users make to the Docker daemon.
  • Docker Daemon – This is what actually does all the work, i.e. creating and managing all of the Docker processes and objects.

So, for example, if I write the command $ sudo docker run ubuntu, I am using the Docker CLI. This request is communicated to the Docker daemon through the Docker API. The Docker daemon then processes the request and acts accordingly.
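If you are curious, you can even talk to the Docker API directly over the daemon's Unix socket. This is just an illustrative sketch; the exact output format, and whether you need sudo, depend on your setup:

$ sudo curl --unix-socket /var/run/docker.sock http://localhost/containers/json

This asks the daemon for its list of running containers – the same information the docker ps command shows – only without going through the Docker CLI.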

Docker Objects:

There are many objects in Docker that you can create and make use of. Let's look at them:

  • Docker Images – These are basically blueprints for containers. They contain all of the information required to create a container like the OS, Working directory, Env variables, etc.
  • Docker Containers – We already know about this.
  • Docker Volumes – Containers don't store anything permanently once they are removed; Docker volumes allow for persistent storage of data. They can be easily and safely attached to and detached from different containers, and they are also portable from one system to another. Volumes are like hard drives (see the example after this list).
  • Docker Networks – A Docker network is basically a connection between one or more containers. One of the more powerful things about Docker containers is that they can easily be connected to one another and even to other software; this makes it very easy to isolate and manage containers.
  • Docker Swarm Nodes & Services – We haven’t learned about docker swarm yet, so it will be hard to understand this object, so we will save it for when we learn about docker swarm.
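Here is a quick, hedged sketch of volumes and networks in action; the names myvolume and mynetwork are placeholders chosen for this example:

$ sudo docker volume create myvolume
$ sudo docker run -d -v myvolume:/data ubuntu      # mount the volume at /data inside the container
$ sudo docker network create mynetwork
$ sudo docker run -d --network mynetwork ubuntu    # attach the container to the network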

Docker Registry:

To create containers we first need images, and to create images we need to build text files called Dockerfiles. You can run multiple containers from a single image.

Since images are so important, they need to be stored and distributed. To do this, we need a dedicated storage location, and this is where Docker registries come in. Docker registries are dedicated storage locations for Docker images. From here, images can be distributed easily to wherever they are required.

Docker images can also be versioned inside a Docker registry, just like source code can be versioned.
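For instance, assuming you have built an image called myimage and have a (hypothetical) registry account named myuser, tagging and pushing a versioned image looks roughly like this:

$ sudo docker login
$ sudo docker tag myimage myuser/myimage:1.0
$ sudo docker push myuser/myimage:1.0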

You have many options for a Docker registry. One of the most popular is Docker Hub, which is again maintained by Docker Inc. You can upload your Docker images to it without paying, but they will be public; if you want to make them private, you will have to pay for a premium Docker subscription.

There are some alternatives, but they are rarely entirely free; there is usually a limit, and once you cross that limit you will have to pay. Some alternatives are: Amazon ECR (Elastic Container Registry), JFrog Artifactory, Azure Container Registry, Red Hat Quay, Google Container Registry, Harbor, etc.

You can always host your own registry if you have the infrastructure and resources to do so, and some organisations do this.

Docker Compose:

Docker Compose is a tool within Docker that is used to define and launch multiple containers at the same time. Normally, when you run a container using the docker run command, you can only run one container at a time. So when you need to launch a whole bunch of services together, you first define them in a docker-compose.yml file and then launch them using the docker-compose command.
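Here is a minimal, hedged sketch of what such a file might look like; the service names, images, and port numbers are purely illustrative:

# docker-compose.yml (illustrative sketch)
version: "3"
services:
  web:
    image: httpd:latest        # Apache web server container
    ports:
      - "8080:80"
  db:
    image: mysql:5.7           # database container
    environment:
      MYSQL_ROOT_PASSWORD: example

$ sudo docker-compose up -d    # launch both services together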

It's a very useful tool for testing, development, staging, and production purposes.

Docker Swarm:

Docker Swarm is a slightly more advanced topic. I won't cover it entirely, but I will give you an idea of what it is. A Docker swarm, by definition, is a group of physical or virtual machines that are running Docker and have been configured to join together in a cluster. So when we want to manage a bunch of Docker containers together, we group the machines into a cluster and then manage the containers across it.

Officially, Docker Swarm is an orchestration tool that is used to group, deploy, and update multiple containers. People usually make use of it when they need to deploy an application with multiple containers.

There are two types of nodes on a Docker swarm:

  • Manager Nodes – Used to manage a cluster of other nodes.
  • Worker Nodes – Used to perform all the tasks.
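Just to give you a flavour, here is a rough, hedged sketch of setting up a swarm and running a replicated service; the token, manager IP, and service name are placeholders:

$ sudo docker swarm init                                        # run on the manager node
$ sudo docker swarm join --token <token> <manager-ip>:2377      # run on each worker node
$ sudo docker service create --replicas 3 --name web nginx      # deploy a service with 3 replicas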

Docker Architecture

Let's explore the architecture of Docker. Since we now know about all of its components, we will understand the architecture much better.

Docker has three main parts: the Docker CLI, which allows us to communicate our requests to Docker; the Docker host, which performs all the processing and creation of objects; and the Docker registry, a dedicated storage place for Docker images. And of course, there is the Docker API, which handles all the communication between them.

Let’s consider three commands here:

  • $ docker build
  • $ docker pull
  • $ docker run

And now let’s study what happens when each of these commands is executed.

$ docker build

This command is used to give the order to build an image. So when we run the docker build command through the Docker CLI, the request is communicated to the Docker daemon, which processes it, i.e. looks at the instructions and then creates an image according to them.

Let's say that the image to be created is based on Ubuntu. We tell Docker to create the image using a command like: $ sudo docker build -t ubuntu . (the trailing dot tells Docker to look for the Dockerfile in the current directory). Once the daemon gets the request, it will start building the image based on the Dockerfile you have written.

$ docker pull

This command is used to give the order to pull an image from the Docker registry. So when we run this command, the request is communicated to the Docker registry through the Docker daemon, and once the image is found it is downloaded and stored on the host system, ready for access.

Let's say we want to pull an Apache web server image for hosting our site. For that we would use the command: $ sudo docker pull httpd (httpd is the name of the official Apache HTTP Server image on Docker Hub). Once the daemon gets the request, it will look for that image in the Docker registry, and if it finds it, it will download the image and store it locally.

$ docker run

This command is used to run any image and create a container out of it. So when we run this command the request will be communicated to the docker daemon which will then select the image mentioned in the request and create a container out of it.

Let’s say we want to create a container based on the ubuntu image we had created earlier.  For this we will use the command:

$ sudo docker run ubuntu

Once the daemon gets this request, it will refer to the image named ubuntu and then create a container out of this image.

So this is in short how Docker functions as a tool to create containers.

Docker Common Commands

There are a few common commands you will need to know to get started. I will list each one and then explain what it does.

The following command lists the version of the Docker tool that is installed on your system and is also a good way to check whether you have Docker installed at all:

$ sudo docker --version

The following command is used to pull images from the Docker registry.

$ sudo docker pull <name of the image>

So this is what it should look like when you want to pull the Ubuntu image from the Docker registry:

$ sudo docker pull ubuntu

Both of the following commands list all of the Docker images stored on the system you run the command on.

$ sudo docker images or $ sudo docker image ls

So if you had only one image (an Ubuntu one), then that is the only entry you should see listed.

The following command will run an existing image and create a container based on that image.

$ sudo docker run <name of the image>

So if you wanted to create an Ubuntu container based on an Ubuntu image it will look something like this:

$ sudo docker run -it -d ubuntu

Now, here you will notice that I included two flags: -it and -d.

  • -it – the interactive flag (with a pseudo-terminal attached); it allows you to interact with the container that was created.
  • -d – the detach flag; it allows the container to run in the background.

Once you have created a running container, you will want to know whether it has exited or is still running. For that, you can use the following command, which lists all the running containers:

$ docker ps

Now, if you want to see both exited containers and running containers, you can go ahead and add the -a flag, like so:

$ sudo docker ps -a

To stop a container you can use this command:

$ sudo docker stop <name of the container>

Like so (assuming, for illustration, a container named my_ubuntu):
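$ sudo docker stop my_ubuntu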

If you want to kill a container then you can use this command:

$ sudo docker kill <name of the container>

If you want to remove any container either stopped or running you can go ahead and use this command:

$ sudo docker rm -f <name of the container>

Note: the stop, kill, and rm commands differ in that stop allows for a slow and graceful shutdown of a container, kill terminates a container immediately, and rm removes a container and is basically used for clean-up purposes.

Dockerfile

So now that we know the most basic Docker commands, we can move on to learning how to create our own image.

To create a new or custom image, you need to write a text file called a Dockerfile. In this file, you mention all of the instructions that let Docker know what to include in the image.

To understand this better let’s look at an example.

Here we are trying to create an image with the latest Ubuntu version as the base, running the commands to update the instance and install the Apache web server on it. Finally, the working directory is set to /var/www/html.
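A minimal sketch of a Dockerfile matching that description might look like this (the exact package names are an assumption for illustration):

FROM ubuntu:latest
RUN apt-get update
RUN apt-get install -y apache2
WORKDIR /var/www/html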

These are called instructions. There are many different types of instructions to use, such as:

FROM

Syntax: FROM <base image>

It is an instruction that informs Docker about the base image that is to be used for the container. So basically, if you have an image in mind whose properties you wish to inherit, you can mention it using this instruction. It is always the first instruction in a Dockerfile, but you can use it multiple times.

ADD

Syntax: ADD <source> <destination>

It is used to add new sources, from your local directory or from a URL, to the filesystem of the image (that will become the container) at the designated location.

You can include multiple items as the source and can even make use of wildcards; if the destination you have mentioned does not exist, it will be created.

COPY

Syntax: COPY <source> <destination>

It is used to copy new sources, from your local directory only, to the filesystem of the image (that will become the container) at the designated location.

You can include multiple items as the source and can even make use of wildcards; if the destination you have mentioned does not exist, it will be created.

It's similar to ADD, the difference being that ADD can also add sources from a URL to the filesystem, whereas COPY can't.
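As an illustrative sketch (the paths and URL here are hypothetical):

COPY ./src/ /usr/src/app/                            # copy files from the local build context
ADD https://example.com/archive.tar.gz /usr/src/     # ADD can also fetch from a URL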

RUN

Syntax: RUN <command>

This instruction is used to run specific commands during the build of the image. For example, if you want to update the Ubuntu base image, you can use the instruction like this:

RUN apt-get update

WORKDIR

Syntax: WORKDIR <path to directory>

This instruction is responsible for setting the working directory, so that subsequent instructions and shell commands run in that specific directory during the build of the image.

CMD

Syntax: CMD <command>

This instruction tells the container what command to run when it gets started.

VOLUME:

Syntax: VOLUME <path>

This instruction creates a mount point for a volume at the specified path.

EXPOSE

Syntax: EXPOSE <ports>

This instruction tells Docker which port the container should be exposed on. On its own this only applies to the internal container network; the host will not be able to access the container on this port unless the port is published (for example with the -p flag of docker run).

ENTRYPOINT

Syntax: ENTRYPOINT <command> <parameter 1> <parameter 2>

This instruction allows you to set the command, along with its parameters, that runs when your container starts.

The difference between CMD and ENTRYPOINT is that the ENTRYPOINT command is not overridden at runtime: arguments passed to docker run (or supplied by a CMD instruction) are appended to the ENTRYPOINT as its parameters, and anything given on the command line replaces what CMD specified. See the short sketch below.
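In this example the image name my-echo is hypothetical:

ENTRYPOINT ["echo"]
CMD ["hello"]

$ sudo docker run my-echo            # prints "hello" (CMD supplies the default argument)
$ sudo docker run my-echo world      # prints "world" (the runtime argument replaces CMD, not ENTRYPOINT)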

LABEL

Syntax: LABEL <key>=<value>

This instruction is used to add metadata to your image. You need to make use of quotes and backslashes if you want to include spaces. If a label with the same key already exists, its value is replaced with the new one. You can use the docker inspect command to see an image's or container's labels.

Once you have created a Dockerfile, you can build it into an image using the docker build command, like so:

$ sudo docker build -t <name of the image> .

Since a new layer is created each time a new instruction is written, it is important to write the Dockerfile in as optimised a way as possible, with the fewest instructions possible.
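A common (and here purely illustrative) optimisation is to chain related commands into a single RUN instruction so they produce one layer instead of several:

# two layers:
RUN apt-get update
RUN apt-get install -y apache2

# one layer:
RUN apt-get update && apt-get install -y apache2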

Conclusion

So, we learned a lot about Docker today: it is a tool we use to maintain consistency across the software development pipeline, and it also helps us manage and deploy software as microservices. It has many different components that help make it the amazing tool it is. I would recommend that anyone reading this keep learning more and more about Docker, as what I have covered here is just a drop in the ocean – especially if you want to get into DevOps.

