Day 21 Task: Important Docker Interview Questions

Docker Interview

Docker is a popular topic in DevOps engineer interviews, especially for freshers. Working through these questions is a good way to strengthen your Docker fundamentals.

1)What is the difference between an Image, Container and Engine?

Image: An image is an executable package (bundled with application code, dependencies, software packages, etc.) used to create containers. Docker images can be deployed to any Docker environment, where containers can be spun up from them to run the application. Images contain the code or binary, runtimes, dependencies, and other filesystem objects required to run an application. Docker images are immutable, so you cannot change them once they are created.

Container: Containers consist of an application and all of its dependencies, including the code, a runtime, libraries, environment variables, and config files. They share the kernel and system resources with other containers and run as isolated processes on the host operating system. Docker containers remove the infrastructure dependency when deploying and running applications; they are simply the runtime instances of Docker images.

Engine: Docker Engine is the underlying client-server technology that builds and runs containers using Docker's components and services. The engine is the software responsible for creating and managing containers; it handles the low-level details of container creation, networking, storage, and other operations.

2)What is the difference between the Docker command COPY vs ADD?

The COPY and ADD commands in Docker are used to add files and directories to a container image.

COPY is a Dockerfile instruction that copies local files and directories from the build context to a destination in the container filesystem. It is the more basic of the two instructions and supports only local paths, so you cannot use it with URLs to copy external files into your container.

ADD also copies local files and directories into the image, but it has additional features: it automatically extracts local tar archives into the destination, and it can copy files from remote URLs.

It is recommended to use COPY in most cases, as it is more explicit and predictable; ADD's extra features can introduce unexpected behavior and security vulnerabilities, for example by fetching files over the network at build time.
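For illustration, a minimal Dockerfile sketch of both instructions (the file names and URL are hypothetical):

# COPY: plain copy of local files from the build context
COPY ./app/ /usr/src/app/

# ADD: automatically extracts a local tar archive into the image
ADD vendor.tar.gz /usr/src/app/vendor/

# ADD can also fetch from a remote URL (generally discouraged)
ADD https://example.com/config.json /usr/src/app/config.json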

3)What is the difference between the Docker command CMD vs RUN?

RUN is an image build step: the state of the container after a RUN command is committed as a new layer of the image. A Dockerfile can have many RUN steps that layer on top of one another to build the image. RUN is typically used to install software, make configuration changes, and perform other setup tasks while the image is being built.

The CMD instruction sets the default command that will be executed when a container is started from the image. It can be overridden on the command line when starting a container. Unlike RUN, CMD does not execute anything at build time or create a new filesystem layer; if a Dockerfile contains multiple CMD instructions, only the last one takes effect.

In short: RUN executes commands and commits the results during the image build, while CMD sets the command that runs when the container starts up.
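A minimal Dockerfile sketch of the difference (the base image and package choices are just examples):

FROM ubuntu:22.04

# RUN executes at build time; its result is committed as an image layer
RUN apt-get update && apt-get install -y curl

# CMD executes at container start and can be overridden,
# e.g. `docker run myimage bash` replaces it
CMD ["curl", "--version"]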

4)How will you reduce the size of a Docker image?

There are several ways to reduce the size of a Docker image:

1. Use a Smaller Base Image (Alpine)

Alpine Linux is a lightweight Linux distribution that is popular for creating small Docker images. It is smaller than most other Linux distributions and has a smaller attack surface.

2. Use a .dockerignore file

A .dockerignore file allows you to specify files and directories that should be excluded from the build context sent to the Docker daemon. This helps to exclude unnecessary files from the build context, which in turn reduces the size of the image.

3. Utilize the Multi-Stage Builds Feature in Docker

Multi-stage builds let you divide a Dockerfile into multiple stages by using multiple FROM statements. You can use one stage as a builder image and then copy only the necessary artifacts into a smaller final image (see the sketch after this list).

4. Avoid Adding Unnecessary Layers

A Docker image takes up more space with every layer you add to it, and each RUN instruction in a Dockerfile adds a new layer. Chain related commands into a single RUN instruction with &&, and clean up unnecessary files and dependencies inside that same instruction (apt-get autoremove, apt-get clean, rm) so the deleted files never end up committed in a layer.

5. Use Squash

Squashing combines all the layers of an image into a single layer, which can significantly reduce the size of an image (for example, via Docker's experimental --squash build flag or third-party tools).

6. Use official images

Official images are maintained by the upstream software maintainers or by Docker itself. They are usually smaller and more secure than images built by third parties.

7. Keep Application Data Elsewhere

Storing application data in the image unnecessarily increases its size. It is highly recommended to use the volume feature of the container runtime to keep data separate from the image.
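Tying together points 1 and 3, here is a minimal multi-stage build sketch for a hypothetical Go application (the paths and binary name are assumptions):

# Stage 1: build in a full-featured image
FROM golang:1.21 AS builder
WORKDIR /src
COPY . .
RUN go build -o /out/app .

# Stage 2: ship only the compiled binary on a small Alpine base
FROM alpine:3.19
COPY --from=builder /out/app /usr/local/bin/app
ENTRYPOINT ["/usr/local/bin/app"]

The final image contains only the second stage, so the Go toolchain and source tree never reach production.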

5)Why and when to use Docker?

Docker is an open-source containerization platform that enables developers to build, deploy, run, update, and manage containers: standardized, executable components that combine application source code with the operating system (OS) libraries and dependencies required to run that code in any environment.

1. Consistent & Isolated Environment

Docker containers provide a consistent environment for running applications, regardless of the host system, which makes it easy to move applications between development, testing, and production. Containers also isolate applications from one another and from the host system, providing an additional layer of security: each container can access all the resources it needs without disturbing or depending on other containers.

2. Rapid Application Deployment

Docker containers carry only the minimal runtime requirements of the application, which allows them to be deployed quickly. You are not required to set up a new environment each time; all you need to do is pull the Docker image and run it wherever it is needed.

3. Portability

Containers are lightweight and portable, making it easy to move them between different systems and environments. Applications packaged as Docker containers can run on any platform that runs Docker.

4. Scalability

With Docker, it is easy to scale applications up and down as needed, by running multiple instances of a container.

5. Ease of use

Docker provides a simple and consistent interface for creating, deploying, and running containers.

Docker is used in various scenarios, such as:

-Developing, testing and deploying microservices.

-Automating the deployment of complex applications.

-Simplifying the configuration management of systems.

-Building and testing software in a consistent and reliable environment.

-Managing and scaling applications in a cloud-based infrastructure.

-Running applications in a lightweight, portable and consistent environment.

6)Explain the Docker components and how they interact with each other.

Docker uses a client-server architecture. The Docker client talks to the Docker daemon, which does the heavy lifting of building, running, and distributing your Docker containers. The Docker client and daemon can run on the same system, or you can connect a Docker client to a remote Docker daemon.

Docker consists of several components that work together to create, deploy, and manage containers. These components include:

  1. The Docker daemon: This is the background process that manages containers, images, networks, and storage on a host. It listens for API requests and performs the necessary operations.

  2. The Docker client: This is the command-line interface (CLI) that allows users to interact with the Docker daemon. It communicates with the daemon via REST API calls to perform various operations, such as creating and managing containers.

  3. The Docker registry: This is a service that stores and distributes Docker images. It is used to share images with others and to retrieve images for use on a host. The most popular registry is Docker Hub, which is maintained by Docker, Inc.

  4. The Docker engine: Docker Engine is the layer between the host and the containers, providing the low-level functionality for creating, starting, and stopping them. It bundles the daemon, a REST API, and the CLI client into one client-server application.

  5. The Docker image: An image is a read-only, pre-configured template that contains all the necessary dependencies and settings to run a specific application or service. It is used to create and run containers.

The Docker client and daemon interact with each other: the client sends commands to the daemon, which performs the requested operations. The client and daemon can run on the same host, or the daemon can run on a remote host that the client reaches over the Docker API.

Users work with the Docker client to create, start, and stop containers, and to pull images from and push images to a registry. The daemon uses images to create and run containers, while the registry stores and distributes those images.
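To make the client-daemon split concrete, the same CLI can target a local or a remote daemon (the hostname below is hypothetical):

# Talk to the local daemon (the default)
docker ps

# Point the client at a remote daemon instead
docker -H tcp://build-server.example.com:2375 ps

# The DOCKER_HOST environment variable achieves the same thing
export DOCKER_HOST=tcp://build-server.example.com:2375
docker ps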

7)Explain the terminology: Docker Compose, Docker File, Docker Image, Docker Container?

Docker Compose:

Docker Compose is a tool for defining and running multi-container Docker applications. With Compose, you configure all of your application's services in a single YAML file, and you can then start and stop all of those containers at once with a single command. Compose also has commands for managing the whole lifecycle of your application.
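A minimal docker-compose.yml sketch for a hypothetical web app with a database (the service names and images are assumptions):

version: "3.8"
services:
  web:
    build: .            # built from the Dockerfile in this directory
    ports:
      - "8080:80"
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example

docker compose up -d starts both services together (docker-compose up -d with the older standalone binary), and docker compose down stops and removes them.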

Docker File:

Docker can build images automatically by reading the instructions in a Dockerfile. A Dockerfile is simply a text document that contains all the commands a user could call on the command line to assemble an image, using instructions such as FROM, RUN, COPY, EXPOSE, ENV, and CMD.
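A small, self-contained Dockerfile sketch that uses each of those instructions (the application files requirements.txt and server.py are hypothetical):

FROM python:3.12-slim
ENV APP_HOME=/app
WORKDIR $APP_HOME
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["python", "server.py"]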

Docker Image:

An image is a read-only template that contains all the necessary dependencies and settings to run a specific application or service: an executable package bundled with application code, dependencies, software packages, and so on. It is used to create and run containers.

Docker Container:

A container is a lightweight, stand-alone, executable package that includes everything needed to run a piece of software, including the code, a runtime, libraries, environment variables, and config files. Containers run within an Operating System, and share the host kernel. Once an image is run, it becomes a container. The container is a running instance of an image, and it can be started, stopped, moved, or deleted.
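A sketch of how image and container relate on the command line (the image and container names are hypothetical):

# Build an image from the Dockerfile in the current directory
docker build -t myapp:1.0 .

# Run a container from that image
docker run -d --name myapp-container myapp:1.0

# Stop, restart, and finally remove the container
docker stop myapp-container
docker start myapp-container
docker rm -f myapp-container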

8)In what real scenarios have you used Docker?

Docker use cases for real scenarios:

1. Simplifying Configuration: The primary use case Docker advocates is simplifying configuration. One of the big advantages of VMs is the ability to run any platform with its own config on top of your infrastructure. The same Docker configuration can also be used in a variety of environments.

2. Code Pipeline Management: Docker provides a consistent environment for the application from development through production, easing the code development and deployment pipeline. Code passes through many different environments on its way from a developer's machine to production, and Docker keeps each of those environments consistent.

3. Developer Productivity: With Docker, a full development environment can run with a low memory footprint, without adding unnecessary overhead on top of what the host already uses. A dozen services can run side by side on modest hardware.

4. App infrastructure isolation: When a software package built on one machine is installed on another, teams often run into problems with specific versions, libraries, dependencies, and so on.

With Docker, you can run multiple apps, or the same app, on different machines without versions or other environmental factors disrupting the development process. This is possible because Docker uses the host system's kernel while still running each application in isolation.

5. Consolidation of server requirements: Like virtual machines, Docker lets you consolidate multiple applications onto fewer servers, but with a much smaller memory footprint, and unused memory can be shared across numerous instances. You can use Docker to create, deploy, and monitor a multi-tier app with any number of containers, and the ability to isolate each app and its environment is a major benefit.

6. Multi-tenancy support: With Docker, it is easy and inexpensive to create isolated environments that run multiple instances of the application tiers for each tenant. The speed with which Docker spins up containers also makes it easy to view and manage the containers provisioned on any system.

7. Continuous rapid deployment: With Docker, you can run the app in any server environment, and it provides version-controlled container images. Staging environments can be set up via the Docker engine, which enables continuous-delivery capabilities.

9)Docker vs Hypervisor?

A hypervisor is the software that supports virtual machine creation: it provides a virtual platform on which guest operating systems run, and it manages those virtual machines.

A hypervisor can be a software- or hardware-based solution that creates and runs virtual machines (VMs). Each VM runs its own operating system and has its own resources, such as CPU, memory, and storage. Hypervisors abstract the underlying hardware and provide a layer of isolation between the VMs and the host. Examples include VMware, Hyper-V, and VirtualBox; hypervisors themselves run on Windows, Mac, and Linux.

Docker is an operating-system-level virtualization platform in which software is distributed in a container together with its libraries and configuration files.

Docker, on the other hand, uses a different virtualization approach called containerization. Containers are a lightweight alternative to VMs: they share the host operating system kernel and therefore do not require a separate operating system for each container. Instead, containers use the host's resources and package the application and its dependencies into a single unit. Docker containers are native to Linux; on Windows and macOS, Docker runs them inside a lightweight Linux virtual machine.

Docker containers deliver high performance because they use the host operating system directly, with no additional software layer such as a hypervisor. Containers start up quickly, with very short boot times.

Since each VM runs a separate OS, VMs consume more resources. Virtual machines start more slowly and carry more performance overhead.

With Docker, users can package an application into a container image and then run it across any host environment.

VM images, by contrast, must carry a copy of the OS and its dependencies, which increases image size and makes sharing them a tedious process.

10)What are the advantages and disadvantages of using docker?

Advantages:

  1. Consistency: Docker provides a consistent environment for running applications, regardless of the host system. This makes it easy to move applications between development, testing, and production environments.

  2. Isolation: Containers isolate applications from one another and from the host system, providing an additional layer of security.

  3. Portability: Containers are lightweight and portable, making it easy to move them between different systems and environments.

  4. Scalability: With Docker, it is easy to scale applications up and down as needed, by running multiple instances of a container.

  5. Ease of use: Docker provides a simple and consistent interface for creating, deploying, and running containers.

  6. Resource Efficiency: Docker utilizes the host resources more efficiently as it doesn't require a full-fledged operating system for each container, thus reducing the footprint of the application.

Disadvantages:

  1. Security: Docker containers share the host kernel and therefore, a vulnerability in the host can compromise all the containers running on that host.

  2. Performance overhead: Docker introduces a small amount of overhead due to the additional abstraction layer.

  3. Complexity: Docker can be complex to use, especially when running multiple containers or deploying large applications.

  4. Graphical applications do not operate well: Docker was created as a solution for deploying server applications that don't need a graphical interface. While there are some creative approaches for running a GUI app inside a container, these solutions are clumsy at best.

  5. Not all applications benefit from containers: in general, applications designed as a collection of discrete microservices gain the most from Docker. Otherwise, Docker's main benefit is simplifying application delivery by providing an easy packaging mechanism.

11)What is a Docker namespace?

A Docker namespace is a Linux kernel feature that the Docker engine uses to isolate resources within a single host. Namespaces divide a single host into multiple isolated environments, each with its own view of resources such as network interfaces, process IDs, and filesystems.

Docker uses these namespaces to isolate the containers running on a single host, which allows multiple containers to run on the same host without interfering with each other. Namespaces also make it possible to run multiple instances of the same service on one host. (Resource limits, such as CPU and memory caps, come from a related kernel feature, control groups or cgroups, rather than from namespaces.)

There are several different types of namespaces in Docker:

1)Process ID (PID) namespace: isolates the process ID space, which means that processes in one namespace cannot see or signal processes in another namespace.

2)Network (net) namespace: isolates network interfaces, IP addresses, and routing tables, which means that containers in one namespace cannot communicate with containers in another namespace using their IP addresses.

3)Inter-process communication (IPC) namespace: isolates inter-process communication resources, such as System V semaphores and message queues, which means that processes in one namespace cannot communicate with processes in another namespace using these resources.

4)Mount (mnt) namespace: isolates the filesystem, which means that processes in one namespace cannot access files mounted in another namespace.

5)User namespace: isolates user and group IDs, which means that processes in one namespace cannot access resources owned by users or groups in another namespace.
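A quick way to see the PID namespace in action, assuming a local Docker installation and the alpine image:

# Inside the container's own PID namespace, the container's main
# process appears as PID 1 and host processes are invisible
docker run --rm alpine ps

# Sharing the host's PID namespace instead exposes all host processes
docker run --rm --pid=host alpine ps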

12)What is a Docker registry?

A Docker registry is a server-side service for storing and distributing Docker images. It is used to share images with others and to retrieve images for use on a host. The most popular registry is Docker Hub, which is maintained by Docker, Inc. A registry can be either public or private: public registries, like Docker Hub, allow anyone to pull images, while private registries are accessible only to a specific group of users. Registries also provide versioning and tagging functionality, which lets users keep track of different versions of an image and switch between them easily. The registry itself is stateless and extremely scalable.
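Typical registry interactions look like this; the last three commands run a private registry locally using the official registry image (the user and image names are illustrative):

# Pull from and push to Docker Hub
docker pull nginx:latest
docker tag nginx:latest myuser/nginx:custom
docker push myuser/nginx:custom

# Run a private registry on localhost:5000 and push to it
docker run -d -p 5000:5000 --name registry registry:2
docker tag nginx:latest localhost:5000/nginx:custom
docker push localhost:5000/nginx:custom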

13)What is an entry point?

ENTRYPOINT is a Dockerfile directive (instruction) used to specify the executable that should run when a container is started from the image. It has two forms: the 'exec' form and the 'shell' form. The ENTRYPOINT instruction sets the command that runs when the container starts; unlike CMD, arguments passed to docker run are appended to it rather than replacing it, although it can be replaced explicitly with the --entrypoint flag.

For example: ENTRYPOINT ["executable", "parameter1", "parameter2"]

a Dockerfile might have the following ENTRYPOINT instruction:

ENTRYPOINT ["/usr/local/bin/test-application"]

14)How to implement CI/CD in Docker?

1)Create a Dockerfile: A Dockerfile is a script that contains instructions for building a Docker image. It is a simple text file that contains commands such as FROM, RUN, COPY, EXPOSE, ENV, etc. These commands are executed by the Docker daemon during the build process to create an image.

2)Create a build pipeline: Set up a build pipeline that automatically builds the image from the Dockerfile whenever there is a change in the source code. This can be done using tools like Jenkins, CircleCI, etc.

3)Automate testing: Set up automated testing for the image, such as unit tests, integration tests, and acceptance tests, to ensure that the image is working as expected.

4)Push the image to a registry: Once the image is built and tested, it can be pushed to a Docker registry, such as Docker Hub, so that it can be easily distributed to other systems.

5)Deploy the image to production: Use a container orchestration tool like Kubernetes, Docker Swarm, or Amazon ECS to deploy the image to a production environment.

6)Monitor and scale: Monitor the deployed image and scale it as needed to handle increased load.
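As a sketch, the build-test-push steps might look like the following shell commands inside a Jenkins or CircleCI job (the registry URL, image name, and test script are hypothetical):

# Build the image, tagged with the commit SHA
docker build -t registry.example.com/myapp:$GIT_COMMIT .

# Run the test suite inside the freshly built image
docker run --rm registry.example.com/myapp:$GIT_COMMIT ./run-tests.sh

# Push to the registry only if the tests passed
docker push registry.example.com/myapp:$GIT_COMMIT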

15)Will data on the container be lost when the docker container exits?

By default, data written to a container's writable layer survives a stop and restart of that same container, but it is lost when the container is removed, and it is never shared with other containers. To persist data independently of the container lifecycle, there are several options (a minimal example follows the list):

  1. Volume mounts

  2. Data Volumes

  3. Volume Plugins

  4. Bind Mounts
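For example, a named volume outlives the containers that use it (the volume and path names are illustrative):

# Create a named volume and write into it from a throwaway container
docker volume create mydata
docker run --rm -v mydata:/data alpine sh -c 'echo hello > /data/greeting.txt'

# The file is still there in a brand-new container
docker run --rm -v mydata:/data alpine cat /data/greeting.txt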

16)What is a Docker swarm?

Docker Swarm is Docker's native clustering and scheduling tool, an orchestration manager for Docker applications. It helps end users create a cluster of Docker nodes and deploy services across them.

There are two types of nodes in Docker Swarm:

  1. Manager node: maintains cluster management tasks

  2. Worker node: receives and executes tasks from the manager node

A swarm consists of a manager node, which is responsible for managing the swarm, and worker nodes, which are responsible for running the containers. The manager node handles tasks such as scheduling containers, maintaining the desired state of the swarm, and providing a centralized point of management. The worker nodes run the containers and communicate with the manager node to receive tasks and report their status.
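The basic workflow, sketched with the standard swarm commands (the token, IP, and service name are placeholders):

# On the manager node: initialize the swarm
docker swarm init

# On each worker: join using the token printed by `swarm init`
docker swarm join --token <worker-token> <manager-ip>:2377

# Back on the manager: deploy a replicated service
docker service create --name web --replicas 3 -p 80:80 nginx
docker service ls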

17)What are the docker commands for the following:

1)view running containers

docker ps

2)command to run the container under a specific name

docker run --name <container_name> <docker_image>

3)command to export a docker container

docker export <container_id or name> > <filename>.tar

4)command to import an already existing docker image

docker import <options> file|URL|- <repository>:<tag>

5)commands to delete a container

docker rm <container_name or ID>

6)command to remove all stopped containers, unused networks, build caches, and dangling images?

docker system prune (add the -a flag to also remove all unused images, not just dangling ones)

Hope you like this blog. Please follow for more blogs like this.

Thank you.