Docker: Core Concepts
Docker is an open-source platform that automates the deployment, scaling, and management of applications. It uses containerization technology to package an application and its dependencies into a container, ensuring that the application runs consistently across different environments.
Key Concepts of Docker:
- Containerization: Docker containers encapsulate an application and its dependencies, including libraries, binaries, and configuration files, into a single package. Containers are lightweight and share the host OS kernel, making them more efficient than traditional virtual machines.
- Docker Image: A read-only template used to create containers. It includes the application code, runtime, libraries, and file system required to run the application.
- Docker Container: A runtime instance of a Docker image. Containers are isolated from each other and the host system, providing a consistent environment for the application.
- Docker Engine: The core part of Docker, responsible for building, running, and managing containers.
- Docker Hub: A cloud-based registry service that allows you to find and share container images with your team or the broader community.
Why We Need Docker:
- Consistency Across Environments: Docker ensures that the application behaves the same way in development, testing, and production environments by packaging the application with all its dependencies.
- Isolation: Containers run in isolated environments, preventing conflicts between applications and improving security.
- Efficiency: Containers are lightweight and use fewer resources than virtual machines because they share the host OS kernel.
- Scalability: Docker makes it easier to scale applications horizontally by allowing you to quickly deploy multiple container instances.
- Portability: Docker containers can run on any system that supports Docker, making it easier to move applications between different environments and cloud providers.
- Simplified DevOps: Docker integrates well with CI/CD pipelines, allowing for faster and more reliable software delivery.
- Version Control: Docker images can be versioned, making it easy to roll back to a previous version of an application if needed.
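As a quick illustration of the version-control point, images can be tagged per release, so rolling back is just a matter of running the previous tag (the image name and tags here are hypothetical):
docker build -t myapp:1.1 .
docker run -d myapp:1.1
docker run -d myapp:1.0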
Overall, Docker streamlines the development and deployment process, improves consistency and efficiency, and enhances scalability and portability.
Docker Architecture
The diagram represents the architecture of Docker, showing the interaction between the client, Docker host, and registry.
Components:
- Client:
- docker run: Command to run a container from a specified image.
- docker build: Command to build a Docker image from a Dockerfile.
- docker pull: Command to pull a Docker image from a registry.
- Docker Host:
- Docker Daemon: The core component that manages Docker containers on the host system. It listens for Docker API requests and manages Docker objects such as images, containers, networks, and volumes.
- Images: Stored images that contain the application and its dependencies. Examples shown in the diagram are images for Python, Redis, and an unknown application (represented by the icon).
- Containers: Running instances of Docker images. The diagram shows multiple containers, each running isolated instances of applications.
- Registry:
- Images: A storage location for Docker images. Docker Hub is a popular public registry, but private registries can also be used. Examples shown in the diagram are images for NGINX, Ubuntu, PostgreSQL, and another application.
- Extensions: Add-ons that enhance the functionality of Docker. Examples in the diagram include JFrog and Portainer.
- Plugins: Additional software components that integrate with Docker to extend its capabilities. Examples in the diagram include the VS Code extension and another plugin.
Workflow:
- docker pull:
  - The client sends a docker pull command to the Docker daemon.
  - The Docker daemon requests the specified image from the registry.
  - The registry sends the image to the Docker daemon.
  - The Docker daemon stores the image in the local image store.
- docker build:
  - The client sends a docker build command along with the Dockerfile to the Docker daemon.
  - The Docker daemon reads the Dockerfile and builds an image according to the specified instructions.
  - The newly built image is stored in the local image store.
- docker run:
  - The client sends a docker run command to the Docker daemon.
  - The Docker daemon creates a container from the specified image.
  - The container runs as an isolated instance on the Docker host.
This architecture allows for consistent and reproducible application deployments, leveraging containerization technology to ensure that applications run the same way regardless of the environment.
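The three flows map directly onto commands you can try yourself; a minimal sketch, where myapp is a hypothetical image built from a local Dockerfile:
docker pull redis
docker build -t myapp .
docker run -d myapp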
Docker Container Architecture
The image illustrates the Docker container architecture, detailing how Docker containers interact with the underlying system.
Components:
Docker Containers:
- Docker Container 1, 2, and 3: Each container encapsulates an application and its dependencies. The diagram shows:
- App A, App B, App C: These are the individual applications running within each container.
- Bins/Libs: These are the binaries and libraries required by each application. They are specific to the application within the container and do not interfere with other containers.
Docker Engine:
- This is the core part of Docker, responsible for managing containers. It provides the necessary runtime environment for containers to run and ensures that containers are isolated from each other. The Docker engine interacts with the host operating system to allocate resources for containers.
Host Operating System:
- The operating system running on the host machine where the Docker engine is installed. It provides the kernel which is shared among the containers, ensuring efficiency and resource isolation.
Infrastructure:
- This refers to the underlying hardware or virtualized resources (CPU, memory, storage, network) on which the host operating system runs.
Dockerfile
A Dockerfile is a script containing a series of instructions on how to build a Docker image. Each instruction in a Dockerfile represents a step in the image-building process, specifying what the resulting image should look like. The Dockerfile syntax is simple and declarative, making it easy to understand and modify.
Dockerfile workflow
Key Components of a Dockerfile:
- FROM: Specifies the base image to start from. This is usually an official image from Docker Hub, like ubuntu, alpine, or node.
FROM ubuntu:20.04
- RUN: Executes a command in the container during the image-building process. Commonly used to install software packages or dependencies.
RUN apt-get update && apt-get install -y nginx
- COPY or ADD: Copies files and directories from the host machine to the image.
COPY is preferred for its simplicity, while ADD has additional features like auto-extracting tar files.
COPY . /app
- WORKDIR: Sets the working directory for any subsequent instructions.
WORKDIR /app
- CMD: Specifies the default command to run when a container is started from the image. Only the last CMD instruction in a Dockerfile takes effect, and it can be overridden by passing arguments to docker run.
CMD ["nginx", "-g", "daemon off;"]
- ENTRYPOINT: Configures a container to run as an executable. It is similar to CMD, but it is not overridden by arguments passed to docker run; it can only be replaced with the --entrypoint flag.
ENTRYPOINT ["nginx"]
- ENV: Sets environment variables.
ENV ENVIRONMENT=production
- EXPOSE: Informs Docker that the container listens on the specified network ports at runtime. This is a documentation feature and does not actually publish the port (see the note after this list).
EXPOSE 80
- VOLUME: Creates a mount point with the specified path and marks it as holding externally mounted volumes from the host or other containers.
VOLUME ["/data"]
- USER: Sets the user name or UID to use when running the image and for any subsequent RUN, CMD, and ENTRYPOINT instructions.
USER nginx
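A note on EXPOSE from the list above: because it only documents the port, you still need to publish it at run time, either explicitly with -p or with -P, which maps every exposed port to a random free host port:
docker run -d -p 8080:80 my-image
docker run -d -P my-image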
Example Dockerfile:
Here's an example Dockerfile for a simple Node.js application:
# Use the official Node.js image as the base image
FROM node:14
# Set the working directory in the container
WORKDIR /app
# Copy package.json and package-lock.json to the working directory
COPY package*.json ./
# Install dependencies
RUN npm install
# Copy the rest of the application code to the working directory
COPY . .
# Expose the application port
EXPOSE 3000
# Define the command to run the application
CMD ["node", "app.js"]
Building an Image with a Dockerfile:
To build a Docker image using a Dockerfile, you use the docker build command. Navigate to the directory containing the Dockerfile and run:
docker build -t my-node-app .
This command builds the image and tags it as my-node-app.
Running a Container from the Image:
To run a container from the image you just built, use the docker run command:
docker run -p 3000:3000 my-node-app
This command runs the container, mapping port 3000 of the host to port 3000 of the container.
Commonly used Dockerfile commands
To execute the instructions defined in a Dockerfile and work with Docker images and containers, you use various Docker commands from the command line. Here are the key Docker commands and how to use them:
docker build:
- Builds a Docker image from a Dockerfile.
- Example:
$ docker build -t my-app:latest .
- Explanation: This command builds an image named my-app with the tag latest using the Dockerfile in the current directory (.).
docker images:
- Lists all Docker images on your local system.
- Example:
$ docker images
- Explanation: This command displays a list of all images along with their repository names, tags, image IDs, creation dates, and sizes.
docker run:
- Runs a container from a specified Docker image.
- Example:
$ docker run -d -p 8080:80 my-app:latest
- Explanation: This command runs a container from the my-app:latest image, maps port 80 in the container to port 8080 on the host, and runs the container in detached mode (-d).
- $ docker run --name <container_name> <image_name>
- Explanation: This command runs a container from the image and assigns the provided name to the container.
docker ps:
- Lists all running containers.
- Example:
$ docker ps
- Explanation: This command displays a list of all running containers along with their container IDs, image names, commands, creation times, statuses, ports, and names.
docker stop:
- Stops a running container.
- Example:
$ docker stop my-container
- Explanation: This command stops the container named my-container.
docker rm:
- Removes a stopped container.
- Example:
$ docker rm my-container
- Explanation: This command removes the container named my-container. The container must be stopped before it can be removed.
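If you want to stop and remove a running container in one step, docker rm also accepts the -f flag, which force-removes it (Docker kills the container first):
$ docker rm -f my-container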
docker rmi:
- Removes a Docker image.
- Example:
$ docker rmi my-app:latest
- Explanation: This command removes the my-app:latest image from the local Docker image store.
docker pull:
- Downloads an image from a Docker registry (e.g., Docker Hub).
- Example:
$ docker pull nginx:latest
- Explanation: This command pulls the latest version of the nginx image from Docker Hub.
docker exec:
- Executes a command in a running container.
- Example:
$ docker exec -it my-container /bin/bash
- Explanation: This command opens an interactive terminal (-it) inside the running container named my-container and runs /bin/bash.
docker logs:
- Fetches the logs of a container.
- Example:
$ docker logs my-container
- Explanation: This command retrieves the logs for the container named my-container.
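docker logs has two flags worth knowing: -f follows the log output as it is produced, and --tail limits the output to the last N lines:
$ docker logs -f --tail 100 my-container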
Example Workflow:
- Build an Image:
$ docker build -t my-app:latest .
- Run a Container:
$ docker run -d -p 8080:80 my-app:latest
- List Running Containers:
$ docker ps
- Stop a Container:
$ docker stop my-container
- Remove a Container:
$ docker rm my-container
- Remove an Image:
$ docker rmi my-app:latest
These commands cover the basic operations you will perform when working with Docker to build, run, manage, and delete Docker images and containers.
RUN vs CMD vs ENTRYPOINT
The RUN instruction executes commands while building an image from the Dockerfile. We can have any number of RUN instructions in a Dockerfile, and each one creates a separate layer.
Example:
RUN apt-get update
RUN apt-get install -y curl
While it's perfectly valid to have multiple RUN instructions, it's generally a good practice to minimize the number of layers in your Docker image to keep the image size smaller and reduce complexity. You can achieve this by chaining commands together in a single RUN instruction using &&:
FROM ubuntu:latest
# Update the package list, install curl and git, and clean up in a single RUN instruction
RUN apt-get update && \
apt-get install -y curl git && \
apt-get clean && rm -rf /var/lib/apt/lists/*
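To check how many layers a build actually produced, you can inspect the finished image with docker history, which lists each layer along with the instruction that created it and its size (my-image is a placeholder tag):
$ docker history my-image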
The CMD instruction specifies the default command to run when a container is started from the image.
If the Dockerfile contains a CMD instruction and you pass a different command to docker run on the CLI, the command provided on the CLI is executed and the CMD in the Dockerfile is ignored. Here is an example Dockerfile:
FROM ubuntu:latest
# Default command specified in the Dockerfile
CMD ["echo", "This is the default command"]
Building an image
$ docker build -t my-image .
Running the Container Without Overriding CMD
$ docker run my-image
Running the Container With Overriding CMD
$ docker run my-image echo "This is the overridden command"
When you run docker run my-image, the default CMD specified in the Dockerfile (CMD ["echo", "This is the default command"]) is executed. When you run docker run my-image echo "This is the overridden command", the command provided in the docker run command overrides the default CMD instruction.
If there are multiple CMD instructions, only the last one will be used. The previous ones will be overridden.
Here is an example to illustrate this:
FROM ubuntu:latest
# First CMD instruction
CMD ["echo", "Hello, World!"]
# Second CMD instruction
CMD ["echo", "This is the final command"]
In this example, the container will only execute the command echo "This is the final command" when it starts. The first CMD instruction is overridden and ignored.
To execute multiple commands, you should use a script that the CMD instruction calls, or you can use the shell form of CMD to chain commands together:
FROM ubuntu:latest
# Copy a script into the container
COPY run.sh /usr/local/bin/run.sh
# Make the script executable
RUN chmod +x /usr/local/bin/run.sh
# Use the script in the CMD instruction
CMD ["/usr/local/bin/run.sh"]
Here is an example run.sh script:
#!/bin/bash
echo "Hello, World!"
echo "This is the final command"
Using Shell Form of CMD
FROM ubuntu:latest
# Use shell form to run multiple commands
CMD echo "Hello, World!" && echo "This is the final command"
The ENTRYPOINT instruction in a Dockerfile allows you to configure a container to run as an executable. Unlike CMD, which provides default arguments to the entrypoint of the container, ENTRYPOINT sets the command and parameters that cannot be overridden by the docker run command's arguments, unless you use the --entrypoint flag.
Basic Syntax
There are two forms of the ENTRYPOINT instruction: exec form and shell form.
Exec Form
The exec form is the preferred form, as it does not invoke a shell and lets the command run as PID 1 of the container.
ENTRYPOINT ["executable", "param1", "param2"]
Shell Form
The shell form invokes a shell to run the command, which can be useful for simple cases.
ENTRYPOINT command param1 param2
Example
Consider the following Dockerfile:
FROM ubuntu:latest
# Install curl
RUN apt-get update && apt-get install -y curl
# Set the entrypoint
ENTRYPOINT ["curl"]
# Default arguments for curl
CMD ["--help"]
Behavior
- Default Execution:
docker build -t my-curl-image .
docker run my-curl-image
Output:
Usage: curl [options...] ...
In this case, the ENTRYPOINT is curl, and the default argument provided by CMD is --help. So, when the container starts, it runs curl --help.
- Overriding CMD:
docker run my-curl-image https://www.example.com
Output: the HTML content of www.example.com.
Here, the ENTRYPOINT is still curl, but the argument https://www.example.com provided in the docker run command overrides the default CMD argument --help.
- Overriding ENTRYPOINT:
docker run --entrypoint ls my-curl-image -l
Output:
total 56
drwxr-xr-x 1 root root 4096 Jul 9 10:00 .
drwxr-xr-x 1 root root 4096 Jul 9 10:00 ..
-rwxr-xr-x 1 root root 0 Jul 9 10:00 Dockerfile
...
Here, the --entrypoint flag overrides the ENTRYPOINT instruction in the Dockerfile. The container runs ls -l instead of curl --help.
Combining ENTRYPOINT and CMD
Combining ENTRYPOINT and CMD allows you to set default commands and arguments that can be overridden:
FROM ubuntu:latest
# Install curl
RUN apt-get update && apt-get install -y curl
# Set the entrypoint
ENTRYPOINT ["curl"]
# Default arguments for curl
CMD ["https://www.example.com"]
In this example, running docker run my-curl-image will execute curl https://www.example.com, but you can override the URL by providing a different argument in the docker run command.
Summary
- ENTRYPOINT sets the command to run when the container starts and is not easily overridden.
- CMD provides default arguments to the ENTRYPOINT command, or sets a default command if ENTRYPOINT is not specified.
- Use --entrypoint to override the ENTRYPOINT in a docker run command.
- The combination of ENTRYPOINT and CMD allows you to set a default executable and default arguments that can be overridden when necessary.
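A common pattern that builds on this combination is a small wrapper script used as the ENTRYPOINT: it performs one-time setup, then hands off to whatever CMD (or the docker run arguments) supplied. A minimal sketch, assuming a hypothetical entrypoint.sh added to the image:
#!/bin/sh
# entrypoint.sh: run setup, then replace this shell with the passed command
set -e
echo "Running setup steps..."
exec "$@"
In the Dockerfile, the script becomes the entrypoint and CMD supplies the default arguments:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["nginx", "-g", "daemon off;"]
The exec call is what keeps the final command running as PID 1 so that it receives signals directly.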
COPY vs ADD
In Docker, both the COPY and ADD instructions are used to copy files and directories from the host machine into the Docker image. However, there are some differences between the two in terms of functionality and use cases.
COPY
Functionality: COPY is a straightforward instruction used to copy files and directories from the host machine to the Docker image.
Syntax: COPY <src> <dest>
COPY myfile.txt /app/
COPY mydir/ /app/mydir/
ADD
Functionality: ADD can do everything COPY does, but with additional features:
- URL Support: It can download files from a URL.
- Archive Extraction: If the source is a local compressed archive (like .tar, .gz, .bz2, .xz), it will automatically extract the contents into the destination directory.
Syntax: ADD <src> <dest>
ADD myfile.txt /app/
ADD mydir/ /app/mydir/
ADD https://example.com/file.tar.gz /app/
ADD myarchive.tar.gz /app/
Best Practices
- Use COPY when you only need to copy files and directories. It is more straightforward and makes it clear that you are just copying files without any additional behavior.
- Use ADD if you need the additional functionality of downloading from URLs or automatically extracting archives.
FROM ubuntu:latest
# Use COPY to copy local files and directories
COPY myfile.txt /app/
COPY mydir/ /app/mydir/
# Use ADD to download a file from a URL
ADD https://example.com/file.tar.gz /app/
# Use ADD to automatically extract an archive
ADD myarchive.tar.gz /app/
docker-compose
- Docker Compose is a tool for defining and running (managing) multi-container applications.
- Docker Compose can build an image on top of a base image and then create and start the container.
- The base image can come directly from Docker Hub, or it can be a custom image built on top of a Docker Hub base image using a Dockerfile.
- It is not a replacement for Docker.
- A Compose file can have different names, such as docker-compose.dev.yml or docker-compose.test.yml (both the .yaml and .yml extensions work).
docker-compose workflow
docker compose commands
docker-compose is a tool for defining and running multi-container Docker applications. Here are some key docker-compose commands along with explanations on how to use them:
Key docker-compose Commands:
docker compose up:
- Builds, (re)creates, starts, and attaches to containers for the services defined in docker-compose.yml.
- Example:
$ docker compose up
- Explanation: This command starts all the services defined in the docker-compose.yml file. If the images are not built, it will build them first.
- To run in detached mode (in the background):
$ docker compose up -d
- If the Compose file has a name other than docker-compose.yml (e.g., docker-compose.dev.yml), use the -f flag (short for "file") to point at it:
$ docker compose -f docker-compose.dev.yml up -d
docker compose down:
- Stops and removes containers created by docker compose up.
- Example:
$ docker compose down
- Explanation: This command stops and removes the containers defined in the docker-compose.yml file.
- $ docker compose down --rmi all -v
- Explanation: This command stops and removes the containers, and also removes the images and volumes defined in the docker-compose.yml file.
docker compose build:
- Builds or rebuilds services.
- Example:
$ docker compose build
- Explanation: This command builds the services defined in the docker-compose.yml file.
docker compose stop:
- Stops running containers without removing them.
- Example:
$ docker compose stop
- Explanation: This command stops the services defined in the docker-compose.yml file without removing the containers.
docker compose start:
- Starts existing containers for a service.
- Example:
$ docker compose start
- Explanation: This command starts the stopped services defined in the docker-compose.yml file.
docker compose restart:
- Restarts running containers.
- Example:
$ docker compose restart
- Explanation: This command restarts the running services defined in the docker-compose.yml file.
docker compose ps:
- Lists containers.
- Example:
$ docker compose ps
- Explanation: This command shows the status of the containers defined in the docker-compose.yml file.
docker compose logs:
- Views output from containers.
- Example:
$ docker compose logs
- Explanation: This command retrieves and shows the logs from the containers defined in the docker-compose.yml file.
- To view logs for a specific service:
$ docker compose logs <service-name>
docker compose exec:
- Executes a command in a running container.
- Example:
$ docker compose exec <service-name> <command>
- Explanation: This command runs a command in a running container for a specified service.
- Example to open a bash shell in a service:
$ docker compose exec web /bin/bash
docker compose pull:
- Pulls service images.
- Example:
$ docker compose pull
- Explanation: This command pulls the images for the services defined in the docker-compose.yml file from a registry.
Example docker-compose.yml File:
Here's an example of a simple docker-compose.yml file to run a web application and a database:
version: '3'
services:
  web:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/code
    depends_on:
      - db
  db:
    image: postgres
    environment:
      POSTGRES_USER: example
      POSTGRES_PASSWORD: example
Example Workflow:
- Start the Services:
$ docker compose up
- Run the Services in the Background:
$ docker compose up -d
- View the Status of the Services:
$ docker compose ps
- View Logs:
$ docker compose logs
- Stop the Services:
$ docker compose stop
- Start Stopped Services:
$ docker compose start
- Restart the Services:
$ docker compose restart
- Stop and Remove the Services:
$ docker compose down
By using these docker-compose commands, you can efficiently manage multi-container Docker applications, facilitating development, testing, and deployment workflows.
Some special commands
docker system prune -a
WARNING! This will remove:
- all stopped containers
- all networks not used by at least one container
- all images without at least one container associated to them
- all build cache
# Remove stopped containers
$ docker container prune
# Remove unused networks
$ docker network prune
# Remove unused volumes
$ docker volume prune
# Remove unused images
$ docker image prune -a
docker inspect
The docker inspect command is used to retrieve detailed information about Docker objects such as containers, images, volumes, and networks. The output is in JSON format, providing comprehensive details such as configuration, network settings, and exposed ports.
$ docker inspect <image_name>
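Because the JSON output can be long, docker inspect also accepts a --format flag that takes a Go template to extract a single field. For example, to print only the IP address of a container on the default bridge network (my-container is a placeholder name):
$ docker inspect --format '{{.NetworkSettings.IPAddress}}' my-container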
.dockerignore
A .dockerignore file is used to specify which files and directories should be ignored by the Docker build process when creating an image. This can help reduce the size of the build context sent to the Docker daemon and speed up the build process.
Here's an example of a .dockerignore file and an explanation of some common patterns you might include:
# Ignore node_modules directory
node_modules
# Ignore all log files
*.log
# Ignore build artifacts
dist/
build/
# Ignore environment files
.env
.env.local
# Ignore temporary files and directories
*.tmp
*.swp
*.bak
.DS_Store
# Ignore Git and other VCS files
.git
.gitignore
Explanation of Patterns
- node_modules: Excludes the node_modules directory, which is common in Node.js projects.
- *.log: Excludes all log files.
- dist/, build/: Excludes build artifacts that are generated during the build process.
- .env, .env.local: Excludes environment variable files.
- *.tmp, *.swp, *.bak, .DS_Store: Excludes various temporary and backup files, as well as macOS-specific files.
- .git, .gitignore: Excludes Git repository files and the .gitignore file itself.
Using the .dockerignore File
To use a .dockerignore file, simply place it in the root directory of your build context (the directory containing your Dockerfile). Docker will automatically recognize and use it to filter out the specified files and directories during the build process.
Example Directory Structure
Here's an example of a project directory structure with a .dockerignore file:
my-project/
├── .dockerignore
├── Dockerfile
├── .git/
├── node_modules/
├── src/
│ └── index.js
├── dist/
│ └── bundle.js
├── .env
└── package.json
In this example, the .dockerignore file will prevent the node_modules directory, dist directory, .env file, and other specified files from being included in the Docker build context.
Docker volume
Docker volumes are a way to persist data generated by and used by Docker containers. Volumes are managed by Docker and are stored outside the container’s filesystem, making them more durable and portable.
Basic Volume Commands
Here are some common commands for working with Docker volumes:
- Create a Volume
$ docker volume create
Example:
$ docker volume create my_volume
- List Volumes
$ docker volume ls
This command lists all volumes managed by Docker.
- Inspect a Volume
$ docker volume inspect
Example:
$ docker volume inspect my_volume
This command provides detailed information about the specified volume in JSON format.
- Remove a Volume
$ docker volume rm
Example:
$ docker volume rm my_volume
This command removes the specified volume. Note that you cannot remove a volume that is in use by a container.
- Remove All Unused Volumes
$ docker volume prune
This command removes all unused volumes. Docker will prompt you for confirmation before proceeding.
Using Volumes in Docker Containers
When running a container, you can use the -v or --volume flag to mount a volume into a container.
Example:
$ docker run -d -v my_volume:/path/in/container my_image
This command runs a container from the my_image image, mounting the my_volume volume to /path/in/container inside the container.
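To see the persistence in action, you can write a file through one short-lived container and read it back from a fresh one; the data survives because it lives in the volume, not in either container (this sketch uses the public alpine image):
$ docker run --rm -v my_volume:/data alpine sh -c 'echo hello > /data/test.txt'
$ docker run --rm -v my_volume:/data alpine cat /data/test.txt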
Named vs. Anonymous Volumes
- Named Volumes: Volumes that you explicitly create and manage. You specify the volume name when creating or mounting it.
Example:
$ docker volume create my_volume
$ docker run -d -v my_volume:/data my_image
- Anonymous Volumes: Volumes that Docker creates automatically when you use the -v flag without specifying a volume name. Docker assigns a random name to these volumes.
Example:
$ docker run -d -v /data my_image
Docker Compose and Volumes
In Docker Compose, you can define volumes in the docker-compose.yml file.
Example docker-compose.yml:
version: '3.8'
services:
  web:
    image: my_image
    volumes:
      - my_volume:/data
volumes:
  my_volume:
In this example, the web service mounts the my_volume volume to /data inside the container. The volumes section at the bottom defines the named volume.
Example Scenario: Using Volumes
- Create a Volume:
$ docker volume create data_volume
- Run a Container with the Volume:
$ docker run -d -v data_volume:/app/data my_image
- Inspect the Volume:
$ docker volume inspect data_volume
- List Volumes:
$ docker volume ls
- Remove the Volume:
$ docker volume rm data_volume
- Remove All Unused Volumes:
$ docker volume prune
Docker volumes are a powerful feature for data persistence and sharing data between containers. By using volumes, you can ensure your data remains intact even if the container is removed or recreated.
More on docker volume
A Docker volume is a persistent storage mechanism used to store data outside of a container’s writable layer. This allows data to persist even after the container is deleted, making it useful for scenarios where data needs to be shared between containers or retained across container restarts and updates.
There are several types of Docker volumes:
- Named Volumes:
- Created and managed by Docker.
- Stored in a specific location within the Docker storage area on the host.
- Can be easily referenced by name and are useful for persisting data across container lifecycles.
- Anonymous Volumes:
- Automatically created by Docker when a container is started with a volume, but no name is specified.
- Typically used for temporary data that doesn’t need to persist beyond the container’s lifecycle.
- Difficult to manage because they don’t have a name.
- Bind Mounts:
- Directly mount a file or directory from the host filesystem into the container.
- Provide more control over the exact location of the data on the host.
- Useful for scenarios where you need to share configuration files or logs between the host and containers.
- tmpfs Mounts:
- Store data in the host’s memory (RAM) rather than on the filesystem.
- Useful for temporary data that needs to be fast and doesn’t need to persist after the container stops.
- Commonly used for sensitive information or temporary caches.
Here is a brief summary of how to use each type of volume in Docker:
- Named Volume:
docker volume create my-volume
docker run -d --name my-container -v my-volume:/path/in/container my-image
- Anonymous Volume:
docker run -d --name my-container -v /path/in/container my-image
- Bind Mount:
docker run -d --name my-container -v /host/path:/path/in/container my-image
- tmpfs Mount:
docker run -d --name my-container --tmpfs /path/in/container:rw my-image
Each type of volume serves different purposes and provides various levels of persistence, performance, and control, allowing for flexibility in managing data within Docker containers.
docker-compose Profiles
In Docker, “profiles” typically refer to a feature in Docker Compose that allows you to define different sets of services that can be selectively enabled or disabled based on the environment or scenario. This feature helps manage complex applications with multiple services by organizing and controlling which services should be started together under different contexts, such as development, testing, or production.
Docker Compose Profiles
Docker Compose profiles allow you to group services and control their activation using the --profile flag. This feature was introduced in Docker Compose 1.28.0.
Defining Profiles
You can define profiles in your docker-compose.yml file using the profiles key under each service. Here's an example:
version: '3.9'
services:
  web:
    image: my-web-app:latest
    ports:
      - "80:80"
    profiles:
      - frontend
  api:
    image: my-api-app:latest
    ports:
      - "8080:8080"
    profiles:
      - backend
  db:
    image: postgres:latest
    profiles:
      - backend
Using Profiles
To use profiles, you specify them when running docker-compose up with the --profile flag:
Start Only the Frontend Profile
docker-compose --profile frontend up
This command will start only the web service.
Start Only the Backend Profile
docker-compose --profile backend up
This command will start the api and db services.
Start Both Profiles
docker-compose --profile frontend --profile backend up
This command will start the web, api, and db services.
Default Behavior
Services without a profiles key are always started, whether or not any profile is activated. Services assigned to a profile start only when that profile is activated, so running docker-compose up with no --profile flag starts only the services without profiles.
Example of Docker Compose with Default Services
version: '3.9'
services:
  web:
    image: my-web-app:latest
    ports:
      - "80:80"
    profiles:
      - frontend
  api:
    image: my-api-app:latest
    ports:
      - "8080:8080"
    profiles:
      - backend
  db:
    image: postgres:latest
    profiles:
      - backend
  redis:
    image: redis:latest
    # No profiles, so this service will always start
Running without Specified Profiles
docker-compose up
This command will start only the redis service: web, api, and db are assigned to profiles and are not activated without the --profile flag.
Summary
- Profiles: Group services for different environments or scenarios.
- Defining Profiles: Use the profiles key under each service in the docker-compose.yml file.
- Using Profiles: Specify profiles with the --profile flag when running docker-compose up.
- Default Behavior: Services without profiles always start; services with profiles start only when one of their profiles is activated.
Using profiles in Docker Compose helps manage complex multi-service applications more effectively by allowing selective activation of services based on different contexts or requirements.
docker-compose environment and env_file
In Docker, environment and env_file are used to pass environment variables into containers. Environment variables can be defined in Dockerfiles (via ENV) and in Docker Compose files.
environment
The environment key is used to specify environment variables directly in the Docker Compose file; the Dockerfile equivalent is the ENV instruction.
In Dockerfile
# Example Dockerfile
FROM ubuntu:latest
# Setting environment variables
ENV APP_ENV=production
ENV APP_DEBUG=false
When you build this Dockerfile, the environment variables APP_ENV and APP_DEBUG will be set within the resulting image.
In Docker Compose
In a docker-compose.yml file, you can use the environment key to specify environment variables for a service.
version: '3.9'
services:
  web:
    image: my-web-app:latest
    environment:
      - APP_ENV=production
      - APP_DEBUG=false
env_file
The env_file option allows you to specify a file containing environment variables. This is useful for keeping sensitive data out of your version control system or for managing environment variables separately from your Docker Compose configuration.
Environment File (.env)
Create a file named .env or any custom file:
# .env file
APP_ENV=production
APP_DEBUG=false
DATABASE_URL=mysql://user:password@db:3306/mydatabase
Using env_file in Docker Compose
Reference the .env file in your docker-compose.yml file:
version: '3.9'
services:
  web:
    image: my-web-app:latest
    env_file:
      - .env
You can specify multiple env_file entries if needed:
version: '3.9'
services:
  web:
    image: my-web-app:latest
    env_file:
      - .env
      - .env.production
Combining environment and env_file
You can combine both environment and env_file in your docker-compose.yml file. Environment variables specified in environment will override those in env_file.
version: '3.9'
services:
  web:
    image: my-web-app:latest
    env_file:
      - .env
    environment:
      - APP_DEBUG=true
In this example, APP_ENV and DATABASE_URL will be taken from the .env file, but APP_DEBUG will be overridden to true.
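A quick way to confirm which value wins is to render the merged configuration: docker compose config prints the final file with env_file entries and environment overrides already resolved:
$ docker compose config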
Example of Using Both in Docker Compose
version: '3.9'
services:
  web:
    image: my-web-app:latest
    env_file:
      - .env
    environment:
      - APP_ENV=development
      - APP_DEBUG=true
  db:
    image: postgres:latest
    env_file:
      - db.env
Summary
- environment: Directly specify environment variables in a Dockerfile (via ENV) or in Docker Compose.
- env_file: Specify a file containing environment variables, keeping them separate from the Docker Compose configuration.
- Combining Both: Variables defined in environment will override those in env_file.
Using environment and env_file allows for flexible and secure management of environment variables in your Docker containers.
docker-compose network
Docker supports several network drivers; let's go through an example for each type.
Bridge Network
Create a Bridge Network:
docker network create my_bridge
Run Containers and Connect to the Network:
docker run -d --name container1 --network my_bridge nginx
docker run -d --name container2 --network my_bridge nginx
Check Connectivity:
docker exec -it container1 ping container2
Host Network
Run a Container with Host Network:
docker run -d --name container_host --network host nginx
Access the Container:
Since the container is using the host network, you can access it via the host’s IP address and the default port of the service (e.g., NGINX default is port 80):
curl http://localhost
Overlay Network
Initialize Docker Swarm:
docker swarm init
Create an Overlay Network:
docker network create --driver overlay my_overlay
Deploy Services to the Overlay Network:
docker service create --name web1 --network my_overlay nginx
docker service create --name web2 --network my_overlay nginx
Check Connectivity:
Use docker service ps to find which nodes are running the services, then use docker exec to test connectivity:
docker exec -it <web1-container-id> ping web2
Macvlan Network
Create a Macvlan Network:
First, determine the parent interface (e.g., eth0) of your host:
docker network create -d macvlan \
--subnet=192.168.1.0/24 \
--gateway=192.168.1.1 \
-o parent=eth0 my_macvlan
Run a Container on the Macvlan Network:
docker run -d --name container_macvlan --network my_macvlan nginx
Check Connectivity:
Access the container using the IP assigned from the subnet:
docker exec -it container_macvlan ip addr
None Network
Run a Container with None Network:
docker run -d --name container_none --network none nginx
Check Connectivity:
Since the container has no network, it will be isolated:
docker exec -it container_none ifconfig
Summary of Commands
- Bridge Network:
docker network create my_bridge
docker run -d --name container1 --network my_bridge nginx
docker run -d --name container2 --network my_bridge nginx
docker exec -it container1 ping container2
- Host Network:
docker run -d --name container_host --network host nginx
curl http://localhost
- Overlay Network:
docker swarm init
docker network create --driver overlay my_overlay
docker service create --name web1 --network my_overlay nginx
docker service create --name web2 --network my_overlay nginx
docker exec -it <web1-container-id> ping web2
- Macvlan Network:
docker network create -d macvlan --subnet=192.168.1.0/24 --gateway=192.168.1.1 -o parent=eth0 my_macvlan
docker run -d --name container_macvlan --network my_macvlan nginx
docker exec -it container_macvlan ip addr
- None Network:
docker run -d --name container_none --network none nginx
docker exec -it container_none ifconfig
These examples cover how to create and use each type of Docker network.
docker-compose depends_on
In Docker Compose, the depends_on key is used to specify dependencies between services. This ensures that services start in a defined order. However, it's important to note that depends_on does not wait for a service to be "ready" (i.e., fully initialized and accepting connections); it only controls the order of startup.
Here's a simple example demonstrating how to use depends_on in a Docker Compose file:
Example
Suppose you have a web application (web) that depends on a database (db). You want to ensure the database service starts before the web service.
docker-compose.yml:
version: '3.8'
services:
db:
image: postgres:latest
environment:
POSTGRES_USER: user
POSTGRES_PASSWORD: password
POSTGRES_DB: mydb
web:
image: nginx:latest
depends_on:
- db
Explanation
- db Service:
- Uses the postgres image.
- Sets environment variables for the PostgreSQL user, password, and database name.
- web Service:
- Uses the nginx image.
- Specifies that it depends on the db service.
When you run docker-compose up, Docker Compose will start the db service before the web service.
Using Wait-for-it Script
To handle the scenario where you need to wait for the database to be fully ready before starting the web service, you can use a script like wait-for-it or a similar mechanism.
Directory Structure:
.
├── docker-compose.yml
└── wait-for-it.sh
wait-for-it.sh:
This script checks whether a host and port are accepting connections. You can download it from the wait-for-it project on GitHub.
Make sure the script has execution permissions:
chmod +x wait-for-it.sh
Updated docker-compose.yml:
version: '3.8'
services:
  db:
    image: postgres:latest
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
      POSTGRES_DB: mydb
  web:
    image: nginx:latest
    depends_on:
      - db
    entrypoint: ["/wait-for-it.sh", "db:5432", "--", "nginx", "-g", "daemon off;"]
    volumes:
      - ./wait-for-it.sh:/wait-for-it.sh
Explanation
- entrypoint:
- Uses the wait-for-it.sh script to wait for the db service to be ready on port 5432.
- Once the db service is ready, it starts the NGINX service.
- volumes:
- Mounts the wait-for-it.sh script inside the container so it can be executed.
Running the Compose File
To start the services with the dependencies managed properly, run:
docker-compose up
This ensures that the web service will wait until the db service is fully ready before starting.
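As an alternative to an external script, recent Compose versions support readiness natively: give the dependency a healthcheck and use the long form of depends_on with condition: service_healthy. A minimal sketch of the same stack:
services:
  db:
    image: postgres:latest
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
      POSTGRES_DB: mydb
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user"]
      interval: 5s
      timeout: 5s
      retries: 5
  web:
    image: nginx:latest
    depends_on:
      db:
        condition: service_healthy
With this configuration, Compose delays starting web until the db healthcheck reports healthy, so no wrapper script is needed.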
Bonus Notes:
docker build commands with some combinations
When building an image from a Dockerfile using the docker build command, you can combine various instructions and options to customize the build process. Here are some common combinations:
Basic Build Command
docker build -t myimage:latest .
- -t myimage:latest: Tags the built image with the name myimage and the tag latest.
- .: Specifies the build context as the current directory (where the Dockerfile is located).
Using a Different Dockerfile
docker build -t myimage:latest -f Dockerfile.prod .
- -f Dockerfile.prod: Specifies a different Dockerfile (Dockerfile.prod) instead of the default Dockerfile.
Building with Build Arguments
docker build -t myimage:latest --build-arg ENV=production .
- --build-arg ENV=production: Passes a build argument (ENV) with a value (production) to the Dockerfile.
Building without Cache
docker build --no-cache -t myimage:latest .
- --no-cache: Forces the build to not use any cached layers during the build process.
Multi-Stage Builds
docker build -t myapp:latest --target build-stage .
- --target build-stage: Specifies a specific build stage (build-stage) to build from a multi-stage Dockerfile.
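For context, a multi-stage Dockerfile names each stage with AS, and --target stops the build at the named stage. A minimal sketch, assuming a hypothetical Go project:
# Stage 1: compile the application
FROM golang:1.22 AS build-stage
WORKDIR /src
COPY . .
RUN go build -o /bin/app .
# Stage 2: copy only the compiled binary into a small runtime image
FROM alpine:latest
COPY --from=build-stage /bin/app /bin/app
CMD ["/bin/app"]
Building with docker build --target build-stage . stops after the first stage, which is handy for compile-only CI jobs.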
Building for a Specific Platform
docker build -t myimage:latest --platform linux/amd64 .
- --platform linux/amd64: Specifies building the image for a specific platform (linux/amd64).
Build with Labels
docker build -t myimage:latest --label version=1.0 --label maintainer="user@example.com" .
- --label version=1.0 --label maintainer="user@example.com": Adds metadata labels (version and maintainer) to the image.
Example Dockerfile for Testing
# Use the official Python image from the Docker Hub
FROM python:3.9-slim
# Set the working directory in the container
WORKDIR /app
# Copy the current directory contents into the container at /app
COPY . /app
# Install any needed packages specified in requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
# Make port 80 available to the world outside this container
EXPOSE 80
# Define environment variable
ARG ENV
ENV ENV ${ENV:-development}
# Run app.py when the container launches
CMD ["python", "app.py"]
Running the Build Command
To execute any of these combinations, navigate to the directory containing your Dockerfile and run the respective docker build command. Adjust the options and arguments based on your specific requirements and Dockerfile structure. Each combination offers flexibility and customization options for building Docker images tailored to different use cases and environments.
docker run command with some combinations of instructions
When running Docker containers using docker run, you can combine various instructions and options to customize how the container behaves and interacts with its environment. Here are several different sets of instructions you can use:
1. Basic Container Execution
docker run -it --rm myimage:latest
- -it: Opens an interactive terminal session.
- --rm: Automatically removes the container when it stops.
2. Detached Mode with Port Mapping
docker run -d -p 8080:80 --name mycontainer myimage:latest
- -d: Detaches the container and runs it in the background.
- -p 8080:80: Maps port 8080 on the host to port 80 inside the container.
- --name mycontainer: Assigns a name (mycontainer) to the container.
3. Environment Variables and Volume Mounting
docker run -e ENV=production -v /host/path:/container/path myimage:latest
- -e ENV=production: Sets an environment variable ENV with a value of production inside the container.
- -v /host/path:/container/path: Mounts a volume from the host (/host/path) to the container (/container/path).
4. Working Directory and Network Configuration
docker run -w /app --network my-network myimage:latest
- -w /app: Sets the working directory inside the container to /app.
- --network my-network: Attaches the container to a Docker network named my-network.
5. Interactive Shell with Specific User
docker run -it --user 1000:1000 myimage:latest bash
- -it: Starts an interactive session with a pseudo-TTY.
- --user 1000:1000: Specifies the UID and GID (1000:1000) for the container user.
- bash: Overrides the default command (CMD) and starts a bash shell.
6. Resource Limits and Restart Policy
docker run --memory=512m --cpus=2 --restart=always myimage:latest
- --memory=512m: Limits the memory usage to 512m (512 megabytes).
- --cpus=2: Limits the container to use a maximum of 2 CPU cores.
- --restart=always: Configures the container to always restart if it stops unexpectedly.
7. Read-Only File System and Health Check
docker run --read-only --health-cmd='curl -f http://localhost/health || exit 1' myimage:latest
- --read-only: Mounts the container's root filesystem as read-only.
- --health-cmd: Defines a command (curl -f http://localhost/health || exit 1) to check the container's health status.
8. Logging and Labels
docker run --log-driver=syslog --label version=1.0 myimage:latest
- --log-driver=syslog: Specifies the logging driver to use (syslog) for container logs.
- --label version=1.0: Adds a label (version=1.0) to the container metadata.
9. Multi-Container Networking and Security Options
docker run --network-alias db-server --cap-add SYS_ADMIN myimage:latest
- --network-alias db-server: Adds a network alias (db-server) for the container.
- --cap-add SYS_ADMIN: Adds a Linux capability (SYS_ADMIN) to the container.
10. CPU and Memory Profiles
docker run --cpu-quota=50000 --memory-reservation=256m myimage:latest
- --cpu-quota=50000: Sets the CPU quota to 50,000 microseconds.
- --memory-reservation=256m: Reserves a minimum of 256 megabytes of memory for the container.
11. Run container interactively with default name and custom name
docker run -it <image_name> bash
- docker run: Starts a new container.
- -it: Runs the container interactively (allocates a pseudo-TTY).
- <image_name>: The name of the Docker image you want to run.
- bash: Overrides the default command specified in the Dockerfile and starts a bash shell session inside the container.
docker run -it --name <new_container_name> <image_name> bash
- docker run: Starts a new container.
- -it: Runs the container interactively (allocates a pseudo-TTY).
- --name <new_container_name>: Specifies the name you want to give to the container.
- <image_name>: The name of the Docker image you want to run.
- bash: Overrides the default command specified in the Dockerfile and starts a bash shell session inside the container.
Explanation:
- Options: Modify the behavior and configuration of the container.
- Arguments: Specify additional parameters such as environment variables, volume mounts, and network settings.
- Commands: Override the default command specified in the Dockerfile (CMD) to execute specific actions or services within the container.
By combining these instructions and options, you can create Docker containers with specific behaviors, resource constraints, and configurations tailored to different deployment scenarios and operational requirements. Adjust these examples based on your application’s needs and Dockerfile configurations.
All instructions together
You can combine multiple instructions and options in a single docker run command to create a container with a comprehensive set of configurations. Here's an example that includes various instructions:
docker run -itd --name mycontainer -p 8080:80 -e ENV=production -v /host/path:/container/path -w /app --network my-network --user 1000:1000 --memory=512m --cpus=2 --restart=always --log-driver=syslog --health-cmd='curl -f http://localhost/health || exit 1' --read-only --network-alias db-server --cap-add SYS_ADMIN myimage:latest bash
Explanation:
- -itd: Starts the container in interactive mode with a pseudo-TTY (-it) and runs it in detached mode (-d).
- --name mycontainer: Assigns the name mycontainer to the container.
- -p 8080:80: Maps port 8080 on the host to port 80 inside the container.
- -e ENV=production: Sets an environment variable ENV with a value of production inside the container.
- -v /host/path:/container/path: Mounts a volume from the host (/host/path) to the container (/container/path).
- -w /app: Sets the working directory inside the container to /app.
- --network my-network: Attaches the container to a Docker network named my-network.
- --user 1000:1000: Specifies the UID and GID (1000:1000) for the container user.
- --memory=512m: Limits the memory usage to 512m (512 megabytes).
- --cpus=2: Limits the container to use a maximum of 2 CPU cores.
- --restart=always: Configures the container to always restart if it stops unexpectedly.
- --log-driver=syslog: Specifies the logging driver to use (syslog) for container logs.
- --health-cmd='curl -f http://localhost/health || exit 1': Defines a health check command to verify the container's health status.
- --read-only: Mounts the container's root filesystem as read-only.
- --network-alias db-server: Adds a network alias (db-server) for the container.
- --cap-add SYS_ADMIN: Adds a Linux capability (SYS_ADMIN) to the container.
- myimage:latest bash: Overrides the default command (CMD) specified in the Dockerfile and starts a bash shell (bash) inside the container.
This command creates a Docker container (mycontainer) from the myimage:latest image with a comprehensive set of configurations, including interactive mode, detached mode, port mapping, environment variables, volume mounts, network settings, user specification, resource limits, restart policy, logging configuration, health checks, a read-only filesystem, network aliasing, Linux capabilities, and command override.
Adjust the options (-p, -e, -v, etc.) and values (mycontainer, 8080, /host/path, my-network, etc.) based on your specific application requirements and Dockerfile configurations. This flexibility allows you to customize the container's behavior and environment for various deployment scenarios.