Tutorial: Docker Basics

Learn the fundamentals of containerization with Docker. Package your applications and dependencies easily.

By Upingi Team / Tutorial Level: Beginner

Why Learn Docker?

Docker simplifies development and deployment by packaging applications and their environments into portable containers. This ensures consistency across different machines and stages (development, testing, production).

Understanding Docker is essential for modern software development workflows, DevOps practices, and efficient application deployment.

Prerequisites

  • Install Docker Desktop: Download and install Docker Desktop for your OS from the official Docker website.
  • Basic Command Line Familiarity: Understanding basic terminal commands is helpful.

Let's dive into containerization!

Chapter 1: Understanding Images & Containers

Docker revolves around images (blueprints) and containers (running instances of images).

  1. Pull an Image: Fetch a pre-built image from Docker Hub. Run `docker pull hello-world`.
  2. List Images: See the images you have locally: `docker images`.
  3. Run a Container: Create and start a container from an image: `docker run hello-world`. This executes the default command in the container.
  4. List Running Containers: View currently active containers: `docker ps`.
  5. List All Containers: View all containers, including stopped ones: `docker ps -a`.
  6. Remove a Container: Clean up stopped containers: `docker rm <container_id>` (use the ID or name shown by `docker ps -a`).
  7. Remove an Image: Remove an image you no longer need: `docker rmi hello-world`.
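
The steps above can be run end-to-end as a short shell session (this assumes Docker Desktop is installed and the Docker daemon is running; `<container_id>` is a placeholder for the ID reported by `docker ps -a`):

```shell
# Fetch the hello-world image from Docker Hub
docker pull hello-world

# List the images you have locally
docker images

# Create and start a container; it prints a greeting and exits
docker run hello-world

# Show running containers (hello-world will already have exited)
docker ps

# Show all containers, including stopped ones, to find its ID
docker ps -a

# Remove the stopped container (substitute the real ID)
docker rm <container_id>

# Remove the image once no containers reference it
docker rmi hello-world
```

Note that `docker run hello-world` would also have pulled the image automatically if it were not already present locally; pulling first simply makes the two steps explicit.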

You've successfully pulled an image and run your first container!

Chapter 2: Creating a Dockerfile

To package your own applications, you create a `Dockerfile`. This is a text file containing instructions Docker uses to build an image.

Here's a breakdown of common instructions for a simple Node.js app (create a file named `Dockerfile`):

# Use an official Node.js runtime as a parent image
FROM node:18-alpine

# Set the working directory in the container
WORKDIR /usr/src/app

# Copy package.json and package-lock.json
COPY package*.json ./

# Install app dependencies
RUN npm install

# Bundle app source inside the Docker image
COPY . .

# Make port 8080 available to the world outside this container
EXPOSE 8080

# Define the command to run your app
CMD [ "node", "server.js" ]

  • `FROM`: Specifies the base image to use.
  • `WORKDIR`: Sets the working directory for subsequent instructions.
  • `COPY`: Copies files or directories from your local machine into the container image.
  • `RUN`: Executes commands (like installing dependencies) during the image build process.
  • `EXPOSE`: Informs Docker that the container will listen on the specified network ports at runtime (doesn't actually publish the port).
  • `CMD`: Provides the default command to execute when a container starts from this image.

Build the image: Navigate to the directory containing your `Dockerfile` and run `docker build -t my-node-app .` (The `-t` tags the image with a name, and `.` indicates the build context is the current directory).

Run your custom container: Start a container from your new image: `docker run -p 4000:8080 -d my-node-app`. The `-p 4000:8080` maps port 4000 on your host to port 8080 in the container, and `-d` runs it in detached mode (background).
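
Once the container is running, a quick sketch of how you might verify and then clean it up (assuming the image and port mapping above; the container ID comes from `docker ps`, and `curl` assumes your `server.js` responds on port 8080 inside the container):

```shell
# Confirm the container is running and check its port mapping
docker ps

# Tail the container's logs to see that the app started cleanly
docker logs <container_id>

# Hit the app through the published host port
curl http://localhost:4000/

# Stop and remove the container when you're finished
docker stop <container_id>
docker rm <container_id>
```

`docker logs` is especially useful with `-d`, since a detached container prints nothing to your terminal; if the app crashed on startup, the logs are where you'll see why.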

Conclusion & Next Steps

You've taken your first steps with Docker, understanding images, containers, and the basic commands to manage them. This foundation allows you to start containerizing applications.

Next steps could include:

  • Learning about Docker Compose for multi-container applications.
  • Exploring Docker volumes for persistent data.
  • Understanding Docker networking concepts.
  • Pushing your custom images to Docker Hub or other registries.