

Image by Author | Ideogram
# Introduction
You just pushed your Python app to production, and immediately everything breaks. The app worked perfectly on your laptop, passed all tests in CI, but now it's throwing mysterious import errors in production. Sound familiar? Or maybe you're onboarding a new developer who spends three days just trying to get your project running locally. They're on Windows, you developed on Mac, the production server runs Ubuntu, and somehow everyone has different Python versions and conflicting package installations.
We've all been there, frantically debugging environment-specific issues instead of building features. Docker solves this mess by packaging your entire application environment into a container that runs identically everywhere. No more "works on my machine" excuses. No more spending weekends debugging deployment issues. This article introduces you to Docker and how you can use it to simplify application development. You'll also learn how to containerize a simple Python application using Docker.
🔗 Link to the code on GitHub
# How Docker Works and Why You Need It
Think of Docker as shipping containers, but for your code. When you containerize a Python app, you're not just packaging your code. You're packaging the entire runtime environment: the specific Python version, all of your dependencies, system libraries, environment variables, and even the operating system your app expects.
The result? Your app runs the same way on your laptop, your colleague's Windows machine, the staging server, and production. Every time. But how do you do that?
Well, when you're containerizing Python apps with Docker, you do the following. You package your app into a portable artifact called an "image". Then, you start "containers", which are running instances of images, and run your applications in the containerized environment.
# Building a Python Web API
Instead of starting with toy examples, let's containerize a practical Python application. We'll build a simple FastAPI-based todo API (with Uvicorn as the ASGI server) that demonstrates the patterns you'll use in real projects, and use Pydantic for data validation.
In your project directory, create a requirements.txt file:
fastapi==0.116.1
uvicorn[standard]==0.35.0
pydantic==2.11.7
Now let’s create the fundamental app construction:
# app.py
from fastapi import FastAPI
from pydantic import BaseModel
from typing import List
import os

app = FastAPI(title="Todo API")

# Simple in-memory storage for todos
todos = []
next_id = 1
Add the data models:
class TodoCreate(BaseModel):
    title: str
    completed: bool = False

class Todo(BaseModel):
    id: int
    title: str
    completed: bool
Create a health check endpoint:
@app.get("/")
def health_check():
    return {
        "status": "healthy",
        "environment": os.getenv("ENVIRONMENT", "development"),
        "python_version": os.getenv("PYTHON_VERSION", "unknown")
    }
Add the core todo functionality:
@app.get("/todos", response_model=Checklist[Todo])
def list_todos():
return todos
@app.submit("/todos", response_model=Todo)
def create_todo(todo_data: TodoCreate):
world next_id
new_todo = Todo(
id=next_id,
title=todo_data.title,
accomplished=todo_data.accomplished
)
todos.append(new_todo)
next_id += 1
return new_todo
@app.delete("/todos/{todo_id}")
def delete_todo(todo_id: int):
world todos
todos = [t for t in todos if t.id != todo_id]
return {"message": "Todo deleted"}
Finally, add the server startup code:
if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)
If you run this locally with pip install -r requirements.txt && python app.py, you'll have the API running locally. Now let's move on to containerizing the application.
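Before containerizing, you can sanity-check the endpoints with a few curl requests; the todo title below is just a placeholder:
# Check the health endpoint
curl http://localhost:8000/

# Create a todo
curl -X POST http://localhost:8000/todos -H "Content-Type: application/json" -d '{"title": "Learn Docker"}'

# List all todos
curl http://localhost:8000/todos

# Delete the todo with id 1
curl -X DELETE http://localhost:8000/todos/1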
# Writing Your First Dockerfile
You have your app, you have a list of requirements, and a specific environment your app needs to run. So how do you go from these disparate pieces to one Docker image that contains both your code and dependencies? You specify this by writing a Dockerfile for your application.
Think of it as a recipe for building an image from the different pieces of your project. Create a Dockerfile in your project directory (no extension).
# Start with a base Python image:
FROM python:3.11-slim

# Set environment variables:
ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1 \
    ENVIRONMENT=production \
    PYTHON_VERSION=3.11

# Set the working directory:
WORKDIR /app

# Install dependencies (this order is important for caching):
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy your application code:
COPY . .

# Expose the port and set the startup command:
EXPOSE 8000
CMD ["python", "app.py"]
This Dockerfile builds a Python web application container. It uses the Python 3.11 (slim variant) image as the base, sets the working directory, installs dependencies from requirements.txt, copies the app code, exposes port 8000, and runs the application with python app.py. The structure follows best practice by installing dependencies before copying the code, to take advantage of Docker's layer caching.
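To see why that ordering matters, here is a minimal sketch of the anti-pattern (not part of our Dockerfile), where the code is copied before the dependencies are installed:
# Anti-pattern: COPY . . comes first, so any code change invalidates the cache
# for every layer below it, and pip reinstalls all dependencies on each rebuild.
FROM python:3.11-slim
WORKDIR /app
COPY . .
RUN pip install --no-cache-dir -r requirements.txt
With requirements.txt copied on its own first, editing app.py only invalidates the final COPY layer, and rebuilds reuse the cached dependency layer.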
# Building and Running Your First Container
Now let’s construct and run our containerized software:
# Construct the Docker picture
docker construct -t my-todo-app .
# Run the container
docker run -p 8000:8000 my-todo-app
Whenever you run docker construct
, you may see that every line in your Dockerfile is constructed as a layer. The primary construct would possibly take a bit as Docker downloads the bottom Python picture and installs your dependencies.
⚠️ Use docker buildx build to build an image from the instructions in the Dockerfile using BuildKit.
The -t my-todo-app flag tags your image with a readable name instead of a random hash. The -p 8000:8000 part maps port 8000 inside the container to port 8000 on your host machine.
You can visit http://localhost:8000 to see if your API is running inside the container. The same container will run identically on any machine that has Docker installed.
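As a quick check, you can query the health endpoint and, if you like, override the environment variables baked into the image at run time; the staging value here is only an illustration:
# Query the health endpoint of the running container
curl http://localhost:8000/
# Expected response (approximately):
# {"status": "healthy", "environment": "production", "python_version": "3.11"}

# Override an environment variable when starting the container
docker run -p 8000:8000 -e ENVIRONMENT=staging my-todo-app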
# Essential Docker Commands for Daily Use
Here are the Docker commands you'll use most often:
# Build an image
docker build -t myapp .

# Run a container in the background
docker run -d -p 8000:8000 --name myapp-container myapp

# View running containers
docker ps

# View container logs
docker logs myapp-container

# Get a shell inside a running container
docker exec -it myapp-container /bin/sh

# Stop and remove containers
docker stop myapp-container
docker rm myapp-container

# Clean up unused containers, networks, and images
docker system prune
# Some Docker Best Practices That Matter
After working with Docker in production, here are the practices that actually make a difference.
Always use specific version tags for base images:
# Instead of this
FROM python:3.11
# Use this
FROM python:3.11.7-slim
Create a .dockerignore file to exclude unnecessary files:
__pycache__
*.pyc
.git
.pytest_cache
node_modules
.venv
.env
README.md
Keep your images lean by cleaning up after package managers:
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential \
    && rm -rf /var/lib/apt/lists/*
Always run containers as non-root users in production.
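Here is a minimal sketch of one way to do that in the Dockerfile above; the appuser and appgroup names are arbitrary:
# Create an unprivileged user and group
RUN addgroup --system appgroup && adduser --system --ingroup appgroup appuser

# Switch to the non-root user for the processes that follow
USER appuser
Add these lines near the end of the Dockerfile, before the CMD instruction, so the application process runs without root privileges.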
# Wrapping Up
This tutorial covered the fundamentals, but Docker's ecosystem is vast. Here are the next areas to explore. For production deployments, learn about container orchestration platforms like Kubernetes or cloud-specific services like AWS Elastic Container Service (ECS), Google Cloud Run, or Azure Container Instances.
Explore Docker's security features, including secrets management, image scanning, and rootless Docker. Learn about optimizing Docker images for faster builds and smaller sizes. Set up automated build-and-deploy pipelines using continuous integration/continuous delivery (CI/CD) systems such as GitHub Actions and GitLab CI.
Happy learning!
Bala Priya C is a developer and technical writer from India. She likes working at the intersection of math, programming, data science, and content creation. Her areas of interest and expertise include DevOps, data science, and natural language processing. She enjoys reading, writing, coding, and coffee! Currently, she's working on learning and sharing her knowledge with the developer community by authoring tutorials, how-to guides, opinion pieces, and more. Bala also creates engaging resource overviews and coding tutorials.