The Docker container is the technology that enabled the success of cloud-native applications. Within one file you declare all the dependencies your application needs to run. This is especially useful when scaling the application or moving to a different provider or server environment. You can be certain that after an operating system update on your production server, your application will still run. Or so I thought.
It worked on the old prod! It works on my machine!
When I first came into contact with Docker containers back in 2015, I never could have dreamed where this technology would be used 10 years later. In this document I have compiled the pitfalls I stumbled upon.
tl;dr: Be cautious about what you include in the container image and set proper resource limits when running in production.
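One simple way to keep unneeded files out of the image is a .dockerignore file next to the Dockerfile. A minimal sketch (the entries are examples, adjust them to your project):

```
# Never copy dependencies or VCS history into the build context
node_modules
.git
# Keep logs and local secrets out of the image
*.log
.env
```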
Docker allows you to specify multiple build stages with different base images. This enables you to use the full build tool chain when compiling the application and a streamlined base image for running it.
# [1] Build Stage
FROM node:18 AS builder
WORKDIR /build
COPY src .
# Install build tool chain
RUN npm ci --include=dev
# Build the application
RUN npm run build
# [2] Dependency Stage
FROM node:18 AS dependencies
WORKDIR /dep
COPY package.json package-lock.json ./
# Install only runtime dependencies to /dep/node_modules
RUN npm ci --omit=dev
# [3] Runtime Stage
FROM gcr.io/distroless/nodejs18-debian12
WORKDIR /app
# Copy the built artifact from the builder stage
COPY --from=builder /build/build /app/build
# Copy the runtime dependencies from the dependency stage
COPY --from=dependencies /dep/node_modules /app/node_modules
CMD ["build/server.js"]

In this example, we see an image with three stages. In stage [1], the application is built and stored in /build/build. In stage [2], the runtime dependencies are installed to /dep/node_modules. Both stages are used only at build time. Finally, stage [3] describes the resulting image, into which the dependencies and build artifacts are copied from the preceding stages.
To verify which files are present in your final image, you can use dive. With the command dive $(docker build -q .) you can inspect a locally built image as well.
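If you do not want to install an extra tool, plain Docker can also show what each layer contributes. A quick sketch (run next to your Dockerfile; the variable name is just an example):

```shell
# Build the image quietly and capture its ID
IMAGE_ID=$(docker build -q .)
# List each layer with the instruction that created it and the size it adds
docker image history "$IMAGE_ID"
# Print the total image size in bytes
docker inspect --format '{{.Size}}' "$IMAGE_ID"
```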
When deploying a container to a production environment, you should always define the resources you expect your application to require. This prevents a single misbehaving container from exhausting the host's memory and triggering the kernel's OOM killer against other workloads. It also helps with the noisy neighbor problem.
You can do that by supplying the --memory and --cpus arguments to docker run: docker run --memory=50M --cpus=1 image:version. Or you can set the limits and reservations directly in your Docker Compose file.
version: "3"
services:
  application:
    image: image:version
    deploy:
      resources:
        limits:
          cpus: '1'
          memory: 50M
        reservations:
          cpus: '0.1'
          memory: 20M

Now you are ready to create your first containerized application.
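After starting a container, you can check that the limits were actually applied with docker inspect. A sketch, assuming the limits from above (the container name web is an example):

```shell
# Start a container with a memory limit and a CPU limit
docker run -d --name web --memory=50M --cpus=1 image:version
# Memory limit in bytes; 50M corresponds to 52428800
docker inspect --format '{{.HostConfig.Memory}}' web
# CPU limit in billionths of a CPU; --cpus=1 corresponds to 1000000000
docker inspect --format '{{.HostConfig.NanoCpus}}' web
```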