Chapter 1: What I did
I have been building my own project for 3-4 weeks now, and I just realized something after I launched my website.
The tech stack is built on Next.js, Node.js, Strapi, OpenAI, MongoDB, etc., and I am using Docker to containerize it.
Initially, I didn't pay much attention because I wanted to launch ASAP, but now that I'm looking at the storage consumption of my Docker images, it's off the charts.
My Next.js Docker image is 3.01 GB.
Yes, it's hilarious. Because I am doing it the wrong way.
The real culprit (it's me, I guess):
My initial Dockerfile for Next.js looked like this:
```dockerfile
# Use official Node image
FROM node:22

# Create app directory
WORKDIR /app

# Copy package files first
COPY package*.json ./

# Install dependencies
RUN npm install

# Copy the rest of the project
COPY . .

# Build the Next.js app
RUN npm run build

# Expose port
EXPOSE 3000

# Start the app
CMD ["npm", "start"]
```
You can clearly see the issue here:
- The default `node:22` image is Debian-based and large (roughly 1 GB before your app is even added).
- `RUN npm install` installs both dependencies and devDependencies, and you don't need devDependencies in production.
- `RUN npm run build` builds and runs in the same single-stage container, so build tools and caches end up in the final image.
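Even without restructuring, you can partially patch the devDependencies problem in a single-stage Dockerfile. A minimal sketch (assuming `next build` still needs devDependencies installed, which is the usual case):

```dockerfile
# -slim is still Debian-based but drops most OS packages
FROM node:22-slim
WORKDIR /app

COPY package*.json ./

# npm ci gives reproducible installs from the lockfile;
# dev deps are still needed here because `next build` uses them
RUN npm ci

COPY . .

# Prune devDependencies in the SAME layer as the build, so the
# final layer's node_modules only contains production packages
RUN npm run build && npm prune --omit=dev

EXPOSE 3000
CMD ["npm", "start"]
```

The catch: the `RUN npm ci` layer still contains the devDependencies, and Docker layers are additive, so the image doesn't shrink as much as you'd hope. That's exactly why the multi-stage approach later in this post is the real fix.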
Why does it matter in production
It matters because the container image size directly affects speed, cost, security, and scalability - especially for real SaaS apps like the ones I am building.
Every time I push to GitHub, CI builds a Docker image, pushes it to ECR, and EC2/Lambda pulls the image.
The entire image must be transferred.
If the image is 1 GB+, that means slow pushes and slow pulls. And in production, that means slower releases, longer downtime during deploys, and slower blue/green swaps.
With this image size (3 GB), if I try to scale to 10 containers in the future, that means 30 GB of pulls.
Huh, it's expensive.
The Solution
Yeah, it's the same Next.js app, no code changes, nothing else, just a change to the Dockerfile, and the image size dropped from 3.01 GB to 219 MB.
The real solution is this:
```dockerfile
# -------- Base Image --------
FROM node:20-alpine AS base
WORKDIR /app

# Install dependencies needed for some npm packages
RUN apk add --no-cache libc6-compat

# -------- Dependencies Stage --------
FROM base AS deps
COPY package.json package-lock.json* ./
RUN npm ci

# -------- Builder Stage --------
FROM base AS builder
COPY --from=deps /app/node_modules ./node_modules
COPY . .

# Build Next.js app
RUN npm run build

# -------- Production Stage --------
FROM node:20-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production

# Create non-root user
RUN addgroup -S nextjs && adduser -S nextjs -G nextjs

# Copy standalone build
COPY --from=builder /app/.next/standalone ./
COPY --from=builder /app/.next/static ./.next/static
COPY --from=builder /app/public ./public

USER nextjs
EXPOSE 3000
ENV PORT=3000

CMD ["node", "server.js"]
```
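One detail this Dockerfile depends on: the `.next/standalone` folder only exists if Next.js is told to produce it. Assuming a JavaScript config file (adjust if you use `next.config.mjs` or `.ts`), it's a one-line setting:

```javascript
// next.config.js
/** @type {import('next').NextConfig} */
const nextConfig = {
  // Emit .next/standalone with server.js and a pruned node_modules,
  // which is exactly what the runner stage copies
  output: 'standalone',
};

module.exports = nextConfig;
```

Without this, the `COPY --from=builder /app/.next/standalone ./` step fails because the folder was never generated.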
- Smaller base image: `FROM node:20-alpine`.
- Multi-stage build: I separated the dependencies stage, the builder stage, and the production stage, which means build tools never reach the production image.
- `COPY --from=builder /app/.next/standalone ./` — this is HUGE. The standalone output contains only the required runtime files and production dependencies, which removes unused packages, devDependencies, and even unnecessary Node modules.
- For security, I used `RUN addgroup -S nextjs && adduser -S nextjs -G nextjs` followed by `USER nextjs`. The old Dockerfile was running as root, but the new one runs as a limited user, so if someone exploits your app, they don't get root access.
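One more low-effort win that pairs with `COPY . .`: a `.dockerignore` file keeps the host's `node_modules`, build output, and secrets out of the build context, so they can't leak into the image or bloat the context upload. A minimal sketch (adjust to your repo):

```
# .dockerignore
node_modules
.next
.git
.env*
*.md
Dockerfile
```

Without this, `COPY . .` happily ships your local `node_modules` and `.next` into the builder stage, slowing every build.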
Real impact in Production
If traffic spikes, the old image will spin up containers slowly, while the new one will be faster. The old image also carried more CVEs; the new one has far fewer.
Let me know if you have used a different approach with Docker.
