Introduction: Why CI/CD is Essential for Your SMB
CI/CD (Continuous Integration/Continuous Deployment) automates testing and deployment of your applications. For an SMB, this means fewer human errors, faster releases, and a team that can focus on development rather than repetitive tasks.
GitHub Actions and Docker form the perfect combination for small IT teams: GitHub Actions offers 2,000 free minutes per month for private repositories, and Docker ensures your application runs the same way everywhere.
This tutorial is for developers, CTOs, and technical managers at SMBs who want to industrialize their deployments without investing in complex tools. We’ll build a complete pipeline that automatically tests, builds, and deploys your application with every push to the main branch.
Technical Prerequisites and Initial Setup
Before starting, make sure you have:
- A GitHub account with a repository containing your source code
- Docker and Docker Compose installed on your local machine
- Basic knowledge of Git and command line
- A deployment server accessible via SSH (OVH VPS, Coolify, or other)
Step 1: Dockerize Your Application
The first step is creating an optimized Dockerfile. Here’s an example for a Node.js application with multi-stage build:
```dockerfile
# Stage 1: Build
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
# Install all dependencies (dev included) so the build step can run
RUN npm ci
COPY . .
RUN npm run build
# Drop dev dependencies before they are copied into the final image
RUN npm prune --omit=dev

# Stage 2: Production
FROM node:20-alpine
WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/package.json ./
ENV NODE_ENV=production
EXPOSE 3000
CMD ["node", "dist/index.js"]
```

Multi-stage builds reduce the final image size by keeping only the files needed for execution. Here, we go from ~1.2 GB to ~150 MB.
Next, create a docker-compose.yml file for the local environment:
```yaml
version: '3.8'

services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      - DATABASE_URL=${DATABASE_URL}
      - API_KEY=${API_KEY}
    volumes:
      - ./logs:/app/logs
    restart: unless-stopped

  postgres:
    image: postgres:16-alpine
    environment:
      - POSTGRES_PASSWORD=${DB_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data

volumes:
  postgres_data:
```

Test your configuration locally:

```bash
docker-compose up --build
```

For environment variables, create a `.env.example` file with dummy values that you'll commit, and a local `.env` file ignored by Git.
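As a sketch, a `.env.example` matching the variables referenced in the compose file above might look like this (all values are placeholders):

```bash
# .env.example — committed to Git with dummy values only
DATABASE_URL=postgres://user:change-me@postgres:5432/app
API_KEY=change-me
DB_PASSWORD=change-me
```

Copy it to `.env` and fill in real values; add `.env` to `.gitignore` so secrets never land in the repository.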
Step 2: Create Your First GitHub Actions Workflow
GitHub Actions uses YAML files in the .github/workflows folder. Create the file .github/workflows/deploy.yml:
```yaml
name: CI/CD Pipeline

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]
  workflow_dispatch:

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Log in to Container Registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Extract metadata
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          tags: |
            type=ref,event=branch
            type=sha,prefix={{branch}}-

      - name: Build and push Docker image
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          cache-from: type=gha
          cache-to: type=gha,mode=max
```

This workflow triggers on three events:
- Push to the `main` branch
- Pull request targeting `main`
- Manual trigger via the GitHub interface
The build job uses GitHub Container Registry (free and integrated) to store your Docker images. The tag includes the commit SHA to trace each version.
`workflow_dispatch` allows you to manually trigger the pipeline from GitHub's Actions tab, which is useful for exceptional deployments.

Step 3: Automate Testing and Docker Image Building
Let’s add a test job before the build. Modify your workflow:
```yaml
jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'

      - name: Install dependencies
        run: npm ci

      - name: Run linter
        run: npm run lint

      - name: Run tests
        run: npm test

      - name: Check build
        run: npm run build

  build:
    needs: test
    runs-on: ubuntu-latest
    # ... rest of the build job
```

The `needs: test` dependency ensures the build only runs if the tests pass. The npm cache (`cache: 'npm'`) speeds up dependency installation.
For dynamic Docker tags, the docker/metadata-action configuration automatically generates:

- `main-abc1234` for a commit on main
- `pr-42` for a pull request
The GitHub Actions cache system (cache-from/cache-to: type=gha) reuses Docker layers between builds. On an average project, this reduces build time from 5 minutes to 1 minute.
Update the build step so that pull requests build the image without pushing it:

```yaml
      - name: Build and push Docker image
        uses: docker/build-push-action@v5
        with:
          context: .
          push: ${{ github.event_name != 'pull_request' }}
          tags: ${{ steps.meta.outputs.tags }}
          cache-from: type=gha
          cache-to: type=gha,mode=max
```

The `mode=max` cache stores all intermediate layers, not just the final result. It uses more storage but is much faster. On failure, GitHub Actions automatically sends an email notification. You can also configure Slack:

```yaml
      - name: Notify on failure
        if: failure()
        uses: 8398a7/action-slack@v3
        with:
          status: ${{ job.status }}
          webhook_url: ${{ secrets.SLACK_WEBHOOK }}
```

Step 4: Automatically Deploy to Your Server
For deployment, we add a job that connects to the server via SSH and restarts the containers. First, add these secrets in GitHub settings (Settings > Secrets and variables > Actions):
- `SSH_HOST`: IP address or domain of your server
- `SSH_USER`: SSH user (typically `root` or `deploy`)
- `SSH_KEY`: SSH private key (generate with `ssh-keygen -t ed25519`)
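As a sketch, generating a dedicated deploy key pair could look like this (the `deploy_key` file name and the server paths are illustrative):

```shell
# Generate a passphrase-less ed25519 key pair so CI can connect non-interactively
ssh-keygen -t ed25519 -N "" -f ./deploy_key -C "github-actions-deploy"

# The contents of ./deploy_key (the PRIVATE key) go into the SSH_KEY secret on GitHub.
# The contents of ./deploy_key.pub go on the server, appended to the deploy user's
# ~/.ssh/authorized_keys, e.g.:
#   cat deploy_key.pub | ssh deploy@your-server 'cat >> ~/.ssh/authorized_keys'
```

Prefer a dedicated low-privilege `deploy` user over `root`, and never reuse your personal SSH key for CI.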
Here’s the deployment job:
```yaml
  deploy:
    needs: build
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main' && github.event_name == 'push'

    steps:
      - name: Deploy to production
        uses: appleboy/ssh-action@v1.0.0
        with:
          host: ${{ secrets.SSH_HOST }}
          username: ${{ secrets.SSH_USER }}
          key: ${{ secrets.SSH_KEY }}
          script: |
            cd /opt/app

            # Log in to GitHub Container Registry
            echo ${{ secrets.GITHUB_TOKEN }} | docker login ghcr.io -u ${{ github.actor }} --password-stdin

            # metadata-action tags with the short (7-character) SHA by default
            IMAGE_TAG=main-$(echo ${{ github.sha }} | cut -c1-7)
            export IMAGE_TAG

            # Pull the latest image
            docker pull ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:$IMAGE_TAG

            # Recreate only the app container with the new image
            docker-compose up -d --no-deps app

            # Clean up images older than 72 hours
            docker image prune -af --filter "until=72h"
```

Because the image is pulled before the container is recreated, the restart itself takes only a few seconds. For true zero-downtime (starting the new container before stopping the old one), you would need a reverse proxy or an orchestrator on top of Compose.
For automatic rollback on failure, extend the script with a health check:

```yaml
          script: |
            cd /opt/app

            # Remember the image currently in use so we can roll back to it
            CURRENT_IMAGE=$(docker-compose images -q app)

            # Deploy the new version
            IMAGE_TAG=main-$(echo ${{ github.sha }} | cut -c1-7)
            export IMAGE_TAG
            docker pull ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:$IMAGE_TAG
            docker-compose up -d --no-deps app

            # Wait, then check the health endpoint
            sleep 10
            if ! curl -f http://localhost:3000/health; then
              echo "Health check failed, rolling back..."
              # Re-point the tag at the previous image and recreate the container
              docker tag "$CURRENT_IMAGE" ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:$IMAGE_TAG
              docker-compose up -d --no-deps --force-recreate app
              exit 1
            fi

            echo "Deployment successful"
```

This assumes your application exposes a `/health` endpoint that returns a 200 status when everything is working correctly. On the server, create a docker-compose.prod.yml file:
```yaml
version: '3.8'

services:
  app:
    image: ghcr.io/your-org/your-app:${IMAGE_TAG}
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production
      - DATABASE_URL=${DATABASE_URL}
    restart: unless-stopped
    healthcheck:
      # node:20-alpine ships busybox wget but not curl
      test: ["CMD", "wget", "-qO-", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
```

Conclusion and Advanced Optimizations
You now have a complete CI/CD pipeline that automatically tests, builds, and deploys your application. With every push to main, GitHub Actions:
- Runs tests and linting
- Builds the Docker image with a unique tag
- Pushes the image to GitHub Container Registry
- Connects to the server and deploys the new version
- Verifies application health
Costs are controlled: GitHub Actions offers 2,000 free minutes per month for private repositories (unlimited for public). For an SMB with 20 deployments per month at 3 minutes each, you stay well within the free tier.
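A quick back-of-envelope check of that claim (figures taken from the paragraph above):

```javascript
// Rough monthly CI usage vs. the GitHub Actions free tier for private repos
const deploysPerMonth = 20; // example figure from the text
const minutesPerRun = 3;    // test + build + deploy
const freeMinutes = 2000;

const used = deploysPerMonth * minutesPerRun;
console.log(`${used} minutes used, ${freeMinutes - used} free minutes left`);
// → 60 minutes used, 1940 free minutes left
```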
In the next article, we’ll explore how to add monitoring with Prometheus and Grafana, centralize logs with Loki, and manage multi-environment deployments (staging, production) with reusable workflows.
Additional resources: