"But it worked on my machine.." - Deploying Your Backend with Docker and Github Actions to AWS

"But it worked on my machine.." - Deploying Your Backend with Docker and Github Actions to AWS

Featured on Hashnode

Imagine that you’re creating a web application for your side project. You’ve set up the database connection, structured your app using MVC or a similar pattern, and successfully connected the frontend with the backend. Then you build the project and hit “run”.


So your app is running locally. What’s next?

Well, you could stop there. But why not let the world see your project? That’s where deployment techniques come in. Deploying your project to production means letting the world access it through the internet. Sounds simple and nice, right?

Turns out, it’s not that simple. People use different browsers with different versions, their machines run on different operating systems, and even two similar machines might have different internal security configurations that make running a project much harder than it should be.

“It worked on my machine” — The case for containerization tools

Ever heard that phrase? It’s such a hassle to find that your project just won’t run on a user’s device after all the work you put in. Managing different environments for a single app is tiresome, and the possible combinations quickly become too many to handle.

That’s where containerization tools come in. Some of them you may have heard of, like Docker; others, like Podman or LXD, may be new to you. In any case, they are tools that decouple an app from its execution environment.

Tools such as Docker use what’s called a container: a unit of software that packages code together with all its dependencies so the application runs the same regardless of the platform. You can create a Docker image that contains Ubuntu, Python 3.10, and FastAPI, and run it on any machine, whether that machine runs Linux, macOS, or Windows. No more “it works on my machine”!

Cloud Computing for deploying applications

Okay, so containers are great, but how do they help you deploy your app to the internet? The answer: cloud computing. You can package your application into a Docker image and use services from the cloud provider of your choice. On AWS, for example, you can create an EC2 (compute) instance, pull your Docker image from a container registry, and run the container on that virtual machine. With some extra security group and networking configuration, you’ve got yourself your first app deployment!

Now let’s talk about how to containerize and deploy your app in the cloud.

Docker — Containerize your project

Once your project is ready for deployment, you can turn it into a Docker image. Make sure you have the Docker daemon (or Docker Desktop) installed on your machine first.

On Ubuntu, you can run these commands:

# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add the repository to Apt sources:
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

# Verify it's working
sudo docker run hello-world

Next, you’ll need to build the Docker image:

docker build -t my-image-name .

This builds your image from the Dockerfile in the current directory and assigns it the tag (name) my-image-name.
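
docker build expects a Dockerfile in the directory you point it at. For reference, here is a minimal sketch of one, assuming a FastAPI app whose entry point lives in main.py and is served by uvicorn on port 8080 (adjust names and ports to your project):

# Dockerfile: a minimal sketch, not a production-hardened setup
FROM python:3.10-slim

WORKDIR /app

# Install dependencies first so Docker can cache this layer
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the source code
COPY . .

EXPOSE 8080
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8080"]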

And if you need to run the app (or just try it out), you can run the container in detached mode:

docker run -d --name my-container-name -p 8080:8080 "my-image-name"

This command creates a container from my-image-name, names it my-container-name, maps host port 8080 to container port 8080, and runs it in detached mode (so you can keep using your terminal).
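
To check that everything came up, list the running containers and hit the app (assuming it serves HTTP on port 8080):

docker ps                    # the container should show an "Up ..." status
curl http://localhost:8080   # should return a response from your app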

Tip: Prefix commands with sudo (on macOS/Linux), or add your user to the docker group, if you run into docker.sock or permission denied errors.
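
Adding yourself to the docker group is a one-time setup that lets you drop the sudo prefix on Linux:

# Create the docker group if it doesn't exist yet, then add your user to it
sudo groupadd docker || true
sudo usermod -aG docker $USER

# Log out and back in (or run newgrp docker) for the change to take effect
docker ps   # should now work without sudo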

AWS ECR — Where you park your Docker images

After building Docker images, you’ll need a place to store them, called a container registry. Cloud providers such as AWS offer their own managed registries; AWS’s is the Elastic Container Registry (ECR). To push your images to ECR, follow these steps:

  1. Ensure that you have an AWS account and have created a repository in the ECR console.

  2. Log in to your AWS account using the aws-cli, or use aws-actions/configure-aws-credentials@v4 if you’re deploying with GitHub Actions. Note: use aws ecr get-login-password … for private repositories. For public repositories it is advised to use the us-east-1 region:
    $ aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin public.ecr.aws

  3. Tag your image with the full repository URL, then push it to ECR (see the full sequence after this list):
    docker push public.ecr.aws/<your-registry-alias>/<your-image-name>
    -> If you’re using a private repository, push to your private repository URL instead.

  4. The image should be visible on the ECR dashboard.
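
Putting it all together, the push flow looks roughly like this. This is a sketch that assumes a public repository and placeholder names (my-registry-alias, my-image-name); substitute your own:

# Authenticate Docker against the public ECR registry
aws ecr-public get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin public.ecr.aws

# Tag the local image with the full repository URL, then push it
docker tag my-image-name:latest public.ecr.aws/my-registry-alias/my-image-name:latest
docker push public.ecr.aws/my-registry-alias/my-image-name:latest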

AWS EC2 — Invisible computers running in the Cloud

Software needs machines to run on. In this next step you’ll create a virtual machine, SSH into it, and pull and run your Docker image to produce a live deployment of your app.

  1. Create an EC2 instance from your AWS console. You can pick any spec, but Ubuntu 22.04 is the recommended OS.

  2. SSH into your virtual machine. Click the Connect button to use the browser-based shell, or SSH from your terminal using your private key.

  3. Install Docker and the aws-cli on your Ubuntu machine (see the install sketch after this list).

  4. Pull the latest image from your container registry using docker pull public.ecr.aws/abcdefg/my-image-name:latest

  5. Run the image with docker run -d --name my-container -p 8080:8080 "public.ecr.aws/abcdefg/my-image-name:latest"

  6. Run docker ps (or sudo docker ps) to see the running container.
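
For step 3, a minimal install sketch, run on the instance over SSH. The docker.io and awscli packages from Ubuntu’s own repositories are enough for a hobby setup; alternatively, follow Docker’s official install steps shown earlier:

sudo apt-get update
sudo apt-get install -y docker.io awscli

# Sanity-check both tools
sudo docker --version
aws --version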

Are we done? Well, not quite. You still need to configure your EC2 machine to accept inbound connections from the outside.

  1. Set up the EC2 security group to allow inbound connections on ports 22 (SSH), 443 (HTTPS), and 80 (HTTP), plus any other ports you need, such as 5432 if you’re running a Postgres service on your VM (see the CLI example after this list).

  2. Ensure that the VPC the instance is connected to does not restrict connections to the outside network. For hobby projects you can use the default VPC; VPCs can be tricky when you’re starting out, and there are plenty of guides if you want to go deeper.

  3. Ensure that the VM passes all status checks and that the AWS region is not down.

  4. Check the container logs if a problem persists and you don’t know how to fix it: sudo docker logs --tail 50 --follow --timestamps my-container-name
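
The security group rules from step 1 can also be added from the CLI. A sketch, assuming a placeholder security group ID (sg-0123456789abcdef0):

# Open HTTP and HTTPS to the world
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 443 --cidr 0.0.0.0/0

# For SSH, consider restricting the source to your own IP instead of 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 22 --cidr 203.0.113.10/32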

GitHub Actions — Automating your deployment process

Now you have yourself a real deployment! But doing this by hand every time a push or merge happens gets tiring. That’s why building a CI/CD pipeline to automate the job will save you time.

You can refer to this sample deployment workflow:

name: Sample Deployment - by vlecture

on:
  push:
    branches:
      - dev

jobs:
  Dev-Deployment:
    name: Sample
    runs-on: ubuntu-22.04
    permissions:
      id-token: write # This is required for requesting the JWT
      contents: read # This is required for actions/checkout@v2

    env:
      DEV_CONFIG_ENV: ...
      PUB_REGISTRY: ...
      CONTAINER_IMAGE_NAME: ...
      AWS_REGION: ...
      IMAGE_TAG: ${{ github.sha }}

    steps:
      - name: Checkout Sources
        uses: actions/checkout@v4


      - name: Configure AWS Credentials 
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-region: ${{ env.AWS_REGION }}
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          role-to-assume: ${{ secrets.AWS_ROLE_ARN }}
          role-duration-seconds: ...
          role-session-name: ...
          role-skip-session-tagging: ...

      - name: Login to AWS ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v2
        with:
          mask-password: "true"   # mask the ECR password in workflow logs
          registry-type: public   # omit this line when using a private registry

      - name: Build, tag, and push image to AWS ECR
        env:
          DEV_ECR_REGISTRY: ...
          DEV_ECR_REPOSITORY: ...
        run: |
          docker build -t $DEV_ECR_REGISTRY/$DEV_ECR_REPOSITORY:${{ env.IMAGE_TAG }} .
          docker tag "$DEV_ECR_REGISTRY/$DEV_ECR_REPOSITORY:${{ env.IMAGE_TAG }}" "$DEV_ECR_REGISTRY/$DEV_ECR_REPOSITORY:latest"

          docker push $DEV_ECR_REGISTRY/$DEV_ECR_REPOSITORY:${{ env.IMAGE_TAG }}
          docker push $DEV_ECR_REGISTRY/$DEV_ECR_REPOSITORY:latest

      # Re-configure credentials for the region where the VM runs (e.g. us-west-2)
      - name: Configure AWS Credentials 2
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-region: ${{ env.AWS_REGION }}
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          role-to-assume: ${{ secrets.AWS_ROLE_ARN }}
          role-duration-seconds: ...
          role-session-name: ...
          role-skip-session-tagging: ...

      - name: Pull latest image from ECR and run container
        uses: appleboy/ssh-action@v0.1.9
        env:
          DEV_ECR_REGISTRY: ${{ env.PUB_REGISTRY }}
          DEV_ECR_REPOSITORY: ${{ env.CONTAINER_IMAGE_NAME }}
        with:
          host: ${{ secrets.DEV_SSH_HOST }}
          username: ${{ secrets.DEV_SSH_USER }}
          key: ${{ secrets.DEV_SSH_PRIVATEKEY }}
          port: ${{ secrets.DEV_SSH_PORT }}
          debug: true
          envs: DEV_CONFIG_ENV,AWS_SECRET_ACCESS_KEY,AWS_ACCESS_KEY_ID,DEV_ECR_REGISTRY,DEV_ECR_REPOSITORY
          script: |
            mkdir -pv ./app
            cd ./app
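            # Rebuild the .env file from the space-separated DEV_CONFIG_ENV secret (assumes values contain no spaces)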
            echo $DEV_CONFIG_ENV | tr ' ' '\n' > .env


            sudo apt-get update
            aws ecr-public get-login-password --region us-east-1 | sudo docker login --username AWS --password-stdin public.ecr.aws

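            # Stop and remove any previous container and image so redeploys are idempotent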
            sudo docker stop ${{ env.CONTAINER_IMAGE_NAME }} || true
            sudo docker rm -f ${{ env.CONTAINER_IMAGE_NAME }} || true

            sudo docker rmi -f ${{ env.DEV_ECR_REGISTRY }}/${{ env.CONTAINER_IMAGE_NAME }}:latest || true
            sudo docker pull "${{ env.DEV_ECR_REGISTRY }}/${{ env.DEV_ECR_REPOSITORY }}:latest"

            sudo docker run -d --name ${{ env.CONTAINER_IMAGE_NAME }} \
              --restart always \
              -v "$(pwd)/.env:/app/.env:ro" \
              -p "8080:8080" \
              "${{ env.DEV_ECR_REGISTRY }}/${{ env.DEV_ECR_REPOSITORY }}:latest"

Summary

Congrats! You now have a working deployment for your application, and no more “it worked on my machine” surprises. Creating a deployment strategy and pipeline is essential for improving developer experience and boosting your team’s productivity.