
Add an AWS Application Load Balancer to ECS Fargate via GitHub Actions (No More IP Chasing)

In Part 2 we automated deployments with GitHub Actions — push to main, get a new build live. The problem: every new task gets a new public IP. Which means updating your clients with a new address after every single deploy.

Today we fix that permanently. By the end of this post, you’ll have one stable DNS name that never changes, no matter how many times you deploy.

This is Part 3 of the series.

📺 Prefer to watch? Full walkthrough on YouTube


Why Tasks Keep Getting New IPs

Every time GitHub Actions registers a new task definition and forces a deployment, ECS spins up a fresh Fargate task. That task gets assigned a new public IP by AWS. There’s no way around this — it’s just how Fargate networking works.

The fix is straightforward: put an Application Load Balancer (ALB) in front of the tasks. The ALB gets a DNS name that stays the same forever. Tasks come and go behind it, their IPs change, and nothing on the outside ever notices.


What We’re Adding

Here’s the updated architecture:

git push to main

GitHub Actions workflow

Step 1: Build + push image to ECR        (unchanged)

Step 2: Resolve VPC and subnets          (new)

Step 3: Configure security groups        (new)

Step 4: ALB, target group, listener      (new)

Step 5: Register task definition         (moved from deploy step)

Step 6: Deploy ECS service               (updated — now wires ALB)

Step 7: Verify + print stable URL        (updated — same URL every time)

Three new pieces need to work together for the ALB to function:

Target group — tracks which Fargate tasks are healthy and ready. The ALB uses this to know where to send traffic.

Listener — the routing rule on the ALB. It says: “when a request comes in on port 80, forward it to this target group.” Without the listener, the ALB and target group exist independently and do nothing.

Load balancer — sits on the public internet, takes traffic on port 80, and hands it off through the listener and target group to your containers.


The Workflow, Step by Step

Step 1 — Build and Push (Unchanged)

Nothing changes here. Build the image, push to ECR, move on. Check Part 2 if you need a refresher.


Step 2 — Resolve VPC and Subnets

Before creating an ALB, you need to know which network it lives in and which subnets are available.

- name: Resolve VPC and subnets
  id: network
  run: |
    VPC_ID=$(aws ec2 describe-vpcs \
      --filters "Name=isDefault,Values=true" \
      --query 'Vpcs[0].VpcId' --output text)

    # Get ALL default subnets — ALB requires at least 2 AZs
    SUBNET_IDS=$(aws ec2 describe-subnets \
      --filters "Name=vpc-id,Values=$VPC_ID" "Name=defaultForAz,Values=true" \
      --query 'Subnets[*].SubnetId' --output text | tr '\t' ',')

    echo "vpc_id=$VPC_ID" >> $GITHUB_OUTPUT
    echo "subnet_ids=$SUBNET_IDS" >> $GITHUB_OUTPUT

The key thing here: we’re grabbing all the default subnets, not just one. The ALB has a hard requirement — it must span at least two availability zones. AWS will reject the request outright if you only give it one subnet. Grabbing them dynamically means this works regardless of which region you’re in.
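If you'd rather fail fast with a clear message than let the create-load-balancer call error out later, a small guard after this step can check the subnet count. This is a hypothetical addition, not part of the workflow above; SUBNET_IDS is stubbed here so the snippet runs anywhere:

```shell
# Hypothetical guard: an ALB needs subnets in at least two AZs.
# SUBNET_IDS normally comes from the describe-subnets call above.
SUBNET_IDS="subnet-0abc,subnet-0def"
COUNT=$(echo "$SUBNET_IDS" | tr ',' '\n' | wc -l | tr -d ' ')
if [ "$COUNT" -lt 2 ]; then
  echo "ALB requires at least 2 subnets in different AZs, found $COUNT" >&2
  exit 1
fi
echo "OK: $COUNT subnets"
```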


Step 3 — Security Groups

This step creates two security groups with a specific relationship between them.

- name: Configure security groups
  id: security
  env:
    VPC_ID: ${{ steps.network.outputs.vpc_id }}
  run: |
    # --- ALB security group (public-facing, port 80) ---
    ALB_SG_ID=$(aws ec2 describe-security-groups \
      --filters "Name=group-name,Values=gif-app-alb-sg" \
                "Name=vpc-id,Values=$VPC_ID" \
      --query 'SecurityGroups[0].GroupId' --output text 2>/dev/null)

    if [ "$ALB_SG_ID" == "None" ] || [ -z "$ALB_SG_ID" ]; then
      ALB_SG_ID=$(aws ec2 create-security-group \
        --group-name gif-app-alb-sg \
        --description "ALB public security group" \
        --vpc-id $VPC_ID \
        --query 'GroupId' --output text)

      aws ec2 authorize-security-group-ingress \
        --group-id $ALB_SG_ID \
        --protocol tcp --port 80 --cidr 0.0.0.0/0
    fi

    # --- ECS task security group (only accepts traffic FROM the ALB) ---
    TASK_SG_ID=$(aws ec2 describe-security-groups \
      --filters "Name=group-name,Values=gif-app-task-sg" \
                "Name=vpc-id,Values=$VPC_ID" \
      --query 'SecurityGroups[0].GroupId' --output text 2>/dev/null)

    if [ "$TASK_SG_ID" == "None" ] || [ -z "$TASK_SG_ID" ]; then
      TASK_SG_ID=$(aws ec2 create-security-group \
        --group-name gif-app-task-sg \
        --description "ECS task security group" \
        --vpc-id $VPC_ID \
        --query 'GroupId' --output text)

      # Source is the ALB security group — not a CIDR range
      aws ec2 authorize-security-group-ingress \
        --group-id $TASK_SG_ID \
        --protocol tcp --port 5000 \
        --source-group $ALB_SG_ID
    fi

    echo "alb_sg_id=$ALB_SG_ID" >> $GITHUB_OUTPUT
    echo "task_sg_id=$TASK_SG_ID" >> $GITHUB_OUTPUT

The check-before-create pattern on both groups makes this idempotent — safe to run on every deploy without creating duplicate resources. If a group exists, reuse it. If not, create it.
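Stripped of the AWS specifics, the pattern looks like this; the lookup and create functions are stand-ins for the aws describe/create calls, so the snippet runs without credentials:

```shell
# The check-before-create pattern in isolation.
lookup() { echo "None"; }        # "None" is what --output text prints on no match
create() { echo "sg-0new123"; }  # stand-in for the create call

SG_ID=$(lookup)
if [ "$SG_ID" = "None" ] || [ -z "$SG_ID" ]; then
  SG_ID=$(create)
fi
echo "using $SG_ID"
```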

The relationship between the two groups is the important part: the ALB group accepts traffic from the whole internet on port 80, while the task group only accepts traffic on port 5000 whose source is the ALB group itself. Your containers are never directly reachable from the internet; everything has to come through the load balancer.


Step 4 — ALB, Target Group, and Listener

Three resources, built in order. Each depends on the one before it.

- name: Setup ALB, target group, and listener
  id: alb
  env:
    VPC_ID: ${{ steps.network.outputs.vpc_id }}
    SUBNET_IDS: ${{ steps.network.outputs.subnet_ids }}
    ALB_SG_ID: ${{ steps.security.outputs.alb_sg_id }}
  run: |
    # --- Target group ---
    # (|| true: describe exits nonzero when the group doesn't exist yet,
    # and the step's default shell runs with -e)
    TG_ARN=$(aws elbv2 describe-target-groups \
      --names gif-app-tg \
      --query 'TargetGroups[0].TargetGroupArn' --output text 2>/dev/null || true)

    if [ "$TG_ARN" == "None" ] || [ -z "$TG_ARN" ]; then
      TG_ARN=$(aws elbv2 create-target-group \
        --name gif-app-tg \
        --protocol HTTP \
        --port 5000 \
        --vpc-id $VPC_ID \
        --target-type ip \
        --health-check-path / \
        --healthy-threshold-count 2 \
        --unhealthy-threshold-count 3 \
        --query 'TargetGroups[0].TargetGroupArn' --output text)
    fi

    # --- Load balancer ---
    ALB_ARN=$(aws elbv2 describe-load-balancers \
      --names gif-app-alb \
      --query 'LoadBalancers[0].LoadBalancerArn' --output text 2>/dev/null || true)

    if [ "$ALB_ARN" == "None" ] || [ -z "$ALB_ARN" ]; then
      # Convert comma-separated subnets to space-separated for CLI
      SUBNET_LIST=$(echo $SUBNET_IDS | tr ',' ' ')

      ALB_ARN=$(aws elbv2 create-load-balancer \
        --name gif-app-alb \
        --subnets $SUBNET_LIST \
        --security-groups $ALB_SG_ID \
        --scheme internet-facing \
        --type application \
        --query 'LoadBalancers[0].LoadBalancerArn' --output text)
    fi

    # --- Listener ---
    LISTENER_ARN=$(aws elbv2 describe-listeners \
      --load-balancer-arn $ALB_ARN \
      --query 'Listeners[?Port==`80`].ListenerArn' --output text 2>/dev/null)

    if [ -z "$LISTENER_ARN" ]; then
      aws elbv2 create-listener \
        --load-balancer-arn $ALB_ARN \
        --protocol HTTP \
        --port 80 \
        --default-actions Type=forward,TargetGroupArn=$TG_ARN
    fi

    ALB_DNS=$(aws elbv2 describe-load-balancers \
      --load-balancer-arns $ALB_ARN \
      --query 'LoadBalancers[0].DNSName' --output text)

    echo "tg_arn=$TG_ARN" >> $GITHUB_OUTPUT
    echo "alb_dns=$ALB_DNS" >> $GITHUB_OUTPUT

On the target group: --target-type ip is required for Fargate. Fargate tasks don’t run on EC2 instances you own — AWS registers them by their private IP directly. If you use instance instead, your tasks will spin up and the ALB will never know they exist.

The health check settings (healthy-threshold-count 2, unhealthy-threshold-count 3) give containers time to start without getting yanked, while still catching ones that are genuinely broken.
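If tasks come up but the ALB returns 503s, the first place to look is target health. The real call is shown in the comment; the loop below just sketches how you might gate on the states it prints, with sample output stubbed in so the snippet runs without AWS credentials:

```shell
# Real call (needs credentials and the target group ARN from the alb step):
#   aws elbv2 describe-target-health --target-group-arn "$TG_ARN" \
#     --query 'TargetHealthDescriptions[*].TargetHealth.State' --output text
# Sample of what it prints; real runs may also show "initial" or "unhealthy".
STATES="healthy healthy"
ALL_HEALTHY=yes
for s in $STATES; do
  [ "$s" = "healthy" ] || ALL_HEALTHY=no
done
echo "all healthy: $ALL_HEALTHY"
```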

On the load balancer: We pass all the subnets from Step 2 — that’s why we grabbed all of them. scheme internet-facing gives it a public DNS name rather than keeping it internal to the VPC.

On the listener: This is the piece that connects the ALB to the target group. Without it, both resources exist in the same account but have no idea about each other. The listener is the routing rule that makes them a system.


Step 5 — Register Task Definition

- name: Register task definition
  id: task-def
  env:
    ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
    ECR_REPOSITORY: ${{ secrets.ECR_REPOSITORY }}
  run: |
    TASK_DEF_ARN=$(aws ecs register-task-definition \
      --family gif-app \
      --network-mode awsvpc \
      --requires-compatibilities FARGATE \
      --cpu "256" --memory "512" \
      --execution-role-arn arn:aws:iam::${{ secrets.AWS_ACCOUNT_ID }}:role/ecsTaskExecutionRole \
      --container-definitions "[{
        \"name\": \"gif-app\",
        \"image\": \"$ECR_REGISTRY/$ECR_REPOSITORY:latest\",
        \"portMappings\": [{\"containerPort\": 5000, \"protocol\": \"tcp\"}],
        \"essential\": true
      }]" \
      --query 'taskDefinition.taskDefinitionArn' --output text)

    echo "task_def_arn=$TASK_DEF_ARN" >> $GITHUB_OUTPUT

Every deploy registers a new revision. AWS keeps the full history, so you can always roll back to a previous version. We query back the exact ARN of the revision we just created — ECS needs the full ARN to know which specific revision to run.
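Rolling back is just pointing the service at an earlier revision with update-service. A sketch of a helper that derives the previous revision from the ARN this step outputs (the ARN below is a made-up example, and the printed command uses placeholder cluster/service names):

```shell
# Hypothetical rollback helper: derive the previous revision from the
# task definition ARN and print the update-service command to run.
TASK_DEF_ARN="arn:aws:ecs:us-east-1:123456789012:task-definition/gif-app:8"  # example
FAMILY_REV=${TASK_DEF_ARN##*/}        # -> gif-app:8
REV=${FAMILY_REV##*:}                 # -> 8
PREV="${FAMILY_REV%:*}:$((REV - 1))"  # -> gif-app:7
echo "aws ecs update-service --cluster <cluster> --service <service> --task-definition $PREV"
```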


Step 6 — Deploy ECS Service

Same update-or-create pattern from Part 2, with one critical addition: --load-balancers on the create path.

- name: Deploy ECS service
  env:
    ECS_CLUSTER: ${{ secrets.ECS_CLUSTER }}
    ECS_SERVICE: ${{ secrets.ECS_SERVICE }}
    TASK_DEF_ARN: ${{ steps.task-def.outputs.task_def_arn }}
    TG_ARN: ${{ steps.alb.outputs.tg_arn }}
    TASK_SG_ID: ${{ steps.security.outputs.task_sg_id }}
    SUBNET_IDS: ${{ steps.network.outputs.subnet_ids }}
  run: |
    SERVICE_STATUS=$(aws ecs describe-services \
      --cluster $ECS_CLUSTER --services $ECS_SERVICE \
      --query 'services[0].status' --output text 2>/dev/null)

    if [ "$SERVICE_STATUS" == "ACTIVE" ]; then
      aws ecs update-service \
        --cluster $ECS_CLUSTER \
        --service $ECS_SERVICE \
        --task-definition $TASK_DEF_ARN \
        --force-new-deployment
    else
      aws ecs create-service \
        --cluster $ECS_CLUSTER \
        --service-name $ECS_SERVICE \
        --task-definition $TASK_DEF_ARN \
        --desired-count 1 \
        --launch-type FARGATE \
        --network-configuration "awsvpcConfiguration={
          subnets=[$SUBNET_IDS],
          securityGroups=[$TASK_SG_ID],
          assignPublicIp=ENABLED
        }" \
        --load-balancers "targetGroupArn=$TG_ARN,containerName=gif-app,containerPort=5000"
    fi

⚠️ The gotcha that trips everyone up: for a long time, a load balancer could only be attached to an ECS service at creation time, and that's still the safe assumption here. Newer ECS API versions can change a service's load balancer configuration through update-service, but only with the rolling deployment controller and an up-to-date CLI, so don't count on it. If you already have a service from Part 2 that was created without an ALB, the simplest path is to delete that service and let this step recreate it with the ALB wired in.


Step 7 — Verify Deployment

- name: Verify deployment
  env:
    ECS_CLUSTER: ${{ secrets.ECS_CLUSTER }}
    ECS_SERVICE: ${{ secrets.ECS_SERVICE }}
    ALB_DNS: ${{ steps.alb.outputs.alb_dns }}
  run: |
    aws ecs wait services-stable \
      --cluster $ECS_CLUSTER \
      --services $ECS_SERVICE

    echo "✅ App is live at: http://$ALB_DNS"

wait services-stable blocks the workflow until all desired tasks are running and passing health checks. Without this, the workflow would mark itself successful the moment the deploy command was sent — even if the container is still starting up or silently crashing.
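wait services-stable covers ECS's view of the world; if you also want an end-to-end check, a retry loop around curl works. This is an optional extra, not part of the workflow above. The helper is generic, and the real usage is shown in a comment (it's exercised here with a stub command so the snippet runs anywhere):

```shell
# Generic retry helper: run a command up to $1 times, sleeping $2 seconds
# between attempts; succeeds on the first success.
retry() {
  attempts=$1; delay=$2; shift 2
  i=1
  while [ "$i" -le "$attempts" ]; do
    "$@" && return 0
    sleep "$delay"
    i=$((i + 1))
  done
  return 1
}

# Real usage after deploy (ALB_DNS comes from the alb step):
#   retry 10 15 curl -sf "http://$ALB_DNS/" >/dev/null && echo "app answering"
retry 3 0 true && echo "retry helper works"
```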

The URL printed here is the stable one. It doesn’t change when you deploy. It doesn’t change when your tasks restart. It stays the same for as long as the load balancer exists. Bookmark it and share it with clients.


Complete Workflow File

name: Deploy to AWS ECS

on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ secrets.AWS_REGION }}

      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v2

      - name: Build and push image to ECR
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          ECR_REPOSITORY: ${{ secrets.ECR_REPOSITORY }}
        run: |
          docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:latest .
          docker push $ECR_REGISTRY/$ECR_REPOSITORY:latest

      - name: Resolve VPC and subnets
        id: network
        run: |
          VPC_ID=$(aws ec2 describe-vpcs \
            --filters "Name=isDefault,Values=true" \
            --query 'Vpcs[0].VpcId' --output text)
          SUBNET_IDS=$(aws ec2 describe-subnets \
            --filters "Name=vpc-id,Values=$VPC_ID" "Name=defaultForAz,Values=true" \
            --query 'Subnets[*].SubnetId' --output text | tr '\t' ',')
          echo "vpc_id=$VPC_ID" >> $GITHUB_OUTPUT
          echo "subnet_ids=$SUBNET_IDS" >> $GITHUB_OUTPUT

      - name: Configure security groups
        id: security
        env:
          VPC_ID: ${{ steps.network.outputs.vpc_id }}
        run: |
          ALB_SG_ID=$(aws ec2 describe-security-groups \
            --filters "Name=group-name,Values=gif-app-alb-sg" "Name=vpc-id,Values=$VPC_ID" \
            --query 'SecurityGroups[0].GroupId' --output text 2>/dev/null)
          if [ "$ALB_SG_ID" == "None" ] || [ -z "$ALB_SG_ID" ]; then
            ALB_SG_ID=$(aws ec2 create-security-group \
              --group-name gif-app-alb-sg --description "ALB public security group" \
              --vpc-id $VPC_ID --query 'GroupId' --output text)
            aws ec2 authorize-security-group-ingress \
              --group-id $ALB_SG_ID --protocol tcp --port 80 --cidr 0.0.0.0/0
          fi
          TASK_SG_ID=$(aws ec2 describe-security-groups \
            --filters "Name=group-name,Values=gif-app-task-sg" "Name=vpc-id,Values=$VPC_ID" \
            --query 'SecurityGroups[0].GroupId' --output text 2>/dev/null)
          if [ "$TASK_SG_ID" == "None" ] || [ -z "$TASK_SG_ID" ]; then
            TASK_SG_ID=$(aws ec2 create-security-group \
              --group-name gif-app-task-sg --description "ECS task security group" \
              --vpc-id $VPC_ID --query 'GroupId' --output text)
            aws ec2 authorize-security-group-ingress \
              --group-id $TASK_SG_ID --protocol tcp --port 5000 --source-group $ALB_SG_ID
          fi
          echo "alb_sg_id=$ALB_SG_ID" >> $GITHUB_OUTPUT
          echo "task_sg_id=$TASK_SG_ID" >> $GITHUB_OUTPUT

      - name: Setup ALB, target group, and listener
        id: alb
        env:
          VPC_ID: ${{ steps.network.outputs.vpc_id }}
          SUBNET_IDS: ${{ steps.network.outputs.subnet_ids }}
          ALB_SG_ID: ${{ steps.security.outputs.alb_sg_id }}
        run: |
          TG_ARN=$(aws elbv2 describe-target-groups \
            --names gif-app-tg \
            --query 'TargetGroups[0].TargetGroupArn' --output text 2>/dev/null || true)
          if [ "$TG_ARN" == "None" ] || [ -z "$TG_ARN" ]; then
            TG_ARN=$(aws elbv2 create-target-group \
              --name gif-app-tg --protocol HTTP --port 5000 \
              --vpc-id $VPC_ID --target-type ip --health-check-path / \
              --healthy-threshold-count 2 --unhealthy-threshold-count 3 \
              --query 'TargetGroups[0].TargetGroupArn' --output text)
          fi
          ALB_ARN=$(aws elbv2 describe-load-balancers \
            --names gif-app-alb \
            --query 'LoadBalancers[0].LoadBalancerArn' --output text 2>/dev/null || true)
          if [ "$ALB_ARN" == "None" ] || [ -z "$ALB_ARN" ]; then
            SUBNET_LIST=$(echo $SUBNET_IDS | tr ',' ' ')
            ALB_ARN=$(aws elbv2 create-load-balancer \
              --name gif-app-alb --subnets $SUBNET_LIST \
              --security-groups $ALB_SG_ID --scheme internet-facing --type application \
              --query 'LoadBalancers[0].LoadBalancerArn' --output text)
          fi
          LISTENER_ARN=$(aws elbv2 describe-listeners \
            --load-balancer-arn $ALB_ARN \
            --query 'Listeners[?Port==`80`].ListenerArn' --output text 2>/dev/null)
          if [ -z "$LISTENER_ARN" ]; then
            aws elbv2 create-listener \
              --load-balancer-arn $ALB_ARN --protocol HTTP --port 80 \
              --default-actions Type=forward,TargetGroupArn=$TG_ARN
          fi
          ALB_DNS=$(aws elbv2 describe-load-balancers \
            --load-balancer-arns $ALB_ARN \
            --query 'LoadBalancers[0].DNSName' --output text)
          echo "tg_arn=$TG_ARN" >> $GITHUB_OUTPUT
          echo "alb_dns=$ALB_DNS" >> $GITHUB_OUTPUT

      - name: Register task definition
        id: task-def
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          ECR_REPOSITORY: ${{ secrets.ECR_REPOSITORY }}
        run: |
          TASK_DEF_ARN=$(aws ecs register-task-definition \
            --family gif-app --network-mode awsvpc \
            --requires-compatibilities FARGATE --cpu "256" --memory "512" \
            --execution-role-arn arn:aws:iam::${{ secrets.AWS_ACCOUNT_ID }}:role/ecsTaskExecutionRole \
            --container-definitions "[{
              \"name\": \"gif-app\",
              \"image\": \"$ECR_REGISTRY/$ECR_REPOSITORY:latest\",
              \"portMappings\": [{\"containerPort\": 5000, \"protocol\": \"tcp\"}],
              \"essential\": true
            }]" \
            --query 'taskDefinition.taskDefinitionArn' --output text)
          echo "task_def_arn=$TASK_DEF_ARN" >> $GITHUB_OUTPUT

      - name: Deploy ECS service
        env:
          ECS_CLUSTER: ${{ secrets.ECS_CLUSTER }}
          ECS_SERVICE: ${{ secrets.ECS_SERVICE }}
          TASK_DEF_ARN: ${{ steps.task-def.outputs.task_def_arn }}
          TG_ARN: ${{ steps.alb.outputs.tg_arn }}
          TASK_SG_ID: ${{ steps.security.outputs.task_sg_id }}
          SUBNET_IDS: ${{ steps.network.outputs.subnet_ids }}
        run: |
          SERVICE_STATUS=$(aws ecs describe-services \
            --cluster $ECS_CLUSTER --services $ECS_SERVICE \
            --query 'services[0].status' --output text 2>/dev/null)
          if [ "$SERVICE_STATUS" == "ACTIVE" ]; then
            aws ecs update-service \
              --cluster $ECS_CLUSTER --service $ECS_SERVICE \
              --task-definition $TASK_DEF_ARN --force-new-deployment
          else
            aws ecs create-service \
              --cluster $ECS_CLUSTER --service-name $ECS_SERVICE \
              --task-definition $TASK_DEF_ARN --desired-count 1 \
              --launch-type FARGATE \
              --network-configuration "awsvpcConfiguration={
                subnets=[$SUBNET_IDS],
                securityGroups=[$TASK_SG_ID],
                assignPublicIp=ENABLED
              }" \
              --load-balancers "targetGroupArn=$TG_ARN,containerName=gif-app,containerPort=5000"
          fi

      - name: Verify deployment
        env:
          ECS_CLUSTER: ${{ secrets.ECS_CLUSTER }}
          ECS_SERVICE: ${{ secrets.ECS_SERVICE }}
          ALB_DNS: ${{ steps.alb.outputs.alb_dns }}
        run: |
          aws ecs wait services-stable \
            --cluster $ECS_CLUSTER --services $ECS_SERVICE
          echo "✅ App is live at: http://$ALB_DNS"

Testing the Full Loop

Make a change, push it, and watch the Actions tab. When the verify step prints the URL, open it in the browser. Same address as last time. That’s the whole point.

git add .
git commit -m "test stable URL"
git push

One Important Note on Costs

The ALB costs money just to exist — roughly $16–20/month depending on region, even with zero traffic. Everything else in this stack (Fargate tasks, ECR storage) is either free tier or negligible at demo scale.
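The base number is easy to sanity-check. The ALB hourly rate is roughly $0.0225 in us-east-1 (this varies by region, and LCU charges come on top once you have traffic):

```shell
# Back-of-envelope: base ALB cost for a 30-day month, before LCU charges.
# 0.0225 is the approximate us-east-1 hourly rate; check current pricing.
COST=$(awk 'BEGIN { printf "%.2f", 0.0225 * 24 * 30 }')
echo "~\$${COST}/month before LCU charges"
```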

When you’re done testing, delete the load balancer to avoid charges. The workflow will recreate it on the next push.

# Look up the ALB's ARN, then delete it (its listener is removed with it)
aws elbv2 describe-load-balancers --names gif-app-alb \
  --query 'LoadBalancers[0].LoadBalancerArn' --output text
aws elbv2 delete-load-balancer --load-balancer-arn <your-alb-arn>

# Once the ALB is gone, delete the target group as well
aws elbv2 delete-target-group --target-group-arn <your-tg-arn>

What You’ve Built Across the Series

Part 1 — manual ECS cluster, ECR repo, task definition, and your first container deployment.

Part 2 — GitHub Actions pipeline that automates build, push, and deploy on every git push.

Part 3 — Application Load Balancer giving you a permanent DNS name that survives task restarts and redeployments.

The same pattern scales further from here — HTTPS with ACM, a custom domain in Route 53, autoscaling policies. But for getting a proof of concept in front of clients quickly and iterating on it without ops overhead, this stack does the job.

Questions or ran into something unexpected? Drop them in the comments.