In Part 1 we got a containerised app running on AWS ECS manually — build the image, push to ECR, force a redeployment in the console. It works, but every update is four or five manual steps.
This post automates all of that. By the end, a git push to main will build your image, push it to ECR, and redeploy your ECS service — no console clicks required.
Quick heads up: this setup is for getting an alpha or MVP in front of clients fast. Not a production-grade pipeline. If you haven’t seen Part 1 yet, start there — this builds directly on it.
📺 Prefer to watch? Full walkthrough on YouTube
How the CI/CD Pipeline Works
git push to main
↓
GitHub Actions workflow triggers
↓
Step 1: Build Docker image → push to ECR
↓
Step 2: Deploy to AWS ECS
├── if service exists → force update
└── if not → create task definition + service
↓
Step 3: Verify deployment → grab public IP
GitHub Actions is essentially a hosted script runner. You write a YAML file describing the sequence of commands, commit it to your repo, and GitHub runs it on their infrastructure every time you push. Same commands you ran manually in Part 1 — just automated.
Step 1: Create a GitHub Repository
If you don’t have one yet, create a new repo on GitHub. Then initialise git in your project directory and connect it:
git init
git branch -M main
git remote add origin https://github.com/your-username/your-repo.git
git add .
git commit -m "initial commit"
git push -u origin main
Verify the repo on GitHub shows your project files before moving on.
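If you'd rather check from the terminal before pushing anything else, two quick commands confirm the wiring:

# Should list your GitHub repo URL for both fetch and push
git remote -v
# Shows the current branch and its upstream (e.g. main...origin/main)
git status -sb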
Step 2: Create the Workflow File
GitHub Actions looks for workflow files inside .github/workflows/. Create that directory and add a YAML file:
mkdir -p .github/workflows
touch .github/workflows/deploy.yml
Start with a skeleton — placeholders first so you can verify the structure before adding real commands. Think of a workflow as a sequence of terminal commands grouped into steps. Same order you’d run them manually.
name: Deploy to AWS ECS

on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Build image
        run: echo "build step placeholder"

      - name: Deploy to ECS
        run: echo "deploy step placeholder"

      - name: Verify deployment
        run: echo "verify step placeholder"
Commit and push this to trigger your first workflow run:
git add .github/workflows/deploy.yml
git commit -m "add workflow skeleton"
git push
Go to the Actions tab in your GitHub repo and confirm the workflow ran and the echo messages appear. Once that’s working, start filling in the real commands.
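If you have the GitHub CLI installed, you can check runs without leaving the terminal; the Actions tab shows the same information:

# List recent runs of this workflow
gh run list --workflow deploy.yml
# Follow a run live (prompts you to pick one if no run ID is given)
gh run watch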
Step 3: Add AWS Credentials as GitHub Secrets
The workflow needs your AWS credentials to push to ECR and interact with ECS. Never hardcode credentials in YAML files — use GitHub Secrets instead.
Go to your repo → Settings → Security → Secrets and variables → Actions and add:
| Secret name | Value |
|---|---|
| AWS_ACCESS_KEY_ID | Your IAM access key ID |
| AWS_SECRET_ACCESS_KEY | Your IAM secret access key |
| AWS_REGION | e.g. ap-southeast-1 |
| ECR_REPOSITORY | Your ECR repository name (just the name; the workflow builds the full URI from the registry) |
| ECS_CLUSTER | Your cluster name |
| ECS_SERVICE | Your service name |
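The UI works fine for this, but if you prefer the terminal, the GitHub CLI can set secrets too. Each command prompts for the value, so nothing ends up in your shell history:

gh secret set AWS_ACCESS_KEY_ID
gh secret set AWS_SECRET_ACCESS_KEY
gh secret set AWS_REGION
gh secret set ECR_REPOSITORY
gh secret set ECS_CLUSTER
gh secret set ECS_SERVICE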
Step 4: Fill In the Build Step
Replace the build placeholder with the actual commands. These are the same ones from Part 1 — just moved into the workflow:
- name: Configure AWS credentials
  uses: aws-actions/configure-aws-credentials@v4
  with:
    aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
    aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    aws-region: ${{ secrets.AWS_REGION }}

- name: Login to Amazon ECR
  id: login-ecr
  uses: aws-actions/amazon-ecr-login@v2

- name: Build image
  env:
    ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
    ECR_REPOSITORY: ${{ secrets.ECR_REPOSITORY }}
    IMAGE_TAG: latest
  run: |
    docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
    docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
Using environment variables for the registry and repository name means you only have to update them in one place if they ever change. Avoid hardcoding values inside commands; it saves you from hunting through YAML later.
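To double-check that the push actually landed, you can list the newest image in the repository from your own machine. This assumes your local AWS CLI uses the same account, and that the repo is named gif-app like the task definition below (swap in your actual repository name):

# Show the most recently pushed image and its tags
aws ecr describe-images \
  --repository-name gif-app \
  --query 'sort_by(imageDetails,&imagePushedAt)[-1].{tags: imageTags, pushedAt: imagePushedAt}' \
  --output table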
Step 5: Fill In the Deploy Step
This is where it gets a bit more involved. The aws ecs update-service command only works if the service already exists. Since we cleared out the old service from Part 1, we need to handle both cases: update if the service exists, create it from scratch if it doesn’t.
- name: Deploy to AWS ECS
  env:
    ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
    ECR_REPOSITORY: ${{ secrets.ECR_REPOSITORY }}
    ECS_CLUSTER: ${{ secrets.ECS_CLUSTER }}
    ECS_SERVICE: ${{ secrets.ECS_SERVICE }}
    AWS_REGION: ${{ secrets.AWS_REGION }}
    IMAGE_TAG: latest
  run: |
    IMAGE_URI="$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG"

    # Register a new task definition revision with the updated image
    TASK_DEF_ARN=$(aws ecs register-task-definition \
      --family gif-app \
      --network-mode awsvpc \
      --requires-compatibilities FARGATE \
      --cpu "256" \
      --memory "512" \
      --container-definitions "[{
        \"name\": \"gif-app\",
        \"image\": \"$IMAGE_URI\",
        \"portMappings\": [{\"containerPort\": 5000, \"protocol\": \"tcp\"}],
        \"essential\": true
      }]" \
      --query 'taskDefinition.taskDefinitionArn' \
      --output text)

    # Handle security group — reuse if it exists, create if it doesn't
    SG_ID=$(aws ec2 describe-security-groups \
      --filters "Name=group-name,Values=gif-app-sg" \
      --query 'SecurityGroups[0].GroupId' \
      --output text 2>/dev/null)
    if [ "$SG_ID" == "None" ] || [ -z "$SG_ID" ]; then
      VPC_ID=$(aws ec2 describe-vpcs \
        --filters "Name=isDefault,Values=true" \
        --query 'Vpcs[0].VpcId' --output text)
      SG_ID=$(aws ec2 create-security-group \
        --group-name gif-app-sg \
        --description "gif-app security group" \
        --vpc-id $VPC_ID \
        --query 'GroupId' --output text)
      aws ec2 authorize-security-group-ingress \
        --group-id $SG_ID \
        --protocol tcp --port 5000 --cidr 0.0.0.0/0
    fi

    SUBNET_ID=$(aws ec2 describe-subnets \
      --filters "Name=defaultForAz,Values=true" \
      --query 'Subnets[0].SubnetId' --output text)

    # Update existing service or create a new one
    SERVICE_STATUS=$(aws ecs describe-services \
      --cluster $ECS_CLUSTER \
      --services $ECS_SERVICE \
      --query 'services[0].status' \
      --output text 2>/dev/null)
    if [ "$SERVICE_STATUS" == "ACTIVE" ]; then
      aws ecs update-service \
        --cluster $ECS_CLUSTER \
        --service $ECS_SERVICE \
        --task-definition $TASK_DEF_ARN \
        --force-new-deployment
    else
      aws ecs create-service \
        --cluster $ECS_CLUSTER \
        --service-name $ECS_SERVICE \
        --task-definition $TASK_DEF_ARN \
        --desired-count 1 \
        --launch-type FARGATE \
        --network-configuration "awsvpcConfiguration={
          subnets=[$SUBNET_ID],
          securityGroups=[$SG_ID],
          assignPublicIp=ENABLED
        }"
    fi
A few things worth calling out here:
- Task definition registration: every deploy registers a new revision with the updated image URI. This is what triggers ECS to pull the new container.
- Security group check: the script looks for the security group before trying to create it. Without this check, the leftover group from a previous run would make create-security-group fail and take the workflow down with it.
- Service check: describe-services tells us whether the service is ACTIVE. If it is, we update it; if not, we create it fresh. You can try this check yourself, as shown below.
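To see which branch a deploy would take before pushing, run the same status check from your own machine, assuming your local AWS CLI points at the same account and region (swap in your real cluster and service names):

# Prints ACTIVE if the service exists, None or INACTIVE otherwise
aws ecs describe-services \
  --cluster your-cluster-name \
  --services your-service-name \
  --query 'services[0].status' \
  --output text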
Step 6: Add the Verify Step
This last step waits for the deployment to stabilise, then grabs the public IP so you don’t have to go hunting for it in the console:
- name: Verify deployment
  env:
    ECS_CLUSTER: ${{ secrets.ECS_CLUSTER }}
    ECS_SERVICE: ${{ secrets.ECS_SERVICE }}
    AWS_REGION: ${{ secrets.AWS_REGION }}
  run: |
    # Wait for service to stabilise
    aws ecs wait services-stable \
      --cluster $ECS_CLUSTER \
      --services $ECS_SERVICE

    # Grab the running task's public IP
    TASK_ARN=$(aws ecs list-tasks \
      --cluster $ECS_CLUSTER \
      --service-name $ECS_SERVICE \
      --query 'taskArns[0]' --output text)
    ENI_ID=$(aws ecs describe-tasks \
      --cluster $ECS_CLUSTER \
      --tasks $TASK_ARN \
      --query 'tasks[0].attachments[0].details[?name==`networkInterfaceId`].value' \
      --output text)
    PUBLIC_IP=$(aws ec2 describe-network-interfaces \
      --network-interface-ids $ENI_ID \
      --query 'NetworkInterfaces[0].Association.PublicIp' \
      --output text)
    echo "App is live at: http://$PUBLIC_IP:5000"
The wait services-stable command blocks the workflow until ECS reports the service as stable; the waiter polls every 15 seconds and gives up after about 10 minutes. The IP printed at the end is clickable directly from the Actions run log.
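If you want the run to fail when the app isn't actually responding, you could append a small smoke test to the end of that run block. A sketch, assuming your app answers with a 2xx on / at port 5000:

# Retry for ~30 seconds, then fail the step if the app never answers
for i in 1 2 3 4 5 6; do
  if curl -fsS "http://$PUBLIC_IP:5000/" > /dev/null; then
    echo "Smoke test passed"
    exit 0
  fi
  sleep 5
done
echo "Smoke test failed"
exit 1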
Full Workflow File
Here’s the complete deploy.yml for reference:
name: Deploy to AWS ECS

on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ secrets.AWS_REGION }}

      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v2

      - name: Build and push image to ECR
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          ECR_REPOSITORY: ${{ secrets.ECR_REPOSITORY }}
          IMAGE_TAG: latest
        run: |
          docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
          docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG

      - name: Deploy to AWS ECS
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          ECR_REPOSITORY: ${{ secrets.ECR_REPOSITORY }}
          ECS_CLUSTER: ${{ secrets.ECS_CLUSTER }}
          ECS_SERVICE: ${{ secrets.ECS_SERVICE }}
          AWS_REGION: ${{ secrets.AWS_REGION }}
          IMAGE_TAG: latest
        run: |
          IMAGE_URI="$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG"
          TASK_DEF_ARN=$(aws ecs register-task-definition \
            --family gif-app \
            --network-mode awsvpc \
            --requires-compatibilities FARGATE \
            --cpu "256" --memory "512" \
            --container-definitions "[{
              \"name\": \"gif-app\",
              \"image\": \"$IMAGE_URI\",
              \"portMappings\": [{\"containerPort\": 5000, \"protocol\": \"tcp\"}],
              \"essential\": true
            }]" \
            --query 'taskDefinition.taskDefinitionArn' --output text)
          SG_ID=$(aws ec2 describe-security-groups \
            --filters "Name=group-name,Values=gif-app-sg" \
            --query 'SecurityGroups[0].GroupId' --output text 2>/dev/null)
          if [ "$SG_ID" == "None" ] || [ -z "$SG_ID" ]; then
            VPC_ID=$(aws ec2 describe-vpcs \
              --filters "Name=isDefault,Values=true" \
              --query 'Vpcs[0].VpcId' --output text)
            SG_ID=$(aws ec2 create-security-group \
              --group-name gif-app-sg \
              --description "gif-app security group" \
              --vpc-id $VPC_ID \
              --query 'GroupId' --output text)
            aws ec2 authorize-security-group-ingress \
              --group-id $SG_ID \
              --protocol tcp --port 5000 --cidr 0.0.0.0/0
          fi
          SUBNET_ID=$(aws ec2 describe-subnets \
            --filters "Name=defaultForAz,Values=true" \
            --query 'Subnets[0].SubnetId' --output text)
          SERVICE_STATUS=$(aws ecs describe-services \
            --cluster $ECS_CLUSTER --services $ECS_SERVICE \
            --query 'services[0].status' --output text 2>/dev/null)
          if [ "$SERVICE_STATUS" == "ACTIVE" ]; then
            aws ecs update-service \
              --cluster $ECS_CLUSTER --service $ECS_SERVICE \
              --task-definition $TASK_DEF_ARN --force-new-deployment
          else
            aws ecs create-service \
              --cluster $ECS_CLUSTER --service-name $ECS_SERVICE \
              --task-definition $TASK_DEF_ARN --desired-count 1 \
              --launch-type FARGATE \
              --network-configuration "awsvpcConfiguration={
                subnets=[$SUBNET_ID],
                securityGroups=[$SG_ID],
                assignPublicIp=ENABLED
              }"
          fi

      - name: Verify deployment
        env:
          ECS_CLUSTER: ${{ secrets.ECS_CLUSTER }}
          ECS_SERVICE: ${{ secrets.ECS_SERVICE }}
        run: |
          aws ecs wait services-stable \
            --cluster $ECS_CLUSTER --services $ECS_SERVICE
          TASK_ARN=$(aws ecs list-tasks \
            --cluster $ECS_CLUSTER --service-name $ECS_SERVICE \
            --query 'taskArns[0]' --output text)
          ENI_ID=$(aws ecs describe-tasks \
            --cluster $ECS_CLUSTER --tasks $TASK_ARN \
            --query 'tasks[0].attachments[0].details[?name==`networkInterfaceId`].value' \
            --output text)
          PUBLIC_IP=$(aws ec2 describe-network-interfaces \
            --network-interface-ids $ENI_ID \
            --query 'NetworkInterfaces[0].Association.PublicIp' --output text)
          echo "App is live at: http://$PUBLIC_IP:5000"
Testing the Full Loop
Make a code change — swap out a gif, update some text, anything visible. Commit and push:
git add .
git commit -m "update gif list"
git push
Watch the Actions tab. The workflow will run automatically, and the verify step will print the IP when it’s done. Open it in the browser and confirm your change is live.
What You’ve Built
A git push now handles everything that used to be manual:
- Docker image built and tagged
- Image pushed to your private ECR registry
- ECS task definition updated with the new image
- Service redeployed (created if it doesn’t exist yet)
- Public IP printed when it’s stable
What’s Next
Part 3 replaces the manual cluster and ECR setup from Part 1 with Pulumi — so the entire infrastructure is defined in code, repeatable, and version-controlled. No more clicking around the AWS console to set things up.
Questions or ran into issues with the workflow? Drop them in the comments.