Overview

The deployment pipeline flows:
Bitbucket (source) → AWS CodeBuild → ECR (images) → ECS (deployment)
There’s also a basic Bitbucket Pipelines config for direct testing.

Bitbucket Pipelines

File: bitbucket-pipelines.yml
A simple CI pipeline that runs on all branches:
image: php:7.1.3

pipelines:
  default:
    - step:
        script:
          - apt-get update && apt-get install -y unzip
          - composer install
          - vendor/bin/phpunit
        caches:
          - composer

AWS CodeBuild — Staging

File: buildspec-staging.yml

Phase 1: Pre-build

  1. Downloads .env from s3://tutorbloc-app/env-configurations/staging/.env
  2. Downloads APNS-Key.p8 from S3 (push notification certificate)
  3. Downloads AAACertificateServices.crt from S3 (SSL certificate)
  4. Logs in to ECR
  5. Pulls BlackBox Docker image from ECR
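
Expressed in buildspec syntax, the pre-build phase above might look like the sketch below. Only the .env path and the ECR registry are taken from this document; the S3 keys for the APNS key and SSL certificate, the `aws ecr get-login` form, and the `staging` image tag are assumptions.

```yaml
phases:
  pre_build:
    commands:
      # Fetch environment config and certificates from S3
      - aws s3 cp s3://tutorbloc-app/env-configurations/staging/.env .env
      - aws s3 cp s3://tutorbloc-app/env-configurations/staging/APNS-Key.p8 APNS-Key.p8
      - aws s3 cp s3://tutorbloc-app/env-configurations/staging/AAACertificateServices.crt AAACertificateServices.crt
      # Authenticate with ECR, then pull the BlackBox image
      - $(aws ecr get-login --no-include-email --region eu-west-2)
      - docker pull 170904582664.dkr.ecr.eu-west-2.amazonaws.com/tutorbloc.blackbox:staging
```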

Phase 2: Build

  1. Logs in to Docker Hub (for base images)
  2. Runs docker-compose up -d (builds and starts all services)

Phase 3: Post-build

  1. Waits for mysql-db-test to be ready (120s timeout via Dockerize)
  2. Runs database migrations on test database:
    php artisan migrate --database=mysql_test --force
    
  3. Runs full test suite:
    vendor/bin/phpunit
    
  4. Tags Docker images:
    • tutorbloc.app:staging and tutorbloc.app:staging-{BUILD_NUMBER}
    • tutorbloc.app.nginx:staging and tutorbloc.app.nginx:staging-{BUILD_NUMBER}
    • redis:staging
  5. Pushes all images to ECR
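
A sketch of how steps 4–5 might appear in the buildspec's post_build section (buildspec v0.2, where commands share one shell session). The local `:latest` image names and the use of CodeBuild's built-in CODEBUILD_BUILD_NUMBER variable are assumptions; the registry and tag scheme come from this document.

```yaml
  post_build:
    commands:
      - REPO=170904582664.dkr.ecr.eu-west-2.amazonaws.com
      # Tag each image with the environment tag and a build-specific tag
      - docker tag tutorbloc.app:latest $REPO/tutorbloc.app:staging
      - docker tag tutorbloc.app:latest $REPO/tutorbloc.app:staging-$CODEBUILD_BUILD_NUMBER
      - docker tag tutorbloc.app.nginx:latest $REPO/tutorbloc.app.nginx:staging
      - docker tag tutorbloc.app.nginx:latest $REPO/tutorbloc.app.nginx:staging-$CODEBUILD_BUILD_NUMBER
      - docker tag redis:latest $REPO/redis:staging
      # Push everything to ECR
      - docker push $REPO/tutorbloc.app:staging
      - docker push $REPO/tutorbloc.app:staging-$CODEBUILD_BUILD_NUMBER
      - docker push $REPO/tutorbloc.app.nginx:staging
      - docker push $REPO/tutorbloc.app.nginx:staging-$CODEBUILD_BUILD_NUMBER
      - docker push $REPO/redis:staging
```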
Artifacts: appspec.yaml, staging-taskdef.json

AWS CodeBuild — Production

File: buildspec-production.yml
Same structure as staging, with key differences:
Aspect           Staging                Production
APP_ENV          staging                production
Runs migrations  Yes (test DB)          No
Runs tests       Yes                    No
Image tags       staging-*              production-*
Artifacts        staging-taskdef.json   production-taskdef.json
Production builds skip tests entirely. The assumption is that staging tests have already passed. Production also skips test database migrations.
Potential bug: buildspec-production.yml downloads APNS key and SSL certificate from the staging config path, not production. This may be intentional (shared certs) or a copy-paste error. Confirm with team.

ECS deployment

App spec

File: appspec.yaml
version: 0.0
Resources:
  - TargetService:
      Type: AWS::ECS::Service
      Properties:
        TaskDefinition: <TASK_DEFINITION>
        LoadBalancerInfo:
          ContainerName: "nginx"
          ContainerPort: 80
The load balancer routes to the Nginx container on port 80.

Task definitions

Production (production-taskdef.json):
  • 3 containers: app, nginx, blackbox
  • All with 128MB memory reservation
  • Bridge networking mode
  • Environment loaded from S3
  • CloudWatch logging to /ecs/production-app-ec2
Staging (staging-taskdef.json):
  • Same structure
  • Nginx exposes ports 80 AND 443 (vs only 80 in production)
  • CloudWatch logging to /ecs/staging-app-ec2
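
As a sketch, the nginx container entry in staging-taskdef.json probably resembles the fragment below. Field names follow the ECS task definition schema; the host ports, image tag, and region option are assumptions, while the memory reservation, link, ports, and log group come from this section.

```json
{
  "name": "nginx",
  "image": "170904582664.dkr.ecr.eu-west-2.amazonaws.com/tutorbloc.app.nginx:staging",
  "memoryReservation": 128,
  "links": ["app"],
  "portMappings": [
    { "containerPort": 80, "hostPort": 80, "protocol": "tcp" },
    { "containerPort": 443, "hostPort": 443, "protocol": "tcp" }
  ],
  "logConfiguration": {
    "logDriver": "awslogs",
    "options": {
      "awslogs-group": "/ecs/staging-app-ec2",
      "awslogs-region": "eu-west-2"
    }
  }
}
```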

Container startup order

  1. blackbox starts first (no dependencies)
  2. app starts and links to blackbox
    • Runs php artisan migrate --force
    • Starts cron daemon
    • Starts PHP-FPM
  3. nginx starts and links to app
    • Forwards requests to PHP-FPM at app:9000
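
Since app:9000 is PHP-FPM's FastCGI socket, the nginx side of that link is typically a fastcgi_pass directive. A minimal sketch, not taken from the repo; the document root and routing rules are assumptions based on a standard Laravel layout:

```nginx
server {
    listen 80;
    root /var/www/html/public;   # assumed Laravel public directory
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        fastcgi_pass app:9000;   # "app" resolves via the container link
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
    }
}
```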

ECR images

Registry: 170904582664.dkr.ecr.eu-west-2.amazonaws.com

Images:
├── tutorbloc.app          → PHP-FPM application
├── tutorbloc.app.nginx    → Nginx web server
├── tutorbloc.blackbox     → Distance calculation service
└── redis                  → Redis cache
Each image is tagged with:
  • {environment} — latest for that environment (e.g., production)
  • {environment}-{build_number} — specific build (e.g., production-42)
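
The tag scheme can be illustrated with a small shell helper. This is hypothetical (no such script exists in the repo); the registry, image names, and tag format come from this section.

```shell
# Hypothetical helper illustrating the ECR tag scheme described above.
REGISTRY="170904582664.dkr.ecr.eu-west-2.amazonaws.com"
ENVIRONMENT="production"

# image_tag NAME [BUILD_NUMBER] — prints the fully qualified image tag.
image_tag() {
  if [ -n "${2:-}" ]; then
    echo "${REGISTRY}/${1}:${ENVIRONMENT}-${2}"
  else
    echo "${REGISTRY}/${1}:${ENVIRONMENT}"
  fi
}

image_tag tutorbloc.app      # 170904582664.dkr.ecr.eu-west-2.amazonaws.com/tutorbloc.app:production
image_tag tutorbloc.app 42   # 170904582664.dkr.ecr.eu-west-2.amazonaws.com/tutorbloc.app:production-42
```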

Legacy: Elastic Beanstalk

File: Dockerrun.aws.json
An older single-container Elastic Beanstalk config exists:
{
  "AWSEBDockerrunVersion": "1",
  "Image": { "Name": "tbjeremiah/tutorbloc.app:staging" },
  "Ports": [{ "ContainerPort": "9000" }]
}
Also includes .ebextensions/01_server.config with Nginx configuration. This appears to be a legacy setup — current deployment uses ECS.