Ephemeral PR Preview Environments with Pulumi & GitHub Actions
On-demand isolated environments for every pull request with automatic cleanup
The Challenge
Development teams faced significant friction in the review and QA process:
- Shared staging environments created conflicts when multiple PRs needed testing simultaneously
- Manual environment setup took hours and was error-prone
- Stale environments accumulated, wasting cloud resources and budget
- Delayed feedback loops slowed down feature delivery and increased context-switching costs
- Stakeholders couldn't preview features until they reached a shared environment
The Solution
Architecture Overview
Built an automated system using GitHub Actions and Pulumi that provisions complete, isolated Kubernetes environments on-demand for each pull request.
1) Trigger Mechanism
Preview environments are triggered via repository dispatch events, allowing flexible integration with the main application repository's CI pipeline.
The workflow:
- Listens for `deploy-preview` events with the PR branch name as payload
- Extracts the environment name from the branch/PR reference
- Checks if the namespace already exists (supports re-deploys on PR updates)
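For illustration, deriving the environment name from the dispatched branch ref might look like the helper below; `envNameFromRef` and its exact sanitization rules are assumptions, shaped by Kubernetes namespace naming limits (lowercase alphanumerics and hyphens, at most 63 characters):

```typescript
// Hypothetical helper: derive a Kubernetes-safe environment name from the
// branch ref carried in the repository_dispatch payload. The sanitization
// rules here are assumptions, not the exact production implementation.
export function envNameFromRef(ref: string): string {
  return ref
    .replace(/^refs\/heads\//, "") // strip the Git ref prefix if present
    .toLowerCase()                 // namespace names must be lowercase
    .replace(/[^a-z0-9-]+/g, "-")  // replace invalid characters with hyphens
    .replace(/^-+|-+$/g, "")       // drop leading/trailing hyphens
    .slice(0, 63);                 // Kubernetes name length limit
}
```

Running the same helper on every trigger keeps re-deploys idempotent: an updated PR maps to the same environment name as its first deploy.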
2) Infrastructure Provisioning
Each preview environment gets:
- Dedicated Kubernetes namespace
- Complete application stack (API, workers, frontend apps)
- Isolated database schemas
- Unique ingress routes with predictable URLs
- Secrets synced from AWS Secrets Manager
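A minimal Pulumi sketch of the per-PR namespace, assuming the TypeScript SDK and a `prName` stack config value (the resource names and label are illustrative; the real stacks also deploy the application via Helm, ingress routes, and secret syncing):

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

// Illustrative sketch only: each PR's stack starts from a dedicated
// namespace that isolates its entire application stack.
const config = new pulumi.Config();
const prName = config.require("prName"); // e.g. "pr-123" (assumed config key)

const ns = new k8s.core.v1.Namespace(`preview-${prName}`, {
  metadata: {
    name: `preview-${prName}`,
    labels: { "preview/pr": prName }, // label for lookup/cleanup (assumption)
  },
});

export const namespace = ns.metadata.name;
```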
Flow
Configure secure network access (VPN) for private cluster communication.
Authenticate to AWS using OIDC (no long-lived credentials).
Update kubeconfig for EKS cluster access.
Check whether the namespace exists: if it does, perform a rolling restart; if not, proceed with a fresh deployment.
Run the Pulumi `up` command with a PR-specific stack name, using the S3 backend for state.
Annotate External Secrets to force a sync of application secrets.
Verify pod health and collect ingress URLs.
Send Slack notification with all preview URLs.
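The namespace existence check in the flow above can be sketched as a small helper that shells out to kubectl; `namespaceExists` is a hypothetical name, and it assumes the kubeconfig step has already run:

```typescript
import { execFileSync } from "node:child_process";

// Decide between rolling restart (namespace exists) and fresh deploy.
// A sketch: shells out to kubectl and treats any failure (not found,
// kubectl missing, no cluster access) as "does not exist".
export function namespaceExists(name: string): boolean {
  try {
    execFileSync("kubectl", ["get", "namespace", name], { stdio: "ignore" });
    return true;
  } catch {
    return false;
  }
}
```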
3) Predictable URL Pattern
Each service gets a consistent, shareable URL following the pattern:
`{service}-{pr-name}.preview.example.com`
Examples: `api-pr-123.preview.example.com`, `app-pr-123.preview.example.com`
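The URL pattern is simple string composition; a sketch, with a guard for the 63-character DNS label limit (the guard is an assumption, not confirmed behavior of the actual system):

```typescript
// Compose the predictable preview URL for a service and PR name,
// following the {service}-{pr-name}.preview.example.com pattern.
export function previewUrl(service: string, prName: string): string {
  const label = `${service}-${prName}`;
  if (label.length > 63) {
    // DNS labels are capped at 63 characters (assumed guard)
    throw new Error(`DNS label "${label}" exceeds 63 characters`);
  }
  return `${label}.preview.example.com`;
}
```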
4) Automatic Cleanup
On PR close or manual trigger:
Validate the environment exists before attempting destruction.
Run Pulumi `destroy` to remove all Kubernetes resources.
Force-delete the associated AWS Secrets Manager entries.
Send Slack notification confirming cleanup completion.
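As an illustration of the teardown, a sketch using Pulumi's Automation API (the workflow itself runs the equivalent CLI steps; `destroyPreview` and the `./infra` work directory are assumptions):

```typescript
import { LocalWorkspace } from "@pulumi/pulumi/automation";

// Sketch of the cleanup step: destroy the PR's stack, then remove its
// state so nothing lingers in the S3 backend.
async function destroyPreview(org: string, prName: string): Promise<void> {
  const stackName = `${org}/preview/${prName}`; // mirrors the stack naming pattern
  const stack = await LocalWorkspace.selectStack({
    stackName,
    workDir: "./infra", // assumed location of the Pulumi project
  });
  await stack.destroy();                        // tear down all resources
  await stack.workspace.removeStack(stackName); // drop the stack's state too
}
```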
5) State Management
- Pulumi stacks stored in S3 with encryption
- Each PR gets its own stack: `{org}/preview/{pr-name}`
- State is automatically removed on destroy with the `remove: true` flag
Technologies Used
- GitHub Actions (CI/CD orchestration)
- Pulumi (Infrastructure as Code)
- AWS EKS (Kubernetes cluster)
- AWS Secrets Manager (credentials)
- Helm (application deployments)
- VPN connector (secure network access)
- Slack (notifications)
- OIDC (keyless AWS authentication)
Results Achieved
- Zero environment conflicts: Each PR operates in complete isolation
- Self-service previews: Developers trigger environments without DevOps intervention
- Faster reviews: Reviewers access live environments directly from PR comments
- No stale resources: Automatic cleanup prevents cloud waste
- Stakeholder visibility: Product managers and QA preview features before merge
Key Metrics
- Deployment Time: ~3–5 minutes
- Developer Velocity: 40% faster review cycles
Key Learnings
- Use OIDC for GitHub-to-AWS authentication to eliminate credential rotation overhead
- Implement namespace existence checks to support both fresh deploys and updates
- Force-sync External Secrets after namespace creation to avoid stale credentials
- Include comprehensive Slack notifications for visibility across the team
- Store Pulumi state in S3 with stack-per-PR isolation for clean teardown