
Ephemeral PR Preview Environments with Pulumi & GitHub Actions

On-demand isolated environments for every pull request with automatic cleanup

Key Metrics

  • Deployment Time: ~3–5 minutes
  • Developer Velocity: 40% faster review cycles

The Challenge

Development teams faced significant friction in the review and QA process:

  • Shared staging environments created conflicts when multiple PRs needed testing simultaneously
  • Manual environment setup took hours and was error-prone
  • Stale environments accumulated, wasting cloud resources and budget
  • Delayed feedback loops slowed down feature delivery and increased context-switching costs
  • Stakeholders couldn't preview features until they reached a shared environment

The Solution

Architecture Overview

Built an automated system using GitHub Actions and Pulumi that provisions complete, isolated Kubernetes environments on-demand for each pull request.

1) Trigger Mechanism

Preview environments are triggered via repository dispatch events, allowing flexible integration with the main application repository's CI pipeline.

The workflow:

  • Listens for deploy-preview events with the PR branch name as payload
  • Extracts the environment name from the branch/PR reference
  • Checks if namespace already exists (supports re-deploys on PR updates)
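As a sketch, the environment-name extraction might look like the helper below. The function name and slug rules are assumptions; the real workflow may simply use the PR number from the dispatch payload. The point is to normalize an arbitrary branch reference into a name that is valid as both a Kubernetes namespace and a DNS label.

```python
import re

def preview_env_name(ref: str, max_len: int = 30) -> str:
    """Derive a DNS-safe environment name from a branch or PR reference.

    Hypothetical helper: lowercases the ref, strips the refs/heads/
    prefix, and replaces anything outside [a-z0-9-] so the result is
    usable as a Kubernetes namespace and DNS label.
    """
    name = re.sub(r"^refs/heads/", "", ref).lower()
    name = re.sub(r"[^a-z0-9-]+", "-", name).strip("-")
    return name[:max_len].rstrip("-")
```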

2) Infrastructure Provisioning

Each preview environment gets:

  • Dedicated Kubernetes namespace
  • Complete application stack (API, workers, frontend apps)
  • Isolated database schemas
  • Unique ingress routes with predictable URLs
  • Secrets synced from AWS Secrets Manager
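A minimal Pulumi sketch of the per-PR resources, assuming Python with the pulumi_kubernetes provider. The config key, chart path, and values layout are illustrative, not the actual program:

```python
import pulumi
import pulumi_kubernetes as k8s

config = pulumi.Config()
pr_name = config.require("prName")  # e.g. "pr-123" (config key is assumed)

# Dedicated namespace isolating the whole preview stack.
ns = k8s.core.v1.Namespace("preview-ns", metadata={"name": pr_name})

# One Helm release per application component; the API is shown here.
api = k8s.helm.v3.Release(
    "api",
    chart="./charts/api",  # assumed local chart path
    namespace=ns.metadata["name"],
    values={
        # Predictable ingress host matching the
        # {service}-{pr-name}.preview.example.com pattern.
        "ingress": {"host": f"api-{pr_name}.preview.example.com"},
    },
)
```

Because every resource hangs off the PR-specific stack, a later destroy removes the namespace and everything inside it in one operation.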

Flow

  1. Configure secure network access (VPN) for private cluster communication.

  2. Authenticate to AWS using OIDC (no long-lived credentials).

  3. Update kubeconfig for EKS cluster access.

  4. Check whether the namespace already exists: if it does, perform a rolling restart; if not, proceed with a fresh deployment.

  5. Run pulumi up with a PR-specific stack name, using an S3 backend for state.

  6. Annotate External Secrets to force sync of application secrets.

  7. Verify pod health and collect ingress URLs.

  8. Send Slack notification with all preview URLs.
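The branch in steps 4–6 can be sketched as a small helper. The command strings, cluster details, and stack naming are illustrative; the real workflow also wraps these in VPN setup, OIDC authentication, health checks, and Slack notifications:

```python
def plan_preview_deploy(namespace_exists: bool, pr_name: str) -> list[str]:
    """Return the shell commands the workflow would run for this PR.

    Simplified model of steps 4-6 above (commands are illustrative).
    """
    if namespace_exists:
        # PR update: restart workloads so pods pull the new image tags.
        return [f"kubectl -n {pr_name} rollout restart deployment"]
    return [
        # Fresh deployment: provision the stack, then force-sync secrets.
        f"pulumi up --yes --stack org/preview/{pr_name}",
        f"kubectl -n {pr_name} annotate externalsecrets --all "
        "force-sync=$(date +%s) --overwrite",
    ]
```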

3) Predictable URL Pattern

Each service gets a consistent, shareable URL following the pattern:

  • {service}-{pr-name}.preview.example.com

Examples: api-pr-123.preview.example.com, app-pr-123.preview.example.com
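The pattern is trivial to encode as a single function, which keeps the workflow, the Pulumi program, and the Slack notifications in agreement (the domain is the placeholder from the examples above):

```python
def preview_url(service: str, pr_name: str,
                domain: str = "preview.example.com") -> str:
    """Shareable URL for one service in a PR preview environment."""
    return f"{service}-{pr_name}.{domain}"

# preview_url("api", "pr-123") -> "api-pr-123.preview.example.com"
```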

4) Automatic Cleanup

On PR close or manual trigger:

  1. Validate the environment exists before attempting destruction.

  2. Run Pulumi destroy to remove all Kubernetes resources.

  3. Force-delete associated AWS Secrets Manager entries.

  4. Send Slack notification confirming cleanup completion.
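A sketch of the teardown sequence, assuming the pulumi and AWS CLIs; the stack naming follows the State Management section below, and the secret IDs are illustrative:

```python
def plan_preview_cleanup(pr_name: str, secret_ids: list[str]) -> list[str]:
    """Commands to tear down one preview environment (illustrative)."""
    cmds = [
        # Destroy all Kubernetes resources, then drop the stack and its state.
        f"pulumi destroy --yes --stack org/preview/{pr_name}",
        f"pulumi stack rm --yes org/preview/{pr_name}",
    ]
    for sid in secret_ids:
        # Skip the default recovery window so a re-opened PR can recreate
        # secrets with the same name immediately.
        cmds.append(
            f"aws secretsmanager delete-secret --secret-id {sid} "
            "--force-delete-without-recovery"
        )
    return cmds
```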

5) State Management

  • Pulumi stacks stored in S3 with encryption
  • Each PR gets its own stack: {org}/preview/{pr-name}
  • State automatically removed on destroy with remove: true flag

Technologies Used

  • GitHub Actions (CI/CD orchestration)
  • Pulumi (Infrastructure as Code)
  • AWS EKS (Kubernetes cluster)
  • AWS Secrets Manager (credentials)
  • Helm (application deployments)
  • VPN connector (secure network access)
  • Slack (notifications)
  • OIDC (keyless AWS authentication)

Results Achieved

  • Zero environment conflicts: Each PR operates in complete isolation
  • Self-service previews: Developers trigger environments without DevOps intervention
  • Faster reviews: Reviewers access live environments directly from PR comments
  • No stale resources: Automatic cleanup prevents cloud waste
  • Stakeholder visibility: Product managers and QA preview features before merge


Key Learnings

  • Use OIDC for GitHub-to-AWS authentication to eliminate credential rotation overhead
  • Implement namespace existence checks to support both fresh deploys and updates
  • Force-sync External Secrets after namespace creation to avoid stale credentials
  • Include comprehensive Slack notifications for visibility across the team
  • Store Pulumi state in S3 with stack-per-PR isolation for clean teardown

Technologies & Tools

GitHub Actions · Pulumi · Kubernetes · EKS · Preview Environments · CI/CD · Infrastructure as Code · Automation

© 2026 Bisman Singh. Built with passion for DevOps and automation.
