Cloud Architecture Playbook

Zero-Downtime Monolith Migration to AWS EKS

Author: Focus20 Cloud Engineering Team
Focus: AWS EKS, .NET Core, Blue/Green Deployment
Read Time: 15 Min

Migrating a mission-critical, multi-tenant enterprise application spanning terabytes of data from an on-premise Windows Server environment to a cloud-native Kubernetes architecture on AWS is incredibly risky. This is our playbook for executing the Strangler Fig pattern with absolute zero downtime.

1. The Architecture Paradox

A standard "lift-and-shift" migration solves hardware obsolescence but inherits all the technical debt of the legacy application. To truly leverage the cloud—and pave the way for Agentic AI integrations—the architecture must be decomposed into microservices. However, you cannot pause a live SaaS business for six months to rewrite the codebase.

The solution is the Strangler Fig Pattern executed inside an Amazon Elastic Kubernetes Service (EKS) cluster, utilizing a sophisticated API Gateway routing strategy.

2. Target State EKS Architecture

Our target state completely removes the dependency on Windows VMs. We leverage containerized Linux nodes running .NET Core microservices, managed entirely by AWS EKS.

graph TD
    Client[Client Application] ---|External HTTPS| R53[Route 53 DNS]
    R53 --- ALB[Application Load Balancer]
    subgraph "AWS EKS Cluster"
        ALB --- IG[Ingress Controller / NGINX]
        subgraph "Legacy Monolith Namespace"
            IG ---|/api/v1/*| MonoService[Monolith Service POD]
        end
        subgraph "Modern Microservices Namespace"
            IG ---|/api/v2/auth| AuthService[Auth Microservice POD]
            IG ---|/api/v2/billing| BillService[Billing Microservice POD]
        end
    end
    MonoService ---|Read/Write| DB[(On-Premise SQL Server)]
    AuthService ---|Read/Write| Aurora[(Amazon Aurora PostgreSQL)]
    BillService ---|Read/Write| Aurora

3. The Migration Execution Phases

Phase 1: The Networking Bridge

Before moving compute, we must bridge the data gap. We establish an AWS Direct Connect or Site-to-Site VPN to link the on-premise datacenter with the AWS VPC. This allows cloud services to securely query the legacy on-premise database with sub-10ms latency.
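
As a rough sketch, the VPN leg of this bridge can be declared in CloudFormation. The BGP ASN and on-premise public IP below are placeholders, not values from our environment:

```yaml
# CloudFormation sketch of a Site-to-Site VPN (illustrative values only)
Resources:
  OnPremCustomerGateway:
    Type: AWS::EC2::CustomerGateway
    Properties:
      Type: ipsec.1
      BgpAsn: 65000              # placeholder: on-premise BGP ASN
      IpAddress: 203.0.113.10    # placeholder: on-premise public IP
  VpnGateway:
    Type: AWS::EC2::VPNGateway
    Properties:
      Type: ipsec.1
  VpnConnection:
    Type: AWS::EC2::VPNConnection
    Properties:
      Type: ipsec.1
      CustomerGatewayId: !Ref OnPremCustomerGateway
      VpnGatewayId: !Ref VpnGateway
      StaticRoutesOnly: false    # BGP dynamic routing over the tunnel
```

In practice the VPN gateway also needs a VPC gateway attachment and route propagation to the relevant route tables; a Direct Connect circuit replaces this stack entirely when lower, more predictable latency is required.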

Phase 2: The API Gateway Facade

All client traffic is repointed from the legacy load balancer to an AWS Application Load Balancer (ALB) acting as an Ingress into the EKS cluster. Initially, 100% of the routes simply proxy traffic back to the on-premise monolith over the secure tunnel. The user experiences no change.

# Nginx Ingress Configuration (Initial State)
apiVersion: networking.k8s.io/v1
kind: Ingress
...
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: legacy-monolith-external-service
            port:
              number: 80   # assumed service port for the legacy proxy
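
The `legacy-monolith-external-service` referenced above has no pods behind it. One way to model it (a sketch; the DNS name is hypothetical) is an ExternalName Service that resolves across the tunnel to the on-premise load balancer:

```yaml
# Routes in-cluster traffic to the on-premise monolith over the VPN/Direct Connect link
apiVersion: v1
kind: Service
metadata:
  name: legacy-monolith-external-service
  namespace: legacy-monolith
spec:
  type: ExternalName
  externalName: monolith.corp.internal   # hypothetical on-premise DNS name
```

If the legacy tier is reachable only by IP, the usual alternative is a headless Service paired with a manually maintained Endpoints object.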

Phase 3: Service-by-Service Decoupling (The Strangler)

Engineering isolates a specific domain—for example, the "Authentication" module. They rewrite this module as a containerized .NET Core microservice, deploy it to EKS, and point it to a new AWS Aurora database schema.
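
A minimal Deployment for such a service might look like the following; the image name, secret name, and connection-string key are illustrative assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-service
  namespace: modern-services
spec:
  replicas: 3
  selector:
    matchLabels:
      app: auth-service
  template:
    metadata:
      labels:
        app: auth-service
    spec:
      containers:
      - name: auth-service
        image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/auth-service:1.0.0  # hypothetical ECR image
        ports:
        - containerPort: 8080
        env:
        - name: ConnectionStrings__AuthDb    # .NET Core configuration convention
          valueFrom:
            secretKeyRef:
              name: aurora-auth-credentials  # hypothetical secret holding the Aurora connection string
              key: connection-string
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
```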

We then update the EKS Ingress Controller to securely split traffic. Requests to `/api/v2/auth` are routed to the new microservice, while all other traffic (`/*`) continues falling back to the legacy monolith.
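
Under those assumptions, the updated Ingress adds the `/api/v2/auth` path alongside the catch-all; service names mirror the earlier examples:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: strangler-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - http:
      paths:
      - path: /api/v2/auth
        pathType: Prefix
        backend:
          service:
            name: auth-service
            port:
              number: 8080
      - path: /          # everything else still falls back to the monolith
        pathType: Prefix
        backend:
          service:
            name: legacy-monolith-external-service
            port:
              number: 80
```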

Phase 4: Data Synchronization

Since the legacy system and the new microservices will coexist for months, data must stay in sync. We deploy AWS Database Migration Service (DMS) utilizing Change Data Capture (CDC). Any updates made to the legacy SQL Server are streamed in real-time to the new Aurora instances, ensuring full data integrity during the parallel run.
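
The CDC leg can be expressed in CloudFormation roughly as below. The endpoint and instance ARNs are placeholders, and the table mapping simply selects a hypothetical `auth` schema:

```yaml
Resources:
  AuthCdcTask:
    Type: AWS::DMS::ReplicationTask
    Properties:
      ReplicationTaskIdentifier: legacy-to-aurora-auth-cdc
      MigrationType: full-load-and-cdc         # initial copy, then ongoing change capture
      SourceEndpointArn: arn:aws:dms:...       # placeholder: on-premise SQL Server endpoint
      TargetEndpointArn: arn:aws:dms:...       # placeholder: Aurora PostgreSQL endpoint
      ReplicationInstanceArn: arn:aws:dms:...  # placeholder: replication instance
      TableMappings: >-
        {"rules":[{"rule-type":"selection","rule-id":"1","rule-name":"auth-schema",
        "object-locator":{"schema-name":"auth","table-name":"%"},"rule-action":"include"}]}
```

Heterogeneous replication from SQL Server to PostgreSQL also requires an AWS Schema Conversion Tool pass beforehand, since DMS moves data, not stored procedures or incompatible types.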

4. Business Value Delivered

By executing this phased strangler pattern via EKS, the enterprise retired its Windows VM estate, decomposed the monolith service by service, and completed the migration with zero customer-facing downtime.

Ready to modernize your infrastructure?

Request a Cloud Assessment