Migrating a mission-critical, multi-tenant enterprise application from Azure App Services to Amazon EKS isn't just a lift-and-shift—it's an opportunity for deep architectural modernization. At Focus20, we execute these migrations with zero downtime, shifting from fragile monoliths to agile, isolated microservices while driving down total cost of ownership by up to 45%.
1. The Legacy Azure Architecture
Many enterprise applications built in the mid-2010s share a similar evolutionary footprint. In this specific playbook, we address a common scenario: a massive .NET Framework monolith hosted on Azure App Services, backed by Azure SQL Database (elastic pools) and Azure Blob Storage for document retention.
The primary pain points driving the migration typically involve scale limitations. As multi-tenant data sets grow, applications on Azure App Services often suffer from "noisy neighbor" problems during peak traffic hours, and Azure SQL elastic pools can become prohibitively expensive at high DTU thresholds. Furthermore, deploying a massive monolith means hour-long CI/CD pipelines and frequent rollback catastrophes.
Strangler Fig Modernization
We do not pause feature development for a six-month big-bang rewrite. We migrate incrementally, routing traffic dynamically and peeling off specific high-compute endpoints into containerized microservices on Amazon EKS, while leaving the legacy monolithic core running until it handles 0% of traffic.
2. Modernizing the Core from .NET Framework to .NET 8
Before migrating the compute workloads, the .NET Framework codebase (often 4.7.2 or 4.8) must be upgraded. While AWS natively supports .NET Framework in Windows containers on EC2, running Windows containers on EKS carries significant licensing costs and image-size overhead.
By utilizing tools like the AWS Porting Assistant for .NET and Microsoft's .NET Upgrade Assistant, we systematically refactor the application to .NET 8, making it fully cross-platform. This unlocks the ability to build Linux Alpine-based Docker images, reducing the image size from 6 GB+ down to ~150 MB and significantly accelerating pod startup and horizontal scaling on EKS.
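As a sketch of what the result looks like, a multi-stage Dockerfile along these lines (the image tags are the official Microsoft ones; the project name and path are hypothetical) produces a small Alpine-based image for a .NET 8 service:

```dockerfile
# Build stage: the SDK image compiles and publishes the service
FROM mcr.microsoft.com/dotnet/sdk:8.0-alpine AS build
WORKDIR /src
COPY . .
RUN dotnet publish ./Billing.Api/Billing.Api.csproj -c Release -o /app/publish

# Runtime stage: only the slim ASP.NET Core runtime ships to EKS
FROM mcr.microsoft.com/dotnet/aspnet:8.0-alpine
WORKDIR /app
COPY --from=build /app/publish .
# Run as the non-root 'app' user provided by the .NET 8 base images
USER app
ENTRYPOINT ["dotnet", "Billing.Api.dll"]
```

Only the runtime stage is pushed to the registry, which is what keeps the final image small.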
3. Target AWS Architecture: The EKS Plane
Our target architecture on AWS revolves around Kubernetes (EKS) for unparalleled orchestration. EKS allows us to precisely allocate CPU/RAM requests per microservice and utilize Karpenter for lightning-fast Node autoscaling when traffic spikes.
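To make the Karpenter piece concrete, a minimal NodePool sketch (names, limits, and instance constraints are illustrative; the v1 `karpenter.sh` API is assumed) that lets the cluster add Spot or On-Demand capacity as pods queue:

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: general-purpose
spec:
  template:
    spec:
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
      requirements:
        # Prefer Spot, fall back to On-Demand
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot", "on-demand"]
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
  limits:
    cpu: "200"   # cap total vCPUs this pool may provision
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized
```

Because each microservice declares its own CPU/RAM requests, Karpenter can pick right-sized instances per workload rather than scaling a uniform node group.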
In the diagram above, an AWS Application Load Balancer (ALB) acts as the routing layer for the Strangler pattern. By manipulating listener rules, we route specific API paths incrementally to the newly containerized EKS services. If an error rate spikes on a new endpoint, we simply toggle the ALB rule back to the Azure endpoint—instant rollback.
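The path-based strangler routing described above can be expressed as an ALB listener rule; a hedged CloudFormation sketch (resource names, the path, and target group references are placeholders) that forwards one migrated API path to the EKS target group while everything else falls through to the Azure-facing default action:

```yaml
BillingStranglerRule:
  Type: AWS::ElasticLoadBalancingV2::ListenerRule
  Properties:
    ListenerArn: !Ref HttpsListener
    Priority: 10
    Conditions:
      - Field: path-pattern
        PathPatternConfig:
          Values: ["/api/billing/*"]
    Actions:
      - Type: forward
        TargetGroupArn: !Ref EksBillingTargetGroup  # new microservice on EKS
# The listener's default action still forwards to the Azure-backed
# target group; removing this rule is the instant rollback.
```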
4. Database Migration: Azure SQL to Amazon Aurora
Data gravity is the hardest challenge in multi-cloud migration. We strictly avoid bulk export/import approaches, which require massive downtime windows.
Instead, we utilize AWS Database Migration Service (DMS). We establish an IPsec VPN tunnel between the Azure VNet and the AWS VPC. Because the engine changes from SQL Server to PostgreSQL, the schema is first converted with the AWS Schema Conversion Tool; DMS then connects directly to the Azure SQL instance, performs an initial full load into Amazon Aurora, and enables Change Data Capture (CDC) against the SQL Server transaction logs.
```
Source:           Azure SQL (Primary)
Target:           Amazon Aurora PostgreSQL (Replica)
Replication lag:  112 ms
State:            ONGOING_REPLICATION
```
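The DMS task's table mappings decide what replicates; a minimal selection rule (the `dbo` schema name is an assumption typical of SQL Server databases) that includes every table looks like this:

```json
{
  "rules": [
    {
      "rule-type": "selection",
      "rule-id": "1",
      "rule-name": "include-all-dbo-tables",
      "object-locator": {
        "schema-name": "dbo",
        "table-name": "%"
      },
      "rule-action": "include"
    }
  ]
}
```

In practice, tenant-specific exclusions or transformation rules can be layered on top of this selection block.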
With CDC mirroring every insert and update in near real time, the data in Aurora is continuously up to date. When cutover day arrives, the "downtime" is simply halting writes on Azure, waiting roughly 100 ms for the final transactions to replicate, and repointing Route 53 DNS at the AWS ALB.
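That final DNS flip is a single Route 53 change batch; a sketch (the record name, hosted zone ID, and ALB DNS name are placeholders) of what gets submitted via ChangeResourceRecordSets:

```json
{
  "Comment": "Cutover: point application traffic at the AWS ALB",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "app.example.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z0000000EXAMPLE",
          "DNSName": "my-alb-1234567890.us-east-1.elb.amazonaws.com",
          "EvaluateTargetHealth": true
        }
      }
    }
  ]
}
```

Using an alias record rather than a CNAME keeps the zone apex usable and avoids an extra lookup hop.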
5. Securing the Perimeter: Identity & Access
Multi-tenant SaaS products require strict logical isolation. In the legacy architecture, secrets were often stored in Azure Key Vault or, worse, in application settings. In the modernized EKS environment, we rely on IAM Roles for Service Accounts (IRSA).
Instead of injecting long-lived AWS credentials into the .NET containers, we associate an AWS IAM Role directly with the Kubernetes Service Account. Each pod receives a projected OIDC token, which the AWS SDK transparently exchanges with AWS STS for temporary credentials scoped to that role.
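Wiring IRSA up amounts to one annotation on the Service Account; a sketch (namespace, names, account ID, and role ARN are hypothetical) of the Kubernetes side:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: billing-sa
  namespace: billing
  annotations:
    # IAM role assumed via the EKS cluster's OIDC provider
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/billing-pod-role
---
# Pods reference the Service Account; the AWS SDK picks up the
# injected web-identity token file automatically.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: billing
  namespace: billing
spec:
  replicas: 2
  selector:
    matchLabels: { app: billing }
  template:
    metadata:
      labels: { app: billing }
    spec:
      serviceAccountName: billing-sa
      containers:
        - name: api
          image: 111122223333.dkr.ecr.us-east-1.amazonaws.com/billing:latest
```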
- The 'Billing' pod is the only application allowed to query the Billing RDS tables.
- The 'Documents' pod is the only application permitted s3:GetObject on the document storage buckets.
- The 'Auth' pod handles all Cognito validations.
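For example, the 'Documents' pod's role might carry a policy scoped like this (the bucket name is hypothetical):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DocumentsReadOnly",
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::tenant-documents-bucket/*"
    }
  ]
}
```

Because the role is bound to a single Service Account, a compromised 'Billing' pod cannot borrow these S3 permissions.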