Multi-Region

This guide covers deploying Rack Gateway across multiple AWS regions for geographic distribution, compliance requirements, or disaster recovery.

| Scenario | Recommendation |
| --- | --- |
| Data residency requirements | Deploy in required regions |
| Disaster recovery | Active-passive with replication |
| Global team distribution | Regional gateways per team location |
| Compliance (GDPR, data sovereignty) | Dedicated EU deployment |

Independent regional deployments: each region runs its own gateway and database, with no cross-region dependencies.

Use when:

  • Data must not cross regional boundaries
  • Each region manages separate infrastructure
  • GDPR or similar compliance requirements

Active-passive DR: the primary region handles traffic while the secondary stands by.

Use when:

  • High availability requirements
  • RTO/RPO targets need geographic redundancy
  • Cost-effective DR solution needed

A typical Terraform layout keeps the shared module separate from the per-region environments:

terraform/
├── modules/
│   └── rack_gateway/
│       ├── main.tf
│       ├── variables.tf
│       └── outputs.tf
├── environments/
│   ├── us/
│   │   ├── main.tf
│   │   ├── providers.tf
│   │   └── terraform.tfvars
│   └── eu/
│       ├── main.tf
│       ├── providers.tf
│       └── terraform.tfvars
└── global/
    └── dns/
        └── main.tf

environments/us/providers.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }

  backend "s3" {
    bucket         = "myorg-terraform-state"
    key            = "rack-gateway/us/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"
  }
}

provider "aws" {
  region = "us-east-1"

  default_tags {
    tags = {
      Project     = "rack-gateway"
      Environment = "production"
      Region      = "us"
    }
  }
}
environments/eu/providers.tf
provider "aws" {
  region = "eu-west-1"

  default_tags {
    tags = {
      Project     = "rack-gateway"
      Environment = "production"
      Region      = "eu"
    }
  }
}
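
The EU environment needs its own terraform block and backend state key as well; a minimal sketch, assuming the same state bucket and lock table as the US environment:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }

  # Assumes the same state bucket and lock table as the US environment;
  # only the key differs so the two regional states stay isolated.
  backend "s3" {
    bucket         = "myorg-terraform-state"
    key            = "rack-gateway/eu/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"
  }
}
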
environments/us/main.tf
module "rack_gateway" {
  source = "../../modules/rack_gateway"

  environment = "production"
  region      = "us"

  vpc_id             = data.aws_vpc.main.id
  private_subnet_ids = data.aws_subnets.private.ids

  db_instance_class = "db.t3.medium"
  db_password       = var.db_password

  enable_audit_anchoring = true
  worm_retention_days    = 400

  domain     = "gateway-us.example.com"
  rack_alias = "us"

  tags = {
    Region = "us"
  }
}
environments/eu/main.tf
module "rack_gateway" {
  source = "../../modules/rack_gateway"

  environment = "production"
  region      = "eu"

  vpc_id             = data.aws_vpc.main.id
  private_subnet_ids = data.aws_subnets.private.ids

  db_instance_class = "db.t3.medium"
  db_password       = var.db_password_eu

  enable_audit_anchoring = true
  worm_retention_days    = 400

  domain     = "gateway-eu.example.com"
  rack_alias = "eu"

  tags = {
    Region = "eu"
  }
}
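
Both environments reference a VPC and private subnets through data sources (data.aws_vpc.main, data.aws_subnets.private) and take the database password from a variable, none of which are shown above. A minimal sketch of those definitions, assuming the VPC is found by a Name tag and the subnets by a Tier tag (both tag values are illustrative):

# environments/us/data.tf -- the EU environment mirrors this with its own tags
data "aws_vpc" "main" {
  tags = {
    Name = "production"       # assumed tag on the target VPC
  }
}

data "aws_subnets" "private" {
  filter {
    name   = "vpc-id"
    values = [data.aws_vpc.main.id]
  }

  tags = {
    Tier = "private"          # assumed tag on the private subnets
  }
}

# environments/us/variables.tf -- the EU environment declares db_password_eu the same way
variable "db_password" {
  type      = string
  sensitive = true
}
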
For the active-passive pattern, create a cross-region read replica of the gateway database. The source instance must have automated backups enabled (backup_retention_period greater than 0) for replication to work.

# In DR region
resource "aws_db_instance" "replica" {
  provider = aws.dr

  identifier          = "rack-gateway-dr"
  replicate_source_db = aws_db_instance.primary.arn
  instance_class      = "db.t3.medium"

  # Replica-specific settings
  storage_encrypted      = true
  kms_key_id             = aws_kms_key.database_dr.arn
  vpc_security_group_ids = [aws_security_group.rds_dr.id]
  db_subnet_group_name   = aws_db_subnet_group.dr.name

  # Backup settings for replica
  backup_retention_period = 7

  tags = {
    Name = "rack-gateway-dr-replica"
    Role = "replica"
  }
}
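
The replica references an aws.dr provider alias that isn't defined above. A minimal sketch, assuming us-west-2 as the DR region (matching the promote command below):

# Aliased provider for the DR region, used by the replica and other DR resources
provider "aws" {
  alias  = "dr"
  region = "us-west-2"
}
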
Terminal window
# During DR event
aws rds promote-read-replica \
  --db-instance-identifier rack-gateway-dr \
  --region us-west-2

See S3 WORM Storage for detailed replication configuration.

resource "aws_s3_bucket_replication_configuration" "audit_anchors" {
bucket = aws_s3_bucket.audit_anchors.id
role = aws_iam_role.replication.arn
rule {
id = "replicate-to-dr"
status = "Enabled"
destination {
bucket = aws_s3_bucket.audit_anchors_dr.arn
replication_time {
status = "Enabled"
time {
minutes = 15
}
}
}
}
}
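
Cross-region replication also requires versioning on both the source and destination buckets; a minimal sketch, assuming the bucket resources referenced above and the aws.dr provider alias:

# Versioning is a prerequisite for S3 replication on both sides
resource "aws_s3_bucket_versioning" "audit_anchors" {
  bucket = aws_s3_bucket.audit_anchors.id

  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_versioning" "audit_anchors_dr" {
  provider = aws.dr
  bucket   = aws_s3_bucket.audit_anchors_dr.id

  versioning_configuration {
    status = "Enabled"
  }
}
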
resource "aws_route53_health_check" "gateway_us" {
fqdn = "gateway-us.example.com"
port = 443
type = "HTTPS"
resource_path = "/api/v1/health"
failure_threshold = 3
request_interval = 30
tags = {
Name = "rack-gateway-us-health"
}
}
resource "aws_route53_health_check" "gateway_eu" {
fqdn = "gateway-eu.example.com"
port = 443
type = "HTTPS"
resource_path = "/api/v1/health"
failure_threshold = 3
request_interval = 30
tags = {
Name = "rack-gateway-eu-health"
}
}
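
Health check failures can also raise an alert directly; a minimal sketch of a CloudWatch alarm, assuming an existing SNS topic named alerts (Route 53 publishes its health-check metrics in us-east-1):

# HealthCheckStatus is 1 while the endpoint is healthy and 0 when it fails
resource "aws_cloudwatch_metric_alarm" "gateway_us_unhealthy" {
  alarm_name          = "rack-gateway-us-unhealthy"
  namespace           = "AWS/Route53"
  metric_name         = "HealthCheckStatus"
  statistic           = "Minimum"
  period              = 60
  evaluation_periods  = 3
  threshold           = 1
  comparison_operator = "LessThanThreshold"

  dimensions = {
    HealthCheckId = aws_route53_health_check.gateway_us.id
  }

  alarm_actions = [aws_sns_topic.alerts.arn]   # assumed existing topic
}
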

Route users to the nearest healthy region:

resource "aws_route53_record" "gateway_us" {
zone_id = aws_route53_zone.main.zone_id
name = "gateway.example.com"
type = "A"
alias {
name = aws_lb.gateway_us.dns_name
zone_id = aws_lb.gateway_us.zone_id
evaluate_target_health = true
}
set_identifier = "us"
latency_routing_policy {
region = "us-east-1"
}
health_check_id = aws_route53_health_check.gateway_us.id
}
resource "aws_route53_record" "gateway_eu" {
zone_id = aws_route53_zone.main.zone_id
name = "gateway.example.com"
type = "A"
alias {
name = aws_lb.gateway_eu.dns_name
zone_id = aws_lb.gateway_eu.zone_id
evaluate_target_health = true
}
set_identifier = "eu"
latency_routing_policy {
region = "eu-west-1"
}
health_check_id = aws_route53_health_check.gateway_eu.id
}

Primary/DR configuration:

resource "aws_route53_record" "gateway_primary" {
zone_id = aws_route53_zone.main.zone_id
name = "gateway.example.com"
type = "A"
alias {
name = aws_lb.gateway_primary.dns_name
zone_id = aws_lb.gateway_primary.zone_id
evaluate_target_health = true
}
set_identifier = "primary"
failover_routing_policy {
type = "PRIMARY"
}
health_check_id = aws_route53_health_check.gateway_primary.id
}
resource "aws_route53_record" "gateway_dr" {
zone_id = aws_route53_zone.main.zone_id
name = "gateway.example.com"
type = "A"
alias {
name = aws_lb.gateway_dr.dns_name
zone_id = aws_lb.gateway_dr.zone_id
evaluate_target_health = true
}
set_identifier = "secondary"
failover_routing_policy {
type = "SECONDARY"
}
}
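
The primary record points at a gateway_primary health check that isn't defined above; a minimal sketch, assuming the US deployment is the primary (adjust the FQDN to whichever region you treat as primary):

resource "aws_route53_health_check" "gateway_primary" {
  fqdn              = "gateway-us.example.com"   # assumed primary endpoint
  port              = 443
  type              = "HTTPS"
  resource_path     = "/api/v1/health"
  failure_threshold = 3
  request_interval  = 30

  tags = {
    Name = "rack-gateway-primary-health"
  }
}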

Configure the CLI to handle multiple regional gateways:

~/.config/rack-gateway/config.json
{
  "racks": {
    "us": {
      "gateway_url": "https://gateway-us.example.com",
      "session_token": "..."
    },
    "eu": {
      "gateway_url": "https://gateway-eu.example.com",
      "session_token": "..."
    }
  },
  "current_rack": "us"
}

Usage:

Terminal window
# Work with US rack
rack-gateway --rack us convox apps
# Work with EU rack
rack-gateway --rack eu convox apps
# Switch default rack
rack-gateway rack eu

For independently deployed gateways, these items may need synchronization:

| Item | Sync Method | Frequency |
| --- | --- | --- |
| Users | OAuth (automatic) | On login |
| RBAC roles | Manual or automation | As needed |
| API tokens | Region-specific | N/A |
| Audit logs | S3 replication | Continuous |

Users authenticate via Google OAuth, so they’re automatically provisioned on first login to any region. Roles can be:

  1. Manual - Admins assign roles in each region
  2. Automated - Script syncs roles via API
  3. SSO-based - Derive roles from Google Groups
For example, an automated role sync might look like:

#!/bin/bash
# Sync admin users across regions
ADMINS="admin1@example.com,admin2@example.com"

for REGION in us eu; do
  rack-gateway --rack $REGION admin users sync --admins "$ADMINS"
done

Recovery objectives for the active-passive pattern:

| Metric | Target | Achieved By |
| --- | --- | --- |
| RPO | 15 minutes | S3 replication, RDS replica |
| RTO | 30 minutes | Automated failover |

Failover runbook:

  1. Detect failure

    Route 53 health checks automatically detect failure.

  2. Verify replication

    Terminal window
    # Check the replica's replication status; lag itself is the CloudWatch ReplicaLag metric
    aws rds describe-db-instances \
      --db-instance-identifier rack-gateway-dr \
      --region us-west-2 \
      --query 'DBInstances[0].StatusInfos'
  3. Promote RDS replica

    Terminal window
    aws rds promote-read-replica \
      --db-instance-identifier rack-gateway-dr \
      --region us-west-2
  4. Update gateway configuration

    Point DR gateway to promoted database.

  5. Verify DR gateway

    Terminal window
    curl https://gateway-dr.example.com/api/v1/health
  6. DNS failover

    Automatic if using Route 53 health checks, or manual:

    Terminal window
    aws route53 change-resource-record-sets ...

After the primary region is restored:

  1. Set up new RDS replication (primary → DR)
  2. Sync any data created during the outage (one approach is sketched below)
  3. Verify primary health
  4. Fail back DNS routing
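
For step 2, one option (not the only one) is to replicate temporarily in the reverse direction so that writes made in the DR region flow back before traffic returns. A minimal sketch, assuming an aws.primary provider alias for the original region and that the promoted DR instance is still managed as aws_db_instance.replica; the resource and key names are illustrative:

# Temporary reverse replica: promoted DR instance -> original primary region.
# Remove it once the data is back in the primary and normal replication is restored.
resource "aws_db_instance" "primary_resync" {
  provider = aws.primary   # assumed alias for the original primary region

  identifier          = "rack-gateway-primary-resync"
  replicate_source_db = aws_db_instance.replica.arn
  instance_class      = "db.t3.medium"

  storage_encrypted      = true
  kms_key_id             = aws_kms_key.database.arn          # assumed primary-region key
  vpc_security_group_ids = [aws_security_group.rds.id]       # assumed primary-region resources
  db_subnet_group_name   = aws_db_subnet_group.primary.name
}
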
Approximate monthly cost comparison:

| Component | Single Region | Multi-Region |
| --- | --- | --- |
| RDS | $50-200/mo | $100-400/mo |
| S3 (with CRR) | ~$0/mo | ~$0/mo |
| Route 53 health checks | - | $1.50/mo |
| Data transfer | - | Variable |
Isolation:

  • Use separate VPCs per region
  • Region-specific KMS keys
  • Independent IAM roles

Consistency:

  • Same Terraform modules across regions
  • Consistent naming conventions
  • Automated deployment pipelines

Monitoring:

  • CloudWatch dashboards per region
  • Cross-region metrics comparison
  • Unified alerting

Testing:

  • Regular DR drills
  • Failover automation tests
  • Replication lag monitoring (see the sketch below)
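
One way to monitor replication lag is a CloudWatch alarm on the RDS ReplicaLag metric; a minimal sketch, assuming the replica defined earlier, the aws.dr provider alias, and an existing SNS topic named alerts (the threshold is illustrative and aligned with the 15-minute RPO target):

# Alarm when the DR replica falls behind the primary by more than the RPO target
resource "aws_cloudwatch_metric_alarm" "replica_lag" {
  provider = aws.dr

  alarm_name          = "rack-gateway-dr-replica-lag"
  namespace           = "AWS/RDS"
  metric_name         = "ReplicaLag"
  statistic           = "Maximum"
  period              = 300
  evaluation_periods  = 3
  threshold           = 900           # seconds (15 minutes)
  comparison_operator = "GreaterThanThreshold"

  dimensions = {
    DBInstanceIdentifier = aws_db_instance.replica.identifier
  }

  alarm_actions = [aws_sns_topic.alerts.arn]   # assumed existing topic
}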