Amazon EVS • VCF 5.x • NSX-T • Terraform

Enterprise SDDC on AWS.
Done right, the first time.

The complete prerequisite guide for deploying VMware Cloud Foundation on Amazon Elastic VMware Service — with production-ready Terraform automation.

11 Phases Covered • 10 VLANs Mapped • 256-Core Min License • i4i.metal Bare Metal • VCF 5.x Compatible

This guide covers every prerequisite required before submitting an Amazon EVS environment creation request. Complete all 11 phases in order — EVS will validate most of these during bring-up and fail with an error if any are missing.

Phase | Prerequisite | Pre-Create Required?
1 | AWS Account & Business Support Plan | Yes
2 | IAM Permissions & Service-Linked Role | Yes
3 | VPC & IP Planning | Yes
4 | Route 53 Private Zone & Inbound Resolvers | Yes
5 | DHCP Options Set | Yes
6 | Service Access Subnet | Yes
7 | VPC Route Server & BGP (AWS side) | Yes
8 | VLAN IP Planning (10 VLANs) | Plan only — EVS creates
9 | On-Demand Capacity Reservation (ODCR) | Yes
10 | VCF Licensing (256-core key + vSAN key) | Procure before bring-up
11 | EC2 Key Pair & Service Quotas | Yes

1 Phase 1: AWS Account & Support Plan

Amazon EVS enforces a hard requirement on your AWS support tier. Environment creation will fail if this is not in place.

1.1 Required Support Plan

Your AWS account must be enrolled in AWS Business Support or higher (Enterprise On-Ramp or Enterprise Support). Developer Support is not sufficient. Verify your plan at AWS Console › Support › Support Plans.

1.2 Supported AWS Regions & Availability Zones

Amazon EVS is available in select regions. All EVS resources — VPC, subnets, hosts, and the Route Server — must reside in the same single AZ. EVS does not support multi-AZ deployments within a single environment. Choose your target AZ before proceeding and plan all subnets accordingly.

2 Phase 2: IAM Permissions & Service-Linked Role

EVS uses two distinct IAM constructs: a service-linked role that EVS manages automatically, and an identity-based policy that must be attached to the user or role initiating the deployment.

2.1 EVS Service-Linked Role

AWS automatically creates the AWSServiceRoleForEVS service-linked role on your first EVS operation. This role allows the EVS service to create and manage EC2 instances, network interfaces, subnets, and Secrets Manager secrets on your behalf. You cannot edit its permissions, but the deploying user must have permission to create it if it does not yet exist:

iam-evs-slr-policy.json — Allow SLR Creation (IAM)
# Attach to the user/role that will create the EVS environment
{
  "Effect": "Allow",
  "Action": "iam:CreateServiceLinkedRole",
  "Resource": "arn:aws:iam::*:role/aws-service-role/evs.amazonaws.com/AWSServiceRoleForEVS",
  "Condition": {
    "StringLike": { "iam:AWSServiceName": "evs.amazonaws.com" }
  }
}

2.2 User / Role Permissions

The principal creating the EVS environment needs the following permissions at minimum. Scope resource ARNs as tightly as practical for your environment.

Service | Required Actions
EVS | evs:*
EC2 | DescribeVpcs, DescribeSubnets, DescribeInstances, RunInstances, TerminateInstances, CreateSubnet, DeleteSubnet, CreateNetworkInterface, DeleteNetworkInterface, DescribeCapacityReservations
Secrets Manager | CreateSecret, UpdateSecret, DeleteSecret, DescribeSecret
KMS | DescribeKey, ListAliases (if using customer-managed keys)
IAM | CreateServiceLinkedRole (scoped to EVS SLR ARN)
Best Practice: Do not use root credentials to deploy EVS. Create a dedicated IAM role or user with least-privilege permissions scoped to the deployment account and region.

3 Phase 3: VPC & IP Planning

3.1 Create the EVS VPC

Create a dedicated VPC for the EVS environment. Key requirements:

Setting | Requirement
CIDR Block | Minimum /22 (1,024 addresses). Recommended /20 or larger for room to grow. Must be RFC 1918 private space. Cannot be changed after EVS deployment.
IPv6 | Not supported
DNS Hostnames | Must be enabled
DNS Support | Must be enabled
Tenancy | Default (dedicated tenancy is not supported)

3.2 Enable DNS on the VPC

Both DNS attributes must be enabled or EVS environment creation will fail:

AWS CLI — Enable VPC DNS Attributes
aws ec2 modify-vpc-attribute --vpc-id vpc-xxxxxxxxxxxxxxxxx --enable-dns-hostnames '{"Value":true}'
aws ec2 modify-vpc-attribute --vpc-id vpc-xxxxxxxxxxxxxxxxx --enable-dns-support '{"Value":true}'

3.3 IP Address Planning

Plan all CIDRs before any deployment. Overlapping ranges will cause EVS bring-up to fail and cannot be corrected without redeployment. The table below shows the recommended layout for a 10.0.0.0/16 VPC:

Subnet / VLAN | Purpose | Pre-Create? | Example CIDR
Service Access Subnet | Route Server endpoints, DNS Resolver ENIs | Yes | 10.0.0.0/24
Host Management VLAN | ESXi host management interfaces | No — EVS creates | 10.0.10.0/24
Management VM VLAN | vCenter, SDDC Manager, NSX Managers, NSX Edges | No — EVS creates | 10.0.11.0/24
vMotion VLAN | Live VM migration | No — EVS creates | 10.0.12.0/24
vSAN VLAN | Storage traffic | No — EVS creates | 10.0.13.0/24
Host VTEP VLAN | NSX host tunnel endpoints (Geneve) | No — EVS creates | 10.0.14.0/24
Edge VTEP VLAN | NSX Edge tunnel endpoints (Geneve) | No — EVS creates | 10.0.15.0/24
NSX Uplink VLAN | Tier-0 north-south BGP peering | No — EVS creates | 10.0.16.0/24
HCX Uplink VLAN | (Optional) HCX-IX & HCX-NE traffic | No — EVS creates | 10.0.20.0/28
Expansion VLAN 1 | Reserved for future VCF segments | No — EVS creates | 10.0.21.0/24
Expansion VLAN 2 | Reserved for future VCF segments | No — EVS creates | 10.0.22.0/24
Critical: Only the Service Access Subnet is pre-created by you. All 10 VLAN subnets are automatically provisioned by EVS during bring-up using the CIDRs you specify in the CreateEnvironment request. Reserve these ranges now and do not create overlapping subnets in the VPC.
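Because overlapping or out-of-range CIDRs cannot be corrected without redeployment, it is worth checking the plan programmatically before submitting the CreateEnvironment request. A minimal sketch using Python's standard ipaddress module; the validate_plan helper is illustrative (not part of any EVS tooling), and the CIDRs are the example values from the table above:

```python
import ipaddress

# Example plan from the table above — replace with your own CIDRs.
vpc = ipaddress.ip_network("10.0.0.0/16")
planned = {
    "service-access": "10.0.0.0/24",
    "host-mgmt":      "10.0.10.0/24",
    "mgmt-vm":        "10.0.11.0/24",
    "vmotion":        "10.0.12.0/24",
    "vsan":           "10.0.13.0/24",
    "host-vtep":      "10.0.14.0/24",
    "edge-vtep":      "10.0.15.0/24",
    "nsx-uplink":     "10.0.16.0/24",
    "hcx-uplink":     "10.0.20.0/28",
    "expansion-1":    "10.0.21.0/24",
    "expansion-2":    "10.0.22.0/24",
}

def validate_plan(vpc, planned):
    """Return a list of problems; an empty list means the plan is clean."""
    problems = []
    nets = {name: ipaddress.ip_network(cidr) for name, cidr in planned.items()}
    for name, net in nets.items():
        if not net.subnet_of(vpc):
            problems.append(f"{name} ({net}) is outside the VPC CIDR {vpc}")
        if not net.is_private:
            problems.append(f"{name} ({net}) is not RFC 1918 private space")
    names = list(nets)
    # Pairwise overlap check — any hit here would fail EVS bring-up.
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if nets[a].overlaps(nets[b]):
                problems.append(f"{a} overlaps {b}")
    return problems

print(validate_plan(vpc, planned))  # prints [] for this plan
```

Run it whenever the plan changes; any non-empty result must be resolved before requesting the environment.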

4 Phase 4: Route 53 Private Zone & Inbound Resolvers

VCF is extremely sensitive to DNS. EVS validates every DNS record during bring-up and will fail with a specific error for each missing or incorrect record. Complete this phase fully before creating the DHCP Options Set or the EVS environment.

4.1 Create the Route 53 Private Hosted Zone

Create a private hosted zone in Route 53 and associate it with the EVS VPC. Choose a domain that matches the domain_name you will put in the DHCP Options Set (e.g., evs.vcloudone.local). This domain must be unique within the VPC — do not reuse an existing zone.

4.2 Create Two Inbound Resolver Endpoints

Route 53 Resolver inbound endpoints allow VCF components running in the EVS VLANs to reach the private hosted zone. You must create two endpoints — each receives its own ENI and a dedicated IP address from the Service Access Subnet. These two IPs become the primary and secondary DNS servers in your DHCP Options Set.

Navigate to Route 53 › Resolver › Inbound endpoints › Create inbound endpoint and configure:

Field | Resolver 1 | Resolver 2
Name | evs-dns-resolver-1 | evs-dns-resolver-2
VPC | EVS VPC | EVS VPC
Security Group | Allow UDP/TCP port 53 inbound from VPC CIDR (e.g. 10.0.0.0/16) — same for both
IP Address (AZ) | 10.0.0.10 — Service Access Subnet | 10.0.0.11 — Service Access Subnet

Note both IP addresses. They are used directly as domain-name-servers in the DHCP Options Set in Phase 5.

4.3 Create DNS Records — A and PTR for All Components

All forward (A) and reverse (PTR) records must exist before EVS bring-up. Use IPs from your Management VM VLAN (10.0.11.0/24) for appliances and Host Management VLAN (10.0.10.0/24) for ESXi hosts. Create the following records in the private hosted zone:

Component | Forward A Record | Reverse PTR IP | VLAN
Cloud Builder | cloud-builder.evs.vcloudone.local | 10.0.11.9 | Mgmt VM
ESXi Host 01 | esxi-01.evs.vcloudone.local | 10.0.10.21 | Host Mgmt
ESXi Host 02 | esxi-02.evs.vcloudone.local | 10.0.10.22 | Host Mgmt
ESXi Host 03 | esxi-03.evs.vcloudone.local | 10.0.10.23 | Host Mgmt
ESXi Host 04 | esxi-04.evs.vcloudone.local | 10.0.10.24 | Host Mgmt
SDDC Manager | sddc-manager.evs.vcloudone.local | 10.0.11.10 | Mgmt VM
vCenter Server | vcenter.evs.vcloudone.local | 10.0.11.11 | Mgmt VM
NSX Manager 01 | nsx-mgr-01.evs.vcloudone.local | 10.0.11.12 | Mgmt VM
NSX Manager 02 | nsx-mgr-02.evs.vcloudone.local | 10.0.11.13 | Mgmt VM
NSX Manager 03 | nsx-mgr-03.evs.vcloudone.local | 10.0.11.14 | Mgmt VM
NSX Edge 01 | nsx-edge-01.evs.vcloudone.local | 10.0.11.21 | Mgmt VM
NSX Edge 02 | nsx-edge-02.evs.vcloudone.local | 10.0.11.22 | Mgmt VM
HCX Manager | (Optional) hcx.evs.vcloudone.local | 10.0.20.10 | HCX Uplink

Also create reverse lookup (PTR) zones for each subnet: 10.0.10.in-addr.arpa (Host Mgmt) and 11.0.10.in-addr.arpa (Mgmt VM), associated with the EVS VPC.
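The reverse zone and PTR record names follow mechanically from each IP: the octets are reversed under in-addr.arpa. A quick illustrative sketch with Python's standard ipaddress module, using the example addressing from this guide (the helper names are my own, not EVS tooling):

```python
import ipaddress

def ptr_name(ip):
    """Full PTR record name for an IPv4 address (all four octets reversed)."""
    return ipaddress.ip_address(ip).reverse_pointer

def reverse_zone(cidr):
    """Reverse zone name for a /24: first three octets reversed."""
    net = ipaddress.ip_network(cidr)
    o1, o2, o3, _ = str(net.network_address).split(".")
    return f"{o3}.{o2}.{o1}.in-addr.arpa"

print(reverse_zone("10.0.10.0/24"))  # Host Mgmt zone: 10.0.10.in-addr.arpa
print(reverse_zone("10.0.11.0/24"))  # Mgmt VM zone:   11.0.10.in-addr.arpa
print(ptr_name("10.0.11.10"))        # sddc-manager PTR: 10.11.0.10.in-addr.arpa
```

Note that 10.0.10.in-addr.arpa happens to read the same forwards and backwards, which is why the two zone names above look so similar; the derivation is still octet reversal in both cases.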

DNS Validation: EVS performs DNS resolution checks for every record listed above during environment creation. Missing A records, missing PTR records, or records that resolve to wrong IPs will cause bring-up to fail with a specific validation error. Do not proceed to Phase 5 until all records resolve correctly.

5 Phase 5: DHCP Options Set

The DHCP Options Set tells EVS-managed hosts how to resolve DNS and synchronize time. It must be associated with the VPC before environment creation — not after.

5.1 Create and Associate the DHCP Options Set

Navigate to VPC › DHCP Option Sets › Create DHCP options set and configure:

Field | Value | Notes
Domain name | evs.vcloudone.local | Must exactly match the Route 53 private hosted zone name
Domain name servers | 10.0.0.10, 10.0.0.11 | The two inbound resolver IPs from Phase 4.2 — primary first
NTP servers | 169.254.169.123 | AWS Time Sync Service — link-local, no internet required

After creating the set, associate it with the EVS VPC: select the VPC › Actions › Edit DHCP options set › choose the new set.

Why two DNS servers? EVS validates that both primary and secondary DNS IPs are reachable from the VPC before proceeding with bring-up. A single DNS server is not sufficient — if it fails, VCF has no fallback and bring-up will stall.

6 Phase 6: Service Access Subnet

6.1 Create the Service Access Subnet

This is the only subnet you pre-create before EVS bring-up. All 10 VLAN subnets are created automatically by EVS. The Service Access Subnet hosts the Route Server endpoints and the Route 53 Resolver ENIs.

Setting | Requirement
VPC | The EVS VPC
CIDR size | /24 recommended (minimum /28 for Route Server endpoints alone)
Availability Zone | Same AZ as the planned EVS environment
Route Table | Must have an explicit subnet association — not just the implicit main route table. This is required for BGP route propagation in Phase 7.
DNS reachability | Must be able to reach the DNS resolver IPs (which are also in this subnet)

7 Phase 7: VPC Route Server & BGP

The VPC Route Server is the BGP bridge between the AWS underlay and the NSX overlay. You configure the AWS side entirely before bring-up. NSX Tier-0 BGP is automatically configured by EVS during environment creation — you do not need to touch NSX for this.

7.1 Create the VPC Route Server

Navigate to VPC › Route Servers › Create route server:

Field | Value | Notes
Amazon Side ASN | e.g. 65100 | Must be in private range 64512–65534. Must differ from NSX Edge ASN (default 65000).
Persist Routes | Enabled | Preserves routes for 1–5 min after BGP session drops; prevents route flaps during maintenance
SNS Notifications | Recommended | Alerts on BGP state changes in production

Wait for Route Server status to reach Available before proceeding.
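The ASN constraints are easy to get wrong and only surface later as a failed BGP session, so a pre-flight check is cheap insurance. An illustrative sketch (check_asns is a hypothetical helper, not an AWS API):

```python
def check_asns(route_server_asn, nsx_peer_asn):
    """Validate the AWS-side Route Server ASN against the private range
    and against the NSX Edge peer ASN (the two must differ)."""
    errors = []
    if not 64512 <= route_server_asn <= 65534:
        errors.append(f"ASN {route_server_asn} is outside the private range 64512-65534")
    if route_server_asn == nsx_peer_asn:
        errors.append("Route Server ASN must differ from the NSX Edge ASN")
    return errors

print(check_asns(65100, 65000))  # prints [] for the defaults in this guide
```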

7.2 Associate with the Service Access Subnet

Select the Route Server › Associations › Associate route server. Select the Service Access Subnet created in Phase 6. AWS will automatically deploy two Route Server endpoints into that subnet. Note the IP address of each endpoint — EVS uses these as the BGP neighbor addresses when it auto-configures the NSX Tier-0 gateway during bring-up.

7.3 Create Two Route Server Peers

Navigate to Route server peers › Create route server peer. Create one peer per NSX Edge node. The peer address must be an IP from the NSX Uplink VLAN CIDR (e.g., 10.0.16.0/24) — these IPs will be assigned to NSX Edge uplink interfaces during EVS bring-up.

Field | Peer 1 (Edge 01) | Peer 2 (Edge 02)
Route Server Endpoint | Endpoint 1 ID | Endpoint 2 ID
Peer Address | 10.0.16.10 (NSX Uplink VLAN) | 10.0.16.11 (NSX Uplink VLAN)
Peer ASN | 65000 (NSX default) | 65000 (NSX default)
Liveness Detection | BGP Keepalive | BGP Keepalive
Important: Use BGP Keepalive only. BFD (Bidirectional Forwarding Detection) is not supported for Amazon EVS and will cause undefined session behavior.

7.4 Enable Route Propagation

Select the Route Server › Propagations › Create propagation. Select the Service Access Subnet route table and any workload subnet route tables that need to receive NSX overlay routes.

Common failure: Every propagated route table must have an explicit subnet association. Route tables that only have the implicit main-route-table association will silently fail to receive BGP routes. Verify explicit associations exist before proceeding.

7.5 Network ACL Requirements for BGP

Ensure NACLs on the Service Access Subnet and NSX Uplink VLAN subnets allow BGP traffic. NACLs are stateless — both directions must be explicitly permitted.

Direction | Protocol | Ports | Source / Destination
Inbound | TCP | 179 (BGP) | NSX Uplink VLAN CIDR
Inbound | TCP | 49152–65535 (ephemeral) | NSX Uplink VLAN CIDR
Outbound | TCP | 179 (BGP) | NSX Uplink VLAN CIDR
Outbound | TCP | 49152–65535 (ephemeral) | NSX Uplink VLAN CIDR
NSX Tier-0 BGP is auto-configured. EVS automatically configures the NSX Tier-0 gateway BGP neighbors using the Route Server endpoint IPs and the peer ASN you specified. You do not need to log into NSX Manager to configure BGP — this is handled entirely during EVS bring-up.

8 Phase 8: VLAN Subnet Sizing

You do not create these subnets — you provide the CIDRs in the EVS CreateEnvironment API call. Plan them now against your VPC CIDR to ensure zero overlap. All VLANs accept /28 to /24, except the HCX Uplink VLAN, which must be exactly /28.

VLAN | Description | Allowed Size | Example CIDR
Host Management | ESXi host management interfaces | /28 – /24 | 10.0.10.0/24
Management VM | vCenter, SDDC Manager, NSX Managers, Edges | /28 – /24 | 10.0.11.0/24
vMotion | Live VM migration | /28 – /24 | 10.0.12.0/24
vSAN | Storage traffic | /28 – /24 | 10.0.13.0/24
Host VTEP | NSX host tunnel endpoints | /28 – /24 | 10.0.14.0/24
Edge VTEP | NSX Edge tunnel endpoints | /28 – /24 | 10.0.15.0/24
NSX Uplink | Tier-0 BGP peering (source IPs for Route Server peers) | /28 – /24 | 10.0.16.0/24
HCX Uplink | (Optional) HCX-IX & HCX-NE | Exactly /28 | 10.0.20.0/28
Expansion VLAN 1 | Future workload segments | /28 – /24 | 10.0.21.0/24
Expansion VLAN 2 | Future workload segments | /28 – /24 | 10.0.22.0/24
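These sizing rules can be validated in the same pass as overlap planning. An illustrative sketch using Python's standard ipaddress module (check_vlan_size is a hypothetical helper; "hcx-uplink" is just the label this sketch uses for the exact-/28 rule):

```python
import ipaddress

def check_vlan_size(name, cidr):
    """Return an error string if the CIDR violates EVS VLAN sizing rules, else None."""
    prefix = ipaddress.ip_network(cidr).prefixlen
    if name == "hcx-uplink":  # HCX Uplink must be exactly /28
        return None if prefix == 28 else f"{name}: must be exactly /28, got /{prefix}"
    if not 24 <= prefix <= 28:  # all other VLANs accept /28 through /24
        return f"{name}: allowed range is /28 to /24, got /{prefix}"
    return None

print(check_vlan_size("hcx-uplink", "10.0.20.0/28"))   # None: valid
print(check_vlan_size("host-mgmt", "10.0.10.0/24"))    # None: valid
print(check_vlan_size("expansion-1", "10.0.20.0/22"))  # error: /22 is too large
```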

9 Phase 9: On-Demand Capacity Reservation (ODCR)

Amazon EVS uses i4i.metal bare-metal instances exclusively. Because bare-metal capacity can be constrained in specific AZs, you must create an On-Demand Capacity Reservation (ODCR) before deployment to guarantee availability.

9.1 Create the Capacity Reservation

Setting | Value
Instance type | i4i.metal (128 vCPUs, up to 30 TB NVMe per host)
Platform | Linux/UNIX
Quantity | Minimum 4 (match your planned cluster size; max 16)
Availability Zone | Must be the same AZ as the EVS environment
Reservation type | Targeted (recommended for security; open also works)
End date | No end date (or set beyond your planned deployment window)
AWS CLI — Create ODCR for 4 EVS Hosts
aws ec2 create-capacity-reservation \
  --availability-zone us-east-1a \
  --instance-type i4i.metal \
  --instance-platform Linux/UNIX \
  --instance-count 4 \
  --instance-match-criteria targeted \
  --tag-specifications 'ResourceType=capacity-reservation,Tags=[{Key=Name,Value=evs-odcr-mgmt}]'

# Verify it reaches 'active' state before proceeding
aws ec2 describe-capacity-reservations \
  --query 'CapacityReservations[*].[CapacityReservationId,State,AvailableInstanceCount]'
EC2 vCPU Quota: Each i4i.metal instance uses 128 vCPUs. A 4-host cluster requires a minimum 512 vCPU quota for Running On-Demand Standard instances. Check and request a quota increase at Service Quotas › Amazon EC2 › Running On-Demand Standard (A, C, D, H, I, M, R, T, Z) instances before deployment if needed.
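The quota arithmetic behind that note is linear: each i4i.metal host consumes 128 vCPUs against the Running On-Demand Standard quota. A small illustrative sketch (the helper is my own, not AWS tooling):

```python
VCPUS_PER_I4I_METAL = 128

def required_vcpu_quota(host_count):
    """Minimum Running On-Demand Standard vCPU quota for host_count i4i.metal hosts."""
    if not 4 <= host_count <= 16:
        raise ValueError("EVS clusters run 4 to 16 hosts")
    return host_count * VCPUS_PER_I4I_METAL

print(required_vcpu_quota(4))   # 512, the minimum for the initial cluster
print(required_vcpu_quota(16))  # 2048 for a full 16-host cluster
```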

10 Phase 10: VCF Licensing

Amazon EVS is a BYOL (Bring Your Own License) service. You must procure VCF licenses from Broadcom before bring-up. Licenses are entered during environment creation and validated by SDDC Manager post-deployment.

10.1 Required License Keys

License | Minimum | Notes
VCF Solution Key | 256 cores | 4 hosts × 64 cores per i4i.metal = 256-core minimum. Covers vSphere 8 Enterprise Plus, NSX, SDDC Manager, vCenter. Each key can only be assigned to one EVS environment.
vSAN License Key | 110 TiB | Covers the vSAN storage capacity of the initial 4-host cluster. Cannot be reused across environments.
Broadcom Site ID | Required | Your organization's Broadcom support portal site ID, used for license validation and entitlement verification.

10.2 License Application Process

License keys are provided as inputs during the CreateEnvironment API call or via the AWS Console environment creation wizard. After EVS bring-up completes:

  • Log into vSphere Client and assign the VCF Solution Key to each host
  • Verify the vSAN key appears in SDDC Manager › Licensing
  • Perpetual vSphere licenses are not supported — VCF subscription keys with portability entitlements only
  • A license with fewer than 256 cores will cause environment creation to fail
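The 256-core figure comes straight from the hardware: VCF licensing counts physical cores, and each i4i.metal host has 64 (128 vCPUs with hyperthreading). A one-line illustrative check:

```python
CORES_PER_I4I_METAL = 64  # physical cores per host (128 vCPUs with hyperthreading)

def required_vcf_cores(host_count):
    """Physical cores the VCF Solution Key must cover for an i4i.metal cluster."""
    return host_count * CORES_PER_I4I_METAL

print(required_vcf_cores(4))  # 256, the minimum license size for a 4-host cluster
```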

11 Phase 11: EC2 Key Pair & Final Checks

11.1 EC2 Key Pair

Create an EC2 key pair in the same region as the EVS environment. This key pair is used for SSH access to ESXi hosts during and after deployment. Navigate to EC2 › Key Pairs › Create key pair. Store the private key securely — it cannot be recovered after creation.

11.2 Pre-Deployment Validation Checklist

Verify every item before submitting the environment creation request:

  • AWS Business Support (or higher) active
  • IAM user/role has EVS + EC2 + SLR permissions
  • VPC created with /22+ CIDR, DNS enabled
  • Route 53 private hosted zone created and VPC-associated
  • Two inbound resolver endpoints created (IPs noted)
  • All A and PTR records created and resolving correctly
  • DHCP Options Set created with domain, 2 DNS IPs, NTP
  • DHCP Options Set associated with EVS VPC
  • Service Access Subnet created with explicit route table association
  • VPC Route Server in Available state
  • Two Route Server endpoints in Available state
  • Two Route Server peers created (NSX Uplink VLAN IPs, BGP Keepalive)
  • Route propagation enabled on target route tables
  • NACLs allow TCP 179 and ephemeral ports bidirectionally
  • All 10 VLAN CIDRs planned, non-overlapping
  • ODCR created for i4i.metal in correct AZ (state: active)
  • EC2 vCPU quota ≥ 512 (for 4-host cluster)
  • VCF Solution Key procured (256+ cores)
  • vSAN License Key procured (110+ TiB)
  • Broadcom Site ID available
  • EC2 key pair created in target region

Automation: Terraform Deployment

The EVS-PreREQ-Terraform kit automates Phases 2–7: IAM policy, VPC, DNS resolvers, Route 53 records, DHCP options set, service access subnet with explicit route table, and Route Server with BGP peers. Phase 9 (ODCR) is included in odcr.tf as commented-out code — uncomment when ready. All defaults match the IP plan in this guide and are production-ready.

File descriptions

File | Phase | What it creates
variables.tf | — | All configurable values: region, AZ, CIDRs, domain, IPs, ASNs, host maps. Edit this file before applying.
main.tf | 2, 3, 4, 5, 6 | VPC, service access subnet, explicit route table, DNS security group, Route 53 inbound resolver endpoint (2 IPs), DHCP options set, EVS service-linked role
dns.tf | 4 | Route 53 private hosted zone, two reverse PTR zones (Host VLAN + Mgmt VLAN), A and PTR records for all VCF components and ESXi hosts
peering.tf | 7 | VPC Route Server, VPC association, two Route Server endpoints in the service access subnet, two BGP peers (NSX Edge uplink IPs, bgp-keepalive), route propagation to the explicit route table
iam.tf | 2 | EVSDeploymentPolicy — minimum IAM policy for the user or role that calls CreateEnvironment
odcr.tf | 9 | On-Demand Capacity Reservation for i4i.metal — all code is commented out. Uncomment when EC2 vCPU quota ≥ 512 and you are ready to reserve capacity.
outputs.tf | — | Prints all resource IDs, IPs, and zone IDs after apply — useful for cross-checking against the EVS Console during bring-up
providers.tf | — | Terraform ≥ 1.3 and AWS provider ≥ 5.84 (required for Route Server resources). No remote state — local backend.

Customizing variables.tf

Open variables.tf and update the following before running terraform apply. The defaults match the IP plan in this guide — change them to match your environment.

variables.tf — values to update for your environment
# ── Target region and AZ ──────────────────────────────────────────────────────
# All EVS resources must be in the same AZ.
variable "aws_region" { default = "us-east-1" }   # change to your region
variable "az"         { default = "us-east-1a" } # change to your target AZ

# ── VPC & subnet CIDRs ────────────────────────────────────────────────────────
# vpc_cidr minimum /22 — cannot be changed after EVS deployment.
# service_access_cidr hosts the Route Server endpoints and DNS resolver ENIs.
variable "vpc_cidr"            { default = "10.0.0.0/16"  }
variable "service_access_cidr" { default = "10.0.0.0/24"  }

# ── DNS domain ────────────────────────────────────────────────────────────────
# Must match the Route 53 private hosted zone name exactly.
# All VCF components are registered under this domain.
variable "domain_name" { default = "evs.vcloudone.local" }

# ── Resolver IPs ──────────────────────────────────────────────────────────────
# Two static IPs within service_access_cidr.
# These become the DHCP domain-name-servers — must be stable and ordered.
variable "resolver_ip_1" { default = "10.0.0.10" }  # primary DNS
variable "resolver_ip_2" { default = "10.0.0.11" }  # secondary DNS

# ── BGP ASNs ──────────────────────────────────────────────────────────────────
# route_server_asn: VPC Route Server ASN — must differ from nsx_peer_asn.
# nsx_peer_asn: NSX Tier-0 ASN — default NSX value is 65000.
variable "route_server_asn" { default = 65100 }
variable "nsx_peer_asn"     { default = 65000 }

# ── VCF component IPs ─────────────────────────────────────────────────────────
# Must be in the Management VM VLAN (default 10.0.11.x).
# A and PTR records are created for every entry in this map.
variable "vcf_components" {
  default = {
    "cloud-builder" = "10.0.11.9"
    "sddc-manager"  = "10.0.11.10"
    "vcenter"       = "10.0.11.11"
    "nsx-mgr-01"    = "10.0.11.12"
    "nsx-mgr-02"    = "10.0.11.13"
    "nsx-mgr-03"    = "10.0.11.14"
    "nsx-edge-01"   = "10.0.11.21"
    "nsx-edge-02"   = "10.0.11.22"
  }
}

# ── ESXi host IPs ─────────────────────────────────────────────────────────────
# Must be in the Host Management VLAN (default 10.0.10.x).
# Add or remove entries to match your host count (minimum 4 for EVS).
variable "esxi_hosts" {
  default = {
    "esxi-01" = "10.0.10.21"
    "esxi-02" = "10.0.10.22"
    "esxi-03" = "10.0.10.23"
    "esxi-04" = "10.0.10.24"
  }
}

# ── NSX Edge uplink IPs ───────────────────────────────────────────────────────
# IPs assigned to NSX Edge uplink interfaces during EVS bring-up.
# Must be within the NSX Uplink VLAN (default 10.0.16.x).
# These are configured as Route Server BGP peer addresses.
variable "nsx_edge_peer_ips" {
  default = {
    "edge-01" = "10.0.16.10"
    "edge-02" = "10.0.16.11"
  }
}

Usage

terminal (bash)
# 1. Clone and enter the repo
git clone https://github.com/gitvcloudone/EVS-PreREQ-Terraform.git
cd EVS-PreREQ-Terraform

# 2. Edit variables.tf with your region, AZ, CIDRs, IPs, and hostnames
#    (see table above — at minimum update aws_region, az, and domain_name)

# 3. Initialize providers (downloads hashicorp/aws ~= 5.84)
terraform init

# 4. Preview what will be created — review before applying
terraform plan

# 5. Deploy (~45 resources, typically completes in 3-5 minutes)
terraform apply

# 6. Review outputs — all resource IDs and IPs are printed
terraform output
ODCR (Phase 9): The odcr.tf file contains a commented-out aws_ec2_capacity_reservation resource for i4i.metal. Uncomment it when your EC2 vCPU On-Demand Standard quota is ≥ 512 and you are ready to reserve capacity before deploying EVS. The reservation uses instance_match_criteria = "targeted" so only your EVS deployment will consume it.

Terraform file reference

main.tf — VPC, Subnet, Route Table, Resolvers, DHCP, SLR
# Phase 3 — VPC (DNS attributes required by EVS)
resource "aws_vpc" "evs_vpc" {
  cidr_block           = var.vpc_cidr
  enable_dns_hostnames = true
  enable_dns_support   = true
  tags = { Name = "evs-vpc" }
}

# Phase 6 — Service Access Subnet + explicit route table association
# Explicit association is required — propagated route tables without it
# silently fail to receive BGP routes from the Route Server.
resource "aws_subnet" "service_access" {
  vpc_id            = aws_vpc.evs_vpc.id
  cidr_block        = var.service_access_cidr
  availability_zone = var.az
}
resource "aws_route_table" "service_access" {
  vpc_id = aws_vpc.evs_vpc.id
  tags   = { Name = "evs-service-access-rt" }
}
resource "aws_route_table_association" "service_access" {
  subnet_id      = aws_subnet.service_access.id
  route_table_id = aws_route_table.service_access.id
}

# Phase 4 — Route 53 Inbound Resolver Endpoint
# AWS requires a minimum of 2 IPs per resolver endpoint.
# Explicit IPs (not dynamic lookup) ensure stable ordering in DHCP options.
resource "aws_route53_resolver_endpoint" "evs_dns" {
  name               = "evs-dns-resolver"
  direction          = "INBOUND"
  security_group_ids = [aws_security_group.dns_sg.id]
  ip_address {
    subnet_id = aws_subnet.service_access.id
    ip        = var.resolver_ip_1
  }
  ip_address {
    subnet_id = aws_subnet.service_access.id
    ip        = var.resolver_ip_2
  }
  tags = { Name = "evs-dns-resolver" }
}

# Phase 5 — DHCP Options Set
# domain_name must match the Route 53 private hosted zone exactly.
# NTP uses AWS Time Sync Service (169.254.169.123) — no internet required.
resource "aws_vpc_dhcp_options" "evs_dhcp" {
  domain_name         = var.domain_name
  domain_name_servers = [var.resolver_ip_1, var.resolver_ip_2]
  ntp_servers         = ["169.254.169.123"]
}
resource "aws_vpc_dhcp_options_association" "evs" {
  vpc_id          = aws_vpc.evs_vpc.id
  dhcp_options_id = aws_vpc_dhcp_options.evs_dhcp.id
}

# Phase 2 — EVS Service-Linked Role
resource "aws_iam_service_linked_role" "evs" {
  aws_service_name = "evs.amazonaws.com"
}

# Phase 9 — On-Demand Capacity Reservation (ODCR) — uncomment to enable
# Requires EC2 vCPU quota ≥ 512 (4 × i4i.metal × 128 vCPUs).
# instance_match_criteria = "targeted" means only launches that explicitly
# reference this reservation ID will consume capacity.
#
# resource "aws_ec2_capacity_reservation" "evs_hosts" {
#   instance_type           = "i4i.metal"
#   instance_platform       = "Linux/UNIX"
#   availability_zone       = var.az
#   instance_count          = 4
#   instance_match_criteria = "targeted"
#   tags                    = { Name = "evs-odcr" }
# }

peering.tf — Route Server, Endpoints, BGP Peers, Propagation
# Route Server ASN must differ from NSX Tier-0 ASN (default 65000).
# persist_routes prevents route flaps during brief BGP session drops.
resource "aws_vpc_route_server" "evs_rs" {
  amazon_side_asn         = var.route_server_asn  # default 65100
  persist_routes          = "enable"
  persist_routes_duration = 2
}
resource "aws_vpc_route_server_vpc_association" "evs_rs_vpc" {
  route_server_id = aws_vpc_route_server.evs_rs.route_server_id
  vpc_id          = aws_vpc.evs_vpc.id
}

# Two endpoints in the Service Access Subnet.
# EVS reads these endpoint IPs and auto-configures NSX Tier-0 BGP neighbors
# during bring-up — no manual NSX configuration required.
resource "aws_vpc_route_server_endpoint" "ep1" {
  route_server_id = aws_vpc_route_server.evs_rs.route_server_id
  subnet_id       = aws_subnet.service_access.id
  depends_on      = [aws_vpc_route_server_vpc_association.evs_rs_vpc]
}
resource "aws_vpc_route_server_endpoint" "ep2" {
  route_server_id = aws_vpc_route_server.evs_rs.route_server_id
  subnet_id       = aws_subnet.service_access.id
  depends_on      = [aws_vpc_route_server_vpc_association.evs_rs_vpc]
}

# Peer IPs are NSX Edge uplink IPs from the NSX Uplink VLAN (10.0.16.x).
# BFD is not supported for EVS — always use bgp-keepalive.
resource "aws_vpc_route_server_peer" "edge_01" {
  route_server_endpoint_id = aws_vpc_route_server_endpoint.ep1.route_server_endpoint_id
  peer_address             = var.nsx_edge_peer_ips["edge-01"]  # 10.0.16.10
  bgp_options {
    peer_asn                = var.nsx_peer_asn  # 65000
    peer_liveness_detection = "bgp-keepalive"
  }
}
resource "aws_vpc_route_server_peer" "edge_02" {
  route_server_endpoint_id = aws_vpc_route_server_endpoint.ep2.route_server_endpoint_id
  peer_address             = var.nsx_edge_peer_ips["edge-02"]  # 10.0.16.11
  bgp_options {
    peer_asn                = var.nsx_peer_asn
    peer_liveness_detection = "bgp-keepalive"
  }
}

# Propagate NSX overlay routes into the explicitly-associated route table.
resource "aws_vpc_route_server_propagation" "service_access_rt" {
  route_server_id = aws_vpc_route_server.evs_rs.route_server_id
  route_table_id  = aws_route_table.service_access.id
  depends_on      = [aws_route_table_association.service_access]
}

dns.tf — Private Zone, A Records, PTR Records (both VLANs)
resource "aws_route53_zone" "forward" {
  name = var.domain_name
  vpc { vpc_id = aws_vpc.evs_vpc.id }
}
# Host Management VLAN reverse zone (ESXi hosts — 10.0.10.x)
resource "aws_route53_zone" "host_mgmt_reverse" {
  name = "10.0.10.in-addr.arpa"
  vpc { vpc_id = aws_vpc.evs_vpc.id }
}
# Management VM VLAN reverse zone (appliances — 10.0.11.x)
resource "aws_route53_zone" "mgmt_vm_reverse" {
  name = "11.0.10.in-addr.arpa"
  vpc { vpc_id = aws_vpc.evs_vpc.id }
}

# VCF appliance A + PTR records (cloud-builder, sddc-manager, vcenter,
# nsx-mgr-01/02/03, nsx-edge-01/02) — all in Management VM VLAN (10.0.11.x)
resource "aws_route53_record" "vcf_a" {
  for_each = var.vcf_components
  zone_id  = aws_route53_zone.forward.zone_id
  name     = "${each.key}.${var.domain_name}"
  type     = "A"; ttl = 300; records = [each.value]
}
resource "aws_route53_record" "vcf_ptr" {
  for_each = var.vcf_components
  zone_id  = aws_route53_zone.mgmt_vm_reverse.zone_id
  name     = element(split(".", each.value), 3)  # last octet only
  type     = "PTR"; ttl = 300
  records  = ["${each.key}.${var.domain_name}."]
}

# ESXi host A + PTR records — Host Management VLAN (10.0.10.x)
resource "aws_route53_record" "esxi_a" {
  for_each = var.esxi_hosts
  zone_id  = aws_route53_zone.forward.zone_id
  name     = "${each.key}.${var.domain_name}"
  type     = "A"; ttl = 300; records = [each.value]
}
resource "aws_route53_record" "esxi_ptr" {
  for_each = var.esxi_hosts
  zone_id  = aws_route53_zone.host_mgmt_reverse.zone_id
  name     = element(split(".", each.value), 3)
  type     = "PTR"; ttl = 300
  records  = ["${each.key}.${var.domain_name}."]
}

Automation: Prerequisite Validator

The EVS-PreREQ-Validate script uses boto3 to auto-discover every non-default VPC in your account and validate it against EVS prerequisites — no input required. Run it against a manually-built environment or after running the Terraform kit above to confirm everything is wired correctly end-to-end.


Prerequisites

  • Python 3.8 or later
  • AWS credentials configured (aws configure, IAM instance role, or environment variables)
  • boto3 (pip install boto3)
  • IAM permissions: ec2:Describe*, iam:ListRoles, iam:GetPolicy, route53:*, route53resolver:ListResolverEndpoints, route53resolver:ListResolverEndpointIpAddresses, servicequotas:GetServiceQuota, support:DescribeSeverityLevels

Installation & usage

terminal (bash)
# Install boto3 if not already installed
pip install boto3

# Clone the validator
git clone https://github.com/gitvcloudone/EVS-PreREQ-Validate.git
cd EVS-PreREQ-Validate

# Run against default region (from AWS config / environment)
python evs_validate.py

# Run against a specific region
python evs_validate.py --region us-east-1
python evs_validate.py --region eu-west-1

Expected output

The script runs account-level checks first, then iterates every non-default VPC. Each check prints PASS, FAIL, or WARN. A summary with total counts is printed at the end. The exit code is 0 (all pass), 1 (failures), or 2 (credential error).

evs_validate.py — annotated sample output (Python / boto3)
================================================================
  Amazon EVS Prerequisite Validator
  Region : us-east-1   Account : 123456789012
================================================================

── Phase 2: IAM
  ✓ PASS  AWSServiceRoleForEVS service-linked role exists
  ✓ PASS  EVSDeploymentPolicy found (arn:aws:iam::123456789012:policy/EVSDeploymentPolicy)
  # FAIL here means the SLR or policy was never created — run the Terraform kit
  # or create them manually before calling CreateEnvironment.

── EC2 vCPU Service Quota
  ✓ PASS  vCPU quota = 512  — meets minimum for 4 × i4i.metal
  # FAIL: quota = 256 — request increase at Service Quotas > Amazon EC2 >
  # "Running On-Demand Standard instances" before deploying EVS.

── AWS Support Plan
  ✓ PASS  Support plan is Business or higher
  # FAIL: Basic/Developer plan — EVS CreateEnvironment will be rejected.
  # Upgrade to Business, Enterprise On-Ramp, or Enterprise.

── VPC Auto-Discovery
    INFO  Found 1 non-default VPC(s)
  # The script skips the default VPC. If you see 0, your VPC hasn't been created.

================================================================
VPC: evs-vpc  (vpc-0f75dd807fc01abb9)  CIDR: 10.0.0.0/16
================================================================

── Phase 3: VPC
  ✓ PASS  CIDR 10.0.0.0/16 meets minimum /22 requirement
  ✓ PASS  DNS hostnames enabled
  ✓ PASS  DNS support enabled

── Phase 5: DHCP Options Set
  ✓ PASS  Domain name = 'evs.vcloudone.local'
  ✓ PASS  DNS servers: 10.0.0.10, 10.0.0.11
  ✓ PASS  NTP = 169.254.169.123 (AWS Time Sync Service)
  # FAIL on domain_name: DHCP domain must match the Route 53 zone exactly.
  # FAIL on DNS servers: printed IPs must match resolver endpoint IPs (see below).
  # FAIL on NTP: 169.254.169.123 must be in the list — no internet required.

── Phase 4: Route 53 Private Zones
  ✓ PASS  Forward zone(s): ['evs.vcloudone.local']
  ✓ PASS  Reverse PTR zone(s): ['11.0.10.in-addr.arpa', '10.0.10.in-addr.arpa']
    INFO  Zone: evs.vcloudone.local  (Z04587343LEL3Z83E8QNP)  records: 14
  ✓ PASS  DHCP domain 'evs.vcloudone.local' matches Route 53 zone
  # FAIL if no forward zone: create the private hosted zone and associate with the VPC.
  # FAIL if no reverse zones: EVS requires PTR zones for both ESXi and Mgmt VLANs.
  # records: 14 = SOA + NS + 4 ESXi A + 8 VCF component A records — verify the
  #   count matches (PTR records live in the reverse zones, not the forward zone).

── Phase 4: Route 53 Inbound Resolver Endpoint
  ✓ PASS  Inbound resolver endpoint(s): ['evs-dns-resolver']
  ✓ PASS    'evs-dns-resolver' — IPs: 10.0.0.11, 10.0.0.10  (status: OPERATIONAL)
  ✓ PASS    Resolver IPs ['10.0.0.11', '10.0.0.10'] are present in DHCP DNS servers
  # FAIL on resolver: create an INBOUND resolver endpoint with ≥ 2 IPs in the VPC.
  # FAIL on cross-check: DHCP domain-name-servers must exactly match resolver IPs.
  # FAIL if status is not OPERATIONAL: resolver endpoint is still provisioning.

── Phase 6: Service Access Subnet & Route Tables
  ✓ PASS  1 subnet(s) across AZs: us-east-1a(1)
  ✓ PASS  Explicit route table(s): ['rtb-0b7c5516a85491e95']
  ✓ PASS  Explicit subnet association(s): ['subnet-003763d6fb9af8fca']
  # FAIL on explicit route table: the main (implicit) VPC route table does not count.
  # A dedicated route table with an explicit subnet association is required so BGP
  # routes propagated by the Route Server are received by this subnet.

── Phase 7: NACL & Security Groups
  ✓ PASS  Custom NACL(s): ['acl-06980e7ddc52a53cb']
  ✓ PASS  Security group(s) with port 53 rule: ['evs-dns-sg', 'default']
  ✓ PASS  Security group(s) with port 179 rule: ['VPCRouteServer-rse-...', 'default']
  # FAIL on port 179: the Route Server auto-creates its own SGs but verify they exist.
  # FAIL on custom NACL: the default VPC NACL does not count — create one explicitly.

── Phase 7: VPC Route Server & BGP
  ✓ PASS  Route Server(s): ['rs-0e8e3f3e2278e29fa']
  ✓ PASS    Route Server rs-0e8e3f3e2278e29fa  ASN 65100  state: available
  ✓ PASS    Endpoints: 2  IPs: ['10.0.0.47', '10.0.0.90']
  ✓ PASS    2 BGP peer(s) configured with bgp-keepalive liveness
  ✓ PASS    Route propagation: 1 route table(s)  ['rtb-0b7c5516a85491e95']
  ✓ PASS    Propagation -> rtb-0b7c5516a85491e95 (explicit route table — correct)
  # FAIL on BGP peers: expected before EVS bring-up — NSX Edge IPs are configured
  #   during CreateEnvironment. This FAIL is normal at pre-deployment time.
  # FAIL on propagation targeting main RT: BGP routes would be silently dropped.
  #   Propagation must point to the explicit route table, not the VPC main table.

── Phase 9: On-Demand Capacity Reservation (ODCR)
  ⚠ WARN  No active i4i.metal ODCR in us-east-1a — create before deploying EVS
  # WARN (not FAIL) — optional until CreateEnvironment is called.
  # Create an ODCR for i4i.metal with instance_match_criteria = "targeted"
  # before deploying to guarantee capacity.

================================================================
SUMMARY  (29 checks across 1 VPC(s))
   28 PASS    0 FAIL    1 WARN

Exit code 0 — environment is ready for EVS deployment (warnings do not fail the run).
================================================================
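
Because the exit codes are stable (0 pass, 1 failures, 2 credential error), the validator is easy to gate in a pipeline. A hedged sketch of a CI wrapper (the command path is an assumption about your checkout layout):

```python
import subprocess
import sys

# Exit-code contract documented by the validator.
EXIT_STATUS = {0: "ready", 1: "prerequisite failures", 2: "credential error"}


def run_validator(cmd):
    """Run the validator and map its exit code to a human-readable status."""
    result = subprocess.run(cmd)
    return EXIT_STATUS.get(result.returncode, "unknown")


# Usage in CI (hypothetical path):
#   run_validator([sys.executable, "evs_validate.py", "--region", "us-east-1"])
print(run_validator([sys.executable, "-c", "raise SystemExit(0)"]))  # ready
```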