VMware Cloud™ on AWS brings VMware’s enterprise-class SDDC software to the AWS Cloud with optimized access to AWS services. VMware Cloud on AWS integrates our compute, storage and network virtualization products (VMware vSphere®, vSAN™ and NSX®) along with VMware vCenter management, optimized to run on dedicated, elastic, bare-metal AWS infrastructure.
VMware Cloud on AWS is now available in the AWS Asia Pacific (Hong Kong) Region. It is also available in the AWS US East (N. Virginia), AWS US East (Ohio), AWS US West (N. California), AWS US West (Oregon), AWS Canada (Central), AWS Europe (Frankfurt), AWS Europe (Ireland), AWS Europe (London), AWS Europe (Paris), AWS Asia Pacific (Singapore), AWS Asia Pacific (Sydney), AWS Asia Pacific (Tokyo), AWS Asia Pacific (Mumbai), AWS South America (Sao Paulo), AWS Asia Pacific (Seoul), AWS Europe (Stockholm), AWS Europe (Milan), AWS Asia Pacific (Osaka), AWS GovCloud (US West) and AWS GovCloud (US East) regions. Please note that some regions require customers to explicitly opt in to link their own AWS account to SDDCs.
Yes. VMware Cloud on AWS SDDCs run directly on AWS elastic bare-metal infrastructure, which provides high-bandwidth, low-latency connectivity to AWS services. Virtual machine workloads can access public API endpoints for AWS services such as AWS Lambda, Amazon Simple Queue Service (SQS), Amazon S3 and Elastic Load Balancing, as well as private resources in the customer's Amazon VPC, such as Amazon EC2, and data and analytics services such as Amazon RDS, Amazon DynamoDB, Amazon Kinesis and Amazon Redshift. You can also use Amazon Elastic File System (EFS), a fully managed file service that automatically scales file-based storage to petabyte scale with high availability and durability across multiple Availability Zones, and the newest generation of VPC Endpoints, designed to access AWS services while keeping all traffic within the AWS network.
VMware Cloud on AWS is designed with multiple layers of protection. The service inherits all of the physical and network protections of the AWS infrastructure and adds dedicated compute and storage along with the security capabilities built into vSphere, vSAN and NSX. All data transmitted between your customer site and the service can be encrypted via VPN. All data between the VMware Cloud on AWS service and your SDDCs is encrypted. Data at rest is encrypted. The VMware Cloud on AWS infrastructure is monitored and regularly tested for security vulnerabilities and hardened to enhance security.
The more software-defined your on-premises environment is with VMware technologies, the more value you can derive from VMware Cloud on AWS. With this release, we have expanded support to on-premises vCenter running VMware vSphere® 6.0u3 patch c or later. If your environment is on an earlier version, you can still move workloads to and from VMware Cloud on AWS by performing cold migrations of the VMs; no conversion or modification is required. You can also run VMware Cloud on AWS standalone with only a web browser. Please refer to the VMware Compatibility Guide for more information. (https://www.vmware.com/resources/compatibility/search.php)
VMware Cloud on AWS now supports language and regional format settings in French, Spanish, Korean, Simplified Chinese and Traditional Chinese, in addition to German, Japanese, and English. These languages are supported in the VMware Cloud on AWS Console and in Cloud Service Platform features such as Identity & Access Management, Billing & Subscriptions, and some areas of the Support Center. You can change your display language before you log in to the VMware Cloud on AWS console or in your account settings. See How Do I Change My Language and Regional Format for more information.
VMware Cloud on AWS infrastructure runs on dedicated, single-tenant hosts provided by AWS in a single account. Each host is equivalent to an Amazon EC2 i3.metal instance (2 sockets with 18 cores per socket, 512 GiB RAM, and 15.2 TB raw SSD storage). Each host can run many VMware virtual machines (tens to hundreds depending on their compute, memory and storage requirements). Clusters range from a minimum of 2 hosts up to a maximum of 16 hosts per cluster. A single VMware vCenter Server is deployed per SDDC environment.
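As a rough illustration of how the per-host figures above add up (a sketch only; usable capacity is lower than raw capacity once vSAN overhead and management VMs are accounted for), raw cluster capacity scales linearly with host count:

```python
# Illustrative only: raw (not usable) cluster capacity for i3.metal hosts,
# using the per-host figures quoted above.
CORES_PER_HOST = 36       # 2 sockets x 18 cores
RAM_GIB_PER_HOST = 512
SSD_TB_PER_HOST = 15.2    # raw SSD storage

def raw_cluster_capacity(hosts):
    """Return raw (cores, GiB RAM, TB SSD) for a cluster of `hosts` hosts."""
    if not 2 <= hosts <= 16:
        raise ValueError("clusters range from 2 to 16 hosts")
    return (hosts * CORES_PER_HOST,
            hosts * RAM_GIB_PER_HOST,
            hosts * SSD_TB_PER_HOST)

# Example: a 4-host cluster.
print(raw_cluster_capacity(4))  # (144, 2048, 60.8)
```

This is only arithmetic on the published host specification; use the sizing and assessment tool for actual capacity planning.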
Please contact your VMware account team. You can purchase either Subscription Purchasing Program (SPP) credits or Hybrid Purchasing Program (HPP) credits and redeem those credits on the service. Please refer to the SPP Program Guide and the HPP Program Guide for more details on these credit programs. You can also use your credit card or pay by invoice for the service.
The following six currencies are now supported on VMware Cloud on AWS: USD, GBP, EUR, JPY, AUD and CNY. You can transact in these currencies and run your workloads in one of the AWS regions where VMware Cloud on AWS is available.
This service is delivered, sold and supported by VMware and you will be charged directly by VMware. You will get a single bill that includes the total charges for using this service, including the VMware SDDC software and the underlying AWS resources. Note that for any AWS resources that you directly provision using an AWS Console or AWS API (i.e., without using VMware management, APIs or orchestration tools), you will be billed directly through your AWS account.
Charges begin when you start consuming VMware Cloud on AWS instances – specifically when you start provisioning your SDDC through the console or the API.
No, you cannot change any parameters in a subscription after purchase. Before purchasing, please confirm that you have selected the right region in which your SDDC is or will be deployed.
Flexible subscription is a new subscription type for VMware Cloud on AWS, now available in preview. It is available for redemption in the VMware Cloud Console as part of the subscription purchasing flow. The benefit of a flexible subscription is that it allows customers to exchange their VMware Cloud on AWS flexible subscription for any new VMware Cloud on AWS term subscription. By purchasing a flexible subscription, customers are entitled to terminate their existing flexible term subscription (1-year or 3-year commitment) early and apply the remaining value toward the purchase of a new 1-year or 3-year subscription.
A flexible subscription can be purchased via the VMware Cloud Console. Please work with your sales team to determine if a flexible subscription is right for you. Flexible subscriptions are currently available via all routes-to-market except Managed Service Providers.
Subscriptions with the i3.metal and i3en.metal instance types in all VMware Cloud on AWS regions are currently available as flexible subscriptions.
Any upfront-paid VMware Cloud on AWS subscription can be exchanged for a flexible subscription. A flexible subscription can be exchanged for another flexible or non-flexible subscription type.
The exchange only impacts your financial commitments; there is no impact on workloads. Please note that you might be charged an on-demand rate if you have workloads running that are not covered by your new subscription.
No, you will not get credits back. All leftover value will be applied towards your new subscription purchase.
No, you cannot change any parameters in the subscription after purchase. Before purchasing, please confirm that you have selected the correct host type and count. You can always purchase additional subscriptions to increase the host count.
You can pay upfront in full, or through monthly installments, for a 1-year or 3-year term commitment.
After you land on the VMware Cloud on AWS Console, you can click on the “subscription” tab in the navigation bar to create a subscription. Once the subscription is created, you can start enjoying the discounted rate for the number of hosts that you purchase. Please note that the subscription is charged upfront or monthly to your payment method.
It takes up to 30 minutes for a subscription to activate. The subscription status will indicate that it is active.
No, by purchasing a subscription you make a financial commitment to VMware. How much of it you end up using is up to you.
Yes, you may purchase additional subscriptions. Each subscription will have its own start and end date, i.e. no co-term.
We look at the number of hosts used in your organization per hour in each region and we subtract the total committed hosts in all your subscriptions for the specific region. The remainder is the overage. Overage usage is billed at on-demand rates per VMware Cloud on AWS pricing. Overages are billed in arrears and will be reflected in your invoice, which you receive after your billing date.
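The overage arithmetic described above can be sketched as follows (a simplified illustration; the host counts and on-demand rate below are hypothetical placeholders, not published pricing):

```python
# Simplified sketch of the overage calculation described above: for each hour
# in a region, hosts used beyond the committed subscription count are billed
# at the on-demand rate (the rate here is a placeholder).

def overage_charge(used_hosts_by_hour, committed_hosts, on_demand_rate_per_host_hour):
    """Sum on-demand charges for host-hours above the regional commitment.

    used_hosts_by_hour: host counts observed each hour in one region.
    committed_hosts: total committed hosts across all subscriptions for that region.
    """
    overage_hours = sum(max(0, used - committed_hosts) for used in used_hosts_by_hour)
    return overage_hours * on_demand_rate_per_host_hour

# Example: 4 committed hosts; over a 3-hour window, usage spikes to 6 then 5 hosts,
# producing 2 + 1 = 3 overage host-hours.
print(overage_charge([4, 6, 5], committed_hosts=4, on_demand_rate_per_host_hour=10.0))  # 30.0
```

Overage is computed per region, so host-hours in one region cannot be offset by committed hosts in another.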
You can use the sizing and assessment tool to size your workloads for VMware Cloud on AWS. The tool factors in storage, compute, memory and IOPS to provide the most optimized server and SDDC recommendation for VMware Cloud on AWS. Once you have sized your workloads, you can calculate your total cost of ownership (TCO) for these workloads and compare it with an on-premises virtual environment. The tool will calculate the number of hosts and clusters required to run your workload on a VMware Cloud on AWS SDDC. Try the tool here
You can access the tool without any credentials. However, to complete the TCO, you must register with an email address and use those credentials to log into the tool.
You can create between 1-10 workload profiles to simulate a mixed workload environment. We have included workflows for some common workloads such as VDI, databases and general-purpose workloads to simplify this process.
In addition to the inputs available in the tool, the factors that we consider are:
• CPU – CPU headroom in steady state and in failure
• IOPS – IOPS per disk group, IO profile, IO amplification
• Capacity – slack space, swap space, deduplication, compression, disk formatting, base 10 to base 2 conversion
• Others – FTT, N+1, RAID 1, RAID 5, RAID 6
Currently, the tool recommends "Fixed Server" profile based on the i3 and i3en instance types. In the future, as VMware Cloud on AWS supports more instance and profile types, the recommendation will account for this and recommend the most optimized profile and instance type for your environment.
In a real-world deployment, not all VMs run at the same utilization. The resource utilization plan (RUP) takes this into consideration by letting you allocate different utilization percentages to groups of VMs running your applications. Using the RUP, you can modify the overcommit in the advanced settings tab, located in the additional information section of the workload profile. Modify the values to more closely match your desired consolidated state (e.g., setting the % VMs value to 100% running at 80% means you are anticipating a net cluster-wide utilization of 80%).
The IO profiles are tied to underlying VMware Cloud on AWS performance data. To get the most optimized performance, select the ratio closest to the ratio that you require.
Cluster settings:
• CPU headroom – cores reserved in the event of a spike in workload activity to avoid latency. This option allows you to reserve cores in steady state as well as during failures.
• Host failure scenario – the equivalent of an N+1 scenario, where the logic accounts for an additional host for redundancy.
Advanced settings:
• Resource utilization plan (RUP) – refer to the question above on the resource utilization plan and how it impacts your sizing exercise.
No, a user is not allowed to change their VMware Cloud on AWS SDDC host type or region of deployment during their 1-year or 3-year subscription period.
Please reach out to sales or your customer success representative to ensure you have enough credits for the appropriate 1 or 3-year commitment duration.
The upfront $2,000 Prepaid Credit is part of our fraud-prevention policy. Any charge the user incurs, at the hourly on-demand rate for the service or for an annual subscription, is then applied against this credit. The Prepaid Credit may be waived at VMware's discretion based on the user's current level of engagement with VMware. Users will be notified of any waiver of the Prepaid Credit requirement when they are about to deploy their first SDDC.
You will be charged $2000 USD once you deploy your first SDDC. You will not be charged for any subsequent SDDC deployment.
You can use this credit only towards VMware Cloud on AWS usage; the credit expires after 60 days and is not redeemable with any other VMware cloud service.
You can change your payment method in the CSP portal as described here. Please note that you will be charged on the payment method that was set as the default when the bill was generated.
Please reach out to our support team. See information here about how to access our support team via the VMware Cloud on AWS console.
You can use your personal or corporate Mastercard, Visa, American Express, Discover, JCB or Diners Club credit cards. Please note, however, that Discover, JCB and Diners Club are only supported in certain countries. You may also use a debit card as long as it is Mastercard, Visa or American Express.
Your credit card limit and your payment processor determine the size of your transactions. The maximum amount you can spend in a single transaction is $25,000. For more information about your credit limit, you should contact your issuing bank. More information is available here.
No, 1-year or 3-year subscription purchases are not allowed using a credit card. Please use another payment method to purchase these subscriptions, or request an exception by contacting VMware support.
A Seller is the billing account for an org: in simpler terms, the company that sends the bill to the customer. It indicates which legal entity or person is identified as the Seller of Record for a specific product to the end consumer. The Seller of Record also often assumes responsibility for accounting for transaction tax on that particular transaction. Sellers have their own set of commerce attributes that may or may not be unique to that seller, such as payment methods, terms of service, offer catalog, pricing, regions, currencies accepted, and billing engines with different invoice templates and billing business rules. Available options as of March 2021: VMware and AWS
More than one org is not needed to support multiple Sellers of Record, and having more than one org with VMware Cloud on AWS SDDCs is not encouraged.
It is available for any VMware Cloud on AWS commercial customer that has two sellers established. Please consult with your account team prior to setting up and using multiple sellers and have them contact product management resources as necessary.
No, adding a fund and creating a subscription are two separate, disjoint activities. Customers should not assume that adding new funds automatically translates into subscriptions; they need to create subscriptions in the VMC Console.
No, a subscription can only cover hosts within the same seller. Example: if you have two SDDCs with 4 hosts each, one with VMware as the seller and one with AWS, and a three-year term subscription for four hosts with VMware as the seller, then the 4-host SDDC with AWS as the seller would be charged on demand.
Please engage with your VMware account sales team, select the appropriate VMware Cloud on AWS subscription from the Partner Pricebook and then initiate your order through the sales team once your reseller agrees to the terms you define. Your end customer decides when they are ready to consume the service and ready to create a Software-Defined Data Center (SDDC).
No. You can pay for the SKUs directly for a designated reseller and end customer. The end customer’s email address will be used to provision the service, and an email invitation will be sent to onboard and start the service.
If you want to start your cloud journey but are not ready to sign a contract, purchase a large volume, or make a significant upfront commitment of time and funds, you can start small (with a 2-host 1-year subscription) purchased by SKU and scale as needed later.
Similar to purchasing vSphere Advantage+, you can now buy VMware Cloud on AWS by SKU without signing a contract.
The subscription starts once the onboarding email invitation is sent to the distributor’s designated end customer’s email address.
In that case, as an end customer, you start the service with only ten months left on your 1-year subscription. The subscription always starts on the day the onboarding email invitation is sent.
Subscriptions entitle the end user to a certain number of host hours. They are billed within the first 30 days of the purchase. Host-hour usage over the purchased subscription and non-host charges such as data transfer, Elastic IP, EBS, vSAN, or custom networking configuration charges are billed in arrears using a 30-day billing cycle.
The distributor is the one who will be billed for the subscription they purchased for the designated reseller and end-user pair. The distributor receives the data to bill the reseller, who uses the report to enable billing to the end customer.
The distributor would need to engage with the VMware sales team to sign a Commitment Based Contract (CBC) with VMware. The distributor would need to provide the following details – Type of CBC (VMware Cloud Standalone or VMware Cloud Universal), Reseller & customer details, required product offerings, and CBC term. All the discounts are negotiated upfront between VMware and the distributor and are applicable during the Commitment Based Contract (CBC) tenure. For this new commerce motion, the distributor would need to mention the payment type as “PurchasePay” to the sales team.
Distributors have the opportunity to enable a significant volume discount for a specific reseller/end-customer combination. The distributor commits to a budget while allowing the customer the flexibility to consume what they need, when they need it, without any renegotiation.
With this new commerce motion, the distributor is not required to commit funds upfront; the distributor pays monthly only for the VMware Cloud offerings purchased by the end customer.
A Commitment Based Contract (CBC) has a 1:1:1 relationship between the distributor, the reseller, and the end customer. VMware does not support a wholesale model, i.e., a distributor cannot sign a single CBC and use it across a pool of resellers and end customers. For each new end customer, the distributor needs to sign a new CBC with VMware for the associated reseller. For “n” distinct end customers, the distributor needs to sign “n” CBCs with VMware.
We support seamless migration from SKU-based transactional commerce motion to Commitment Based Contracts (CBC). There would neither be any system downtime nor any impact on the customer’s workloads during the migration.
The customer self-serves all purchases directly from the console. The customer is the owner of the environment/org and can create SDDCs and add or remove hosts, as well as oversee Identity & Access Management (IAM).
The customer receives an onboarding email when the Commitment Based Contract (CBC) is signed. The billing starts only when the customer purchases subscriptions or deploys SDDC. The distributor will be charged monthly by VMware based on the associated customer's consumption of VMware Cloud offerings.
No, since the customer has not onboarded, the distributor will not be billed. However, the Commitment Based Contract (CBC) would still be active and the tenure of the CBC would be reduced by 3 months.
The distributor will be billed on the 10th of every month using the proforma process by VMware, based on the associated customer's consumption of VMware Cloud offerings. The distributor receives data to then bill the reseller, who in turn uses the report to enable billing of the end customer. Distributors and resellers can set their own prices downstream to achieve the desired margins. VMware has no visibility into the margins of the distributor or reseller.
To help customers in this crisis situation, VMware is offering a variety of business continuity solutions and special offers. Learn more about it here
VMware Cloud on AWS can help businesses alleviate potential business disruptions in 3 ways:
For a limited time, VMware has special offers on business continuity solutions with VMware Cloud on AWS to help our customers get through this crisis. Please reach out to your VMware sales representatives to discuss your options or talk to an expert
VMware Cloud on AWS has been independently verified to comply with many leading compliance programs, including but not limited to ISO 27001, ISO 27017, ISO 27018, SOC 2, HIPAA, PCI-DSS, OSPAR, IRAP. Check VMware Cloud Trust Center for more information (Please filter for ‘VMware Cloud on AWS’ in Services).
PCI SDDCs are available in the following VMware Cloud on AWS regions: US East (N. Virginia), US West (Oregon), US West (N. California), US East (Ohio), Europe (Milan), Europe (London), Europe (Frankfurt), Europe (Ireland), Asia Pacific (Sydney), Asia Pacific (Tokyo), and Asia Pacific (Osaka).
No. The Whitepaper: Migrating PCI Workloads to VMware Cloud on AWS illustrates how the Shared Responsibility Model relates to PCI compliance. The responsibilities are shared between VMware and Customers. VMware is responsible for maintaining PCI compliance of the VMware Cloud on AWS cloud service and cloud platform. Similarly, customer workloads running in VMware Cloud on AWS must pass an entirely separate PCI assessment solely managed by the customer. Customers must hire a Qualified Security Assessor (QSA) to assess and verify their PCI SDDC configuration and must verify that the workloads are PCI compliant.
SDDC upgrades are only available to SDDCs at version 1.14 and newer. The new PCI configuration changes cannot be applied to SDDC versions prior to 1.14 and can only be enabled during the initial provisioning of a version 1.14 or newer SDDC. The new SDDC can be provisioned in a new or an existing PCI-enabled org.
PCI SDDCs will have the following major differences from a standard SDDC to prevent non-compliant services from impacting their PCI compliance status:
No, the published pricing for bare metal VMware Cloud on AWS hosts is all that is required from a cost perspective. There are no additional charges for PCI SDDCs.
VMware recommends deploying separate SDDCs for Development, Production, and PCI workloads. This helps limit the PCI audit scope to PCI Production systems and minimize the costs associated with maintaining PCI compliance.
Yes. Just like standard SDDCs provisioned on 1.14 or later, patching and upgrading will be automatically handled by the VMware Operations team via standard lifecycle processes.
Yes, this can be done but not through the VMC console. Please contact VMware Support to make this request.
After the PCI SDDC is enabled via feature flag by VMware, the VMC Console provides the ability for the customer to disable the Networking & Security tab. After this tab is disabled, the local NSX Manager URL and the local NSX account credentials used to log in to the NSX Manager are visible in the Settings tab.
Customers can use the same connectivity options available to a standard SDDC. You can choose Direct Connect, VPN, connected VPC, and Transit Connect.
A customer would need to perform the following steps:
In-service chat support is available for all features of VMware Cloud on AWS, including hybrid solutions such as vCenter Hybrid Linked Mode and vCenter Cloud Gateway. Chat support is available 24x5 in English across all global regions but is not currently available for on-premises-only solutions.
Yes, please navigate to the left menu in the VMC Console and click “Notification Preferences” to pick and choose which notifications you’d like to receive. Ensure you click “Save Changes” when satisfied with your selections.
For now, these are enabled at the user level: each user is responsible for setting their own notification preferences, and only you have control over those settings. Changes you make within your own VMC Console will not affect other users.
In order to access the Notification Preferences, you must be a part of the associated Org as either an Org Owner or Org User. You must also be assigned one of the following Service Roles:
NSX Cloud Admin
NSX Cloud Auditor
With the new time-bound Single Host SDDC starter configuration, you can purchase a single-host VMware Cloud on AWS environment whose service life is limited to 60-day intervals, with the ability to seamlessly scale up the number of hosts within that period while retaining your data. This single-host offering is for customers who want a lower-cost entry point for proving the value of VMware Cloud on AWS in their environments.
Features that do not require more than one host are included in the Single Host SDDC offering, including hybrid operations between on-premises and VMware Cloud on AWS. However, any operations or capabilities that require more than one host, such as High Availability (HA) and stretched clusters across two AWS Availability Zones, will not work. Because there is only a single host, FTT=0 (Failures to Tolerate), meaning that if your host fails, your data will be lost. VMware does not currently offer patching or upgrades to a Single Host SDDC. Single Host SDDC highlights:
• Accelerated onboarding
• Migration capabilities between on-premises and VMware Cloud on AWS – VMware HCX for large-scale rapid migration, VMware vMotion for live migration, and cold migration
• Seamless high-bandwidth, low-latency access to native AWS services
• Disaster recovery – evaluate VMware Site Recovery, the cloud-based DR service optimized for VMware Cloud on AWS. VMware Site Recovery is purchased separately as an add-on service on a per-VM basis.
• Expert support – Single Host SDDC receives the same unlimited 24/7 VMware Global Support Services as well as 24/5 live chat support
• Hybrid Linked Mode support – single logical view of on-premises and VMware Cloud on AWS resources
• All-Flash vSAN storage – the All-Flash vSAN configuration, using flash for both caching and capacity, delivers maximum storage performance
Of course! Please log in to Partner Central for more details. If you are a Technology Alliance Partner, please scroll down to the Third Party Technology Solutions FAQ section.
A Single Host SDDC will be deleted after 60 days. All data on the SDDC will be lost. You can scale up a Single Host SDDC into a 2 host SDDC and retain all your data. A 2 host SDDC is not time-bound.
You can simply click on the "Scale Up" button to scale up to the standard production SDDC service. Your data will be retained. If you want to contact our sales team, please reach out to us via the chat service.
It is possible to defer account linking for Single Host SDDCs for up to 14 days, but it is not possible to scale up your Single Host SDDC to a four-host configuration without connecting to an AWS account.
Single Host SDDC receives the same unlimited 24/7 VMware Global Support Services as well as 24/5 live chat support via the VMware Cloud on AWS Console and via vSphere Client.
There are three payment methods available for the service. You can choose to pay for the service via credit card, by invoice, or you can purchase Subscription Purchasing Program (SPP) credits or Hybrid Purchasing Program (HPP) credits and redeem those credits on the service.
The 2-host cluster capability enables a customer to provision a persistent production cluster with just 2 hosts in VMware Cloud on AWS. Previously a customer needed 3 hosts to spin up a persistent cluster in VMware Cloud on AWS. This offering is a great place to start for customers who do not need the full 3-host production cluster due to smaller workloads, or who wish to prove the value of VMware Cloud on AWS for a longer duration than the Single Host SDDC can offer today.
The cost per host is the same as the 3+ host pricing. For a cluster, this means that the 2-host cluster results in a 33% lower cost of entry with a persistent, full production environment.
The 2-host cluster is available in all commercial global AWS Regions where VMware Cloud on AWS is available today for the Amazon EC2 i3.metal instance type, except in the AWS GovCloud (US-West) region. Please see the FAQs on availability for full details about the regional availability of VMware Cloud on AWS
Features included in the 2-host cluster are the same as a 3+ host Production SDDC, with the exception of Optimized Elastic DRS policies (optimize for cost, optimize for performance and rapid scale-out) and Stretched Clusters.
You may provision as many 2-host clusters as you wish. You can mix an SDDC with a 2-host cluster and 3+ host clusters. However, you cannot have an SDDC with a 2-host cluster and a Single Host SDDC.
The 2-host cluster receives unlimited 24/7 VMware Global Support Services as well as 24/5 live chat support via the VMware Cloud on AWS Console and via vSphere Client.
The 2-host cluster size is full production-ready everywhere it is available and has the same SLA as our 3+ host cluster sizes. Requirements for the current SLA can be found here
The 2-host cluster can be purchased in the same manner as any other SDDC and can be spun up in just hours in a similar fashion to the Single Host SDDC and 3-host SDDC. Once provisioned, it can be scaled up in a matter of minutes to a 3-host SDDC.
Yes, you can. However, credit card users cannot create more than one SDDC or add an additional 2-host cluster or a 3-host cluster SDDC. For more details on credit card payments, please see the “Credit Card Payment” section of the FAQs.
Yes, Managed Service Providers (MSPs) can utilize the 2-host cluster size. The SLA for any organization managed by an MSP is subject to the specific terms between the MSP and the tenant and is not bound by the VMware SLA.
All 2-host cluster SDDCs provisioned in Preview now have full SLA support as well. There is no need to make any changes on your side; we have done all of the work for you. They are now equal to any other 2-host SDDC.
While a 2-node cluster supports the same number of VMs per host as any other configuration, due to Admission Control a 2-node cluster can power on no more than 36 workload VMs at a time. This ensures vSphere HA will be able to restart any running workload in the event of a failure.
VMware vSphere® vMotion® enables live migration of running (powered-on) VMs from your on-premises host to a host in VMware Cloud on AWS with zero downtime for the application (<1 sec switchover time), continuous service availability and complete transaction integrity. This feature is now available for VMware Cloud on AWS. Furthermore, by enabling certain advanced configurations, vMotion can be enabled across different vSphere Distributed Switch versions. Requirements include:
• AWS Direct Connect (over Private VIF) and NSX Layer 2 VPN must be set up; vMotion is not supported without them.
• The on-premises vSphere version must be 6.0u3 or above.
• Sustained bandwidth of 250 Mbps or more is required (for optimal performance).
• vSphere Distributed Switch versions 5.0/5.5 are not supported, and migration of VMs hosted on 5.0/5.5 will be blocked.
Detailed requirements are here
Single VM vMotion:
• UI – Hybrid Linked Mode needs to be set up for orchestrating vMotion via the HTML5 client.
• PowerCLI – supported via the API directly with PowerCLI.
Bulk vMotion:
• UI – Hybrid Cloud Extension can enable bulk migration through the UI.
• PowerCLI – sample scripts here allow bulk migration scenarios.
Yes, if you vMotion a VM that has snapshots from/to vSphere 6.5(d), it will fail. Please update to 6.5 U1 to resolve this issue or delete the snapshots.
Yes, encrypted vMotion works out of the box. No new setup action is required, as long as the on-premises environment supports the feature.
Yes, you can vMotion from VMware Cloud on AWS back to on-premises as long as the on-premises hosts are compatible. Enhanced vMotion Compatibility (EVC) mode does not work across clusters, and there is a possibility that, while in VMware Cloud on AWS, the VM goes through a power cycle and begins running on a newer hardware version. In that scenario, if the on-premises host is on an older version, live migration will not be supported.
EVC is disabled in VMware Cloud on AWS. All hosts in VMware Cloud on AWS are homogeneous and hence a compatibility check is not required.
As the name suggests, per-VM EVC abstracts this setting from a cluster to a VM level. By doing so, the EVC mode now can persist through a power cycle of the VM.
Both. There is an edit setting attribute at a per-VM level that can be changed to set the specific EVC mode. But it can also be automated and set for a batch of VMs via a script that uses the API.
Yes, as of now, all hosts in VMware Cloud on AWS are homogeneous. The per-VM EVC setting comes into play when migrating back from VMware Cloud on AWS to on-premises, to ensure there are no compatibility issues.
The VMware HCX service offers bi-directional application landscape mobility and data center extension capabilities between any vSphere version. VMware HCX includes vMotion, bulk migration, high throughput network extension, WAN optimization, traffic engineering, load balancing, automated VPN with strong encryption (Suite B) and secured data center interconnectivity with built-in hybrid abstraction and hybrid interconnects. VMware HCX enables cloud onboarding without retrofitting source infrastructure, supporting migration from vSphere 5.0+ to VMware Cloud on AWS without introducing application risk and complex migration assessments. Learn more here
VMware HCX abstracts vSphere-based on-premises and cloud resources and presents them to the applications as one continuous resource, creating infrastructure hybridity. At the core of this hybridity is a secure, encrypted, high throughput, WAN-optimized, load balanced and traffic engineered interconnect that provides network extension. This allows support for hybrid services, such as app mobility, on top of it. Apps are made oblivious to where they reside over this infrastructure hybridity, making them independent of the hardware and software underneath. Learn more here
Yes. VMware HCX supports multisite interconnect. Here are a few use cases:
• Consolidate small DCs to VMware Cloud on AWS
• Extend to multiple VMware Cloud on AWS SDDCs in separate geo-locations
Learn more here
VMware HCX supports all capabilities in both NSX-v and NSX-T SDDCs. NSX-T SDDCs also support the ability to leverage the DX Private VIF option for the VMware HCX interconnects. If you are leveraging the Internet and would like to shift your HCX interconnects to the Private VIF option, please reach out to VMware via support to get assistance in switching the interconnect configuration.
It is not required if the destination environment is an HCX-enabled public cloud. NSX is needed if the destination vSphere environment is also private/on-premises. Optionally, NSX can be installed in the source environment to access the NSX Logical Switch Network Extension feature.
VMware HCX was made available in December 2017. This service is now included with your VMware Cloud on AWS subscription. To activate it, log in to the VMware Cloud Services portal at https://cloud.vmware.com and enable HCX for your VMware Cloud on AWS SDDCs. VMware HCX is integrated with the vSphere web client, so you can use the same management environment for day-to-day operations.
Cloud Motion with vSphere Replication is a new and innovative way to enable mass migration of workloads from on-premises to VMware Cloud on AWS. With Cloud Motion with vSphere Replication, you can migrate VMs at large scale without any downtime (live).
Previously, there were two ways to migrate with HCX:
1. vMotion-based: vMotion-based migration is live (no downtime) but serial in nature. Due to vSphere concurrency and cross-cloud limitations, only a handful of VMs could be vMotioned at the same time. While vMotion is a live migration option, it does not support large-scale mobility.
2. Warm migration: Warm migration moves VMs at scale, but each migration requires a VM reboot.
Cloud Motion with vSphere Replication combines the best of both worlds. VMs are replicated to the destination using vSphere Replication, and once replication completes, the final cutover is done via vMotion. This enables large-scale migration without the need for a reboot: you can move applications at scale, live, without any reboot or reload.
Cloud Motion with vSphere Replication simplifies migration planning and operations in three ways:
• Traditionally, you would have to plan for a maintenance window in which applications would be rebooted. Maintenance windows are tedious to manage, and application reloads/reboots add complexity. With Cloud Motion, migrations can be done at scale from the source to VMware Cloud on AWS without scheduling any maintenance windows.
• Cloud Motion eliminates detailed analysis, dependency mappings and elongated migration planning projects.
• Cloud Motion lets you schedule the failover, making it predictable when the application will migrate. With plain vMotion, there is no such predictability, since the VMs move as soon as the vMotion-related activities complete.
The combination of live migration at scale with a predictable schedule brings a paradigm shift to migration planning and operations.
The Migration Assessment enables cloud administrators to calculate the capacity and cost required to migrate workloads from private clouds to VMware Cloud on AWS.
VMware Cloud on AWS customers can access the Migration Assessment via Cost Insight through the CSP console. No separate activation for Cost Insight is needed.
VMware vRealize Network Insight Cloud integration to Migration Assessment is optional. This integration provides application dependency visibility and estimated egress costs for moving applications to VMware Cloud on AWS, thereby helping to create a more effective migration plan.
VMware Cloud on AWS Migration experience is a prescriptive step-by-step guide that helps customers through the migration process from on-premises to VMware Cloud on AWS. The migration process is broken down into 3 stages: Plan, Build, Migrate. Each stage is further divided into individual steps that include links to relevant documentation and tools. At the end of all 3 stages, customers will have successfully created an SDDC and migrated workloads from their on-premises infrastructure to the cloud.
VMware Cloud on AWS Migration experience is free. It is a guide that walks you through the process of migrating workloads from your on-premises data center to VMware Cloud on AWS. The tools you use and the infrastructure you consume along the way to create your cloud environment will have their own pricing.
No. VMware Cloud on AWS Migration experience consolidates information about moving workloads to VMware Cloud on AWS and creates a central hub of information and tools. It is intended to make the migration process easier and to save you time, but there is no requirement to use it for migrating to VMware Cloud on AWS.
No. VMware Cloud on AWS Migration experience is available to anyone. Users do not need to be logged in or to have a VMware Cloud on AWS account. However, users do need to be logged in to track the progress of their migration. Users will also have to create a VMware Cloud on AWS Organization and log in as they work through the steps required to create an SDDC.
You have several ways to onboard VMs. One way is to use an on-premises content library and publish it to your VMware Cloud on AWS SDDC (which attaches as a subscriber), then sync content either immediately or on demand. You can also create a local content library in your VMware Cloud on AWS SDDC and upload your ISOs and OVAs to that repository. Third, you can import a template and use PowerCLI to create new VMs in bulk. Fourth, to migrate individual virtual machines from your on-premises vCenter Server to your VMware Cloud on AWS SDDC, you can perform a cold migration of a powered-off virtual machine, or vMotion of a live virtual machine.
VM templates enable consistency and ease of VM content management. You can add a VM template to Content Library, delete it, rename it, update its Notes, or create a new VM from it.
• To create or add a template to Content Library, select a VM, click Clone, and select an option to clone it into a library as a VM template. Note: the library has to be local (not published).
• To create a VM from a VM template in Content Library, simply select a VM template, click New VM from this Template, and follow the steps in the wizard. The wizard is similar to the one that you are familiar with using for OVF templates or outside of Content Library.
You can't add a VM template into a published library, because the synchronization (data distribution) between Published and Subscribed libraries for VM templates is not supported yet. Also, you can't convert a VM template into a VM via Content Libraries; however, the same template with all capabilities is available for you in vCenter Server Inventory/Folders.
The minimum size SDDC that you can create in VMware Cloud on AWS is one host, using the Single Host SDDC starter configuration. However, single host SDDCs have a limited SLA and are not for production use. The smallest production SDDC that we support is three hosts. For more details, refer to the Single Host SDDC FAQ section.
Yes. Because you only have three hosts, you cannot implement a "RAID 5" SPBM policy. That requires a minimum of four hosts. The only storage redundancy you can choose is RAID 1.
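A small lookup makes the relationship between redundancy policy and cluster size concrete. The RAID 1 and RAID 5 minimums come from the answer above; RAID 6 (mentioned later in this FAQ) requires six hosts. The policy labels are simplified for illustration, not exact SPBM names.

```python
# Illustrative mapping from vSAN SPBM redundancy choices to the minimum
# cluster size each requires. Labels are simplified for this sketch.

MIN_HOSTS = {
    "RAID 1 (FTT=1)": 3,  # the only option in a three-host SDDC
    "RAID 5 (FTT=1)": 4,
    "RAID 6 (FTT=2)": 6,
}

def allowed_policies(cluster_hosts: int) -> list:
    """Return the redundancy policies a cluster of this size can satisfy."""
    return sorted(p for p, n in MIN_HOSTS.items() if cluster_hosts >= n)
```

A three-host cluster can only use RAID 1; adding a fourth host unlocks RAID 5.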
No. Unlike Single Host, a three host SDDC is a full production SDDC. You can simply add a host to scale up just like any production SDDC.
Yes. You can add additional hosts on-demand. You can also remove hosts on-demand down to the minimum of three ESXi hosts.
Multi-cluster support is the ability for SDDC administrators to add additional clusters to an existing SDDC. You are able to create multiple clusters in your SDDC, and these will share a common set of management VMs and network.
VMware Cloud on AWS supports a maximum of 20 clusters per SDDC. Your organization may have lower "soft" limits set. If you wish to have your limits raised, please contact your customer success team.
Once the new cluster is provisioned, you can cold migrate or vMotion VMs to this cluster via vCenter the same way you would move VMs on premises.
No. Only additional clusters can be removed. You must have one cluster in your SDDC and this cluster must be the original cluster deployed when the SDDC was created.
VMware Cloud on AWS SDDC must be connected to an AWS account. It is possible to defer account linking for Single Host SDDCs for up to 14 days, but it is not possible to scale-up your Single Host SDDC to a two or more host configuration without connecting to an AWS account.
Establishing a connection to an AWS account creates a unique high-bandwidth, low-latency connection between your SDDC and your AWS resources, and allows consuming AWS services with no cross-AZ charges. By delaying account linking, you will not be able to choose which availability zone (AZ) your SDDC will be deployed in.
Select the newly available region when creating your SDDC; it is that simple. You provision an SDDC in a newly available region the same way you provision an SDDC in other available regions. The region selector will simply have another option for the new region. The SDDCs you create in the new region will appear on your dashboard along with your other SDDCs, and your dashboard can contain SDDCs from multiple regions.
No, you use the same endpoints to access the VMware Cloud on AWS API and VMware Cloud on AWS Console regardless of the region your SDDCs are in.
The version of ESXi running on VMware Cloud on AWS is optimized for cloud operations and is compatible with the standard vSphere releases. ESXi running on VMware Cloud on AWS may have a more frequent update cadence so that you can take advantage of regular service enhancements.
There are no plans to offer customer-selectable version options for the underlying infrastructure components. This consistency enables VMware to operate at scale.
Yes, with Hybrid Linked Mode, you can connect your vCenter server running in VMware Cloud on AWS to your on-premises vCenter server to get a single inventory view of both your cloud and on-premises resources.
Compute Policy is a new framework to allow you the flexibility, control, and policy-based automation required to keep up with the demands of your business. The following policies are being introduced:
• Simple VM-Host Affinity
• VM-VM Anti-affinity
• Disable DRS vMotion
Given the granular cluster level at which DRS operates, it becomes difficult to manage, replicate and update the static rules (laid down in the beginning) as the underlying infrastructure grows (number of VMs, hosts, applications). Similarly, the intent (the why and what) for which the rules were created is lost over time. To get around this, Compute Policy provides a higher level of abstraction, capturing the customer intent at an SDDC level rather than at the cluster level at which DRS operates. As a result, a single policy can apply to multiple clusters within the SDDC at the same time. It aims to provide a framework that allows not only placement and load balancing decisions for VMs, but also handling of entire workloads.
Mandatory policies are equivalent to the DRS “must” rules, while preferential policies are similar to the DRS “should” rules. Preferential policies cannot block a host from entering into maintenance mode. However, a policy cannot be violated for fixing cluster imbalance or host over-utilization.
Currently, policies can only be created and deleted. To update a policy, you will need to delete and add the policy with the changes required.
No. All defined policies (except Disable DRS vMotion) are treated the same, and no one policy is preferred over the other. As a result, one policy cannot be violated to remediate another.
In the current implementation there is no conflict detection. This means that if a user configures two policies that conflict with each other, no user error or warning will be generated. DRS will enforce all the policies in the best manner it can, as described below.
It depends. VM-Host affinity is a preferential policy. Please discuss with your ISV vendor whether preferential policies are acceptable as per the terms of your licensing agreements.
In VMware Cloud on AWS, VM power-on, maintenance and availability have a higher priority than policy enforcement. However, policy enforcement has a higher priority than host utilization. As a result, there are scenarios where a VM may not run on a designated host. For example:
• If a host goes down due to any failure, and HA is enabled, the recovering VM may get powered ON on any available host in the cluster.
• Similarly, if reservations are used, and a compliant host cannot satisfy a VM's reservations, the VM will get powered ON on any available (non-compliant) host that can satisfy the reservation.
• If there is no compliant host (i.e. no host has the Host tag specified by the policy), the VM shall be powered ON on an available host.
• If the user configures multiple VM-Host affinity policies that conflict for a VM, the policies shall be ignored and the VM shall be powered ON on a suitable host chosen by DRS.
Note, however, that in all cases, Compute Policy will keep trying to move the VMs back to compliant hosts.
Enforcing a VM-VM anti-affinity policy implies that DRS will try to ensure that it keeps each VM (that has the policy's VM tag) on different hosts. This anti-affinity relation between the VMs will be considered by DRS during VM power-on, host maintenance mode and load balancing. If a VM is involved in a VM-VM anti-affinity policy, then DRS will always prefer those candidate hosts which do not have any powered-on VM that has the policy's VM tag.
One scenario is a provisioning operation whose API call explicitly specifies a destination host; such an operation is allowed to violate a policy. However, DRS will try to move the VM in a subsequent remediation cycle. If it is not possible to place a VM per its VM-VM anti-affinity policies, the policy is dropped and the operation (power-on or host enter maintenance mode) continues: DRS first tries to place the VM so that the policy is satisfied, but if that is not possible, it will find the best host per other factors, even if doing so violates the policy. Other scenarios where VMs may not be placed per the policy:
• Every host in the cluster has at least one VM with the tag specified by the VM-VM anti-affinity policy.
• None of the policy-preferred hosts can satisfy the VM's CPU/memory/vNIC reservation requirements.
DRS will first try to place as many VMs on different hosts as possible, which in this case will be equal to the number of hosts available in the cluster. After that, the policy shall not be enforced, i.e. the remaining VMs will be placed based on the other factors DRS considers, which may result in multiple VMs on the same host. To remedy this violation, additional hosts can be added to the cluster. Once the hosts are added, DRS will move the VMs that are violating the policy to the newly added hosts.
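The best-effort behavior just described can be modeled with a toy placement loop: spread tagged VMs across hosts first, and only double up once every host already holds one. This is a simplified illustration, not the real DRS algorithm.

```python
# Simplified model of preferential VM-VM anti-affinity placement: prefer
# hosts with no tagged VM, double up only when no free host remains.
# Not the actual DRS implementation.

def place_anti_affinity_vms(vms: list, hosts: list) -> dict:
    """Assign each tagged VM to a host, spreading first, doubling up after."""
    placement = {}
    load = {h: 0 for h in hosts}  # count of tagged VMs per host
    for vm in vms:
        # prefer the host with the fewest tagged VMs (0 while any host is free)
        host = min(hosts, key=lambda h: load[h])
        placement[vm] = host
        load[host] += 1
    return placement

# three tagged VMs on two hosts: the third VM must violate the policy
p = place_anti_affinity_vms(["vm1", "vm2", "vm3"], ["hostA", "hostB"])
```

Adding a third host and re-running the placement would let DRS move the violating VM onto it, which mirrors the remediation described above.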
Yes. DRS always tries to place the VM such that the policy can be satisfied, but if that is not possible (for example, when there is no compliant host, when all the hosts in the cluster have the Host tag included in the policy, or when resource reservations for a VM can't be met on a compliant host), then DRS will continue to find the best host per other factors, even if it violates the policy. A policy shall not be violated for fixing cluster imbalance or host over-utilization; however, a VM power-on is not prevented. If the user configures multiple affinity or anti-affinity policies that conflict for the VM, the policies shall be ignored and the VM shall be powered ON on a suitable host chosen by DRS.
Enforcing a VM-VM affinity policy means that DRS will try to ensure that it keeps each VM that has the policy's VM tag on the same host. This affinity relation between the VMs will be considered by DRS during VM power-on, host maintenance mode and load balancing.
DRS will always try to place as many VMs belonging to this policy on the same host as possible. Once it is no longer possible to place additional VMs on the same host, DRS may violate the policy and power on VMs on other hosts. This could happen if the VMs subjected to the policy have reservations that the host cannot meet. DRS, however, continues to scan the cluster and will move the VMs to ensure compliance at the first available opportunity.
This policy indicates that DRS will not migrate or load balance a virtual machine away from the host on which it was powered on, except when the host is being put into maintenance mode. This policy can be useful for applications that may be sensitive to vMotions (e.g., large real-time/latency-sensitive transactional databases or VoIP applications). The VMs subject to this policy are identified using vSphere tags, and this policy is not applicable to a power-on operation. However, once a VM is powered on and is subject to this policy, it will not be moved to remediate a VM-Host affinity or VM-VM Anti-affinity policy.
Go to the VMware Cloud on AWS Console, click on your SDDC and select the Add Cluster action. Under the section Cluster to Be Added you will see that you can specify the Number of CPU Cores Per Host. Select the value that works best for your workloads and finish the action.
The following Custom CPU Core values are supported for each host type:
Here is the list of specific points about the custom CPU core count capability:
• This is for additional clusters only; Cluster 0 must have all cores enabled.
• This is an "Add Cluster" deployment-time decision only and cannot be changed post deployment.
• All hosts in the cluster must have the same number of CPU cores, including during Add/Remove Host operations.
To preserve the number of licensed CPU cores, it is highly recommended that you leverage VMware Cloud on AWS Compute Policies (Simple VM-Host Affinity) to tag all applicable VMs and all the original hosts in the cluster, so that the compute policy can keep these VMs on those hosts. During regular VMware Cloud on AWS patch and upgrade operations, an additional host is added to a cluster. Therefore, you need to include the license for this additional host in your initial licensing contract, making it N+1 since day one.
Yes. Reducing core count affects the compute performance of all workloads on the host and increases the likelihood of system performance degradation. For example, vCenter and vSAN overhead can become more noticeable, and operations such as adding clusters and hosts can take longer to complete.
Yes, you can create custom roles in addition to the CloudAdmin role that is provided out of the box. Users that have the Authorization.ModifyRoles privilege can create/update/delete roles. Users with the Authorization.ModifyPermissions privilege can assign roles to users/groups.
If the user has the privileges to modify roles, they can create/modify/delete custom roles that have privileges less than or equal to those of their current role. You may be able to create roles that have privileges greater than CloudAdmin, but you will not be able to assign such a role to any users or groups.
Users will only be able to modify or delete roles whose privileges are less than or equal to those of their current role.
Yes, you now have access to the entire inventory tree. However, in order to limit contention across the VMs that you create, we strongly recommend that you continue to use the Compute Resource Pool as the location to create your VMs.
No, custom vCenter roles are not supported for NSX-V networking configurations. Only NSX-T configurations are supported by this feature.
The i3en.metal instance has 96 vCPUs, 768 GiB of memory and 8 × 7,500 GB of NVMe SSD storage. It utilizes the Intel Xeon Cascade Lake processor at 2.5 GHz. This instance provides network-level encryption for east-west traffic by default.
The i3en.metal instance is available in the Oregon, N. Virginia, N. California, Ohio, Canada (Central), London, Frankfurt, Paris, Stockholm, Ireland, Sydney, Tokyo, Singapore, Mumbai, Seoul, Sao Paulo and GovCloud (US-West) AWS regions today. It will be made available in a phased manner across the VMware Cloud on AWS regions. For availability in specific AWS Availability Zones within an AWS Region, please contact your VMware or AWS customer success or account representative.
i3en.metal instances are available in the following regions and respective availability zones:
Partition placement groups are enabled automatically in every region and availability zone. There are no configuration options for partition placement groups.
When a host is removed, the preference is to remove a host that is not inside a partition; new hosts are added into partitions whenever possible. In this way, SDDCs will benefit from more partitions over time.
Partition placement is a best-effort operation. Placement may fail if there are insufficient physical racks or insufficient capacity. If partition placement fails, a host is added outside of a partition. This means the host is still added, but it is added to a rack that may already have a host from the same cluster. No further action is required when partition placement is sub-optimal.
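The best-effort behavior described above can be sketched as a simple rack-selection rule: a new host goes onto a rack with no host from the same cluster when one exists, and otherwise is added anyway. The data model and function below are illustrative only.

```python
# Simplified model of best-effort partition placement: prefer a rack that
# holds no host from the same cluster; if none exists, add the host anyway
# (sub-optimal placement, no user action required). Illustrative only.

def add_host(racks: dict, cluster: str) -> str:
    """racks maps rack name -> list of cluster names with a host there.
    Returns the rack chosen for the new host of `cluster`."""
    # best effort: a rack with no host from this cluster
    for rack, clusters in racks.items():
        if cluster not in clusters:
            clusters.append(cluster)
            return rack
    # fallback: least-loaded rack, even though it already holds this cluster
    rack = min(racks, key=lambda r: len(racks[r]))
    racks[rack].append(cluster)
    return rack
```

With two racks and one existing host, the next host of the same cluster lands on the empty rack; a third host has to share a rack, mirroring the "placement may fail" case.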
Stretched clusters facilitate zero RPO infrastructure availability for mission-critical applications. This enables you to failover workloads with zero RPO within clusters spanning two AWS Availability Zones (AZs). It also enables developers to focus on core application requirements and capabilities, instead of infrastructure availability. With this feature, you can deploy a single SDDC across two AZs. Utilizing vSAN's stretched cluster feature, it allows us to guarantee synchronous writes across two AZs in a single SDDC cluster. This feature also extends workload logical networks to support vMotion between AZs. In the case of an AZ failure, vSphere HA will attempt to restart your VMs on the surviving AZ.
Two. When you provision your SDDC, select your AZ just the way you do now. The only change is that you then select a second AZ. Using this information, we automatically deploy your SDDC and stretch your clusters across these two AZs.
Yes. Custom CPU cores can be configured in an SDDC that has two or more stretched clusters. However, custom CPU cores cannot be configured in the first stretched cluster.
The smallest supported stretched cluster is two hosts and provides a 99.9% availability guarantee.
At six hosts the service increases the availability guarantee to 99.99%. This is because we require a quorum to survive in case of a full AZ failure. This implies you must have three nodes per AZ. Thus, six is the smallest stretched cluster to provide the 99.99% SLA.
Yes. Just like a regular cluster, you can add and remove hosts at any time. However, in a stretched cluster, hosts must be added and removed in pairs: you must have the same number of hosts on each side at all times. Thus, you can grow a cluster from 6 to 8, 10, 12, and so on.
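The sizing rules from the answers above (even host counts, minimum of two, and the SLA step from 99.9% to 99.99% at six hosts) can be captured in a small validator. This is a sketch for reasoning about cluster sizes, not service code.

```python
# Illustrative validator for stretched-cluster sizing: hosts come in pairs
# (equal count per AZ), minimum 2, and the SLA tier steps up at 6 hosts
# because 3 hosts per AZ preserves quorum through a full AZ failure.

def stretched_cluster_sla(total_hosts: int) -> str:
    """Return the availability guarantee for a valid stretched cluster."""
    if total_hosts < 2 or total_hosts % 2 != 0:
        raise ValueError("stretched clusters need an even host count, minimum 2")
    return "99.99%" if total_hosts >= 6 else "99.9%"
```

A 2- or 4-host stretched cluster carries the 99.9% guarantee; six hosts or more carry 99.99%.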
In addition to the hosts you request, we always provision one additional ESXi host for a stretched cluster to act as a witness node. This prevents issues such as split brain in the case of a network partition. You will see this host in the UI, but it will not be a member of the cluster and you cannot run guest VMs on it. This host is a special version of ESXi that runs as a guest, which allows us to charge less for the service since the witness ESXi does not consume an entire physical host.
No. Stretched clusters improve availability but are not intended for DR. AWS AZs in an AWS region are located in the same geographical area. A disaster affecting a geographical area could take out all AZs in an AWS region.
We support ESXi as a guest in this special case. Because the witness does not run any guest workloads, we are able to support virtualized ESXi for this purpose only.
You can use HCX to migrate workloads from a single AZ cluster to an on-premises data center and then migrate the workloads from on-premises into the stretched cluster.
No. A stretched cluster spans across 2 AZs within the same region. If you wish to protect against a regional failure, please use a DR tool such as our Site Recovery service.
Yes. Because we are performing synchronous writes across two AZs there is additional overhead in write transactions. This is the case in any stretched cluster implementation.
We will re-synchronize the vSAN datastore. This resync time will depend on how much data you have stored and how long the systems have been segmented. This operation is automatic and monitored by our operations team.
There are no additional charges to use the Stretched Clusters feature. Stretched Clusters Cross-AZ charges are also waived for up to 10 petabytes of Cross-AZ traffic per month. Usage will be monitored and for instances where a customer’s usage exceeds this limit, VMware reserves the right to inform the customer of the issue and charge the full amount.
All EDRS policies – Cost, Performance and Rapid Scale Out – are supported with Stretched Clusters, in addition to the Storage-only default policy.
EDRS monitors utilization in each Availability Zone. A scale-out event is triggered when a threshold is exceeded in either Availability Zone. Scale-in, on the other hand, occurs only when utilization goes below the threshold in both Availability Zones.
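This asymmetric rule (scale out on either AZ, scale in only on both) is easy to capture in code. The threshold value below is a placeholder, not the service's actual tuning.

```python
# Toy model of EDRS behavior for stretched clusters: scale out when EITHER
# AZ exceeds the threshold; consider scale-in only when BOTH are below it.
# The 0.8 threshold is a placeholder for illustration.

def edrs_action(az1_util: float, az2_util: float, threshold: float = 0.8) -> str:
    if az1_util > threshold or az2_util > threshold:
        return "scale-out"
    if az1_util < threshold and az2_util < threshold:
        return "scale-in-candidate"
    return "no-op"
```

One hot AZ is enough to trigger a scale-out even if the other AZ is nearly idle, because capacity must be added in pairs across both AZs.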
Elastic DRS (eDRS) is a feature that uses the resource management features of vSphere to analyze the load running in your SDDC to scale your clusters up or down. Using this feature, you can enable VMware Cloud on AWS to manage your cluster sizes without manual intervention.
eDRS will automatically scale up when your cluster reaches a capacity threshold. The system automatically monitors your current capacity and your capacity trend to make a decision to add more capacity to your cluster.
Scale Up for Storage Only policy is now configured for every cluster deployed within your SDDC. Previously, you were simply advised to maintain at least 30% slack space in your SDDCs, but this is now being enforced. The maximum usable capacity of your vSAN datastore is 75%; when you reach that threshold, eDRS will automatically start the process of adding a host to your cluster and expanding your vSAN datastore. Please note that even if you free up enough storage to fall below the threshold, the cluster will not scale-down automatically. You will need to manually remove host(s) from the cluster. For more details, please refer to this blog post here
Yes, you will get notified via email and in-console notification once any cluster is within 5% of any storage scale-out event. You will also be notified immediately after any hosts are added.
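The two storage thresholds above (scale-out at 75% vSAN utilization, notification within 5% of that trigger) can be combined into a single state check. This is a sketch for understanding the thresholds, not service code.

```python
# Illustrative check of the storage EDRS thresholds described above.

SCALE_OUT_THRESHOLD = 0.75  # max usable vSAN capacity before a host is added
NOTIFY_MARGIN = 0.05        # notification fires within 5% of the trigger

def storage_edrs_state(vsan_utilization: float) -> str:
    if vsan_utilization >= SCALE_OUT_THRESHOLD:
        return "scale-out"   # a host is added and the datastore expands
    if vsan_utilization >= SCALE_OUT_THRESHOLD - NOTIFY_MARGIN:
        return "notify"      # email and in-console notification
    return "ok"
```

Note that dropping back below the threshold after a scale-out does not shrink the cluster; as stated above, hosts must be removed manually.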
No, eDRS will not add hosts sequentially. eDRS is throttled to prevent runaway cluster scaling. The system is also monitored by our operations team to ensure that scale operations are conducted correctly.
If you have an SPBM policy that requires a minimum number of hosts, such as RAID 6, eDRS will not scale down below that minimum number. To allow scale down, reconfigure SPBM to use a policy without that restriction such as RAID 1.
You are billed per host per hour on VMware Cloud on AWS. eDRS simply changes the number of hosts you have running in your SDDC. It is the same as if you manually added hosts to your SDDC.
This depends on how heavily loaded your host is. A lightly loaded host will take only a few minutes to remove from the cluster; a very heavily loaded host could take many hours. eDRS only removes hosts which are lightly loaded, so we expect this operation to be on the lower end of that spectrum. However, your actual evacuation time largely depends on how many VMs are running and how much data must be evacuated from the host, so your times will vary.
No. Because eDRS is throttled, it's not designed for very sudden load spikes such as caused by a DR event. In this case, you should script the host addition process as part of your DR runbook. After the DR workload is started, you can rely on eDRS to maintain the correct number of hosts in your cluster.
Elastic DRS (eDRS) is enabled by default and cannot be disabled in VMware Cloud on AWS. VMware has pre-configured Elastic DRS thresholds across all available policies to ensure SDDC availability. One of the Elastic DRS policies listed in Select Elastic DRS Policy is always active.
EDRS Rapid Scale Out maximum thresholds are the same as the thresholds for the EDRS performance policy. The minimum thresholds are 0%; this means scale-in must be performed manually.
With the i3.metal host instance, each ESXi host comes with NVMe SSD storage. A 3-host ESXi cluster running vSAN provides approximately 15 TiB of usable storage, and a 4-host cluster provides approximately 21 TiB, with all virtual machines protected against a single host failure (FTT=1). With the i3en.metal host instance, each ESXi host comes with NVMe SSD storage as well; a 3-host ESXi cluster running vSAN provides approximately 60 TiB of usable storage. Please note that exact usable storage will vary depending on the type of workload. All virtual machines are protected against a single host failure (FTT=1).
The following subset of vSAN policies can be configured by the user on the SDDC vSAN cluster:
Storage provided from an EC2-based virtual storage array to a VMware Cloud on AWS guest OS is ideal for a variety of use cases including: test and development, elasticity for big data workloads, and user/home directories. Both block and file protocols are supported. Note that access to external storage is only available from the VMware Cloud on AWS guest operating system. VMware Cloud on AWS cluster datastore access to external storage is not supported.
VMware Cloud on AWS supports a variety of AWS EC2 based virtual storage arrays and general purpose operating systems that export storage volumes or LUNs. Our storage partners will independently test and provide documentation for their respective solutions.
Deduplication removes redundant data blocks, whereas compression removes additional redundant data within each data block. These techniques work together to reduce the amount of physical storage required to store the data. VMware vSAN applies deduplication followed by compression as it moves data from the cache tier to the capacity tier.
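The cache-to-capacity pipeline described above can be sketched as follows. This is a toy illustration only: real vSAN deduplicates 4 KiB blocks using cryptographic hashes, not Python's built-in hash().

```python
import zlib

# Toy sketch of the pipeline described above: deduplicate identical blocks,
# then compress each unique block as it lands in the capacity tier.
blocks = [b"A" * 4096, b"B" * 4096, b"A" * 4096, b"A" * 4096]

unique = {}   # block fingerprint -> compressed payload (the dedup store)
layout = []   # which stored block each logical block maps to
for b in blocks:
    key = hash(b)  # stand-in for a cryptographic content hash
    if key not in unique:
        unique[key] = zlib.compress(b)   # compression runs after dedup
    layout.append(key)

raw = sum(len(b) for b in blocks)
stored = sum(len(c) for c in unique.values())
assert stored < raw   # the two techniques together shrink the footprint
```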
Storage savings resulting from Deduplication & Compression are highly dependent on the workload data. For example, although some customers using vSAN on-premises report savings of up to 7x for VDI workloads, we generally see storage savings averaging around 2x across current deployments.
No. Deduplication and compression cannot be enabled individually; they are a single cluster-wide setting. All volumes in VMware Cloud on AWS are automatically enabled for this feature without any user configuration, and it cannot be turned off.
Although vSAN Deduplication & Compression are very efficient, users may experience some impact. For most workloads the impact is minimal.
vSAN encrypts all data at rest both in the caching and capacity tiers, while preserving the storage efficiencies from deduplication and compression.
Customer data at rest is natively encrypted by vSAN. vSAN uses AWS Key Management Service to generate the Customer Master Key (CMK). While the CMK is acquired from AWS, two additional keys are generated by vSAN: an intermediate key, referred to as the Key Encryption Key (KEK), and a Disk Encryption Key (DEK).
Similar to Deduplication & Compression, vSAN encryption at rest cannot be turned on or off; it is a cluster-wide setting that is always on by default when a cluster is provisioned in the SDDC.
No. External storage can only be added through the Managed Service Provider (MSP). Both the SDDC and the external storage are managed by the Managed Service Provider (MSP).
Three NFS datastores are attached to an SDDC. The size of the datastores depends on the Managed Service Provider (MSP) offering. Check with the Managed Service Provider (MSP).
External storage is provided as cloud storage by the Managed Service Provider (MSP) in several worldwide locations. Check with the Managed Service Provider (MSP) on supported locations.
External storage is offered in select regions that are in close proximity to Managed Service Provider (MSP) cloud storage. Check with the Managed Service Provider (MSP) on supported regions.
Please check the VMware Cloud on AWS release notes for a list of caveats and limitations related to the usage of external storage through the Managed Service Provider (MSP). Also, please check with the Managed Service Provider (MSP) for additional details.
With the latest release, all customer data at rest will be natively encrypted by vSAN. vSAN will use AWS Key Management Service to generate the Customer Master Key (CMK). While the CMK is acquired from AWS, two additional keys are generated by vSAN: an intermediate key, referred to as the Key Encryption Key (KEK), and a Disk Encryption Key (DEK). The Customer Master Key (CMK) wraps the Key Encryption Key (KEK), and the Key Encryption Key (KEK) in turn wraps the Disk Encryption Key (DEK). The CMK never leaves AWS control; encryption and decryption of the Key Encryption Key (KEK) is performed via standard AWS API calls. One Customer Master Key (CMK) and one Key Encryption Key (KEK) are required per cluster, and one Disk Encryption Key (DEK) is required for every disk in the cluster.
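The wrapping chain above (CMK wraps KEK, KEK wraps each DEK) can be sketched as follows. This is a toy illustration: the XOR keystream is a stand-in for a real AES key-wrap algorithm, and it is not the actual vSAN implementation.

```python
import hashlib
import secrets

def wrap(wrapping_key: bytes, key_material: bytes) -> bytes:
    # Toy stand-in for AES key wrap: XOR with a hash-derived keystream.
    stream = hashlib.sha256(wrapping_key).digest()
    return bytes(a ^ b for a, b in zip(key_material, stream))

unwrap = wrap  # XOR wrapping is its own inverse

cmk = secrets.token_bytes(32)                       # held by AWS KMS, never leaves AWS
kek = secrets.token_bytes(32)                       # per-cluster intermediate key
deks = [secrets.token_bytes(32) for _ in range(4)]  # one DEK per disk in the cluster

wrapped_kek = wrap(cmk, kek)                 # done via a KMS API call in practice
wrapped_deks = [wrap(kek, d) for d in deks]  # done locally by vSAN

# Unwrapping reverses the chain: KEK first (via KMS), then each DEK.
assert unwrap(cmk, wrapped_kek) == kek
assert all(unwrap(kek, w) == d for w, d in zip(wrapped_deks, deks))
```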
vSAN encryption uses an XTS AES 256 cipher and leverages the Intel AES-NI hardware for industry leading encryption with minimal impact on performance. In most cases, we do not expect any impact on CPU overhead, IOPS or latency. During extreme encryption operations, we have seen consumption of up to 1 CPU core overhead per host and up to 5% drop in IOPs and latency.
Customers have the option to change the KEK (Key Encryption Key) either through the vSAN API or through the vSphere UI. This process is called a shallow rekey. Note that a shallow rekey does not change the Disk Encryption Key (DEK) or the Customer Master Key (CMK). Changing the Disk Encryption Key (DEK) and Customer Master Key (CMK) is not supported. In rare situations where there is a need to change the DEK or CMK, users have the option to set up a new cluster with a new CMK and Storage vMotion the data from the existing cluster.
All existing clusters on the previous release will be migrated to the latest release. As part of migration, encryption will be turned on for all existing clusters. All new clusters will be provisioned with encryption turned on by default.
The Customer Master Key (CMK) is sourced from AWS Key Management Service; this is the only option available.
Like any storage system, vSAN uses slack space to maintain the health of the system. This space is used for re-balancing objects, performing operations like deduplication and for recovering from hardware failures.
eDRS is aware of vSAN and ESXi capacity requirements and will automatically add or remove hosts to be certain that your SDDC remains healthy. eDRS is the best way to ensure that your SDDC is sized correctly at all times.
Storage policies define levels of protection or performance for your VMs or VMDKs. Typically, a user manually sets a policy for one or more VMs and these are then managed by vCenter. With Automatic adjustment of vSAN policy for improved data availability, we will automatically set the policy for you based on the number of nodes in your VMware Cloud on AWS cluster.
VMware Cloud on AWS provides a 99.9% availability commitment as per the SLA. If an SLA event occurs (i.e., a service component is unavailable), you will be eligible for SLA credits, provided that your cluster meets certain protection requirements that are set by storage policies. By allowing VMware Cloud on AWS to automatically set these policies for you, the criteria required to be eligible for these credits are already taken care of while ensuring that your clusters have the optimal level of protection.
The 'Automatic adjustment of vSAN policy' feature is supported from the v1.10 release of VMware Cloud on AWS.
For Standard Clusters:
• <= 5 hosts: Failures to tolerate 1 – RAID-1
• >= 6 hosts: Failures to tolerate 2 – RAID-6
For Stretched Clusters:
• Dual Site Mirroring, Failures to tolerate 1 – RAID-1
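As an illustration, the defaults above amount to a selection rule like the sketch below. This is not VMware code; the returned strings are shorthand for the policy settings.

```python
def default_vsan_policy(hosts: int, stretched: bool = False) -> str:
    # Sketch of the automatic policy selection described above.
    if stretched:
        return "Dual Site Mirroring, FTT=1, RAID-1"
    if hosts <= 5:
        return "FTT=1, RAID-1"
    return "FTT=2, RAID-6"

assert default_vsan_policy(3) == "FTT=1, RAID-1"
assert default_vsan_policy(6) == "FTT=2, RAID-6"
```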
Yes, we will automatically change the policy for your cluster.
Yes, you can override this function of Automatic adjustment of vSAN policy and set your own policies.
Trim/Unmap is a vSAN feature that allows the guest OS to issue trim/unmap commands so that vSAN can remove unused blocks. This benefits thin provisioned VMDKs as unused blocks can be reclaimed automatically. This is an opportunistic space efficiency feature that can deliver much better storage capacity utilization in vSAN environments.
This process carries benefits of freeing up storage space but also has other secondary benefits:
As this feature is being released as a preview, we will enable the feature on a per cluster basis, based on your preference. Please contact your account team to have this feature enabled for your cluster.
This process does carry some performance impact. However, we have built it in a way that it will only consume up to a certain threshold of bandwidth and it will be throttled as it reaches this threshold.
Cloud Native Storage (CNS) is a VMware Cloud on AWS and Kubernetes (K8s) feature that makes K8s aware of how to provision storage on VMC on-demand in a fully automated, scalable fashion as well as providing visibility for the administrator into container volumes through the CNS UI within vCenter. Cloud Native Storage on VMC is supported with TKG and TKG Plus.
Cloud Native Storage (CNS) comprises two parts: a Container Storage Interface (CSI) plugin for K8s and the CNS Control Plane within vCenter. There is nothing to install or configure within the service to get this integration working. Simply deploy Kubernetes with the vSphere CSI.
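As a minimal sketch, a StorageClass that points Kubernetes at the vSphere CSI driver looks like the manifest below (expressed as a Python dict ready to serialize to YAML/JSON). The class name and storage policy name are illustrative; the policy must match one visible in your vCenter.

```python
# Minimal StorageClass manifest for the vSphere CSI driver.
# "vsan-default" and the policy name are example values, not requirements.
storage_class = {
    "apiVersion": "storage.k8s.io/v1",
    "kind": "StorageClass",
    "metadata": {"name": "vsan-default"},
    "provisioner": "csi.vsphere.vmware.com",
    "parameters": {"storagepolicyname": "vSAN Default Storage Policy"},
}
```

PersistentVolumeClaims referencing this class are then provisioned on-demand against the named vSAN policy, which is what makes the volumes visible in the CNS UI in vCenter.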
This feature scans a customer's environment for VMs and objects that have SLA non-compliant policies and notifies VMware Cloud on AWS customers about them. Customers will receive an email notification containing details of all the non-compliant policies and the VMs/objects they are mapped to for their VMware Cloud on AWS ORG. Customers will also be able to view the entire list of VMs with non-compliant policies within the VMC console and will be able to move to a managed storage policy with the click of a single button.
SLA compliance is required to ensure that your workloads are protected and that you are eligible for credits should a failure occur (Click here to learn more about the VMware Cloud on AWS SLA). SLA compliant policies are policies which follow the VMware Cloud on AWS SLA guidance and non-compliant policies are policies which are different from what is stated in the VMware Cloud on AWS SLA document.
You will be notified via email about which VMs have non-compliant policies. The email will include a link which re-directs you to the VMC console where you can view the entire list of VMs and objects with SLA non-compliant policies for your ORG.
The scan is performed daily, and if new non-compliant policies are found, you will be notified only about those policies. Previously notified non-compliant policies will not be included in the email, but they will remain listed in the inventory view until they have been remediated.
No. In the VMC console inventory view, you will have the option to select which VMs you want to change to a compliant policy. You will have the option to either select specific VMs you want to remediate or remediate the entire inventory. VMs that have not been moved to a SLA compliant policy will remain in the inventory.
By default, there is no external access to the vCenter Server system in your SDDC on VMware Cloud on AWS. Open access to your vCenter Server system by:
• Configuring a firewall rule to allow access to the vCenter Server system.
• Configuring an IPsec VPN or Direct Connect between your on-premises data center and your SDDC to access vCenter privately.
vCenter is also accessible privately from the linked VPC and from a compute VM in the SDDC.
With NSX-T, there is connectivity from the AWS VPC to components behind the management gateway. From an EC2 instance deployed in the AWS VPC, users can reach vCenter.
When you deploy an SDDC in VMware Cloud on AWS, it is configured with two networks: a management network and a compute network. The management network handles network traffic for the SDDC hosts, vCenter Server, NSX Manager, and other management functions. The compute network handles network traffic for your workload VMs. The gateways allow users to access these networks from the Internet, on-premises networks, and the connected AWS VPC. The NSX Edge acts as the gateway.
There are three traffic groups in VMware Cloud on AWS:
• VMkernel Traffic (ESXi Management, vMotion)
• Management Appliance Traffic (vCenter, SRM, vSphere Replication Appliance, NSX Manager)
• Workload VM Traffic
IPFIX is a standard that allows virtual or physical switches to export flow information going through the switch to collector tools. Customers may decide to monitor all flows on a particular logical switch or set of logical switches.
An IPFIX template provides metadata about the format of the collected flows. For example, a flow template may include "timestamp when the flow started and ended" and "number of bytes allowed during that time."
A flow is a combination of five tuples: source and destination IP, source and destination port, and protocol. There is always a unique flow between two applications talking to each other on a specific port.
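The 5-tuple can be modeled directly; a minimal sketch (all IPs and ports are example values):

```python
from collections import namedtuple

# The 5-tuple that identifies a flow.
FlowKey = namedtuple(
    "FlowKey", ["src_ip", "dst_ip", "src_port", "dst_port", "protocol"])

a_to_b = FlowKey("10.0.1.5", "10.0.2.9", 49152, 443, "TCP")
b_to_a = FlowKey("10.0.2.9", "10.0.1.5", 443, 49152, "TCP")

# Each direction is a distinct flow; collector tools typically pair the
# two directions when reporting on an application conversation.
assert a_to_b != b_to_a
```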
Collector tools perform flow analysis and report information about the health and performance of the applications. These are sometimes called application monitoring tools. Customers can configure up to 4 collector tools.
By default the Compute Gateway and Management Gateways are connected through a logical segment. You can control communication through the firewall policy on the Management Gateway.
No. There is no granularity to select an individual vNIC of a virtual machine; traffic from all vNICs will be port mirrored.
You can configure up to 5 DNS zones. Of those, one should use the on-premises domain (FQDN) and point to the on-premises DNS server, and another should use the AWS domain (FQDN) and point to the DNS server in AWS.
When you deploy an SDDC with 3 or more nodes, a default logical network is not created. It is the responsibility of the user to create a network with an appropriate CIDR before deploying virtual machines.
There have been many incidents where the default logical network CIDR (192.168.1.0/24) overlapped with the on-premises network and caused connectivity issues that are very difficult to troubleshoot.
No. Either native DHCP capabilities or DHCP Relay can be used, but not both. Users will not be able to use DHCP Relay if any network segments are using native DHCP capabilities; the respective network segments will have to be deleted first.
NSX VMC Policy API includes all the NSX Networking and Security APIs for the NSX capabilities within the SDDC. NSX VMC AWS Integration API includes APIs that are specific to AWS like Direct Connect.
NSX-T APIs can easily be found and used within the VMware Cloud on AWS SDDC’s API Explorer. Furthermore, customers can even perform a search on keywords. Customers can easily lookup and test NSX-T APIs directly from API Explorer before including them in larger scripts or applications.
Go to API Explorer, which can be found under the Developer Center. From API Explorer, select your Organization and SDDC, and you will see both "NSX VMC Policy" API and "NSX VMC AWS Integration" API. Click on the one you would like to use. You will see a list of relevant NSX APIs. You can put in the requested information and click the Execute button to execute the API.
VMware has a comprehensive vulnerability management program that includes third-party vulnerability scanning and penetration testing. VMware conducts regular security assessments to maintain VMware Cloud on AWS compliance programs and continuously improve cloud platform security controls and processes. While the requirements to conduct penetration testing vary by industry compliance regulations, customer environments benefit greatly with penetration testing to measure the security effectiveness within their virtual infrastructure (SDDCs) and applications. To notify VMware that you plan to conduct penetration testing, please use this Request Form to provide us relevant information about your test plans. VMware will respond with an approval by email. Penetration testing must be conducted in accordance with our Penetration Testing Rules of Engagement.
VMware Cloud on AWS supports Jumbo Frames for networking traffic over Direct Connect. To fully benefit from Jumbo Frames and avoid fragmentation, you must ensure that the Direct Connect interface MTU is set equal to the end-to-end path MTU from your SDDC to your data center over Direct Connect. On the AWS account, the Direct Connect private VIF must be created with this MTU size. On the SDDC, the Intranet uplink MTU must be set to 8900.
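The consistency rule above can be sketched as a simple check. The 8900 value comes from this FAQ; the helper function is illustrative, not a VMware tool.

```python
# The SDDC Intranet uplink MTU stated above.
SDDC_INTRANET_UPLINK_MTU = 8900

def jumbo_path_ok(dx_vif_mtu: int, onprem_path_mtu: int) -> bool:
    # Jumbo frames avoid fragmentation only if every hop on the path,
    # including the Direct Connect VIF, carries at least the SDDC uplink MTU.
    return min(dx_vif_mtu, onprem_path_mtu) >= SDDC_INTRANET_UPLINK_MTU

assert jumbo_path_ok(9001, 9000)        # jumbo-capable end to end
assert not jumbo_path_ok(9001, 1500)    # a 1500-byte hop forces fragmentation
```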
The integrated solution is about providing Policy-Based IPSec VPN connectivity between SD-WAN enabled branches and application workloads that reside in VMware Cloud on AWS. The solution leverages the VMware SD-WAN Gateways, as an on-ramp mechanism to VMware SDDC deployed on AWS. The SD-WAN Gateway is the peer end of the tunnel that is set up on the VMware SDDC T0 Gateway. The SD-WAN solution has a feature called “Non-VeloCloud-Site,” which allows SD-WAN Gateways to set up IPSec tunnels to non-SD-WAN locations.
VMware SD-WAN by VeloCloud is a global service that delivers high-performance, reliable branch access to cloud services, private data centers, and SaaS-based enterprise applications. SD-WAN increases bandwidth economically by aggregating WAN circuits of any type, providing faster response even for single application flows. Data plane function and orchestration are delivered in the cloud to provide direct and optimized access to cloud as well as on-premises resources. You can deploy a branch in minutes with VMware SD-WAN Edge activation from the cloud. Automatic WAN circuit discovery and monitoring eliminate link-by-link and branch-by-branch configuration.
VMware provides hybrid and multi-cloud capacity while VMware SD-WAN provides the fabric between clouds. As customers leverage more of VMware Cloud on AWS, SD-WAN will offer optimal connectivity to VMware Cloud on AWS.
No, at this time, VMware SD-WAN focuses on WAN connection between branches and VMware Cloud on AWS for workload or application access.
To get started with VMware SD-WAN, customers will need to have an SD-WAN subscription with the Premium license (which provides access to SD-WAN Gateways, and Non-VeloCloud-Site capabilities) or Enterprise License (which needs Non-VeloCloud-Site capability via Gateway add-on option). Customers should also have access to the VMware SD-WAN Orchestrator to have the capability to create a Non-VeloCloud Site Network Service. Customers will also need to have at least a single-host VMware Cloud on AWS environment with access to manage Networking and Security.
Yes, you must call into VMware GSS and mention this KB article. This KB article discusses that the SD-WAN Gateway private IP must be obtained for the configuration of the VMware Cloud on AWS side, and this information can only be gained from Support. Additionally, while this integration with VMware SD-WAN will provide the capability for branches to communicate with VMware Cloud on AWS workloads, this integration is not recommended to be used for migration of workloads from the data center to cloud using IPSec VPN.
At this time, there is only a singular non-redundant tunnel that is instantiated. This limitation will be addressed in future releases of VMware Cloud on AWS and SD-WAN integration.
When encountering issues with the integration of VMware SD-WAN with VMware Cloud on AWS, please contact VMware Global Support Services (GSS), and they will work with you to reach a resolution and engage the appropriate resources.
Multi-CGW will enable the following use cases:
· Multi-tenancy within an SDDC
· Overlapping IPv4 address space across CGWs
· Support for static routes on customer managed CGW
· Deployment of isolated test segments for Disaster Recovery (DR) testing or "sandbox" environments
Three types of MCGWs are supported:
· Routed – Segments behind a routed CGW are part of the SDDC’s routing table
· NATted – Segments behind a NATted CGW are reachable only via NAT configuration and are not part of the SDDC’s routing table.
· Isolated – Segments behind an Isolated CGW are not available to the rest of the SDDC.
Multi-CGW supports multiple NAT options:
· Source NAT (SNAT) – Changes Source IP
· Destination NAT (DNAT) – Changes Destination IP
· Reflexive NAT – Stateless NAT
· No SNAT
· No DNAT
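The SNAT and DNAT modes above can be illustrated with a toy sketch, using dicts as simplified stand-ins for packets (all IPs are documentation example addresses):

```python
def snat(packet, new_src):
    """Source NAT (SNAT): rewrite the source IP, e.g. workload -> public IP."""
    return {**packet, "src": new_src}

def dnat(packet, new_dst):
    """Destination NAT (DNAT): rewrite the destination IP."""
    return {**packet, "dst": new_dst}

# Outbound: a workload behind a NATted CGW egresses with a translated source.
outbound = snat({"src": "192.168.10.5", "dst": "203.0.113.20", "dport": 443},
                "198.51.100.7")
assert outbound["src"] == "198.51.100.7"

# Inbound: traffic to the translated address is mapped back to the workload.
inbound = dnat({"src": "203.0.113.20", "dst": "198.51.100.7", "dport": 443},
               "192.168.10.5")
assert inbound["dst"] == "192.168.10.5"

# Reflexive NAT applies the same 1:1 address mapping in both directions
# statelessly, i.e. without connection tracking.
```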
For any Multi-CGW connected segment to communicate with Direct Connect (DX), VMware Transit Connect, or the ESXi management network, Route Aggregation must be configured. Route aggregation is not required for Internet via the SDDC’s Internet Gateway.
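Route aggregation summarizes multiple segment prefixes into a single advertised prefix. The sketch below shows the summarization itself with Python's standard ipaddress module; the CIDRs are example values, and the actual configuration is done in the VMC console or API, not in Python.

```python
import ipaddress

# Example: two segments behind a Multi-CGW summarized into one aggregate
# prefix for advertisement over Transit Connect / Direct Connect.
segments = [ipaddress.ip_network("10.10.0.0/24"),
            ipaddress.ip_network("10.10.1.0/24")]

aggregates = list(ipaddress.collapse_addresses(segments))
assert aggregates == [ipaddress.ip_network("10.10.0.0/23")]
```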
Static routes can be configured on the Multi-CGWs. Non-default static routes can be configured on any type of Multi-CGW (Routed, NATted, or Isolated). The default route (0.0.0.0/0) can only be configured on Isolated Multi-CGWs.