Start with a Robust and Reliable Foundation
Achieve Better Scalability and Performance in your Data Center
The hypervisor plays a key part in delivering scalable virtualization performance. See detailed performance demonstrations and comparisons in the performance section of the VMware website.
You’ll see that VMware vSphere achieves high-performance throughput in a heavily virtualized environment, even as the total number of supported users and virtual machines per physical host increases. Join the discussion on the latest performance topics on VROOM!, VMware’s performance team blog.
Better Memory Management for Scalability
In most virtualization scenarios, system memory is the limiting factor that determines how many virtual machines can be consolidated onto a single server. By managing virtual machine memory more intelligently, VMware lets you maximize the number of virtual machines your hardware can support. By combining several exclusive technologies, VMware vSphere achieves the highest memory utilization efficiency of any x86 bare-metal hypervisor, with minimal performance impact.
VMware vSphere uses four techniques for memory management:
- Transparent Page Sharing. Think of it as de-duplication for your memory. During periods of idle CPU activity, ESXi scans memory pages loaded by each virtual machine to find matching pages that can be shared. The memory savings can be substantial, especially when the same operating system or applications are loaded in multiple guests, as is the case with VDI. Transparent Page Sharing has a negligible effect on performance (sometimes it even improves guest performance), and users can tune ESXi parameters to speed up scanning if desired. Also, despite claims by our competitors, Transparent Page Sharing will in fact work with large memory pages in guests: when the host is under memory pressure, ESXi breaks those large pages into smaller sizes to enable page sharing.
- Guest Ballooning. This is where ESXi achieves most of its memory reclamation. When the ESXi hypervisor needs to provide more memory for virtual machines that are just powering on or getting busy, it asks the guest operating systems in other virtual machines to provide memory to a balloon process that runs in the guest as part of VMware Tools. ESXi can then loan that “ballooned” memory to the busy VMs. The beauty of ballooning is that it’s the guest operating system, not ESXi, that decides which processes or cache pages to swap out to free up memory for the balloon. The guest, whether it’s Windows or Linux, is in a much better position than the ESXi hypervisor to decide which memory regions it can give up without impacting the performance of key processes running in the VM.
- Hypervisor Swapping. Any hypervisor that permits memory oversubscription must have a method to cope with periods of extreme pressure on memory resources. Ballooning is the preferred way to reclaim memory from guests, but the in-guest swapping it triggers takes time, during which other memory-starved guests could freeze; ESXi therefore employs hypervisor swapping as a fast-acting method of last resort. With this technique, ESXi swaps its memory pages containing mapped regions of virtual machine memory to disk to free host memory. Reaching the point where hypervisor swapping is necessary will impact performance, but vSphere supports swapping to increasingly common solid state disks, which testing shows can cut the performance impact of swapping by a factor of five.
- Memory Compression. To reduce the impact of hypervisor swapping, vSphere introduced memory compression. The idea is to delay the need to swap hypervisor pages by compressing the memory pages managed by ESXi – if two pages can be compressed to use only one page of physical RAM, that’s one less page that needs to be swapped. Because the compression/decompression process is so much faster than disk access, performance is preserved.
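The page-sharing idea in the first technique above can be sketched as a hash-based deduplication pass. This is a minimal illustration, not ESXi's actual implementation; the 4 KiB page size, the `share_pages` function, and its data structures are assumptions for the example:

```python
import hashlib

PAGE_SIZE = 4096  # assumed guest page size for this sketch

def share_pages(vm_pages):
    """Deduplicate identical pages across VMs, as in transparent
    page sharing: identical contents map to one physical copy."""
    store = {}     # content hash -> the single shared physical page
    mappings = {}  # (vm, guest page index) -> hash key into store
    for vm, pages in vm_pages.items():
        for i, page in enumerate(pages):
            key = hashlib.sha256(page).hexdigest()
            store.setdefault(key, page)  # keep one copy per unique content
            mappings[(vm, i)] = key
    return store, mappings

# Two VMs booted from the same OS image share most of their pages.
zero_page = bytes(PAGE_SIZE)
os_page = b"kernel".ljust(PAGE_SIZE, b"\0")
app_page = b"app".ljust(PAGE_SIZE, b"\0")
vms = {"vm1": [os_page, zero_page], "vm2": [os_page, zero_page, app_page]}
store, mappings = share_pages(vms)
print(len(store))  # 3 unique pages back all 5 guest pages
```

In a real hypervisor the scan is incremental and shared pages are remapped copy-on-write, so a guest writing to a shared page transparently gets a private copy.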
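The ballooning mechanism can likewise be sketched in a few lines: the in-guest balloon driver pins pages, and whatever it pins becomes physical memory the hypervisor may loan elsewhere. The class and its names are illustrative assumptions, not the VMware Tools driver:

```python
class GuestBalloon:
    """Sketch of a balloon driver: the guest chooses which of its own
    pages to give up (swapping in-guest if needed), then pins them so
    the hypervisor can reclaim the backing physical memory."""

    def __init__(self, guest_free_pages):
        self.guest_free_pages = guest_free_pages
        self.inflated = 0  # pages currently surrendered to the host

    def inflate(self, pages):
        # Grant as many pages as the guest can actually free.
        grant = min(pages, self.guest_free_pages)
        self.guest_free_pages -= grant
        self.inflated += grant
        return grant  # the hypervisor may now reuse these pages

    def deflate(self, pages):
        # Host pressure eased: return pages to the guest.
        release = min(pages, self.inflated)
        self.inflated -= release
        self.guest_free_pages += release
        return release

balloon = GuestBalloon(guest_free_pages=1000)
reclaimed = balloon.inflate(300)  # hypervisor gains 300 pages to loan out
```

The key design point the text makes is visible here: `inflate` runs inside the guest, so the guest's own memory manager, not the hypervisor, picks the victims.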
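The memory-compression trade-off described above can be sketched as a simple policy: if a page compresses to half a page or less, two compressed pages can share one physical page and the swap is avoided; otherwise the page falls through to hypervisor swapping. A minimal sketch using zlib; the threshold and function name are assumptions, not ESXi's compression cache:

```python
import zlib

PAGE_SIZE = 4096  # assumed page size for this sketch

def try_compress(page, max_compressed=PAGE_SIZE // 2):
    """Return the compressed page if it fits in half a physical page
    (so two such pages share one), else None (swap candidate)."""
    packed = zlib.compress(page)
    return packed if len(packed) <= max_compressed else None

# A page of repetitive data compresses easily and escapes swapping.
text_page = (b"idle buffer " * 400)[:PAGE_SIZE]
packed = try_compress(text_page)
restored = zlib.decompress(packed)  # decompression restores the page exactly
```

Because decompressing from RAM is orders of magnitude faster than a disk read, a hit in the compression cache costs far less than a hypervisor swap-in.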
With VMware vSphere, virtual machines look and act just like physical machines. Any guest operating systems and any applications or monitoring tools in the virtual machines see a consistent, fixed amount of installed RAM. That ensures that guest software and management tools behave as expected.
If all virtual machines on a host spike at the same time and demand their full memory allocation, VMware DRS can automatically load balance by using vMotion to live-migrate virtual machines to other hosts in the DRS cluster.
Watch a technical video on: VMware Distributed Resource Scheduler and VMware vSphere