Management of VMware ESXi is done via APIs. This allows for an “agent-less” approach to hardware monitoring and system management. VMware also provides remote command-line interfaces, such as the
vSphere Command Line Interface (vCLI) and PowerCLI, to provide command and scripting capabilities in a more controlled manner. These remote command-line sets include a variety of commands for configuration, diagnostics, and troubleshooting. For low-level diagnostics and initial configuration, menu-driven and command-line interfaces are available on the local console of the server.
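As an illustration, a remote query through the vCLI might look like the following sketch; the hostname and account are placeholders, and in practice you would use a session file or be prompted for credentials rather than embedding them:

```shell
# Query an ESXi host remotely, agent-less, via the vCLI's esxcli command.
# --server and --username are standard vCLI connection options;
# esxi01.example.com is a placeholder hostname.
esxcli --server esxi01.example.com --username root system version get

# The equivalent from PowerCLI would be scripted with
# Connect-VIServer and Get-VMHost.
```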
Patching and updating of vSphere hosts running ESXi allows flexibility and control. During the patching process, only the specific modules being updated are changed, letting the administrator preserve any previous updates to other components. Whether installed on disk or embedded in flash memory, ESXi employs a “dual-image” approach in which both the updated image and the previous image are present. When a patch is installed, the new image is copied to the host and the boot loader is modified to use the new image. If there is a problem with the update, or if the administrator wishes to revert to the prior image, the host is simply rebooted again. During boot, the administrator can hold the “Shift” and “R” keys simultaneously to instruct the host to use the image that was in place prior to the update.
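Patching along these lines is typically driven by esxcli’s software namespace; a hedged sketch follows, where the depot path and profile name are placeholders:

```shell
# List the software packages (VIBs) currently installed on the host
esxcli software vib list

# Apply an update from an offline depot bundle. "profile update" replaces
# only the modules contained in the depot, preserving other previously
# applied updates; the depot path and profile name below are placeholders.
esxcli software profile update \
    -d /vmfs/volumes/datastore1/ESXi-patch-depot.zip \
    -p ESXi-x.x.x-standard

# Reboot so the boot loader switches to the newly written image
```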
Various deployment methods are supported: the interactive ESXi Installer, scripted installations, and network-based installation using PXE.
Installation scripts run locally on the vSphere host and can perform various tasks, such as configuring the host’s virtual networking and joining it to VMware vCenter Server.
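A scripted installation is driven by a kickstart-style answer file; a minimal sketch is shown below, with all values as placeholders:

```shell
# Minimal ESXi kickstart (ks.cfg) sketch -- all values are placeholders
vmaccepteula
install --firstdisk --overwritevmfs
rootpw MySecretPassword
network --bootproto=dhcp --device=vmnic0

# %firstboot runs locally on the host after installation; tasks such as
# virtual networking setup can be scripted here
%firstboot --interpreter=busybox
esxcli network vswitch standard add --vswitch-name=vSwitch1
```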
vSphere ESXi supports installation to a local hard disk; to an FC, iSCSI, or FCoE LUN; or to a USB/SD device, as well as network boot via PXE. Refer to the vSphere Hardware Compatibility List (HCL) for storage adapters that have been qualified for ESXi.
Hardware Monitoring (including SNMP)
The Common Information Model (CIM) is an open standard that defines a framework for agent-less, standards-based monitoring of hardware resources for vSphere hosts running the ESXi architecture. This framework consists of a CIM object manager, often called a CIM broker, and a set of CIM providers.
CIM providers are used as the mechanism to provide management access to device drivers and underlying hardware. Hardware vendors, including server manufacturers and specific hardware device vendors, can write providers to monitor and manage their particular devices. VMware also writes providers that implement monitoring of server hardware, storage infrastructure, and virtualization-specific resources. These providers run inside the vSphere host and hence are designed to be extremely lightweight and focused on specific management tasks. The CIM broker takes information from all CIM providers and presents it to the outside world via standard APIs, such as WS-MAN and CIM-XML. Any software tool that understands one of these APIs, such as HP SIM or Dell OpenManage, can read this information and hence monitor the hardware of the vSphere host.
One consumer of the CIM information is VMware vCenter Server. Through the vSphere Client or the Web Client, you can view the hardware status of any vSphere host in your environment, thus providing a single view of the physical and virtual health of your systems. You can also set vCenter Server alarms to be triggered on certain hardware events, such as temperature or power failure and warning states.
vSphere also exposes hardware status information via SNMP for other management tools that rely upon that standard. SNMP Traps are available from both the vSphere host and vCenter Server.
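Enabling the host SNMP agent can be sketched with esxcli; the community name and trap target below are placeholders:

```shell
# Configure and enable the host SNMP agent; traps are sent to the target
# management station (192.0.2.10, port 162, community "public" -- placeholders)
esxcli system snmp set --communities public
esxcli system snmp set --targets 192.0.2.10@162/public
esxcli system snmp set --enable true

# Verify the resulting agent configuration
esxcli system snmp get
```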
Systems Management and Backup
Systems management and backup products integrate with vSphere via the vSphere APIs. The API-based partner integration model significantly reduces management overhead by eliminating the need to install and manage agents in the Console Operating System (COS) used by the classic ESX architecture.
VMware has worked extensively with our ecosystem to transition all partner products to the API-based integration model of the ESXi hypervisor. As a result, the majority of systems management and backup vendors in the VMware ecosystem support ESXi today.
Logging is important for both troubleshooting and compliance. vSphere exposes logs from all system components using the industry-standard syslog format, with the ability to send logs to a central logging server. Persistent logging to a file on a local datastore accessible to the vSphere host is configured automatically if a suitable datastore is available.
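Pointing a host at a central syslog server can be sketched as follows; the server address is a placeholder, and udp://, tcp://, and ssl:// targets are supported:

```shell
# Forward host logs to a central syslog server (placeholder address)
esxcli system syslog config set --loghost='udp://syslog.example.com:514'

# Reload the syslog service so the new target takes effect
esxcli system syslog reload
```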
Keeping the vSphere host in sync with an accurate time source is very important for ensuring log accuracy and is required for compliance. It is also important if you are using the host to maintain accurate time on its guest virtual machines. vSphere hosts have built-in NTP capabilities for synchronizing with NTP timeservers.
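NTP can be configured remotely with the vCLI’s vicfg-ntp command; a hedged sketch follows, where the host and timeserver names are placeholders and option spelling may vary slightly between vCLI releases:

```shell
# Add an NTP server to the host's configuration and start the NTP service
# (host and timeserver names are placeholders)
vicfg-ntp --server esxi01.example.com --add 0.pool.ntp.org
vicfg-ntp --server esxi01.example.com --start
```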
Although day-to-day operations are done via vCenter Server, there are instances when you need to work with the vSphere host directly, such as configuration backup and log file access. To control access to the host, you can configure the vSphere hosts to join an Active Directory domain, and any user trying to access the host will automatically be authenticated against the centralized user directory. You can also have local users defined and managed on a host-by-host basis and configured using the vSphere Client, vCLI or PowerCLI. This second method can be used either in place of, or in addition to, the Active Directory integration.
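Joining a host to an Active Directory domain can be scripted from the vCLI with vicfg-authconfig; a sketch with placeholder names (the command prompts for the domain account’s password):

```shell
# Join the host to an AD domain (domain and account names are placeholders)
vicfg-authconfig --server esxi01.example.com --authscheme AD \
    --joindomain corp.example.com --adusername administrator
```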
You can also create local roles, similar to vCenter roles, which define what the user is authorized to do on the host. For instance, a user can be granted read-only access, which only allows them to view host information, or they can be granted Administrator access, which allows them to both view and modify host configuration. If the host is integrated with Active Directory, local roles can also be granted to AD users and groups.
The only user defined by default on the system is the root user. The initial root password is typically set interactively via the Direct Console User Interface (DCUI) or as a part of an automated installation. It can be changed afterwards using the vSphere Client, vCLI or PowerCLI.
With vSphere, users can be assigned administrative privileges where they will automatically get full shell access. With full shell access, privileged admin users no longer need to “su” to root in order to run privileged commands.
With vSphere, all host activity, from both the Shell and the DCUI, is now logged under the account of the logged-in user. This ensures user accountability, making it easy to monitor and audit activity on the host.
Direct Console User Interface (DCUI)
The DCUI is the menu-driven interface available at the console of the physical server on which ESXi is installed or embedded. Its main purpose is to perform initial configuration of the host (IP address, hostname, root password) and diagnostics.
The DCUI has several diagnostic menu items that allow administrators to:
- Restart all management agents
- Reset configuration settings, such as:
  - Fixing a misconfigured vSphere Distributed Switch
  - Resetting all configurations to factory defaults
- Enable the ESXi Shell for troubleshooting, with:
  - Local access (on the console of the host)
  - Remote access (SSH-based)
vSphere Command Line Interface
The vCLI has numerous commands for troubleshooting, including the esxcli command set for configuration and diagnostics, the vicfg- commands, vmkfstools for working with VMFS volumes and virtual disks, and resxtop for remote performance monitoring.
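A few representative remote diagnostics, with a placeholder hostname:

```shell
# VMkernel network interfaces and their IPv4 configuration
esxcli --server esxi01.example.com network ip interface ipv4 get

# Storage paths visible to the host
esxcli --server esxi01.example.com storage core path list

# Processes and services running on the host
esxcli --server esxi01.example.com system process list
```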
ESXi Shell
The ESXi Shell is a local console for advanced technical support. In addition to being available on the local console of a host, it can also be accessed remotely through SSH. Access to the ESXi Shell is controlled in the following ways:
- Local and remote ESXi Shell access can be enabled and disabled separately, in both the DCUI and vCenter Server.
- ESXi Shell may be used by any authorized user, not just root. Users become authorized when they are granted the Administrator role on a host (including through AD membership in a privileged group).
- All commands issued in ESXi Shell are logged, allowing for a full audit trail. If a syslog server is configured, then this audit trail is automatically included in the remote logging.
- A timeout can be configured for ESXi Shell (both local and remote), so that after being enabled, it will automatically be disabled after the configured time.
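The timeout is an advanced host setting; a sketch with esxcli, where the 600-second value is illustrative:

```shell
# Disable the ESXi Shell automatically after 10 minutes of availability
# (value in seconds; 0 disables the timeout)
esxcli system settings advanced set -o /UserVars/ESXiShellTimeOut -i 600
```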