Chapter 4 Virtual Infrastructure Management
The more vMotion-compatible ESXi hosts DRS has available, the more choices it has to recommend
vMotions to improve usable resource availability for virtual machines in the DRS cluster. Besides CPU
incompatibility, other misconfigurations can block vMotion between two or more hosts. For
example, if the hosts’ vMotion network adapters are not connected by a 1 Gb/s or faster Ethernet link,
vMotion might not be possible between those hosts.
Other configuration settings to check for are virtual hardware version compatibility, misconfiguration of
the vMotion gateway, incompatible security policies between the source and destination host vMotion
network adapter, and virtual machine network availability on the destination host. Refer to VMware
vCenter Server and Host Management for further details.
When possible, make sure every host in a DRS cluster has connectivity to the full set of datastores
accessible by the other hosts in that cluster. Such full connectivity allows DRS to make better decisions
when computing vMotion recommendations.
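The link-speed and datastore-connectivity constraints above can be sketched as a simple pre-check. The `Host` data model, attribute names, and `vmotion_candidates` filter below are purely illustrative assumptions for this sketch, not part of any vSphere API:

```python
# Hypothetical pre-check mirroring the constraints described above:
# a host is a viable vMotion target only if its vMotion NIC runs at
# 1 Gb/s or faster and it can see all of the VM's datastores.
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    vmotion_nic_gbps: float          # vMotion network adapter link speed
    datastores: set = field(default_factory=set)

def vmotion_candidates(src: Host, hosts: list, vm_datastores: set) -> list:
    """Return hosts that satisfy the basic vMotion constraints."""
    return [
        h for h in hosts
        if h is not src
        and h.vmotion_nic_gbps >= 1.0          # 1 Gb/s or faster link
        and vm_datastores <= h.datastores      # full datastore connectivity
    ]

esx1 = Host("esx1", 10.0, {"ds1", "ds2"})
esx2 = Host("esx2", 10.0, {"ds1", "ds2"})
esx3 = Host("esx3", 0.1, {"ds1", "ds2"})   # vMotion link too slow
esx4 = Host("esx4", 10.0, {"ds1"})         # missing ds2

print([h.name for h in vmotion_candidates(esx1, [esx1, esx2, esx3, esx4],
                                          {"ds1", "ds2"})])
# -> ['esx2']
```

In a real environment these checks are performed by vCenter itself; the point of the sketch is simply that each incompatibility removes a host from the candidate set DRS can recommend.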
Just as in previous versions of vSphere, virtual machines with smaller memory sizes and/or fewer vCPUs
provide more opportunities for DRS to migrate them in order to improve balance across the cluster.
Virtual machines with larger memory sizes and/or more vCPUs add more constraints in migrating the
virtual machines. This is one more reason to configure virtual machines with only as many vCPUs and
only as much virtual memory as they need.
Starting in vSphere 7.0, however, DRS considers granted memory (that is, the total RAM available to
a virtual machine) when evaluating a virtual machine’s memory demand for vMotion recommendations.
This means that over-provisioning a virtual machine’s memory can now constrain DRS migration
options even more significantly than in previous vSphere versions.
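As a rough illustration of why granted memory matters, the sketch below (a hypothetical filter with made-up host names and free-memory figures, not a vSphere API) shows how raising a VM's granted memory shrinks the set of hosts that could receive it:

```python
# Illustrative only: DRS-style filtering of target hosts by a VM's
# granted memory. Over-provisioning the VM raises its granted memory
# and shrinks the set of viable migration targets.
def viable_targets(granted_mem_gb: float, host_free_mem_gb: dict) -> list:
    """Hosts with enough free memory to accept the VM's granted memory."""
    return sorted(h for h, free in host_free_mem_gb.items()
                  if free >= granted_mem_gb)

free = {"esx1": 24.0, "esx2": 48.0, "esx3": 96.0}
print(viable_targets(16.0, free))   # right-sized VM -> all three hosts
print(viable_targets(64.0, free))   # over-provisioned -> only 'esx3'
```

A right-sized VM leaves DRS three possible targets in this example; the same VM with 64 GB granted leaves only one.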
If a cluster is in DRS fully automated mode, only virtual machines that are also in DRS fully automated
mode will be considered for recommended migrations. Thus setting virtual machines on such clusters to
DRS fully automated mode provides DRS a broader range of recommendation options.
Powered-on virtual machines consume memory resources—and typically some CPU resources—even
when idle. Thus even idle virtual machines, though their utilization is usually small, can affect DRS
decisions. For this and other reasons, a marginal performance increase might be obtained by shutting
down or suspending virtual machines that are not being used.
Resource pools help improve manageability and troubleshooting of performance problems. In order to
allow DRS to best manage resource pools, especially in deployments with varying inventory, we
recommend activating a new option introduced in vSphere 7.0, Scalable Shares. This option, which can
be activated at the cluster or resource pool level, brings dynamic and relative entitlements to resource
pools and virtual machines, based on their share value settings. This allows resource pools and virtual
machines to be made siblings in a hierarchy without creating a dilution problem.
When Scalable Shares is deactivated, however, we recommend that resource pools and virtual machines
not be made siblings in a hierarchy. Instead, each level should contain only resource pools or only virtual
machines. This is because with Scalable Shares deactivated, resource pools are assigned default share
values that might not compare appropriately with those assigned to virtual machines, potentially
resulting in unexpected performance.
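The dilution problem can be made concrete with a small worked example. It assumes the documented CPU share defaults (Normal = 1000 shares per vCPU for a virtual machine, 4000 for a resource pool); everything else in the sketch (the entitlement function, the pool and VM names) is illustrative, not a vSphere API:

```python
# Worked example of share dilution among siblings.
def entitlement_fractions(siblings: dict) -> dict:
    """Split a parent's resources among siblings in proportion to shares."""
    total = sum(siblings.values())
    return {name: shares / total for name, shares in siblings.items()}

# Scalable Shares off: the pool keeps its fixed default of 4000 CPU
# shares no matter how many VMs it contains, so a single 4-vCPU VM
# (4 x 1000 = 4000 shares) is entitled to as much as the entire pool.
static = entitlement_fractions({"pool-with-20-vms": 4000,
                                "lone-4vcpu-vm": 4000})
print(static)

# Scalable Shares on: the pool's share value scales with its children
# (roughly 20 VMs x 4 vCPUs x 1000 shares in this example), so the
# lone VM's entitlement stays proportional to its actual size.
scalable = entitlement_fractions({"pool-with-20-vms": 20 * 4 * 1000,
                                  "lone-4vcpu-vm": 4000})
print(round(scalable["lone-4vcpu-vm"], 3))   # about 1/21 of the parent
```

With static shares the lone VM receives half the parent's resources under contention; with Scalable Shares its entitlement drops to roughly its fair per-VM fraction.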
DRS affinity rules can keep two or more virtual machines on the same ESXi host (“VM/VM affinity”) or
make sure they are always on different hosts (“VM/VM anti-affinity”). DRS affinity rules can also be used
to make sure a group of virtual machines runs only on (or has a preference for) a specific group of ESXi
hosts (“VM/Host affinity”) or never runs on (or has a preference against) a specific group of hosts
(“VM/Host anti-affinity”).
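The four rule types above can be summarized as a toy placement filter. The data model, rule encoding, and function below are illustrative assumptions for this sketch only; they do not correspond to the vSphere API or to how DRS is implemented:

```python
# Toy filter showing how each rule type constrains a VM's candidate hosts.
def allowed_hosts(vm, hosts, placements, rules):
    """placements: current vm -> host map; rules: list of rule dicts."""
    candidates = set(hosts)
    for rule in rules:
        if vm not in rule["vms"]:
            continue
        peers = {placements[v] for v in rule["vms"]
                 if v != vm and v in placements}
        if rule["type"] == "vm_vm_affinity":          # keep together
            candidates &= peers or candidates
        elif rule["type"] == "vm_vm_anti_affinity":   # keep apart
            candidates -= peers
        elif rule["type"] == "vm_host_affinity":      # run only on these hosts
            candidates &= set(rule["hosts"])
        elif rule["type"] == "vm_host_anti_affinity": # never run on these hosts
            candidates -= set(rule["hosts"])
    return sorted(candidates)

hosts = ["esx1", "esx2", "esx3"]
placements = {"web1": "esx1"}
rules = [
    {"type": "vm_vm_anti_affinity", "vms": {"web1", "web2"}},
    {"type": "vm_host_affinity", "vms": {"web2"}, "hosts": ["esx1", "esx2"]},
]
print(allowed_hosts("web2", hosts, placements, rules))
# -> ['esx2']
```

Note how each rule only ever removes candidates; this is why, as discussed next, unnecessary affinity rules can limit the choices DRS has available.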
In most cases leaving the affinity settings unchanged will provide the best results. In rare cases, however,
specifying affinity rules can help improve performance. To change affinity settings, from the vSphere
Client select a cluster, click the Configure tab, expand Configuration, click VM/Host Rules, click Add,
enter a name for the new rule, choose a rule type, and proceed through the GUI as appropriate for the rule
type you selected.
Besides the default setting, the affinity setting types are: