If you have been following this blog series, you know that I’m exploring HPE SimpliVity architecture at a detailed level, to help IT administrators understand VM data storage and management in HPE hyperconverged clusters.
Part 1 covered virtual machine storage and management in an HPE SimpliVity Cluster. Part 2 covered the Intelligent Workload Optimizer (IWO) and how it automatically manages VM data in different scenarios. Part 3 followed on, outlining options available for provisioning VMs.
This post covers how the HPE SimpliVity platform automatically manages virtual machines and their associated data containers after initial provisioning, as VMs grow (or shrink) in size.
With regard to the automatic management of Data Containers, part two mainly (but not exclusively) focused on their initial placement. Let’s call this the “day one” provisioning of virtual machines within an HPE SimpliVity cluster.
So how does the HPE SimpliVity platform manage virtual machines and their associated Data Containers after day one operations and as virtual machines grow (or shrink) in size?

For the most part, this is handled by the Auto Balancer service. Auto Balancer is a separate service from IWO and the Resource Balancer service; however, its ultimate goal is the same: to keep resources as balanced as possible within the cluster.
At the risk of repeating myself, think of IWO and the Resource Balancer as responsible for the provisioning of workloads (VM and associated Data Container placement), and think of Auto Balancer as responsible for managing these Data Containers as they evolve in terms of overall node consumption. IWO is aware of any changes Auto Balancer implements and will update DRS affinity rules accordingly.
How does Auto Balancer work?
I previously talked about how Resource Balancer will migrate Data Containers (for VDI workloads) to balance load across nodes. In its current iteration, Auto Balancer takes this one step further and will migrate secondary Data Containers for all other VM types, along with their associated backups, to less utilized nodes should a node be running low on physical capacity. Auto Balancer does not operate until a node reaches 50% capacity utilization. As with IWO and Resource Balancer, Auto Balancer is designed to be zero touch, i.e. the process is handled automatically.
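To make that 50% trigger concrete, here is a conceptual sketch only, using made-up numbers. This is not the actual Auto Balancer implementation, which also weighs factors such as IOPS when deciding where data should live.

```bash
# Conceptual illustration of the 50% capacity trigger described above.
# Values are hypothetical; the real service evaluates far more than this.
NODE_CAPACITY_GB=2000   # hypothetical node capacity
NODE_USED_GB=1100       # hypothetical consumed space on the node

UTIL=$(( NODE_USED_GB * 100 / NODE_CAPACITY_GB ))
if [ "$UTIL" -ge 50 ]; then
    echo "Node at ${UTIL}% capacity: Auto Balancer may migrate secondary Data Containers away"
else
    echo "Node at ${UTIL}% capacity: below the 50% threshold, Auto Balancer takes no action"
fi
```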
Low physical capacity on a node can be the result of the growth of one or more VMs or backups, or the provisioning of additional virtual machines into an HPE SimpliVity cluster.

In the above illustration, VM-3’s on-disk data has grown by an amount that has in turn caused Node 3 to become space constrained. Auto Balancer will take a proactive decision to rebalance data across the cluster in order to achieve the optimum distribution of data in terms of space and IOPS. In this simplified example, Auto Balancer has elected to migrate the secondary copy of VM-2 to Node 1 to keep the overall cluster balanced. Again, this process is invisible to the user.
Let’s take a second example to reinforce what we have learned over the two previous posts with regard to DRS, IWO and the auto balancing of resources.
The above illustration does not scale well when representing multiple nodes and virtual machines. It is easier to represent virtual machines and nodes in table format (below); this format will also prove useful in upcoming posts, where we’ll learn how to view data distribution across nodes for individual VMs and backups, and how to manually balance this data if required.

For the sake of simplicity, the total physical capacity available to each node in the above table is 2TB. The physical OS space consumed after deduplication and compression for each VM is listed. For this example, we will omit backup consumption. Therefore, we know the following:
- Total cluster capacity is 8TB
- Total consumed cluster capacity is 2.1TB ((50 + 300 + 400 + 300) x 2, allowing for HA)
- Node 3 is currently the most utilized node, consuming 1TB of space
- Currently the cluster is relatively balanced with no nodes space constrained
- DRS has chosen not to run any workloads on Node 4
Now let’s imagine that the consumed space of one of the VMs running on Node 3 grows by 200GB, and its CPU and memory consumption also increase.
We now know the following (a quick arithmetic check follows this list):
- Total cluster capacity is 8TB
- Total consumed cluster capacity is 2.5TB ((50 + 300 + 400 + 300 + 200) x 2, allowing for HA)
- Node 3 is currently over utilized, consuming 1.2TB of space.
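Here is the before and after arithmetic in one place (values in GB, doubled to account for the HA copy of each Data Container):

```bash
# Before the growth: (50 + 300 + 400 + 300) GB of consumed data, stored twice for HA
echo $(( (50 + 300 + 400 + 300) * 2 ))        # 2100 GB, roughly 2.1TB of the 8TB cluster

# After 200GB of growth: the same data plus 200GB, again stored twice
echo $(( (50 + 300 + 400 + 300 + 200) * 2 ))  # 2500 GB, roughly 2.5TB
```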
An automatic redistribution of resources could move data such that it matches the table below.
- DRS has chosen to run VM-4 on Node 4 due to constrained CPU and memory on Node 3, thus promoting VM-4’s standby Data Container to primary.
- The Auto Balancer service has migrated the secondary copy of VM-3 to Node 1 to rebalance cluster resources.

It is worth noting that other outcomes are equally valid; for example, VM-4’s secondary Data Container could instead have been migrated to Node 1 (after the DRS move), which would have resulted in roughly the same redistribution of data.
In my previous post, we talked about capacity alarms being generated at 80% space consumption on an individual node. In the above scenario, no alarms would have been generated on any node within the cluster; Auto Balancer redistributes workloads according to its algorithms to try to avoid that exact alarm.
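As a quick sanity check of that claim against the example above (2TB nodes, with Node 3 peaking at 1.2TB):

```bash
echo $(( 2000 * 80 / 100 ))    # 1600 GB: the 80% alarm threshold on a 2TB node
echo $(( 1200 * 100 / 2000 ))  # 60: Node 3 peaked at 60% utilization, well below the alarm
```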
Monitoring Placement Decisions

The Auto Balancer service runs on each node. One node in the cluster is chosen as the leader.
The leader may change over time or as nodes are added/removed from the cluster.
The Auto Balancer leader will submit tasks to other nodes to perform Data Container or Backup migrations.
The “dsv-balance-show --shownodeIP” command shows the current leader.

The log file balancerstatus.txt shows submitted tasks and their status on the leader node; i.e. this file is only present on the leader node. Issue the command “cat /var/svtfs/0/log/balancerstatus.txt” to view the status of any active migrations.

The output of this file shows the migration of a backup from Node 1 to Node 2.
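Putting those two steps together (run from the OmniStack Virtual Controller command line; this is a sketch only, using the commands referenced in this post, and exact output will vary by software version):

```bash
# Identify the current Auto Balancer leader node
dsv-balance-show --shownodeIP

# On the leader node, view submitted balancing tasks and their status
cat /var/svtfs/0/log/balancerstatus.txt
```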
Active and historic migrations can be viewed by issuing the following commands:
“dsv-active-tasks-show | grep migrate” should show active backup or hive migrations
“dsv-tasks-show | grep migrate” shows active and completed migration tasks
Currently, Auto Balancer does not support the migration of remote backups.
Another handy command is ‘dsv-balance-migration-show --showVMName’, which was introduced in version 3.7.7. This is a cluster-wide command, so it can be run from any node in the cluster. It will list the virtual machine being migrated, along with the host it has been migrated from and the host it has been migrated to.
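For convenience, the migration-related commands above can be run as one quick check (again a sketch; the flags are as referenced in this post and may differ between software versions):

```bash
# Active backup or hive (Data Container) migrations
dsv-active-tasks-show | grep migrate

# Active and completed migration tasks
dsv-tasks-show | grep migrate

# From version 3.7.7 onward: cluster-wide view of VM Data Container migrations,
# including the source and destination hosts
dsv-balance-migration-show --showVMName
```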
Closing Thoughts
The Intelligent Workload Optimizer and Auto Balancer are not a magic wand; they balance existing resources, but they cannot create resources in an overloaded datacenter. In some scenarios, manual balancing may be required. The next post will explore some of these scenarios, how to analyze resource utilization, and how to manually balance resources within a cluster if required.
