Delimiting Volumes over 10 Nodes
Delimited volume allocation is a new feature in Windows Server 2019 Storage Spaces Direct (Azure Stack HCI) that can significantly increase the fault tolerance of your cluster. Normally, a Storage Spaces Direct cluster of four or more nodes can survive up to two concurrent failures before the cluster volumes go offline. With the delimited volume allocation feature, it can tolerate up to three concurrent failures under certain conditions. Delimiting is an option for clusters of 6 or more servers running Windows Server 2019 with three-way mirroring; it lets you manually delimit the allocation of volumes in Storage Spaces Direct. Essentially, the cluster administrator directly determines which nodes receive the ‘slabs’ of data for each volume created. Volumes created with parity or mirror-accelerated parity (MAP) are not supported with this feature.
With a normal three-way mirror, the volume is divided into many small ‘slabs’ that are copied and distributed across the cluster. This default allocation maximizes parallel reads and writes, leading to better performance, and is appealing in its simplicity. However, this volume configuration cannot survive three concurrent failures. If three servers fail at once, or if drives in three servers fail at once, the volumes become inaccessible, because at least some slabs were allocated to exactly the three drives or servers that failed.
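To put rough numbers on why delimiting helps, here is a back-of-the-envelope sketch of my own (an illustration, not from the product documentation). It assumes 10 servers and a volume delimited to 4 of them; a three-way-mirrored slab is lost only when all 3 simultaneous failures land inside the volume's 4 servers:

```powershell
# Binomial coefficient C(n, k): number of ways to choose k items from n.
function Get-Binomial([int]$n, [int]$k) {
    $result = 1
    for ($i = 1; $i -le $k; $i++) { $result = $result * ($n - $i + 1) / $i }
    return $result
}

$waysToFail   = Get-Binomial 10 3   # 120 possible triples of failed servers
$fatalTriples = Get-Binomial 4 3    # only 4 triples fall entirely inside the volume's 4 servers
$surviveOdds  = 1 - ($fatalTriples / $waysToFail)   # roughly 0.97
```

With the default allocation across all 10 servers, slabs end up on essentially every 3-server combination, so almost any triple failure takes the volume offline; delimiting shrinks the set of fatal triples dramatically.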
With delimited allocation, you specify the subset of servers to which the volume's data is allocated, with a minimum of 4 servers per volume.
The servers you select as the fault domains are the ones that the data is allocated to for that volume. In the image below you can see Volume1 is allocated to nodes 0, 1, 2, 3, and 4. Volume2 is allocated to nodes 4, 5, 6, 7, and 8, etc.
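Creating such a volume follows the pattern in Microsoft's delimited-allocation documentation: collect the servers as storage fault domains, then pass the chosen subset to New-Volume. A sketch (the volume name and size here are placeholders; $Servers is the same variable used in the commands later in this article):

```powershell
# Get every server (storage scale unit) in the cluster, sorted by name,
# so that $Servers[0] is node 0, $Servers[1] is node 1, and so on.
$Servers = Get-StorageFaultDomain -Type StorageScaleUnit | Sort-Object FriendlyName

# Create a three-way mirror volume delimited to nodes 0-4 only
# (remember the minimum of 4 servers per delimited volume).
New-Volume -FriendlyName "Volume1" -Size 1TB -StorageFaultDomainsToUse $Servers[0,1,2,3,4]
```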
To test delimited allocation, I shut down nodes 2, 7, and 9 (DataON-N2, DataON-N4, and DataON-N2). Normally, at this point, with 3 nodes down, the volumes and cluster would automatically shut down.
However, you can see that all virtual disks are still online despite having 3 nodes down.
The storage pool is still online.
The VMs are still online.
Node management with delimited allocation
One of the actions you can perform with the newly introduced delimited allocation is to change the destination nodes for the data (slabs). In the image below you can see the original Volume1 was allocated to nodes [0-4].
Using the PowerShell command:
Get-VirtualDisk Volume1 | Remove-StorageFaultDomain -StorageFaultDomains $Servers[0,1,2,3]
nodes [0-3] are removed from Volume1’s previously assigned set of [0-4], leaving node 4 as the only one still allocated.
Then I added nodes [5-7] to Volume1’s allocation by running:
Get-VirtualDisk Volume1 | Add-StorageFaultDomain -StorageFaultDomains $Servers[5,6,7]
Re-run Get-VirtualDiskFootprintBySSU to view the new allocation of the volume. You can then see that the footprint of Volume1 has changed nodes.
Adding nodes just for the compute resources
Starting off with a cluster of 8 nodes with the delimited volumes, I made 4 volumes once again.
Then I simulated adding 2 nodes into the cluster with no disks, just CPU and memory, joining them through PowerShell.
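The join itself could have looked something like the following (a sketch; the article doesn't show the exact commands, so the cluster name here is a placeholder and I'm assuming the standard FailoverClusters cmdlet was used):

```powershell
# Join a disk-less node to the existing cluster; -NoStorage keeps any
# eligible disks on the new node out of the storage pool.
# "S2DCluster" is a hypothetical cluster name; repeat for the second node.
Add-ClusterNode -Cluster "S2DCluster" -Name "DataON-N5" -NoStorage
```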
I created a virtual machine to live on DataON-N5 (one of the two nodes I added into the cluster).
And tested whether it could fail over to node DataON-N4 (one of the original nodes with disks).
The virtual machine did a live migration over with no issues and no complications, so I did a live migration back to DataON-N5.
There were no cluster errors being reported, no disk errors, nothing complaining as I had a virtual machine live on a node with no disks.
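The failover test above can also be scripted; a sketch using the FailoverClusters module (the VM role name "TestVM" is a hypothetical placeholder, the node names follow this walkthrough):

```powershell
# Live-migrate the clustered VM from the disk-less node to an original node...
Move-ClusterVirtualMachineRole -Name "TestVM" -Node "DataON-N4" -MigrationType Live

# ...and back again, as in the test above.
Move-ClusterVirtualMachineRole -Name "TestVM" -Node "DataON-N5" -MigrationType Live
```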
With delimited volumes, you can easily manage where your volumes are allocated, and change their allocated locations with simple PowerShell commands. With the right steps, you can take a server down and perform maintenance on it without affecting your cluster or virtual machines, since copies of your volume live on other servers. In the end, delimiting volumes is a very realistic option for those who have 6 or more servers in a cluster. As you choose where your volumes are allocated, you have the peace of mind that your cluster can withstand three concurrent faults: three servers, three disks, or a mix of servers and disks.