Many software cluster solutions require shared storage between all cluster nodes. A prominent example is Oracle RAC (Real Application Clusters). In enterprise environments shared cluster storage is often implemented using SAN storage that is connected to multiple systems.
For (private) test scenarios a SAN storage system might exceed the budget. Fortunately there is a way to provide virtualized shared storage. If you’re using VMware ESXi the keyword for this is “multi-writer“. ESXi uses its own file system called VMFS for local and iSCSI storage – this file system automatically creates locks to make sure that particular files cannot be accessed by multiple virtual machines at the same time (unless you’re using Fault Tolerance). By disabling this behavior it is possible to access virtual hard drives (.vmdk files) in parallel from up to 8 virtual machines.
The advantages and disadvantages of this configuration are described in great detail in a VMware knowledge base article – I’d like to thank my colleague Johannes, who recommended this solution. 🙂
In any case, using a cluster lock manager (e.g. dlm) across the participating virtual machines is mandatory – otherwise the parallel access will cause data inconsistencies. Implementing the storage between two or more machines is quite easy:
- Adding a “Thick provisioned – Eager zeroed” virtual hard drive to the first virtual machine. The hard drive needs to reside on iSCSI or – if all the virtual machines are running on the same host – local SCSI/SAS storage; NFS is not supported! You may want to use the same SCSI ID on all virtual machines.
- Shutting down all affected virtual machines.
- Modifying the .vmx configuration file of the first virtual machine by adding the following parameter: scsiX:Y.sharing = “multi-writer”
The SCSI ID needs to match the previously created cluster hard drive. To customize the configuration file you might want to use the vSphere Client or vi over SSH – I repeatedly failed to set the parameter using the Web Client.
- Adding the pre-existing hard drive to the other virtual machines by specifying the path to the previously created drive.
- Modifying the .vmx configuration files of the other virtual machines analogously to step 3.
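Putting the steps together, the relevant part of a .vmx file might look like the fragment below (the controller/disk ID scsi1:1 and the file name are assumptions – adjust them to your own setup):

```
# assumed example: second SCSI controller, first disk
scsi1:1.present = "TRUE"
scsi1:1.fileName = "cluster-disk.vmdk"
scsi1:1.sharing = "multi-writer"
```

The same scsiX:Y.sharing line has to be present in the .vmx file of every virtual machine that accesses the shared drive.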
Afterwards the virtual machines can be started. After this customization the following ESXi functions and features are no longer supported:
- snapshots and online expansion of the shared hard drive
- hibernating the virtual machine
- Storage vMotion
- Changed Block Tracking (important for “agentless” backup solutions)
- vSphere Flash Read Cache
Afterwards the cluster can be configured to use the new hard drive. If you don’t have a cluster yet, you can also temporarily create and mount a conventional file system on the individual nodes:
nodeA # mkfs.ext4 /dev/sdb
nodeA # mkdir /cluster ; mount /dev/sdb /cluster
nodeA # echo "mimimi" > /cluster/test
nodeA # umount /cluster
nodeB # mkdir /cluster ; mount /dev/sdb /cluster
nodeB # cat /cluster/test
mimimi
nodeB # umount /cluster
...
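The example above only works because the nodes mount the file system one after another. To illustrate why coordinated locking matters for truly parallel access, here is a toy single-host sketch (not ESXi-specific – flock(1) merely stands in for a real cluster lock manager such as dlm, and the /tmp paths are made up): two background jobs increment a shared counter 100 times each, and the read-modify-write cycle is only safe because it runs under the lock.

```shell
#!/bin/sh
# Two concurrent writers updating the same file, serialized via flock(1).
echo 0 > /tmp/counter

increment() {
    for i in $(seq 100); do
        # Read, increment and write back – safe only inside the lock.
        flock /tmp/counter.lock sh -c \
            'n=$(cat /tmp/counter); echo $((n + 1)) > /tmp/counter'
    done
}

increment &
increment &
wait
cat /tmp/counter    # 200 – remove the flock call and updates get lost
```

On a real shared VMDK flock(1) would not help, since each guest kernel only sees its own locks – that is exactly the job of a cluster-wide lock manager.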
I’d like to repeat that parallel access by multiple virtual machines to the same hard drive without a cluster lock manager can very easily cause data inconsistencies. With this in mind – happy vClustering! 🙂