CloudStack vSphere integration has not kept up with the evolution of vSphere itself, and several functions can be performed natively by vSphere much more efficiently than by CloudStack. vSphere also has additional features which would be beneficial to the operators of vSphere based CloudStack clouds.
This feature introduces support in CloudStack for VMFS6, vSAN, vVols and datastore clusters. vSphere storage policies are also tied to compute and disk offerings to improve the linking of offerings with storage, and CloudStack now allows inter-cluster VM and volume migrations, meaning that running VMs can migrate along with all of their volumes across clusters. Furthermore, storage operations (create and attach volume; create snapshot / template from volume) are improved in CloudStack by using vSphere's native APIs.
Storage types and management concepts
CloudStack supports NFS and VMFS5 storage for primary storage, but vSphere has supported other storage technologies for some time now (VMFS6, vSAN, vVols and datastore clusters). vSphere also has ‘vStorage API for Array Integration’ (VAAI), which enables vSphere integration with other vendors’ storage arrays on different storage technologies. Each storage technology is designed to serve a slightly different use case, but ultimately, they are all designed to improve the flexibility, efficiency, speed, and availability of storage to vSphere hosts. In addition to the storage types, there are storage management concepts in vSphere such as vSphere Storage Policies, which are not available in CloudStack.
Let us briefly go through these new technologies and concepts that are supported in CloudStack and vSphere.
VMFS6 is the latest VMware File System version (introduced with vSphere 6.5), and a few enhancements were introduced over VMFS5. The major differences are:
- SESparse disks which provide improved space efficiency are now default disk types in VMFS6
- Automatic space reclamation allows vSphere to reclaim dead or stranded space on thinly provisioned VMFS volumes in storage arrays
vSAN was introduced with vSphere 5.5 and is a software-defined, enterprise storage solution that supports hyper-converged infrastructure (HCI) systems. vSAN is fully integrated with VMware vSphere, as a distributed layer of software within the ESXi hypervisor.
Virtual Volumes (vVols), introduced with vSphere 6.5, is an integration and management framework for external storage providers, enabling a more efficient operational model optimized for virtualized environments and centred on the application instead of the infrastructure. vVols shares a common storage operational model with vSAN: both solutions use storage policy-based management (SPBM) to eliminate manual storage provisioning and apply descriptive policies at the VM or VMDK level.
A datastore cluster is a collection of datastores with shared resources and a shared management interface. After a datastore cluster is created, vSphere Storage DRS can be used to manage storage resources. When a datastore is added to a datastore cluster, the datastore’s resources become part of the datastore cluster’s resources.
Storage policies have become vSphere’s preferred method of determining the best placement of a disk image when differing ‘qualities’ of storage are available; they are simply a set of filters. For instance, a storage policy may require that the underlying disk be encrypted; when a user specifies that storage policy, they are only returned a list of datastores with encrypted disks on which to place the VM’s disk. Storage policies are effectively a prerequisite for the use of vSAN and vVols.
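Conceptually, a storage policy can be thought of as a predicate applied to each candidate datastore. The following is an illustrative sketch only (not CloudStack or vSphere code), with hypothetical datastore attributes:

```python
# Illustrative sketch: a storage policy acts as a set of filters over the
# available datastores. The attribute names below are hypothetical.
def matching_datastores(datastores, policy):
    """Return only the datastores that satisfy every rule in the policy."""
    return [ds for ds in datastores
            if all(rule(ds) for rule in policy)]

# Hypothetical policy: the underlying disks must be encrypted and all-flash.
encrypted_all_flash = [
    lambda ds: ds.get("encrypted", False),
    lambda ds: ds.get("media") == "flash",
]

datastores = [
    {"name": "vsanDatastore", "encrypted": True,  "media": "flash"},
    {"name": "nfs-bronze",    "encrypted": False, "media": "hdd"},
]

names = [ds["name"] for ds in matching_datastores(datastores, encrypted_all_flash)]
print(names)  # ['vsanDatastore']
```

Only datastores passing every rule are offered to the user for placement, which is exactly the filtering behaviour described above.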
GUI or API support
CloudStack has new APIs, modifications to existing APIs, and UI support so that vSphere’s advanced capabilities can be used. To support the different storage types for primary storage, the UI has also changed.
Storage types
Previously, the only options for storage protocol type in CloudStack (while adding primary storage) were NFS, VMFS or custom. A new generic type called “presetup” has been added (for VMware) to cover the storage types VMFS5, VMFS6, vSAN and vVols. When a presetup datastore is added to CloudStack, the management server automatically identifies the storage pool type and saves it to the database.
To add a datastore cluster (which must have already been created on vCenter) as a primary storage, there is another new storage protocol type called “Datastore Clusters”.
To add one of the new primary storage types:
1. Navigate to the Infrastructure tab -> Primary Storage -> Click “Add Primary Storage”
2. Under “Protocol” the following options are available:
   - PreSetup
   - Datastore Cluster
3. When “PreSetup” is selected as the storage protocol type, specify the vCenter server, datacenter and datastore details.
- New APIs are introduced to import and list already imported storage policies from vCenter.
- Storage policies are imported automatically from vCenter when a VMware zone is added in CloudStack.
- Storage policies are re-imported and synchronized with the CloudStack database whenever the “updateVmwareDc” API or the “importVsphereStoragePolicies” API is called. During re-import, any new storage policies added in vCenter are imported into CloudStack, and any storage policy deleted in vCenter is marked as removed in the CloudStack database.
- Another new API, “listVsphereStoragePolicyCompatiblePools”, is added to list the compatible storage pools for an imported storage policy.
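The re-import reconciliation can be sketched as a simple set difference between the policies known to CloudStack and those currently defined in vCenter. This is a simplified illustration, not the actual CloudStack implementation:

```python
# Sketch of the re-import reconciliation: policies new at vCenter are added
# to CloudStack, and policies gone from vCenter are marked as removed.
# (Simplified illustration; not CloudStack code.)
def sync_policies(cloudstack_policies, vcenter_policies):
    """Both arguments are dicts mapping policy id -> policy name."""
    to_add = {pid: name for pid, name in vcenter_policies.items()
              if pid not in cloudstack_policies}
    to_mark_removed = {pid: name for pid, name in cloudstack_policies.items()
                       if pid not in vcenter_policies}
    return to_add, to_mark_removed

cs = {"p1": "Gold", "p2": "Silver"}   # hypothetical current CloudStack state
vc = {"p1": "Gold", "p3": "Bronze"}   # hypothetical current vCenter state
added, removed = sync_policies(cs, vc)
print(added)    # {'p3': 'Bronze'}
print(removed)  # {'p2': 'Silver'}
```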
| API | Parameters |
| --- | --- |
| importVsphereStoragePolicies | zoneid: ID of the zone whose storage policies are to be imported from the corresponding vSphere datacenter |
| listVsphereStoragePolicies | zoneid: ID of the zone whose storage policies are to be listed |
| listVsphereStoragePolicyCompatiblePools | zoneid: ID of the zone in which to list compatible storage pools; policyid: UUID of the storage policy |
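As with any CloudStack API, these commands can be invoked over the HTTP API with a signed request. The sketch below follows CloudStack's documented signing scheme (sorted parameters, lower-cased query string, HMAC-SHA1, base64); the endpoint, credentials and zone id are placeholders:

```python
import base64
import hashlib
import hmac
import urllib.parse

def sign_request(params, secret_key):
    # Sort parameters, URL-encode the values, lower-case the whole string,
    # then HMAC-SHA1 it with the account's secret key (CloudStack's scheme).
    query = "&".join(
        f"{k}={urllib.parse.quote(str(v), safe='')}"
        for k, v in sorted(params.items())
    )
    digest = hmac.new(secret_key.encode(), query.lower().encode(),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

params = {
    "command": "importVsphereStoragePolicies",
    "zoneid": "zone-uuid-here",   # placeholder zone id
    "apikey": "my-api-key",       # placeholder credentials
    "response": "json",
}
params["signature"] = sign_request(dict(params), "my-secret-key")
url = "http://mgmt-server:8080/client/api?" + urllib.parse.urlencode(params)
```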
Existing APIs “createDiskOffering” and “createServiceOffering” are modified to bind a vSphere storage policy to the offerings using a new parameter “storagepolicy”, which takes the policy UUID as input. In the GUI, while creating a service or disk offering, after selecting a specific VMware zone, the storage policies already imported in that zone are listed.
- When VMs are deployed in VMware hosts, a primary storage pool is selected which is compliant with the storage policy defined in the offerings. For data disks, the storage policy defined in the disk offering will be used, and for root disks, the storage policy defined in the service offering will be used.
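The placement rule above can be sketched as follows: the policy checked against candidate pools comes from the service offering for root volumes and from the disk offering for data volumes. Names here are illustrative, not CloudStack's internal ones:

```python
# Sketch of which offering supplies the storage policy for a volume
# (illustrative helper names, not CloudStack code).
def policy_for_volume(volume_type, service_offering, disk_offering):
    if volume_type == "ROOT":
        return service_offering.get("storagepolicy")
    return disk_offering.get("storagepolicy")

svc = {"storagepolicy": "gold-policy-uuid"}     # hypothetical service offering
dsk = {"storagepolicy": "bronze-policy-uuid"}   # hypothetical disk offering

print(policy_for_volume("ROOT", svc, dsk))      # gold-policy-uuid
print(policy_for_volume("DATADISK", svc, dsk))  # bronze-policy-uuid
```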
As mentioned above, CloudStack now supports adding the new storage types and datastore clusters under one protocol category called “presetup”. Following are the steps that management server takes while adding a primary storage for the various storage protocols:
- NFS: the management server mounts the NFS storage to the ESXi hosts by sending a create NAS datastore API call to vCenter.
- PreSetup:
  - The management server assumes that the provided datastore is already created on vCenter.
  - The management server checks access to the datastore using the name and vCenter details provided.
  - Once a datastore with the provided name is found, the management server fetches the datastore type and records the protocol type in the “storage_pool_details” table in the database.
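The type-detection step might look like the following sketch, mapping the datastore information reported by vCenter to one of the supported pool types. The field names and mapping are illustrative, not CloudStack's actual implementation:

```python
# Sketch of the type detection step when a presetup datastore is added:
# the datastore object returned by vCenter reports its filesystem type,
# which is recorded as the pool's protocol. Mapping is illustrative.
def detect_pool_type(datastore_info):
    fs_type = datastore_info["type"]           # as reported by vCenter
    version = datastore_info.get("version", "")
    if fs_type == "VMFS":
        return "VMFS6" if version.startswith("6") else "VMFS5"
    if fs_type == "vsan":
        return "vSAN"
    if fs_type == "VVOL":
        return "vVols"
    return "presetup"   # fall back to the generic type

print(detect_pool_type({"type": "VMFS", "version": "6.82"}))  # VMFS6
print(detect_pool_type({"type": "vsan"}))                     # vSAN
```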
Since datastore cluster on vCenter is a collection of datastores, let us call the actual datastore cluster the parent datastore and the datastores inside the cluster as child datastores. CloudStack handles a datastore cluster by adding it as a single primary storage. The pools inside the cluster are hidden and won’t be available individually for any operation.
There were some implementation challenges in adding it directly as a primary storage: on vCenter, a datastore cluster looks similar to the other datastore types.
In the underlying vSphere implementation, the type of all datastores other than a datastore cluster is “Datastore”, whereas the type of a datastore cluster is “StoragePod”. vSphere native APIs related to storage operations are applicable only to the type “Datastore” and not to “StoragePod”. Due to this, the existing design of adding a datastore as a primary storage in CloudStack did not work for datastore clusters. The challenge, then, was how CloudStack could abstract the datastore cluster as a single primary storage entity. This is achieved as follows:
- When a datastore cluster is added as a primary storage in CloudStack, its child datastores are auto-imported as primary storages in CloudStack; e.g. when datastore cluster DS1 with 2 child datastores is added, the management server creates 3 primary storages (1 parent datastore and 2 child datastores) and records each child datastore’s parent in the database.
- A new column “parent” is introduced in “storage_pool” table in database.
- “parent” column of child datastores is pointed to the parent datastore.
- Only the parent datastore is made visible to admins; child datastores are hidden, making the datastore cluster act like a black box.
- Whenever a storage operation is performed on a datastore cluster, management server chooses one of the child datastores for that operation.
- Any operation on a datastore cluster in fact performs that operation on all its child datastores. For example, if a datastore cluster is put in maintenance mode then all the child datastores will be put in maintenance mode; upon any failure, the children are reverted to their original state and the original operation fails with an error.
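The "apply to all children, revert on failure" behaviour described above can be sketched like this (an illustrative outline, not CloudStack code; the callbacks stand in for the real maintenance-mode operations):

```python
# Sketch of datastore-cluster maintenance mode: every child datastore is
# switched; if any child fails, the ones already switched are reverted
# and the original operation is failed. (Illustrative, not CloudStack code.)
def put_cluster_in_maintenance(children, enable_maintenance, disable_maintenance):
    done = []
    for child in children:
        try:
            enable_maintenance(child)
            done.append(child)
        except Exception as err:
            # Revert the children already switched, then surface the failure.
            for ok in done:
                disable_maintenance(ok)
            raise RuntimeError(f"maintenance failed on {child}") from err

# Hypothetical usage with stub callbacks:
switched = []
put_cluster_in_maintenance(["child-1", "child-2"], switched.append, switched.remove)
print(switched)  # ['child-1', 'child-2']
```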
Following are the APIs where datastore cluster implementation is involved (storageid is passed as a parameter):
- updateConfiguration – propagates the value of the global setting applied to the datastore cluster to all its child datastores
- listSystemVms – lists all system VMs located in all child datastores
- prepareTemplate – prepares templates in one of the available child datastores
- listVirtualMachines – lists all virtual machines located in all child datastores
- migrateVirtualMachine – migrates a VM to one of the available child datastores
- migrateVolume – migrates a volume to one of the available child datastores
- If any datastore needs to be added to or removed from a datastore cluster that is already added as a primary storage, the primary storage must be removed from CloudStack and re-added after the required modifications are made on the storage.
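For the operations above that place something on "one of the available child datastores" (e.g. prepareTemplate, migrateVolume), one plausible selection heuristic is to pick the usable child with the most free capacity. The criteria below are illustrative; the actual heuristic is internal to CloudStack:

```python
# Sketch of choosing a child datastore for a storage operation on a
# datastore cluster: skip children in maintenance, prefer the most free
# space. (Illustrative selection criteria, not CloudStack's actual logic.)
def pick_child_datastore(children):
    candidates = [c for c in children if not c["maintenance"]]
    if not candidates:
        raise RuntimeError("no usable child datastore in the cluster")
    return max(candidates, key=lambda c: c["free_bytes"])

children = [
    {"name": "ds-child-1", "free_bytes": 50,  "maintenance": False},
    {"name": "ds-child-2", "free_bytes": 200, "maintenance": False},
    {"name": "ds-child-3", "free_bytes": 900, "maintenance": True},
]
print(pick_child_datastore(children)["name"])  # ds-child-2
```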
On vCenter, storage policies act like a filter and control which type of storage is provided for the virtual machine, and how the virtual machine is placed within storage. So the best fit for storage policies in CloudStack is in disk offering and compute offering, since these offerings are also used to find the suitable storage and resources during virtual machine deployment.
An admin can select an imported storage policy while creating a disk or service offering. Based on the storage policy, the corresponding disk is placed in the relevant storage pool which is in compliance with the storage policy, and the VM and disk are configured to enforce the required level of service based on the policy.
- If a compute offering is created with “VVol No Requirement Policy” (the default storage policy for vVols), CloudStack tries to keep the root disk of the virtual machine in vVols primary storage, and the VM is also configured with that policy. Upon any other storage operation (i.e. volume migration), this storage policy is taken into consideration for the best placement of the VM and root disk.
- If a disk offering is created with any storage policy, the same applies to the data disk.
vSphere related changes
“fcd” named folder in the root directory of storage pool
- Previously any data disk was placed in the root folder of the primary storage pool. This is possible for NFS or VMFS5 storage types, but there is a limitation with the vSAN storage type as it does not support storage of user files directly in the root of the directory structure. Therefore, a separate folder is now created on all primary storage pools with the name “fcd”.
- Since the storage operations are made independent of storage type, the “fcd” folder is created on all storage types.
- The folder name is “fcd” because when the vSphere API is used to create first class disk, vCenter automatically creates a folder called ‘fcd’ (unless it already exists) and creates disk in that folder.
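The resulting datastore path for a data disk therefore points under the “fcd” folder rather than the datastore root. A minimal sketch, using vSphere's “[datastore] folder/file.vmdk” path convention (the names are examples):

```python
# Sketch of the datastore path used for a data disk once the "fcd" folder
# is in place. Datastore and volume names below are examples.
def volume_datastore_path(datastore_name, volume_name):
    return f"[{datastore_name}] fcd/{volume_name}.vmdk"

print(volume_datastore_path("vsanDatastore", "DATA-42"))
# [vsanDatastore] fcd/DATA-42.vmdk
```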
vVols template or VM creation with UUID as name
- When deploying a VM from an OVF template on vCenter, a UUID cannot be used as the name of the VM or template. CloudStack seeds templates from secondary to primary storage using the template UUID, and uses a newly generated UUID when creating a worker VM. So for vVols datastore VM or template creation operations, a prefix “cloud.uuid-” is prepended to the UUID whenever it is used.
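The naming workaround can be sketched as a pair of helpers that apply and strip the prefix (helper names are illustrative, not CloudStack's):

```python
# Sketch of the vVols naming workaround: vCenter rejects a bare UUID as a
# VM/template name on vVols, so a "cloud.uuid-" prefix is applied.
# (Illustrative helper names, not CloudStack code.)
CLOUD_UUID_PREFIX = "cloud.uuid-"

def vvols_safe_name(uuid_str):
    return CLOUD_UUID_PREFIX + uuid_str

def original_uuid(name):
    # Recover the UUID from a prefixed name.
    if name.startswith(CLOUD_UUID_PREFIX):
        return name[len(CLOUD_UUID_PREFIX):]
    return name

u = "4f1c9b2e-7d35-4a6a-9a8f-0d2f3a1b5c77"   # example UUID
print(vvols_safe_name(u))   # cloud.uuid-4f1c9b2e-7d35-4a6a-9a8f-0d2f3a1b5c77
print(original_uuid(vvols_safe_name(u)) == u)  # True
```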
vVols disk movement
vVols does not allow the disk to move from where it is created or from where it is placed using vSphere native APIs. If a disk is moved from its intended location then the pointer to the underlying vVols storage is lost and disk will be inaccessible. Therefore, following are the changes made with respect to vVols storage pool to avoid disk movements:
- The VM is cloned from the template on the vVols datastore directly with CloudStack’s VM internal name (e.g. i-2-43-VM). Previously, CloudStack cloned the VM from the template with the root disk name (e.g. ROOT-43) and then moved the volumes from the root-disk-named folder to the VM internal name folder.
- When a volume is first created and placed in a folder on the storage, it will not be moved from that folder whether it is attached to a VM or detached from a VM.
As of LTS version 4.15, CloudStack supports vSAN, vVols, VMFS5, VMFS6, NFS, datastore clusters and storage policies, and operates more like native vSphere in order to manage them better.