
When a network is created in CloudStack, it is by default not provisioned until the first VM is deployed on that network, at which point a VLAN ID is assigned. Until then, the network exists only as a database entry. If you want to create and provision a network without deploying any VMs, you need to create a persistent network. With persistent networks, you can deploy physical devices like routers, switches, etc. without having to deploy VMs on the network, as the network is provisioned at the time of its creation. More information about persistent networks in CloudStack can be found here.

Until now, persistent networks have only been available for isolated networks. This feature introduces the ability to create persistent L2 networks, as well as enhancing the way persistence currently works on isolated networks:

  • For isolated networks, a VR is deployed immediately on creation of the persistent network and the network transitions to ‘implemented’ state irrespective of whether a VLAN ID is specified.
  • For L2 networks, the network resources (bridges or port-groups depending on the hypervisor) and VLANs get created across all hosts of a zone and the network transitions to ‘implemented’ state
  • Persistent networks will not be garbage collected, i.e., the network will not be shut down if there are no active VMs running on it
  • When the last VM on the network is stopped / destroyed / migrated or unplugged, the network resources will not be deleted
  • Network resources will not be created on hosts that are disabled / in maintenance mode, or on those that have been added after creation of the persistent network. If the network needs to be set up on such hosts once they become available, a VM will need to be deployed on them through CloudStack. Deploying a VM on a specific host will provision the required network resources on that host.
  • For isolated networks, a VLAN ID can also be specified for VPC networks.

To create a persistent network, we need to first create and enable a network offering that has the ‘Persistent’ flag set to true:

L2 Persistent Networks and enhancement of Isolated Persistent Networks
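
For reference, an equivalent cloudmonkey (cmk) call might look like the following – a minimal sketch with hypothetical offering names; the ‘Persistent’ flag corresponds to the ispersistent parameter of createNetworkOffering:

create networkoffering name=L2-Persistent displaytext="Persistent L2 offering" guestiptype=L2 traffictype=Guest specifyvlan=true ispersistent=true
update networkoffering id=<UUID_OF_OFFERING> state=Enabled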

 

We then can go ahead and create a persistent network using the previously created network offering:

Persistent Mode in L2 Networks
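
Via cloudmonkey, the equivalent call could look like this (a sketch with placeholder values, using the offering created above):

create network name=l2-persistent-net displaytext="Persistent L2 network" zoneid=<UUID_OF_ZONE> networkofferingid=<UUID_OF_PERSISTENT_OFFERING>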

 

Once the network is created, it will transition to the ‘Implemented’ state, indicating that the network resources have been created on every host across the zone. This can be confirmed by checking the host configuration manually or by creating and starting a VM on the network.

Persistent Mode in L2 Networks

 

This feature will be available as part of the Q3/4 2021 LTS release of CloudStack.

Migration of virtual machines or clusters is essential for cloud operators, allowing them to perform maintenance with little or no downtime, or balance compute and storage resources when necessary. CloudStack supports both live and cold migration (if supported by the hypervisor), and most hypervisors allow VM and volume migration in some form or another.

VMware vMotion provides both live and cold migration of VMs and volumes. By leveraging vMotion with the migrateVirtualMachine, migrateSystemVm and migrateVolume APIs, migration of user and system VMs and their volume(s) can be performed easily in CloudStack.

However, until now CloudStack had the following limitations for VM and volume migration:

  • Migration would fail when attempted between clusters that do not share storage – a typical setup for clusters with cluster-wide storage pools in CloudStack.
  • When migrating stopped user VMs with multiple volumes from the UI, CloudStack would migrate all volumes to the same destination pool. This could result in some volumes getting migrated to incompatible storage pools, with a storage tag mismatch for the volume’s disk offering.
  • Only running system VMs could be migrated, and only between hosts of the same cluster.

This feature adds several improvements to CloudStack for migrating VMs and volumes between CloudStack clusters with VMware:

  • Cross-cluster migration
    • To assist migration, the findHostsForMigration API has been improved to return hosts from different pods for user VMs and hosts from different clusters for system VMs in supported cases.
    • UVMs can now be migrated between hosts belonging to clusters of the same or different pods.
    • Volumes of user VMs can be migrated between storage pools of clusters belonging to different pods.
    • System VMs can now be migrated between hosts belonging to clusters of the same pod.

Note: Migrating system VMs between hosts in different pods cannot be supported, as system VMs acquire IP addresses from the IP range of the pod. Changing the public IP of the system VM would result in reconfiguring the VM, and in the case of virtual routers it would also require reconfiguring various networking rules inside the VM, which can be risky and can cause significant downtime.
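
The improved findHostsForMigration API mentioned above can be used to check candidate destination hosts before attempting a migration; a cloudmonkey sketch:

find hostsformigration virtualmachineid=<UUID_OF_VM>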

  • Support for migration without shared storage – Improvements have been made in CloudStack’s migration framework to leverage VMware’s vMotion capabilities and allow migration without shared storage. This allows VMs running with volumes on storage pools of one cluster to be migrated to storage pools of a different cluster. Similarly, detached volumes can now be migrated between storage pools of different clusters with the migrateVolume API or using the ‘Migrate Volume’ action in the UI, without going over Secondary Storage as before (an example call is shown below).
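
For instance, a cloudmonkey call migrating a detached volume to a storage pool in another cluster might look like this (a sketch; the UUIDs are placeholders):

migrate volume volumeid=<UUID_OF_VOLUME> storageid=<UUID_OF_DESTINATION_STORAGE_POOL>
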
  • Support for stopped user VM migration in the migrateVirtualMachineWithVolume API – Stopped user VMs with multiple volumes can be migrated to different storage pools based on disk offering compatibility. An operator can choose to provide a volume-to-pool mapping for all volumes of the VM or just the ROOT volume, in which case CloudStack will automatically map the remaining volumes to compatible storage pools in the same destination cluster (see the example below).
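
As a sketch, a cloudmonkey call providing an explicit volume-to-pool mapping for a stopped VM could look like the following (the migrateto map keys follow the migrateVirtualMachineWithVolume API; UUIDs are placeholders):

migrate virtualmachinewithvolume virtualmachineid=<UUID_OF_VM> migrateto[0].volume=<UUID_OF_ROOT_VOLUME> migrateto[0].pool=<UUID_OF_DESTINATION_STORAGE_POOL>
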
  • UI improvements for user VM migration
    • The Migration form in the CloudStack UI has been updated to provide details of cluster and pod of the destination hosts:

VMware Migration Improvements

    • The ‘Migrate to another primary storage’ action in the UI will now utilize the migrateVirtualMachineWithVolume API to migrate stopped VMs. This allows migrating different volumes to compatible storage pool(s) rather than all to the same storage pool.
  • Support for stopped system VM migration in the migrateSystemVm API – To enable migration of a stopped system VM, a new parameter, storageid, has been added to the migrateSystemVm API. Since CloudStack does not allow the listing of volumes for system VMs, the operator may have to refer to the CloudStack database to find the current storage pool of a system VM. A cloudmonkey API call for migrating a stopped system VM to a different storage pool will look like this:

migrate systemvm virtualmachineid=<UUID_OF_SYSTEM_VM> storageid=<UUID_OF_DESTINATION_STORAGE_POOL> 

Migration of running system VMs will work the same as before. The hostid parameter of the migrateSystemVm API can be used to specify the destination host, while CloudStack will work out a suitable destination storage pool when migrating the VM’s ROOT volume. A cloudmonkey API call for migrating a running system VM to a different host will look like this:

migrate systemvm virtualmachineid=<UUID_OF_SYSTEM_VM> hostid=<UUID_OF_DESTINATION_HOST>

  • UI changes for system VM migration – New actions have been added in the different system VM views (SSVM, CPVM, VR and load balancer) to allow migration of stopped VMs:

VMware Migration Improvements 2

New UI form for migrating system VM to another primary storage:

VMware Migration Improvements 3

For running system VMs, the UI will now show a form similar to the user VM migration form, showing details of the destination hosts:

VMware Migration Improvements 4

To allow inter-cluster VM and volume migrations, the VMware vSphere setup / configuration prerequisites for vMotion and storage vMotion must be in place. Also, migration of a VM with a higher hardware version to an older ESXi host (that doesn’t support that VM hardware version) will fail (a native VMware limitation). For example, a VM running on an ESXi 6.7 host with VM hardware version 14 will fail to migrate to an ESXi 6.5 host.

These changes will be part of the next CloudStack LTS release which is scheduled for Q3/4 2021.

If you are a system engineer managing shared networks and deploying virtual machines with CloudStack, you will be aware that until now there has been no option to assign a specific IP address to the Virtual Router: the router is simply assigned the first free IP address. For many engineers this is frustrating, as you cannot make the selection yourself and would prefer to keep the IP inventory under control by choosing the address to be assigned.

In this article, we present a new feature in CloudStack which makes the management of shared networks easier. The new capability will be available in the Q3 2021 LTS release of CloudStack and will enable users to specify the VR IP in shared networks.

A shared network is a network that can be accessed by virtual machines (VMs) belonging to many different accounts, and can only be created by administrators. Currently, during the creation of a shared network, the network’s DHCP service provides the range of IP addresses (IPv4 / v6), gateway, and netmask. When the first VM is deployed in this network, the Virtual Router (VR) created for the shared network is assigned the first free IP address, and this IP is persistent for the lifetime of the network.

This feature makes it possible to specify an IP address for the VR.

To make this possible, the createNetwork API has been extended to take routerIP and routerIPv6 as optional inputs:

  • routerip: (string) IPv4 address to be assigned to a router
  • routeripv6: (string) IPv6 address to be assigned to a router

If the router IP is not explicitly provided, then the VR is assigned the first free IP available in the network range, as usual. Specifying an IP address also ensures that the VR keeps this IP address across lifecycle operations performed after network creation (such as restarting the network with cleanup).

The following is checked when the VR’s IP is passed. If any of these checks fail then it will not be possible to specify an IP for the router:

  • IP address is valid
  • IP address is within the network range
  • The network offering provides at least one service that requires a VR

Creation of shared network specifying a VR IP via API can be done as follows:

$ create network name=SharedNet displaytext="Shared Network" vlan=99 gateway=99.99.99.1 netmask=255.255.255.0 startip=99.99.99.50 endip=99.99.99.80 routerip=99.99.99.75 zoneid=<zone_id> networkofferingid=<network offering providing at least one service requiring a VR>

UI Support for the VR IP fields:

Specify VR IP in Shared Networks

This feature will be available in the Q3 2021 LTS release of CloudStack.

For primary storage, CloudStack supports many managed storage solutions via storage plugins, such as SolidFire, Ceph, Datera, CloudByte and Nexenta. There are other managed storages which CloudStack does not support, one of which is Dell EMC PowerFlex (formerly known as VxFlexOS or ScaleIO). PowerFlex is a distributed shared block storage, like Ceph / RBD storage.

This feature provides a new storage plugin that enables the use of a Dell EMC PowerFlex v3.5 storage pool as a managed Primary Storage for KVM hypervisor, either as a zone-wide or cluster-wide pool. This pool can be added either from the UI or API.

Adding a PowerFlex storage pool

To add a pool via the CloudStack UI, Navigate to “Infrastructure -> Primary Storage -> Add Primary Storage” and specify the following:

  • Scope: Zone-Wide (For Cluster-Wide – Specify Pod & Cluster)
  • Hypervisor: KVM
  • Zone: Select the zone in which to add the storage pool
  • Name: Specify custom name for the storage pool
  • Provider: PowerFlex
  • Gateway: Specify PowerFlex gateway
  • Gateway Username: Specify PowerFlex gateway username
  • Gateway Password: Specify PowerFlex gateway password
  • Storage Pool: Specify PowerFlex storage pool name
  • Storage Tags: Add a storage tag for the pool, to use in the compute/disk offering

To add from the API, use the createStoragePool API and specify the storage pool name, scope, zone (cluster & pod for cluster-wide), hypervisor as KVM, and provider as PowerFlex, with the URL in the pre-defined format below:

PowerFlex storage pool URL format:

powerflex://<API_USER>:<API_PASSWORD>@<GATEWAY>/<STORAGEPOOL>

where,

<API_USER> : user name for API access

<API_PASSWORD> : url-encoded password for API access

<GATEWAY> : gateway host

<STORAGEPOOL> : storage pool name (case sensitive)

For example, the following cmk command would add a PowerFlex storage pool as a zone-wide primary storage with the storage tag ‘powerflex’:

create storagepool name=mypowerflexpool scope=zone hypervisor=KVM provider=PowerFlex tags=powerflex url=powerflex://admin:P%40ssword123@10.2.3.137/cspool zoneid=ceee0b39-3984-4108-bd07-3ccffac961a9

Service and Disk Offerings for PowerFlex

You can create service and disk offerings for a PowerFlex pool in the usual way from both the UI and API using a unique storage tag. Use these offerings to deploy VMs and create data disks on PowerFlex pool.

If QoS parameters (bandwidth limit and IOPS limit) need to be specified for a service or disk offering, the details parameter keys bandwidthLimitInMbps & iopsLimit need to be passed to the API. For example, the following API commands (using cmk) create a service offering and a disk offering with the storage tag ‘powerflex’ and QoS parameters:

create serviceoffering name=pflex_instance displaytext=pflex_instance storagetype=shared provisioningtype=thin cpunumber=1 cpuspeed=1000 memory=1024 tags=powerflex serviceofferingdetails[0].bandwidthLimitInMbps=90 serviceofferingdetails[0].iopsLimit=9000

create diskoffering name=pflex_disk displaytext=pflex_disk storagetype=shared provisioningtype=thick disksize=3 tags=powerflex details[0].bandwidthLimitInMbps=70 details[0].iopsLimit=7000

When explicit QoS parameters are not passed, they are defaulted to 0 which means unlimited.

VM and Volume operations

The lifecycle operations of CloudStack resources (templates, volumes, and snapshots) in a PowerFlex storage pool can be managed through the new plugin. The following operations are supported for a PowerFlex pool:

  • VM lifecycle and operations:
    • Deploy system VMs from the systemvm template
    • Deploy user VM(s) using the selected template in QCOW2 & RAW formats, and from an ISO image
    • Start, Stop, Restart, Reinstall, Destroy VM(s)
    • VM snapshot (disk-only, snapshot with memory is not supported)
    • Migrate VM from one host to another (within and across clusters, for zone-wide primary storage)
  • Volume lifecycle and operations:
    • Create ROOT disks using the selected template (in QCOW2 & RAW formats, seeding from NFS secondary storage and direct download templates)
    • List, Detach, Resize ROOT volumes
    • Create, List, Attach, Detach, Resize, Delete DATA volumes
    • Create, List, Revert, Delete snapshots of volumes (with backup in Primary, no backup to secondary storage)
    • Create template (on secondary storage in QCOW2 format) from PowerFlex volume or snapshot
    • Support PowerFlex volume QoS using details parameter keys: bandwidthLimitInMbps, iopsLimit in service/disk offering. These are the SDC (ScaleIO Data Client) limits for the volume.
    • Migrate volume (usually Volume Tree or V-Tree) from one storage-pool to another (limited to storage pools within the same PowerFlex cluster)
    • Config drive on scratch / cache space on KVM host, using the path specified in the agent.properties file on the KVM host.

Note: PowerFlex volumes are in RAW format. The disk size is rounded to the nearest 8 GB, as PowerFlex uses an 8 GB disk boundary.

New Settings

Some new settings are introduced for effective management of the operations on a PowerFlex storage pool:

Configuration Description Default
storage.pool.disk.wait New primary storage level configuration to set the custom wait time for PowerFlex disk availability in the host (currently supports PowerFlex only). 60 secs
storage.pool.client.timeout New primary storage level configuration to set the PowerFlex REST API client connection timeout (currently supports PowerFlex only). 60 secs
custom.cs.identifier New global configuration, which holds 4 characters (initially randomly generated). This parameter can be updated to provide a unique CloudStack installation identifier, which helps in tracking the volumes of a specific CloudStack installation when the same PowerFlex storage pool is shared between installations. There is no restriction on min/max characters, but the maximum length is subject to volume naming restrictions in PowerFlex. random 4 character string

In addition, the following are added / updated to facilitate config drive caching on the host, and router health checks, when the volumes of underlying VMs and Routers are on the PowerFlex pool.

Configuration Description Default
vm.configdrive.primarypool.enabled The scope changed from Global to Zone level, which helps in enabling this per zone. false
vm.configdrive.use.host.cache.on.unsupported.pool New zone level configuration to use host cache for config drives when storage pool doesn’t support config drive. true
vm.configdrive.force.host.cache.use New zone level configuration to force host cache for config drives. false
router.health.checks.failures.to.recreate.vr New test “filesystem.writable.test” added, which checks whether the router filesystem is writable. If set to “filesystem.writable.test”, the router is recreated when the disk is read-only. <empty>

Agent Parameters

The agent on the KVM host uses a cache location for storing the config drives. It also uses some commands of the PowerFlex client (SDC) to sync the volumes mapped. The following parameters are introduced in the agent.properties file to specify custom cache path and SDC installation path (if other than the default path):

Parameter Description Default
host.cache.location new parameter to specify the host cache path. Config drives will be created on the “/config” directory on the host cache. /var/cache/cloud
powerflex.sdc.home.dir new parameter to specify SDC home path if installed in custom directory, required to rescan and query_vols in the SDC. /opt/emc/scaleio/sdc
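
For example, the relevant entries in agent.properties on a KVM host might look like this (illustrative values matching the defaults above):

host.cache.location=/var/cache/cloud
powerflex.sdc.home.dir=/opt/emc/scaleio/sdc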

Implementation details

This new storage plugin is implemented using the storage subsystem framework in the CloudStack architecture. A new storage provider “PowerFlex” is introduced with the associated subsystem classes (Driver, Lifecycle, Adaptor, Pool), which are responsible for handling all the operations supported for Dell EMC PowerFlex / ScaleIO storage pools.

A ScaleIO gateway client is added to communicate with the PowerFlex / ScaleIO gateway server using RESTful APIs for various operations and to query the pool stats. It facilitates the following functionality:

  • Secure authentication with provided URL and credentials
  • List all storage pools, find storage pool by ID / name
  • List all SDCs and find a SDC by IP address
  • Map / Unmap volume(s) to SDC (a KVM host)
  • Other volume lifecycle operations supported in ScaleIO

All storage related operations (eg. attach volume, detach volume, copy volume, delete volume, etc) are handled by various Command handlers and the KVM storage processor as orchestrated by the KVM server resource class (LibvirtComputingResource).

The cache storage directory path on the KVM host is picked from the parameter “host.cache.location” in agent.properties file. This path will be used to host config drive ISOs.

Naming conventions used for PowerFlex volumes

The following naming conventions are used for CloudStack resources in a PowerFlex storage pool, which avoids naming conflicts when the same PowerFlex pool is shared across multiple CloudStack zones / installations:

  • Volume: vol-[vol-id]-[pool-key]-[custom.cs.identifier]
  • Template: tmpl-[tmpl-id]-[pool-key]-[custom.cs.identifier]
  • Snapshot: snap-[snap-id]-[pool-key]-[custom.cs.identifier]
  • VMSnapshot: vmsnap-[vmsnap-id]-[vol-id]-[pool-key]-[custom.cs.identifier]

Where…

[pool-key] = 4 characters picked from the pool UUID. Example UUID: fd5227cb-5538-4fef-8427-4aa97786ccbc => fd52(27cb)-5538-4fef-8427-4aa97786ccbc – the 4 characters in parentheses (27cb) are picked. The pool can be tracked via the UUID containing [pool-key].

[custom.cs.identifier] = value of the global configuration “custom.cs.identifier”, which holds 4 characters randomly generated initially. This parameter can be updated to provide a unique CloudStack installation identifier, which helps in tracking the volumes of a specific CloudStack installation.
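
For example, assuming (hypothetically) [pool-key] = 27cb and custom.cs.identifier = ab12, a volume with internal id 105 would be named:

vol-105-27cb-ab12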

PowerFlex Capacity in CloudStack

The PowerFlex capacity considered in CloudStack for various capacity-related checks matches the capacity stats marked with the red box in the image below.

Other Considerations

  • CloudStack will not manage the creation of storage pool/domains etc. in ScaleIO. This must be done by the Admin prior to creating a storage pool in CloudStack. Similarly, deletion of ScaleIO storage pool in CloudStack will not cause actual deletion or removal of storage pool on ScaleIO side.
  • The ScaleIO SDC must be installed on the KVM host(s), with the service running and connected to the ScaleIO Metadata Manager (MDM).

This feature will be included in the Q3 2021 LTS release of Apache CloudStack.

More about Dell EMC PowerFlex

The CloudStack Kubernetes Service (CKS) uses CoreOS templates to deploy Kubernetes clusters. However, as CoreOS reached EOL on May 26th, 2020, we needed to find a suitable replacement meeting the requirements of resilience, security, and popularity in the community. Keeping these requirements in mind, we have chosen to modify the existing Debian-based SystemVM template so it can also be used by CKS instead of CoreOS.

Before coming to this decision, we considered other operating systems, such as FlatCar Linux, Alpine Linux and Debian, and based our decision on the following parameters:

FlatCar Linux
  • Brief description: A drop-in replacement for CoreOS
  • Size: ~ 500MB – 600MB
  • Security: Quite secure, as it mitigates security vulnerabilities by delivering the OS as an immutable filesystem
  • Release management: Frequent releases – almost bi-weekly or monthly
  • Maintenance: Maintained by Kinvolk – a Berlin-based consulting firm known for their work around rkt, Kubernetes, etc.
  • Main reason for choosing / not choosing: NOT CHOSEN – a small community, not a popular choice, and with a chance of meeting the same fate as CoreOS, i.e. EOL

Alpine Linux
  • Brief description: A Linux distribution based on musl and BusyBox, designed for security, simplicity, and resource efficiency
  • Size: Small image of approx. 5MB – because of its small size, it is commonly used in containers, providing quick boot-up times
  • Security: All userland binaries are compiled as Position Independent Executables (PIE) with stack smashing protection. These proactive security features prevent exploitation of entire classes of zero-day and other vulnerabilities.
  • Release management: Several releases are available at the same time; there is no fixed release cycle, but typically a release every 6 months
  • Maintenance: Backed by a fairly large community, with mailing lists, etc. to find support
  • Main reason for choosing / not choosing: NOT CHOSEN – the init system used by Alpine Linux is OpenRC, and until recently Kubernetes did not support OpenRC systems (https://github.com/kubernetes/kubeadm/issues/1295)

Debian
  • Brief description: One of the oldest operating systems based on the Linux kernel. New distributions are updated regularly, and the next candidate is released after a time-based freeze.
  • Size: ~ 500MB – 600MB
  • Security: On a par with most other Linux distributions
  • Release management: New stable releases are announced on a regular basis, with 3 years of full support for each release and 2 years of extra LTS support
  • Maintenance: Unparalleled support – they claim to provide answers to queries on mailing lists within minutes!
  • Main reason for choosing / not choosing: CHOSEN – huge community support, and most importantly, we can modify the existing SystemVM templates!

Using the modified SystemVM template also simplifies the use of CKS. When using CoreOS to deploy Kubernetes clusters in CKS, we needed to first register the CoreOS template and ensure that the template name coincided with the name set against the global settings shown below. However, with the new Debian-based SystemVM templates, this is no longer necessary, and these global settings are not required.

To ensure the new SystemVM template supports deployment of Kubernetes clusters, we have included the docker, containerd and cloud-init packages, which will only be enabled on SystemVMs acting as CKS nodes (as these packages are only used by CKS nodes). These services are disabled on all other SystemVM types.

So that we do not increase the overall size of the SystemVM templates, we have included support for growing / resizing the root disk partition during boot-up to a predefined / provided disk size. For CKS nodes, the minimum root disk size will be 8GB and can be increased by setting the node root disk size while creating the Kubernetes cluster (see the example below). For other SystemVMs, the root disk size can be configured via the ‘systemvm.root.disk.size’ global setting.
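
As a sketch, a cloudmonkey call creating a Kubernetes cluster with a larger node root disk might look like the following (this assumes the noderootdisksize parameter of the createKubernetesCluster API; all values are placeholders):

create kubernetescluster name=demo-k8s description="demo cluster" zoneid=<UUID_OF_ZONE> kubernetesversionid=<UUID_OF_K8S_VERSION> serviceofferingid=<UUID_OF_SERVICE_OFFERING> size=3 noderootdisksize=16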

In summary, from Apache CloudStack 4.16 LTS onwards, CKS will use the modified (Debian) SystemVM templates for deployment of Kubernetes clusters.

Since the addition of CloudStack Kubernetes Service, users can deploy and manage Kubernetes clusters in CloudStack. This not only makes CloudStack a more versatile and multifaceted application, but also reduces the gap between virtualization and containerization. As with any step in the right direction, it came with a few challenges, and one of them was manual scaling of the cluster.

Automating this process by monitoring cluster metrics may address this issue, but Kubernetes strongly advises against this. Instead, it is recommended that Kubernetes itself make these scaling decisions, and specifically for this purpose Kubernetes has the ‘Cluster Autoscaler’ feature – a standalone program that adjusts the size of a Kubernetes cluster to meet current needs. It runs as a deployment (`cluster-autoscaler`) in the cluster.

The cluster autoscaler has built-in support for several cloud providers (such as AWS, GCE and, recently, Apache CloudStack) and provides an interface for it to communicate with CloudStack. This allows it to dynamically scale the cluster based on capacity requirements: it adds a node if there are pods that failed to schedule on any of the current nodes due to insufficient resources, and removes a node if it is not needed due to low utilization.

To enable communication between the cluster autoscaler and CloudStack, a separate service user, kubeadmin, is created in the same account as the cluster owner. The autoscaler uses this user’s API keys to get the details of the cluster as well as to dynamically scale it. It is imperative that this user is not altered and that its keys are not regenerated.

To enable users to utilize this new feature, the existing scaleKubernetesCluster API has been enhanced to support autoscaling by adding the autoscalingenabled, minsize and maxsize parameters. To enable autoscaling, simply call the scaleKubernetesCluster API along with the desired minimum and maximum size to which the cluster should be scaled, e.g.:

scaleKubernetesCluster id=<cluster-id> autoscalingenabled=true minsize=<minimum size of the cluster> maxsize=<maximum size of the cluster>

Autoscaling on the cluster can be disabled by passing `autoscalingenabled=false`. This will delete the deployment and leave the cluster at its current size, e.g.:

            scaleKubernetesCluster id=<cluster-id> autoscalingenabled=false

Autoscaling can also be enabled on a cluster via the UI:

Cluster autoscaling on CloudStack Kubernetes clusters is supported from Kubernetes version 1.16.0 onward. The cluster-autoscaler configuration can be changed and manually deployed for supported Kubernetes versions. The guide to manually deploying the cluster autoscaler can be found here, and an in-depth explanation on how the cluster-autoscaler works can be found on the official Kubernetes cluster autoscaler repository.

This feature will be available in the Q1/2 2021 LTS release of CloudStack.

Projects have proven to be a boon in organizing and grouping accounts and resources together, giving users in the same domain the ability to collaborate and share resources such as VMs, snapshots, volumes and IP addresses. However, there is a limitation. Only accounts can be added as members to projects, which can be an issue if we only want to add a single user of an account to a project. To address this, we’ve enhanced the way project membership is handled to facilitate addition of individual users.

Adding users to projects and assigning project-level roles

In order to restrict users in projects to a limited set of operations (adding further restrictions to those already defined by their account-level roles) we’ve brought in the concept of Project Roles.
Project Roles are characterized by name and Project ID, and a project can have many project roles. Project Roles are then associated with Project Role Permissions which determine what operations users / accounts associated with a specific role can perform. It is crucial to understand that project-level permissions will not override those set at Account level.

Creation of Project Roles via the API:
$ create projectrole name=<projectRoleName> projectid=<project_uuid> description=<optional description>

Creation and association of a project role permission with a project role via the API:
$ create projectrolepermission projectid=<project_uuid> projectroleid=<project_role_id> permission=<allow/deny> rule=<API name/ wildcard> description=<optional description>

One can also create project roles and project role permissions from the UI:

1. Navigate to the specific project and enter its Details View

2. Go to the Project Roles Sub-tab and click on the Create Project Role button. Fill in the required details in the pop-up form and click OK:

3. To associate project role permissions to the created role, click on the + button on the left of the project role name and hit the ‘Save new Rule’ button:

The re-order button to the left of the rule name will invoke the ‘updateProjectRolePermission’ API as follows:

$ update projectrolepermission projectid=<project_uuid> projectroleid=<project_role_uuid> ruleorder=<list of project rule permission uuids that need to be moved to the top>

Other parameters that the updateProjectRolePermission API can take are:

4. One can also update the permission, namely Allow / Deny associated with the rule, by selecting the option from the drop-down list:

This invokes the ‘updateProjectRolePermission’ API, but passes the permission parameter instead of rule order, as follows:

$ update projectrolepermission projectid=<project_uuid> projectroleid=<project_role_uuid> projectrolepermissionid=<uuid of project role permission> permission=<allow/deny>

Now that we’ve seen how we create / modify the project roles and permissions, let’s understand how we associate them with users / accounts for them to take effect. When adding / inviting users or accounts to projects, we can now specify the project role:

 

The API call corresponding to this operation is ‘addUserToProject’ or ‘addAccountToProject’ and can be invoked as follows:

$ add userToProject username=<name of the user> projectid=<project_uuid> projectroleid=<project_role_uuid>

Project Admins

Regular users or accounts in a project can perform all management and provisioning tasks. A project admin can perform these tasks as well as administrative operations in a project such as create / update / suspend / activate project and add / remove / modify accounts. With this feature, we can have multiple users or accounts as project admins, providing more flexibility than before.

1. Creation of Projects with a user as the default project admin
The ‘createProject’ API has been extended to take user ID as an input along with account ID and domain ID:

$ create project name=<project name> displaytext=<project description> userid=<uuid of the user to be added as admin> accountid=<uuid of the account to which the user belongs to> domainid=<uuid of the domain in which the user exists>

 

2. Multiple Project Admins
The default ‘swap owner’ behaviour (where only a single project admin is allowed) has been changed to allow multiple project admins and to promote / demote users to project admins / regular users respectively. Use the ‘Promote’ (up arrow) or ‘Demote’ (down arrow) buttons to change the role of a user in a Project:

Please note:

1. The permissions of Admins, Domain Admins or Project Admins will never be affected by a change of project role
2. One cannot demote / delete the project admin if there is only one

If a role type isn’t specified while adding / inviting users to projects, then by default they become regular members. However, we can override this behaviour by passing the ‘roletype’ parameter to the ‘addAccountToProject’ or ‘addUserToProject’ APIs.

Upgrading CloudStack from any lower version to 4.15 will not affect existing projects and their members. Furthermore, in case we still want the swap owner feature, we have the ‘swapowner’ parameter as part of the ‘updateProject’ API (which by default is set to true for backward compatibility with the legacy UI). This parameter should be set to false if we want to promote or demote a particular member of the project.

In conclusion, this feature enhances the way Projects behave such that everything that happened at the Account level is now made possible at user level too. This feature will be available as part of CloudStack 4.15 LTS.

CloudStack has more than 600 APIs which can be allowed / disallowed in different combinations to create dynamic roles for the users. The aim of this feature is more effective use and management of these dynamic roles, allowing CloudStack users and operators to:

  1. Import and export roles (rule definitions) for the purpose of sharing.
  2. Create a new role from an existing role (clone and rename) to create a slightly different role.
  3. Use additional built-in roles (to quickly create read-only and support users and operators), such as:
    • Read-Only Admin role: an admin role in which an account is only allowed to perform any list / get / find APIs but not perform any other operation or changes to the infrastructure, configuration or user resources.
    • Read-Only User role: a user role in which an account is only allowed to perform list / get / find APIs – for example, for users who are only interested in monitoring and usage.
    • Admin-Support role: an admin role in which an admin account is limited to performing day-to-day tasks, such as creating offerings, but cannot change physical networks or add / remove hosts (but can put them in maintenance).
    • User-Support role: a user role in which an account cannot create or destroy resources (any create*, delete* etc. APIs are disallowed) but can view resources and perform operations such as start / stop VMs and attach / detach volumes, ISOs etc.

The existing role types (Admin, Domain Admin, Resource Admin, User) remain unchanged. This feature deals purely with the Dynamic Roles which filter the APIs which a user is allowed to call. The default roles and their permissions cannot be updated or deleted.

Cloning a role

An existing role can be used to create a new role, which will inherit the existing role’s type and permissions. A new parameter, roleid, has been introduced in the existing createRole API, which takes the existing role’s ID as input to clone from. The new role can later be modified to create a slightly different role.
Example API call:
http://<ManagementServerIP>:8080/client/api?command=createRole&name=TestCloneUser&description=Test%20CloneUser01&roleid=ca9871c2-8ea7-11ea-944e-c2865825b006

The Add Role dialog screen shown below is used to create a new role by selecting an existing role:

Import role and export rule definitions

A role can be imported with its rule definitions (rule, permission, description) using a new API: importRole with the following parameters:

  • name (Type: String, Mandatory) – role name
  • type (Type: String, Mandatory) – role type, any of the four role types: Admin, Resource Admin, Domain Admin, User
  • description (Type: String, Optional) – brief description of the role
  • rules (Type: Map, Mandatory) – rules set in the sort order, with key parameters: rule, permission and description
  • force (Type: Boolean, Optional) – whether to override any existing role (with the same name and type) or not, "true" / "false". Default is false

Example API call:
http://<ManagementServerIP>:8080/client/api?command=importRole&name=TestRole&type=User&description=Test%20Role&rules[0].rule=create*&rules[0].permission=allow&rules[0].description=create%20rule&rules[1].rule=list*&rules[1].permission=allow&rules[1].description=listing&force=true

The import role option in the Roles section of the UI opens up the Import Role dialog screen below. Here you can specify the rules with a CSV file with rule, permission and description in the header row, followed by the rule values in each row:

The imported rule definitions are added to the rule set of the role. If a role already exists with the same name and role type, then the import will fail with a ‘role already exists’ message, unless it is forced to override the role by enabling the force option in the UI or setting the “force” parameter to true in the importRole API.

The ‘Export rules’ operation for a role is available in the UI only, in the rule details view as shown below. This operation fetches the rules for the selected role and exports them to a CSV file. The exported rule definitions file can thereafter be used to import a role.

The rule definitions import / export file (CSV) contains details of role permissions. Each permission is defined in a row with comma-separated rule, permission (allow/deny) and description values. The row sequence of these permission details is considered to be the sort order, and the default export file name is “<RoleName>_<RoleType>.csv”.
Example CSV format:

rule,permission,description
<Rule1>,<Permission1>,<Description1>
<Rule2>,<Permission2>,<Description2>
<Rule3>,<Permission3>,<Description3>

…and so on, where:

  • Rule – Specifies the rule (API name or wildcard rule, in valid format)
  • Permission – Whether to “allow” or “deny”
  • Description – Brief description of the role permission (can be empty)

Example file (.csv), for TestUser with User role, TestUser_User.csv contains:

rule,permission,description
listVirtualMachines,allow,listing VMs
listVolumes,allow,volumes list
register*,deny,
attachVolume,allow,
detach*,allow,
createNetworkACLList,deny,not allow acl
delete*,allow,delete permit

TestUser_User.csv shown in a spreadsheet (for clarity):

New built-in roles

New read-only and support roles (with pre-defined sets of permissions) for user and operator, namely Read-Only Admin, Read-Only User, Support Admin and Support User have been added to quickly create read only & support users and admins.

CloudStack doesn’t allow any modifications to built-in roles (new & existing), i.e. these default roles and their permissions cannot be updated, deleted or overridden. The image below shows new and existing built-in roles:

The following permissions are applicable for these roles:

  • Read-Only Admin: an admin role in which an account is only allowed to perform the list APIs and read-only get and quota APIs.
  • Read-Only User: a user role in which an account is only allowed to perform the list APIs and read-only get and quota APIs that have user-level access.
  • Support Admin: an admin role in which an account is only allowed to create offerings, perform host / storage maintenance, start / stop VMs and Kubernetes clusters, and attach / detach volumes and ISOs.
  • Support User: a user role in which an account is only allowed to start / stop VMs and Kubernetes clusters and attach / detach volumes and ISOs.

Any of these roles can be selected to create an account:

This feature will be included in Apache CloudStack 4.15, which is an LTS release.

CloudStack supports sharing templates and ISOs between accounts and projects through the ‘updateTemplatePermissions’ API, and sharing templates through the UI. However, prior to version 4.15, it was not possible to share ISOs from the UI. This feature introduces support for sharing ISOs with different accounts and / or projects via the UI.

With this feature, a user or administrator is able to update the permissions for an ISO via the API and UI, being able to:

  • Share the ISO with another account
  • Share the ISO with another project
  • Revoke the access to the ISO for an account
  • Revoke the access to the ISO for a project
  • Reset the ISO permissions to the default
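
The same operations can also be driven from the API; a cloudmonkey sketch using the ISO-specific updateIsoPermissions call (the op parameter takes add / remove / reset):

update isopermissions id=<UUID_OF_ISO> op=add accounts=<account_name>
update isopermissions id=<UUID_OF_ISO> op=add projectids=<UUID_OF_PROJECT>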

A new button is added in the ISO details view: ‘Update ISO permissions’. This button is located at the top right corner of the ISO detail view:

Once clicked, the user is prompted with a dialogue like the one below:

The user or administrator must complete three fields:

  • Operation: Must be one of the following values: ‘Add’, ‘Remove’, ‘Reset’
  • Share With: It is not displayed if the Operation field is set to ‘Reset’. If the Operation field is set to ‘Add’ or ‘Remove’ then the possible values are: ‘Account’, ‘Project’
  • [Account/Project]: It is not displayed if the Operation field is set to ‘Reset’. The field label depends on the selected ‘Share With’ field type. In this field, the user or administrator must provide an account or project name to be added to or removed from the permitted list of the ISO.
    • When ‘allow.user.view.all.domain.accounts’ = true: the dialog box displays a list of accounts within the same domain, otherwise the user must specify a comma-separated list of account names instead of selecting the account names from a list.

The ISOs shared with a user are displayed under the ‘Shared’ section of the ISOs list view:

This feature will be available from CloudStack 4.15.

As of 2021, CentOS 7 will be receiving maintenance updates only, and it reaches end of life in 2024. Considering this, it is important that CloudStack supports CentOS 8 as a KVM hypervisor host and as a host for the management and usage servers. This support has been developed and will be included as of CloudStack 4.15.

CentOS 8 uses a more recent QEMU version, defaults to Python 3, and deprecates several networking tools (such as bridge-utils); therefore, a number of changes have been made:

  • Python scripts related to setting up of management and usage servers and KVM agent have been migrated from Python 2 to Python 3.
  • Python 2 dependencies from cloudstack packages (cloudstack-common, cloudstack-management, cloudstack-usage and cloudstack-agent) have been removed.
  • Support for MySQL 8 (as CentOS 8 installs this by default).

With this feature, changes have also been made to the snapshot related codebase on KVM to support the newer version of the qemu-img utility. This should prevent issues with snapshot management on an OS with newer QEMU version.
KVM hosts and management / usage servers on CentOS 7 will continue to work as before, and only the new Python 3 dependencies (python3, python3-pip and python3-setuptools) will be installed during upgrade.