
For primary storage, CloudStack supports many managed storage solutions via storage plugins, such as SolidFire, Ceph, Datera, CloudByte and Nexenta. There are other managed storage solutions which CloudStack does not support, one of which is Dell EMC PowerFlex (formerly known as VxFlexOS or ScaleIO). PowerFlex is distributed, shared block storage, similar to Ceph RBD.

This feature provides a new storage plugin that enables the use of a Dell EMC PowerFlex v3.5 storage pool as a managed Primary Storage for KVM hypervisor, either as a zone-wide or cluster-wide pool. This pool can be added either from the UI or API.

 

Adding a PowerFlex storage pool

To add a pool via the CloudStack UI, navigate to “Infrastructure -> Primary Storage -> Add Primary Storage” and specify the following:

  • Scope: Zone-Wide (For Cluster-Wide – Specify Pod & Cluster)
  • Hypervisor: KVM
  • Zone: Select the zone where the pool is to be added
  • Name: Specify a custom name for the storage pool
  • Provider: PowerFlex
  • Gateway: Specify PowerFlex gateway
  • Gateway Username: Specify PowerFlex gateway username
  • Gateway Password: Specify PowerFlex gateway password
  • Storage Pool: Specify PowerFlex storage pool name
  • Storage Tags: Add a storage tag for the pool, to use in the compute/disk offering

To add from the API, use the createStoragePool API and specify the storage pool name, scope, zone (cluster & pod for cluster-wide), hypervisor as KVM, provider as PowerFlex, with the URL in the pre-defined format below:

PowerFlex storage pool URL format:

powerflex://<API_USER>:<API_PASSWORD>@<GATEWAY>/<STORAGEPOOL>

where,

<API_USER> : user name for API access

<API_PASSWORD> : url-encoded password for API access

<GATEWAY> : gateway host

<STORAGEPOOL> : storage pool name (case sensitive)

For example, the following cmk command would add a PowerFlex storage pool as zone-wide primary storage with the storage tag ‘powerflex’:

create storagepool name=mypowerflexpool scope=zone hypervisor=KVM provider=PowerFlex tags=powerflex url=powerflex://admin:P%40ssword123@10.2.3.137/cspool zoneid=ceee0b39-3984-4108-bd07-3ccffac961a9

Service and Disk Offerings for PowerFlex

You can create service and disk offerings for a PowerFlex pool in the usual way from both the UI and API, using a unique storage tag. Use these offerings to deploy VMs and create data disks on the PowerFlex pool.

If QoS parameters (bandwidth limit and IOPS limit) need to be specified for a service or disk offering, the details parameter keys bandwidthLimitInMbps & iopsLimit need to be passed to the API. For example, the following API commands (using cmk) create a service offering and a disk offering with storage tag ‘powerflex’ and QoS parameters:

create serviceoffering name=pflex_instance displaytext=pflex_instance storagetype=shared provisioningtype=thin cpunumber=1 cpuspeed=1000 memory=1024 tags=powerflex serviceofferingdetails[0].bandwidthLimitInMbps=90 serviceofferingdetails[0].iopsLimit=9000

create diskoffering name=pflex_disk displaytext=pflex_disk storagetype=shared provisioningtype=thick disksize=3 tags=powerflex details[0].bandwidthLimitInMbps=70 details[0].iopsLimit=7000

When explicit QoS parameters are not passed, they default to 0, which means unlimited.

VM and Volume operations

The lifecycle operations of CloudStack resources (templates, volumes, and snapshots) in a PowerFlex storage pool can be managed through the new plugin. The following operations are supported for a PowerFlex pool:

  • VM lifecycle and operations:
    • Deploy system VMs from the systemvm template
    • Deploy user VM(s) using the selected template in QCOW2 & RAW formats, and from an ISO image
    • Start, Stop, Restart, Reinstall, Destroy VM(s)
    • VM snapshot (disk-only, snapshot with memory is not supported)
    • Migrate VM from one host to another (within and across clusters, for zone-wide primary storage)
  • Volume lifecycle and operations:
  • Note: PowerFlex volumes are in RAW format. The disk size is rounded to the nearest 8 GB, as PowerFlex uses an 8 GB disk boundary.
    • Create ROOT disks using the selected template (in QCOW2 & RAW formats, seeding from NFS secondary storage and direct download templates)
    • List, Detach, Resize ROOT volumes
    • Create, List, Attach, Detach, Resize, Delete DATA volumes
    • Create, List, Revert, Delete snapshots of volumes (with backup in Primary, no backup to secondary storage)
    • Create template (on secondary storage in QCOW2 format) from PowerFlex volume or snapshot
    • Support PowerFlex volume QoS using details parameter keys: bandwidthLimitInMbps, iopsLimit in service/disk offering. These are the SDC (ScaleIO Data Client) limits for the volume.
    • Migrate volume from one PowerFlex storage pool to another (see the example after this list). Supports both encrypted and non-encrypted volumes attached to either stopped or running VMs.
        • Supports migration within the same PowerFlex storage cluster, along with snapshots, using the native V-Tree (Volume Tree) migration operation.
        • Supports migration across different PowerFlex storage clusters, without snapshots, through block copy (using qemu-img convert) of the disks, by mapping the source and target disks (with RAW format) in the same host, which acts as an SDC.
    • Config drive on scratch/cache space on KVM host, using the path specified in the agent.properties file on the KVM host.
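For example, a volume can be moved between PowerFlex pools with the migrateVolume API via cloudmonkey (cmk). The call below is a sketch with placeholder UUIDs; the API also accepts a livemigrate flag for volumes attached to running VMs:

migrate volume volumeid=<volume_uuid> storageid=<destination_powerflex_pool_uuid>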

New Settings

Some new settings are introduced for effective management of the operations on a PowerFlex storage pool:

  • storage.pool.disk.wait – New primary storage level configuration to set a custom wait time (in seconds) for PowerFlex disk availability on the host (currently supports PowerFlex only). Default: 60 secs
  • storage.pool.client.timeout – New primary storage level configuration to set the PowerFlex REST API client connection timeout (currently supports PowerFlex only). Default: 60 secs
  • storage.pool.client.max.connections – New primary storage level configuration to set the PowerFlex REST API client max connections (currently supports PowerFlex only). Default: 100
  • custom.cs.identifier – New global configuration, which holds 4 characters (initially randomly generated). It can be updated to provide a unique CloudStack installation identifier, which helps in tracking the volumes of a specific CloudStack installation when the PowerFlex storage pool is shared across installations. There is no restriction on min/max characters, but the maximum length is subject to volume naming restrictions in PowerFlex. Default: random 4-character string
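Since the first three settings are scoped to a primary storage pool, they can be tuned per pool. A sketch using cmk, assuming the storage-scoped form of the updateConfiguration API and a placeholder pool UUID:

update configuration name=storage.pool.client.timeout value=120 storageid=<powerflex_pool_uuid>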

 

In addition, the following settings have been added or updated to facilitate config drive caching on the host, and router health checks, when the volumes of the underlying VMs and routers are on a PowerFlex pool.

  • vm.configdrive.primarypool.enabled – Scope changed from Global to Zone level, which allows enabling this per zone. Default: false
  • vm.configdrive.use.host.cache.on.unsupported.pool – New zone level configuration to use the host cache for config drives when the storage pool doesn’t support config drives. Default: true
  • vm.configdrive.force.host.cache.use – New zone level configuration to force use of the host cache for config drives. Default: false
  • router.health.checks.failures.to.recreate.vr – New test “filesystem.writable.test” added, which checks whether the router filesystem is writable. If set to “filesystem.writable.test”, the router is recreated when its disk becomes read-only. Default: <empty>
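Zone-scoped settings such as vm.configdrive.primarypool.enabled can likewise be updated per zone. A sketch using cmk, with a placeholder zone UUID:

update configuration name=vm.configdrive.primarypool.enabled value=true zoneid=<zone_uuid>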

Agent Parameters

The agent on the KVM host uses a cache location for storing the config drives. It also uses some commands of the PowerFlex client (SDC) to sync the volumes mapped. The following parameters are introduced in the agent.properties file to specify custom cache path and SDC installation path (if other than the default path):

  • host.cache.location – New parameter to specify the host cache path. Config drives will be created in the “/config” directory under the host cache. Default: /var/cache/cloud
  • powerflex.sdc.home.dir – New parameter to specify the SDC home path if installed in a custom directory; required to rescan and query_vols in the SDC. Default: /opt/emc/scaleio/sdc
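For example, to move the cache to a larger filesystem and point the agent at a custom SDC installation, agent.properties on the KVM host could contain the following (both paths are purely illustrative); the cloudstack-agent service must be restarted for the changes to take effect:

host.cache.location=/mnt/cs-host-cache
powerflex.sdc.home.dir=/usr/local/emc/scaleio/sdc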

Implementation details

This new storage plugin is implemented using the storage subsystem framework in the CloudStack architecture. A new storage provider “PowerFlex” is introduced with the associated subsystem classes (Driver, Lifecycle, Adaptor, Pool), which are responsible for handling all the operations supported for Dell EMC PowerFlex / ScaleIO storage pools.

A ScaleIO gateway client is added to communicate with the PowerFlex / ScaleIO gateway server using RESTful APIs for various operations and to query the pool stats. It facilitates the following functionality:

  • Secure authentication with provided URL and credentials
  • Auto renewal of the client (after session expiry and on ‘401 Unauthorized’ response) without management server restart
  • List all storage pools, find storage pool by ID / name
  • List all SDCs and find an SDC by IP address
  • Map / Unmap volume(s) to SDC (a KVM host)
  • Other volume lifecycle operations supported in ScaleIO
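For illustration, the gateway workflow the client follows can be sketched with curl against the documented ScaleIO REST endpoints (the gateway address is reused from the earlier example; the endpoint paths are our assumption of the standard ScaleIO API):

curl -k -u admin:P@ssword123 https://10.2.3.137/api/login
# returns a session token, used as the password for subsequent calls
curl -k -u admin:<token> https://10.2.3.137/api/types/StoragePool/instances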

All storage related operations (e.g. attach volume, detach volume, copy volume, delete volume, migrate volume, etc.) are handled by various command handlers and the KVM storage processor, as orchestrated by the KVM server resource class (LibvirtComputingResource).

For volume migration, PowerFlex storage pools suitable for migration are marked as such (when the underlying storage allows it), and these pools are listed when searching for suitable PowerFlex pools to migrate a volume to.


The cache storage directory path on the KVM host is picked from the parameter “host.cache.location” in agent.properties file. This path will be used to host config drive ISOs.

Naming conventions used for PowerFlex volumes

The following naming conventions are used for CloudStack resources in a PowerFlex storage pool, which avoids naming conflicts when the same PowerFlex pool is shared across multiple CloudStack zones / installations:

  • Volume: vol-[vol-id]-[pool-key]-[custom.cs.identifier]
  • Template: tmpl-[tmpl-id]-[pool-key]-[custom.cs.identifier]
  • Snapshot: snap-[snap-id]-[pool-key]-[custom.cs.identifier]
  • VMSnapshot: vmsnap-[vmsnap-id]-[vol-id]-[pool-key]-[custom.cs.identifier]

Where…

[pool-key] = 4 characters picked from the pool UUID. Example UUID: fd5227cb-5538-4fef-8427-4aa97786ccbc => fd52(27cb)-5538-4fef-8427-4aa97786ccbc; the 4 characters shown in parentheses are picked. The pool can be tracked via the UUID containing [pool-key].

[custom.cs.identifier] = value of the global configuration “custom.cs.identifier”, which initially holds 4 randomly generated characters. This parameter can be updated to provide a unique CloudStack installation identifier, which helps in tracking the volumes of a specific CloudStack installation.

PowerFlex Capacity in CloudStack

The PowerFlex capacity figures used in CloudStack for its various capacity-related checks are taken from the capacity statistics reported by PowerFlex.

The Provisioned size (“sizeInKb” property) and Allocated size (“netProvisionedAddressesInKb” property) of the PowerFlex volume from the Query Volume object API response are considered as the virtual size (total capacity) and used size of the volume in CloudStack, respectively.

Other Considerations

  • CloudStack will not manage the creation of storage pool/domains etc. in ScaleIO. This must be done by the Admin prior to creating a storage pool in CloudStack. Similarly, deletion of ScaleIO storage pool in CloudStack will not cause actual deletion or removal of storage pool on ScaleIO side.
  • The ScaleIO SDC must be installed on the KVM host(s), with its service running and connected to the ScaleIO Metadata Manager (MDM).
  • The seeded ScaleIO template volume(s) [in RAW format] converted from the direct templates [QCOW2/RAW] on secondary storage have the template’s virtual size as the Allocated size in ScaleIO, irrespective of the “Zero Padding Policy” setting for the pool.
  • The ScaleIO ROOT volume(s) [RAW] converted from the seeded templates volume(s) [RAW] have its total capacity (virtual size) as the Allocated size in ScaleIO, irrespective of the “Zero Padding Policy” setting for the pool.
  • The ScaleIO DATA volume(s) [RAW] created / attached have the Allocated size as ‘0’ in ScaleIO initially, and changes with the file system / block copy.

This feature will be included in the Q3 2021 LTS release of Apache CloudStack.


A lot of work has gone into the CloudStack UI recently, and it is now a modern, role-based UI that not only gives a fresh look to CloudStack but also makes development and customisation much easier. In this blog, I provide guidance on how to customise the UI, and have classified customisation into two categories – basic and advanced.

Basic Customisations

Users can customise the UI by means of the configuration file /etc/cloudstack/management/config.json, modifying the theme, logos, etc. as required. These changes can be made while the CloudStack management server is running, and can be seen immediately with a browser refresh.

The configuration file provides the following properties for basic customisation:

  • apiBase – Changes the suffix for the API endpoint
  • docBase – Changes the base URL for the documentation
  • appTitle – Changes the title of the portal
  • footer – Changes the footer text
  • logo – Changes the top-left logo image
  • banner – Changes the login banner image
  • error.404 – Changes the image for the “Page not found” error
  • error.403 – Changes the image for the “Forbidden” error
  • error.500 – Changes the image for the “Internal Server Error” error

 

To change the logo, login banner, error page icons, documentation base URL, etc., the following details can be edited in config.json:

"apiBase": "/client/api",
"docBase": "http://docs.cloudstack.apache.org/en/latest",
"appTitle": "CloudStack",
"footer": "Licensed under the <a href='http://www.apache.org/licenses/' target='_blank'>Apache License</a>, Version 2.0.",
"logo": "assets/logo.svg",
"banner": "assets/banner.svg",
"error": {
    "404": "assets/404.png",
    "403": "assets/403.png",
    "500": "assets/500.png"
}

Theme Customisation

The customisation of themes is also possible, such as modifying banner width or general color. This can be done by editing the “theme” section of the config.json file. This section provides the following properties for customisation:

  • @logo-background-color – Changes the logo background color
  • @project-nav-text-color – Changes the navigation menu text color of the project view
  • @project-nav-background-color – Changes the navigation menu background color of the project view
  • @navigation-background-color – Changes the navigation menu background color
  • @primary-color – Changes the major background color of the page (button background, icon hover, etc.)
  • @link-color – Changes the link color
  • @link-hover-color – Changes the link hover color
  • @loading-color – Changes the message loading color and the page loading bar at the top of the page
  • @success-color – Changes the success state color
  • @processing-color – Changes the processing state color (e.g. progress status)
  • @warning-color – Changes the warning state color
  • @error-color – Changes the error state color
  • @heading-color – Changes the table header color
  • @text-color – Changes the major text color
  • @text-color-secondary – Changes the secondary text color (breadcrumb icon)
  • @disabled-color – Changes the disabled state color (disabled button, switch, etc.)
  • @border-color-base – Changes the major border color
  • @logo-width – Changes the width of the top-left logo
  • @logo-height – Changes the height of the top-left logo
  • @banner-width – Changes the width of the login banner
  • @banner-height – Changes the height of the login banner
  • @error-width – Changes the width of the error image
  • @error-height – Changes the height of the error image

 

Some example theme colors:

  • Blue: #1890FF
  • Red: #F5222D
  • Yellow: #FAAD14
  • Cyan: #13C2C2
  • Green: #52C41A
  • Purple: #722ED1

This example shows the configuration changes necessary in /etc/cloudstack/management/config.json to customise logo and colors:

{
  "apiBase": "/client/api",
  "docBase": "http://docs.cloudstack.apache.org/en/latest",
  "appTitle": "Shapeblue Cloud",
  ...
  "logo": "assets/customlogo.svg",
  ...
  "theme": {
    ...
    "@primary-color": "#dd55ff",
    ...
    "@warning-color": "#ff2a7f",
    ...
    "@text-color": "#37c8ab",
    ...
  }
}


Links to Contextual Help

The UI provides support for showing links to contextual help in pages and forms. By default, the links are to the official CloudStack documentation. For each section item (menu items in the left pane of the UI such as Instances, Volumes, Templates, etc.) or UI form, a suffix is defined in the code as the docHelp property in the JavaScript file for the section. This suffix is appended to the docBase property defined in the config file, and the resulting URL is set as the link for a contextual help button.

The docHelpMappings property can be used to provide a list of override mappings for different suffix values; to change a particular help URL, a mapping can be added in the configuration using the suffix part of the URL. By default, docHelpMappings lists all existing documentation URL suffixes used in the code, mapped to themselves. This list of documentation URL suffixes can also be found in the CloudStack documentation.

In the example below, we change the docBase and docHelpMappings values to show a custom link for contextual help. By default, docBase is set to http://docs.cloudstack.apache.org/en/latest and contextual help on Instances page links to http://docs.cloudstack.apache.org/en/latest/adminguide/virtual_machines.html.

To make Instances page link to http://mycustomwebsite.com/custom_vm_page.html, docBase can be set to http://mycustomwebsite.com and a docHelpMapping can be added for adminguide/virtual_machines.html as custom_vm_page.html.

Changes in /etc/cloudstack/management/config.json:

{
  ...
  "docBase": http://mycustomwebsite.com,
  ...
  "docHelpMappings": {
    "adminguide/virtual_machines.html": "custom_vm_page.html",
    ...
  },
  ...
}

 

Plugin support

The CloudStack UI also supports custom plugins. Changes in /etc/cloudstack/management/config.json can define a list of custom plugins, each shown in an iframe. Custom HTML pages can be used to show static content to users, while an HTTP service running on an internally deployed web server, or an external website, can be used to show dynamic content.

The example below adds two custom plugins in the UI, each with its own navigation section. The first plugin shows a custom HTML file; the second shows the CloudStack website within the UI.

...
  "plugins": [
    {
      "name": "ExamplePlugin",
      "icon": "appstore",
      "path": "example.html"
    },
    {
      "name": "ExamplePlugin1",
      "icon": "fire",
      "path": "https://cloudstack.apache.org/"
    }
  ]
}


An icon for the plugin can be chosen from Ant Design icons listed at Icon – Ant Design Vue.

For displaying custom HTML in the plugin, an HTML file can be stored in the CloudStack management server’s web application directory, i.e., /usr/share/cloudstack-management/webapp, and path can be set to the name of the file. For displaying an HTTP service or a web page, the URL can be set as the path of the plugin.
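For instance, a static page for the ExamplePlugin definition above could be placed as follows (a sketch; the webapp path may differ on your installation):

sudo cp example.html /usr/share/cloudstack-management/webapp/example.html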

 

Advanced Customisation

Advanced UI customisation is possible by changing source code files that define rules for different elements in the UI, and requires building CloudStack from the source code (available at github.com/apache/cloudstack). This will require some experience in JavaScript, VueJS and nodejs. Also, the UI is built using Ant Design components, so knowledge of ant-design-vue & its principles would help greatly. More information about Ant Design Vue can be found here.

The source code can be obtained either from the CloudStack website in tarball form or from the Apache CloudStack GitHub repository. For example, using git, the repository can be cloned locally:

git clone https://github.com/apache/cloudstack.git
# To checkout specific release TAG
cd cloudstack
git fetch --tags
git checkout TAG
# CloudStack 4.15.0.0 has tag named 4.15.0.0 on Github, to checkout the same
git checkout 4.15.0.0

After obtaining the CloudStack source code, the UI code can be found in the UI sub-directory. For different customisations, changes can be made in the code and then npm can be used to create a static web application. Finally, one would copy the built UI to the webapp directory on the management server host. Building the UI will require installing dependencies for nodejs, npm and VueJS. The necessary steps for building UI from source code are dependent on the host operating system and can be found in the CloudStack UI development documentation. The instructions below have been tested on Ubuntu 20.04.

Install dependencies:

sudo apt-get install npm nodejs
# Install system-wide dev tools
sudo npm install -g @vue/cli npm-check-updates

 

Fetch npm package dependencies and build:

cd ui
npm install
npm run build

 

Copy built UI to webapp directory on the management server host:

cd dist
scp -rp ./ {user-on-management-server}@{management-server}:/usr/share/cloudstack-management/webapp/
# Access UI at {management-server}:8080/client in browser

 

Alternatively, packages can be rebuilt for the desired platform; the UI will be packaged in the cloudstack-ui package. Refer to the CloudStack packaging documentation for more details. For testing changes during development, npm can serve the UI without a full build:

cd ui
npm install
npm run serve
# Or run: npm start

Examples of advanced customisations can be seen below.

Icon changes

Custom icons can be added in the directory cloudstack/ui/src/assets/icons.

Once a new icon file (preferably an SVG file) is placed in the directory, it can be imported in the JavaScript (.js) file for the corresponding section item (menu items in the left pane in the UI).

A list of available Ant Design icons can be found at https://www.antdv.com/components/icon/

The example below shows changing the icons for the Compute menu and the Instances sub-menu:

New files named customcompute.svg and custominstances.svg are added in the cloudstack/ui/src/assets/icons/ directory:

⇒ ls cloudstack/ui/src/assets/icons/ -l
total 36
-rw-rw-r-- 1 shwstppr shwstppr 3008 Feb 16 15:55 cloudian.svg
-rw-r--r-- 1 shwstppr shwstppr 483 Oct 26 1985 customcompute.svg
-rw-r--r-- 1 shwstppr shwstppr 652 Oct 26 1985 custominstances.svg
-rw-rw-r-- 1 shwstppr shwstppr 10775 Feb 16 15:55 debian.svg
-rw-rw-r-- 1 shwstppr shwstppr 10001 Feb 16 15:55 kubernetes.svg

cloudstack/ui/src/config/section/compute.js is updated to import and set new icons for the menu items:

...
import kubernetes from '@/assets/icons/kubernetes.svg?inline'
import store from '@/store'
+import customcompute from '@/assets/icons/customcompute.svg?inline'
+import custominstances from '@/assets/icons/custominstances.svg?inline'

export default {
   name: 'compute',
   title: 'label.compute',
-  icon: 'cloud',
+  icon: 'customcompute',
   children: [
     {
       name: 'vm',
       title: 'label.instances',
-      icon: 'desktop',
+      icon: 'custominstances',
       docHelp: 'adminguide/virtual_machines.html',
       permission: ['listVirtualMachinesMetrics'],

...

After rebuilding and installing the new packages, the UI will show the new icon(s).

Localization

Language translation files for text in the UI are placed in cloudstack/ui/public/locales/. A copy of the file cloudstack/ui/public/locales/en.json can be made in the same directory to include all translation keys, following the naming convention for locales (for example, el for Greek, with el_CY and el_GR as possible variants).
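For example, a Greek locale could be started from the English file as a sketch, following the naming convention described above:

cd cloudstack/ui/public/locales
cp en.json el.json
# translate the values in el.json, keeping the keys unchanged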

Once string keys are translated, changes can be made in the file cloudstack/ui/src/components/header/TranslationMenu.vue for the new language to be displayed as an option in the Languages dropdown in the UI. This example shows a dummy locale being added in the UI.

 

New file added in cloudstack/ui/public/locales/ with name zz.json:

⇒ ls cloudstack/ui/public/locales/ -lr
total 3112
-rw-rw-r-- 1 shwstppr shwstppr 196471 Feb 25 11:42 zz.json
-rw-rw-r-- 1 shwstppr shwstppr 186117 Feb 16 15:55 zh_CN.json
-rw-rw-r-- 1 shwstppr shwstppr 354705 Feb 16 15:55 ru_RU.json
...

Changes necessary in cloudstack/ui/src/components/header/TranslationMenu.vue to add the new language with the above translation file:

...
       :selectedKeys="[language]"
       @click="onClick">
       <a-menu-item key="en" value="enUS">English</a-menu-item>
+      <a-menu-item key="zz" value="hi">New Language</a-menu-item>
       <a-menu-item key="hi" value="hi">हिन्दी</a-menu-item>
       <a-menu-item key="ja_JP" value="jpJP">日本語</a-menu-item>
       <a-menu-item key="ko_KR" value="koKR">한국어</a-menu-item>
...

Upon re-building and installing the new packages, the UI will show the newly added language in the Languages dropdown.

Other Modifications

There could be several use-cases that require tweaking the UI to enable/disable functionality or to hide or show different elements in the UI. For such modifications, a thorough understanding of Javascript and Vue.js will be required. The development section on the CloudStack repository can be referred to for making such advanced changes.

The following example shows hiding the Register Template from URL action from the Templates view in the UI for the User role:

The templates sub-menu is defined in the Images section, cloudstack/ui/src/config/section/image.js, and a list of actions can be defined for each child of the section item. The Register Template from URL action can be found in the actions property with label value label.action.register.template. To hide the action for User role accounts, we can use the show property for the action. It can be set as follows:

...
       actions: [
         {
           api: 'registerTemplate',
           icon: 'plus',
           label: 'label.action.register.template',
           docHelp: 'adminguide/templates.html#uploading-templates-from-a-remote-http-server',
           listView: true,
           popup: true,
+          show: (record, store) => {
+            return (['Admin', 'DomainAdmin'].includes(store.userInfo.roletype))
+          },
           component: () => import('@/views/image/RegisterOrUploadTemplate.vue')
         },
...

After making these changes, the Register Template from URL action is shown only for Admin and Domain Admin accounts.


It should be noted that removing elements from the UI does NOT restrict a user’s ability to access the functionality through the CloudStack API, and should therefore only be used in a usability context, not a security context. CloudStack’s role-based security model should be used if a user is to be prohibited from accessing functionality.

 

Conclusion

The UI is no longer part of the core CloudStack Management server code (giving a much more modular and flexible approach) and is highly customisable, with even advanced changes possible with some knowledge of JS and Vue.js. The UI was designed keeping simplicity and user experience in mind.

The UI is designed to work across all browsers, tablets and phones. From a developer perspective, the codebase should be about a quarter that of the old UI and, most importantly, the Vue.JS framework is far easier for developers to work with.

 

 

XCP-ng 8.2 – the latest version of the open-source hypervisor based on XenServer – was released in November 2020 and is the first long term support (LTS) version. As such, it will receive support and updates for the next 5 years, compared to only a year for a standard XCP-ng release. The hypervisor is getting more and more popular in the open-source world thanks to its modern and user-friendly UI, scalability, live migration capabilities and security level.

XCP-ng 8.2 comes with a wide range of new capabilities including UEFI support, openflow controller access, native support for Gluster, ZFS, XFS, CephFS, new CPUs support and security improvements.

CloudStack has included support for XCP-ng since its early days. CloudStack 4.15.0 added support for XCP-ng 8.0 and 8.1, and the upcoming 4.15.1 will add support for XCP-ng 8.2 LTS. 4.15.1 will also add guest OS mappings for new and missing operating systems for XCP-ng 8.1 and 8.2. Support for the following operating systems has been added:

  • SUSE Linux Enterprise Desktop 12 SP3 (64-bit)
  • SUSE Linux Enterprise Desktop 12 SP4 (64-bit)
  • SUSE Linux Enterprise Server 12 SP4 (64-bit)
  • Scientific Linux 7
  • NeoKylin Linux Server 7
  • CentOS 8
  • Debian Buster 10
  • SUSE Linux Enterprise 15 (64-bit)


Additionally, CloudStack 4.15.1 will also come with fixes for dynamically scalable template behaviour for the XCP-ng and XenServer hypervisors.

 

When a network is created in CloudStack, it is by default not provisioned until the first VM is deployed on that network, at which point a VLAN ID is assigned. Until then, the network exists only as a database entry. If you want to create and provision a network without deploying any VMs, you need to create a persistent network. With persistent networks, you can deploy physical devices like routers, switches, etc. without having to deploy VMs on the network, as it is provisioned at the time of its creation. More information about persistent networks in CloudStack can be found here.

Until now, persistent networks have only been available for isolated networks. This feature introduces the ability to create persistent L2 networks, as well as enhancing the way persistence currently works on isolated networks:

  • For isolated networks, a VR is deployed immediately on creation of the persistent network and the network transitions to ‘implemented’ state irrespective of whether a VLAN ID is specified.
  • For L2 networks, the network resources (bridges or port-groups depending on the hypervisor) and VLANs get created across all hosts of a zone and the network transitions to ‘implemented’ state
  • Persistent networks will not be garbage collected, i.e., the network will not be shut down if there are no active VMs running on it
  • When the last VM on the network is stopped / destroyed / migrated or unplugged, the network resources will not be deleted
  • Network resources will not be created on hosts that are disabled / in maintenance mode, or on those that have been added after the creation of a persistent network. If the network needs to be set up on such hosts once they become available, a VM will need to be deployed on them through CloudStack. Deploying a VM on a specific host will provision the required network resources on that specific host.
  • For isolated networks, a VLAN ID can be specified, including for VPCs.

To create a persistent network, we need to first create and enable a network offering that has the ‘Persistent’ flag set to true:
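Such an offering can be created and enabled via cmk as sketched below, with parameter names per the createNetworkOffering / updateNetworkOffering APIs and placeholder values:

create networkoffering name=L2-Persistent displaytext=L2-Persistent guestiptype=L2 traffictype=GUEST specifyvlan=true ispersistent=true
update networkoffering id=<offering_uuid> state=Enabled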


 

We then can go ahead and create a persistent network using the previously created network offering:
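A matching cmk call is sketched below (the UUIDs and VLAN ID are placeholders):

create network name=persistent-l2 displaytext=persistent-l2 networkofferingid=<offering_uuid> zoneid=<zone_uuid> vlan=100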


 

Once the network is created, it transitions to the ‘Implemented’ state, indicating that the network resources have been created on every host across the zone. This can be confirmed through manual configuration checks on the hosts, or by creating and starting a VM on the network.

 

This feature will be available as part of the Q3/4 2021 LTS release of CloudStack.

Migration of virtual machines between physical hosts or clusters is essential for cloud operators, allowing them to perform maintenance with little or no downtime, or balance compute and storage resources when necessary. CloudStack supports both live and cold migration (if supported by the hypervisor), and most hypervisors allow VM and volume migration in some form or another.

VMware vMotion provides both live and cold migration of VM and volumes. By leveraging vMotion with the APIs migrateVirtualMachine, migrateVirtualMachineWithVolume, migrateSystemVm and migrateVolume, migration of user and system VMs and their volume(s) can be performed easily in CloudStack.

However, until now CloudStack had the following limitations for VM and volume migration:

  • Migration would fail when attempted between non-shared storages for VMware (i.e., when source and destination datastores are mounted on different hosts) – a typical setup for clusters with cluster-wide storage pools in CloudStack.
  • When migrating stopped user VMs with multiple volumes from UI, CloudStack would migrate all volumes of the VM to the same storage pool. This would result in some volumes getting migrated to incompatible storage pools with storage tag mismatch for the volume’s disk offering.
  • Only running system VMs can be migrated and migration can be done between hosts of the same cluster only.

This feature adds several improvements to CloudStack for migrating VMs and volumes between CloudStack clusters with VMware:

Cross-cluster migration

  • To assist migration, the findHostsForMigration API has been improved to return hosts from different pods for user VMs and hosts from different clusters for system VMs in supported cases.
  • User VMs can now be migrated between hosts belonging to clusters of the same or different pods.
  • Volumes of user VMs can be migrated between cluster-wide storage pools of clusters belonging to different pods.
  • System VMs can now be migrated between hosts belonging to clusters of the same pod.

Note: Migrating system VMs between hosts in different pods cannot be supported, as system VMs acquire IP addresses from the IP range of the pod. Changing the public IP of the system VM would result in reconfiguring the VM and, in the case of virtual routers, would also require reconfiguring various networking rules inside the VM, which can be risky and cause significant downtime.

  • Support for migration without shared storage – Improvements have been made in CloudStack’s migration framework to leverage VMware’s vMotion capabilities to allow migration without shared storage. This allows VMs running with volumes on storage pools of one cluster to be migrated to storage pools of a different cluster. Similarly, detached volumes can now be migrated between storage pools of different clusters with the migrateVolume API or using the ‘Migrate Volume’ action in the UI, without going over Secondary Storage as before.
  • Support for stopped user VM migration in migrateVirtualMachineWithVolume API – Stopped user VMs with multiple volumes can be migrated to different storage pools based on disk offering compatibility. An operator can choose to provide volume to pool mapping for all volumes of the VM or just the ROOT volume, in which case CloudStack will automatically map the remaining volumes to the compatible storage pool in the same destination cluster.
  • UI improvements for user VM migration
    • The Migration form in the CloudStack UI has been updated to provide details of the cluster and pod of the destination hosts.

    • The ‘Migrate to another primary store’ action in the UI will now utilize the migrateVirtualMachineWithVolume API to migrate stopped VMs. This allows migrating different volumes to compatible storage pools, rather than all to the same storage pool.
  • Support for stopped system VM migration in migrateSystemVm API 

To enable migration of a stopped system VM, a new parameter, storageid, has been added to the migrateSystemVm API. Since CloudStack does not allow the listing of volumes for system VMs, the operator may have to refer to the CloudStack database to find the source storage pool for a system VM. A cloudmonkey API call for migrating a stopped system VM to a different storage pool will look like this:

migrate systemvm virtualmachineid=<UUID_OF_SYSTEM_VM> storageid=<UUID_OF_DESTINATION_STORAGE_POOL>

Migration of running system VMs works the same as before. The hostid parameter of the migrateSystemVm API can be used to specify the destination host, while CloudStack works out a suitable destination storage pool when migrating the VM’s ROOT volume. A cloudmonkey API call for migrating a running system VM to a different host will look like this:

migrate systemvm virtualmachineid=<UUID_OF_SYSTEM_VM> hostid=<UUID_OF_DESTINATION_HOST>

  • UI changes for system VM migration

New actions have been added in different system VM views (SSVM, CPVM, VR and load balancer) to allow migration of stopped VMs, along with a new UI form for migrating a system VM to another primary storage.

For running system VMs, the UI will now show a form similar to user VM migration, with details of the destination hosts.

To allow VM and volume inter-cluster migrations, the VMware vSphere setup / configuration prerequisites for vMotion and storage vMotion must be in place. Also, migration of a VM with a higher hardware version to an older ESXi host (that doesn’t support that VM hardware version) will fail (native VMware limitation). Therefore, a VM running on an ESXi 6.7 host with VM hardware version 14 will fail to migrate to an ESXi 6.5 host.

These changes will be part of the next CloudStack LTS release which is scheduled for Q3/4 2021.

 

If you are a system engineer managing shared networks and deploying virtual machines with CloudStack, you should be aware that currently there is no option to assign a specific IP address to the Virtual Router: the router is simply assigned the first free IP address. For many engineers this is a limitation, as it prevents keeping the IP inventory under control and choosing the address to be assigned.

In this article, we present a new feature in CloudStack which makes the management of shared networks easier. The new capability will be available in the Q3 2021 LTS release of CloudStack and will enable users to specify the VR IP in shared networks.

A shared network is a network that can be accessed by virtual machines (VMs) belonging to many different accounts, and can only be created by administrators. Currently, during the creation of a shared network, the network’s DHCP service provides the range of IP addresses (IPv4 / v6), gateway, and netmask. When the first VM is deployed in this network, the Virtual Router (VR) created for the shared network is assigned the first free IP address, and this IP is persistent for the lifetime of the network.

This feature makes it possible to specify an IP address for the VR.

To make this possible, the createNetwork API has been extended to take routerIP and routerIPv6 as optional inputs:

  • routerip: (string) IPv4 address to be assigned to a router
  • routeripv6: (string) IPv6 address to be assigned to a router

If the router IP is not explicitly provided, the VR is assigned the first free IP available in the network range, as usual. Specifying an IP address also ensures that the VR’s IP persists across lifecycle operations performed after network creation (such as restarting the network with cleanup).

The following checks are performed when the VR’s IP is passed; if any of them fail, it will not be possible to specify an IP for the router:

  • IP address is valid
  • IP address is within the network range
  • The network offering provides at least one service that requires a VR

Creation of a shared network specifying a VR IP via the API can be done as follows:

$ create network name=SharedNet displaytext="Shared Network" vlan=99 gateway=99.99.99.1 netmask=255.255.255.0 startip=99.99.99.50 endip=99.99.99.80 routerip=99.99.99.75 zoneid=<zone_id> networkofferingid=<network offering providing at least one service requiring a VR>

UI support has also been added for the VR IP fields in the shared network creation form.

This feature will be available in the Q3 2021 LTS release of CloudStack.

The CloudStack Kubernetes Service (CKS) uses CoreOS templates to deploy Kubernetes clusters. However, as CoreOS reached EOL on May 26th, 2020, we needed to find a suitable replacement meeting the requirements of resilience, security, and popularity in the community. Keeping these requirements in mind, we have chosen to modify the existing Debian-based SystemVM template so it can also be used by CKS instead of CoreOS.

Before coming to this decision, we considered other operating systems, such as FlatCar Linux, Alpine Linux and Debian, and based our decision on the following parameters:

FlatCar Linux

  • Brief description: Drop-in replacement for CoreOS
  • Size: ~500MB – 600MB
  • Security: Quite secure, as it mitigates security vulnerabilities by delivering the OS as an immutable filesystem
  • Release management: Frequent releases – almost bi-weekly or monthly
  • Maintenance: Maintained by Kinvolk – a Berlin-based consulting firm known for their work around rkt, Kubernetes, etc.
  • Decision: NOT CHOSEN – a small community, not a popular choice, and chances of meeting the same fate as CoreOS, i.e., EOL

Alpine Linux

  • Brief description: A Linux distribution based on musl and BusyBox, designed for security, simplicity, and resource efficiency
  • Size: Small image of approx. 5MB – because of its small size, it is commonly used in containers providing quick boot-up times
  • Security: All userland binaries are compiled as Position Independent Executables (PIE) with stack smashing protection. These proactive security features prevent exploitation of entire classes of zero-day and other vulnerabilities.
  • Release management: Several releases are available at the same time. There is no fixed release cycle, but typically every 6 months.
  • Maintenance: Backed by a pretty large community, with mailing lists, etc. to find support
  • Decision: NOT CHOSEN – the init system used by Alpine Linux is openrc, and up until recently k8s did not support openrc systems (https://github.com/kubernetes/kubeadm/issues/1295)

Debian

  • Brief description: One of the oldest operating systems based on the Linux kernel. New distributions are updated regularly, and the next candidate is released after a time-based freeze.
  • Size: ~500MB – 600MB
  • Security: On a par with most other Linux distributions
  • Release management: Debian announces its new stable release on a regular basis, with 3 years of full support for each release and 2 years of extra LTS support
  • Maintenance: Unparalleled support – they claim to provide answers to queries on mailing lists within minutes!
  • Decision: CHOSEN – huge community support, and most importantly, we can modify the existing SystemVM templates!

Using the modified SystemVM template also simplifies the use of CKS. When using CoreOS to deploy Kubernetes clusters in CKS, we needed to first register the CoreOS template and ensure that the template name coincided with the name set in the corresponding global settings. With the new Debian-based SystemVM templates, this is no longer necessary, and these global settings are not required.

To ensure the new SystemVM template will support deployment of Kubernetes clusters, we have included the docker, containerd and cloud-init packages, which will only be enabled on SystemVM instances acting as CKS nodes (as these packages are only used by the CKS nodes). These services are disabled on all other SystemVM types.

So that we do not increase the overall size of the SystemVM templates, we have included support for growing / resizing the root disk partition during boot-up to a predefined / provided disk size. For CKS nodes, the minimum root disk size is 8GB and can be increased by setting the node root disk size while creating the Kubernetes cluster; for other SystemVMs, the root disk size can be configured via the ‘systemvm.root.disk.size’ global setting, as sketched below.
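The cluster node root disk size and the SystemVM root disk size can be set via cmk as follows (parameter names per the CKS and configuration APIs as we understand them; values are placeholders):

create kubernetescluster name=demo-cluster description=demo-cluster zoneid=<zone_uuid> kubernetesversionid=<version_uuid> serviceofferingid=<offering_uuid> size=2 noderootdisksize=16
# SystemVM root disk size, in GB
update configuration name=systemvm.root.disk.size value=10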

In summary, from Apache CloudStack 4.16 LTS onwards, CKS will use the modified (Debian) SystemVM templates for deployment of Kubernetes clusters.

Since the addition of CloudStack Kubernetes Service, users can deploy and manage Kubernetes clusters in CloudStack. This not only makes CloudStack a more versatile and multifaceted application, but also reduces the gap between virtualization and containerization. As with any step in the right direction, it came with a few challenges, and one of them was manual scaling of the cluster.

Automating this process by monitoring cluster metrics may address this issue, but Kubernetes strongly advises against it. Instead, it is recommended that Kubernetes itself make these scaling decisions, and specifically for this purpose Kubernetes has the ‘Cluster Autoscaler’ feature – a standalone program that adjusts the size of a Kubernetes cluster to meet current needs. It runs as a deployment (`cluster-autoscaler`) within the cluster.

The cluster autoscaler has built-in support for several cloud providers (such as AWS, GCE and, recently, Apache CloudStack), which provides an interface for it to communicate with CloudStack. This allows it to dynamically scale the cluster based on capacity requirements: it adds a node if there are pods that failed to schedule on any of the current nodes due to insufficient resources, and removes a node if it is not needed due to low utilization.

To enable communication between the cluster autoscaler and CloudStack, a separate service user, kubeadmin, is created in the same account as the cluster owner. The autoscaler uses this user’s API keys to get the details of the cluster as well as to dynamically scale it. It is imperative that this user is not altered and that its keys are not regenerated.

To enable users to utilize this new feature, the existing scaleKubernetesCluster API has been enhanced to support autoscaling by adding the autoscalingenabled, minsize and maxsize parameters. To enable autoscaling, simply call the scaleKubernetesCluster API along with the desired minimum and maximum size to which the cluster should be scaled, e.g.:

scaleKubernetesCluster id=<cluster-id> autoscalingenabled=true minsize=<minimum size of the cluster> maxsize=<maximum size of the cluster>

Autoscaling on the cluster can be disabled by passing `autoscalingenabled=false`. This will delete the deployment and leave the cluster at its current size, e.g.:

scaleKubernetesCluster id=<cluster-id> autoscalingenabled=false

Autoscaling can also be enabled on a cluster via the UI.

Cluster autoscaling on CloudStack Kubernetes clusters is supported from Kubernetes version 1.16.0 onward. The cluster-autoscaler configuration can be changed and manually deployed for supported Kubernetes versions. The guide to manually deploying the cluster autoscaler can be found here, and an in-depth explanation on how the cluster-autoscaler works can be found on the official Kubernetes cluster autoscaler repository.
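Once autoscaling is enabled, the autoscaler deployment can be inspected with kubectl as a quick sanity check (assuming it runs in the kube-system namespace):

kubectl get deployment cluster-autoscaler -n kube-system
kubectl logs deployment/cluster-autoscaler -n kube-system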

This feature will be available in the Q1/2 2021 LTS release of CloudStack.

Projects have proven to be a boon in organizing and grouping accounts and resources together, giving users in the same domain the ability to collaborate and share resources such as VMs, snapshots, volumes and IP addresses. However, there is a limitation. Only accounts can be added as members to projects, which can be an issue if we only want to add a single user of an account to a project. To address this, we’ve enhanced the way project membership is handled to facilitate addition of individual users.

Adding users to projects and assigning project-level roles

In order to restrict users in projects to a limited set of operations (adding further restrictions to those already defined by their account-level roles), we’ve brought in the concept of Project Roles.
Project Roles are characterized by a name and a Project ID, and a project can have many project roles. Project Roles are then associated with Project Role Permissions, which determine what operations users / accounts associated with a specific role can perform. It is crucial to understand that project-level permissions will not override those set at the Account level.

Creation of Project Roles via the API:
$ create projectrole name=<projectRoleName> projectid=<project_uuid> description=<optional description>

Creation and association of a project role permission with a project role via the API:
$ create projectrolepermission projectid=<project_uuid> projectroleid=<project_role_id> permission=<allow/deny> rule=<API name/ wildcard> description=<optional description>

One can also create project roles and project role permissions from the UI:

1. Navigate to the specific project and enter its Details View

2. Go to the Project Roles sub-tab and click on the Create Project Role button. Fill in the required details in the pop-up form and click OK.

3. To associate project role permissions with the created role, click on the + button on the left of the project role name and hit the ‘Save new Rule’ button.

The re-order button to the left of the rule name will invoke the ‘updateProjectRolePermission’ API as follows:

$ update projectrolepermission projectid=<project_uuid> projectroleid=<project_role_uuid> ruleorder=<list of project rule permission uuids that need to be moved to the top>

The updateProjectRolePermission API can also take other parameters:

4. One can also update the permission, namely Allow / Deny, associated with the rule by selecting the option from the drop-down list.

This invokes the ‘updateProjectRolePermission’ API, but passes the permission parameter instead of rule order, as follows:

$ update projectrolepermission projectid=<project_uuid> projectroleid=<project_role_uuid> projectrolepermissionid=<uuid of project role permission> permission=<allow/deny>

Now that we’ve seen how to create / modify project roles and permissions, let’s understand how to associate them with users / accounts for them to take effect. When adding / inviting users or accounts to projects, we can now specify the project role.

 

The API call corresponding to this operation is ‘addUserToProject’ or ‘addAccountToProject’ and can be invoked as follows:

$ add userToProject username=<name of the user> projectid=<project_uuid> projectroleid=<project_role_uuid>

Project Admins

Regular users or accounts in a project can perform all management and provisioning tasks. A project admin can perform these tasks as well as administrative operations in a project such as create / update / suspend / activate project and add / remove / modify accounts. With this feature, we can have multiple users or accounts as project admins, providing more flexibility than before.

1. Creation of Projects with a user as the default project admin
The ‘createProject’ API has been extended to take user ID as an input along with account ID and domain ID:

$ create project name=<project name> displaytext=<project description> userid=<uuid of the user to be added as admin> accountid=<uuid of the account to which the user belongs to> domainid=<uuid of the domain in which the user exists>

 

2. Multiple Project Admins
The default ‘swap owner’ behaviour (a single project admin allowed) has been changed to allow multiple project admins, and to promote / demote users to project admins / regular users respectively. Use the ‘Promote’ (up arrow) or ‘Demote’ (down arrow) buttons to change the role of a user in a Project.

Please note:

1. Admin, Domain Admin or Project Admin permissions will never be affected by a changed project role
2. One cannot demote / delete the project admin if there is only one

If a role type isn’t specified while adding / inviting users to projects, then by default they become regular members. However, we can override this behaviour by passing the ‘roletype’ parameter to the ‘addAccountToProject’ or ‘addUserToProject’ APIs.
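For example, a sketch using cmk (we assume roletype accepts values such as Admin and Regular):

$ add userToProject projectid=<project_uuid> username=<name of the user> roletype=Admin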

Upgrading CloudStack from any lower version to 4.15 will not affect existing projects and their members. Furthermore, in case we still want the swap owner feature, we have the ‘swapowner’ parameter as part of the ‘updateProject’ API (which by default is set to true for backward compatibility with the legacy UI). This parameter should be set to false if we want to promote or demote a particular member of the project.

In conclusion, this feature enhances the way Projects behave such that everything that was possible at the Account level is now possible at the user level too. This feature will be available as part of CloudStack 4.15 LTS.

CloudStack has more than 600 APIs which can be allowed / disallowed in different combinations to create dynamic roles for the users. The aim of this feature is more effective use and management of these dynamic roles, allowing CloudStack users and operators to:

  1. Import and export roles (rule definitions) for the purpose of sharing.
  2. Create a new role from an existing role (clone and rename) to create a slightly different role.
  3. Use additional built-in roles (to quickly create read-only and support users and operators), such as:
    • Read-Only Admin role: an admin role in which an account is only allowed to perform any list / get / find APIs but not perform any other operation or changes to the infrastructure, configuration or user resources.
    • Read-Only User role: a user role in which an account is only allowed to perform list / get / find APIs – for users who may only be interested in monitoring and usage, for instance.
    • Admin-Support role: an admin role in which an admin account is limited to perform day-to-day tasks, such as creating offerings, but cannot change physical networks or add / remove hosts (but can put them in maintenance).
    • User-Support role: a user role in which an account cannot create or destroy resources (any create*, delete* etc. APIs are disallowed) but can view resources and perform operations such as start / stop VMs and attach / detach volumes, ISOs etc.

The existing role types (Admin, Domain Admin, Resource Admin, User) remain unchanged. This feature deals purely with the Dynamic Roles which filter the APIs which a user is allowed to call. The default roles and their permissions cannot be updated or deleted.

Cloning a role

An existing role can be used to create a new role, which will inherit the existing role’s type and permissions. A new parameter, roleid, is introduced in the existing createRole API; it takes the existing role’s ID as input to clone from. The new role can later be modified to create a slightly different role.
Example API call:
http://<ManagementServerIP>:8080/client/api?command=createRole&name=TestCloneUser&description=Test%20CloneUser01&roleid=ca9871c2-8ea7-11ea-944e-c2865825b006

The Add Role dialog screen in the UI can be used to create a new role by selecting an existing role.

Import role and export rule definitions

A role can be imported with its rule definitions (rule, permission, description) using a new API: importRole with the following parameters:

  • name (Type: String, Mandatory) – role name
  • type (Type: String, Mandatory) – role type, any of the four role types: Admin, Resource Admin, Domain Admin, User
  • description (Type: String, Optional) – brief description of the role
  • rules (Type: Map, Mandatory) – rules set in the sort order, with key parameters: rule, permission and description
  • force (Type: Boolean, Optional) – whether to override any existing role (with the same name and type) or not, “true” / “false”. Default is false

Example API call:
http://<ManagementServerIP>:8080/client/api?command=importRole&name=TestRole&type=User&description=Test%20Role&rules[0].rule=create*&rules[0].permission=allow&rules[0].description=create%20rule&rules[1].rule=list*&rules[1].permission=allow&rules[1].description=listing&force=true

The import role option in the Roles section of the UI opens up the Import Role dialog screen. Here you can specify the rules with a CSV file with rule, permission and description in the header row, followed by the rule values in each row.

The imported rule definitions are added to the rule set of the role. If a role already exists with the same name and role type, then the import will fail with a ‘role already exists’ message, unless it is forced to override the role by enabling the force option in the UI or setting the “force” parameter to true in the importRole API.

The ‘Export rules’ operation for a role is available in the UI only, from the role’s rules details view. This operation fetches the rules for the selected role and exports them to a CSV file. The exported rule definitions file can thereafter be used to import a role.

The rule definitions import / export file (CSV) contains details of role permissions. Each permission is defined in a row with comma-separated rule, permission (allow/deny) and description values. The row sequence of these permission details is considered to be the sort order, and the default export file name is “<RoleName>_<RoleType>.csv”.
Example CSV format:

rule,permission,description
<Rule1>,<Permission1>,<Description1>
<Rule2>,<Permission2>,<Description2>
<Rule3>,<Permission3>,<Description3>

…and so on, where:

  • Rule – Specifies the rule (API name or wildcard rule, in valid format)
  • Permission – Whether to “allow” or “deny”
  • Description – Brief description of the role permission (can be empty)

Example file (.csv) for TestUser with the User role – TestUser_User.csv contains:

rule,permission,description
listVirtualMachines,allow,listing VMs
listVolumes,allow,volumes list
register*,deny,
attachVolume,allow,
detach*,allow,
createNetworkACLList,deny,not allow acl
delete*,allow,delete permit


New built-in roles

New read-only and support roles (with pre-defined sets of permissions) for users and operators – namely Read-Only Admin, Read-Only User, Support Admin and Support User – have been added to quickly create read-only & support users and admins.

CloudStack doesn’t allow any modifications to built-in roles (new & existing), i.e. these default roles and their permissions cannot be updated, deleted or overridden. The image below shows new and existing built-in roles:

The following permissions are applicable for these roles:

  • Read-Only Admin: an admin role in which an account is only allowed to perform all list APIs, plus read-only get and quota APIs.
  • Read-Only User: a user role in which an account is only allowed to perform list APIs, plus read-only get and quota APIs with user-level access.
  • Support Admin: an admin role in which an account is only allowed to create offerings, perform host / storage maintenance, start / stop VMs and Kubernetes clusters, and attach / detach volumes and ISOs.
  • Support User: a user role in which an account is only allowed to start / stop VMs and Kubernetes clusters, and attach / detach volumes and ISOs.

Any of these roles can be selected when creating an account.

This feature will be included in Apache CloudStack 4.15, which is an LTS release.