
Flexible CKS Clusters in CloudStack 4.21 | CloudStack Feature First Look

Apache CloudStack has significantly enhanced its Kubernetes integration through the CloudStack Kubernetes Service (CKS), particularly with the release of CloudStack 4.21. These improvements focus on increasing the flexibility, scalability, and manageability of Kubernetes clusters within the CloudStack environment. Key developments include enhanced visibility for Cluster API for CloudStack (CAPC) clusters, granular control over node types and templates, the ability to integrate external (including bare-metal) nodes, support for unstacked etcd clusters, manual node upgrade options, flexible Container Network Interface (CNI) configuration, and dedicated host deployment for CKS nodes. Furthermore, there is a renewed community effort to advance and integrate the CloudStack Container Storage Interface (CSI) driver, which is crucial for dynamic and persistent storage management for containerised applications.

Kubernetes is an open-source system for automating the deployment, scaling and management of containerised applications. It has become the industry standard for container orchestration, providing a powerful and flexible solution for managing complex, distributed applications.

Apache CloudStack has provided Kubernetes integration since version 4.14, through the Kubernetes Cluster API and the CloudStack Kubernetes Service (CKS). CKS is a fully managed container service that simplifies the deployment and management of Kubernetes clusters on CloudStack infrastructure. With CKS, organisations can easily deploy, scale, and manage Kubernetes clusters within their CloudStack environments, providing a unified and simplified approach to managing both virtualised and containerised workloads.

This blog presents multiple improvements and new functionalities added to CKS, allowing for more flexibility in the management of Kubernetes clusters on CloudStack. These improvements have been added to Apache CloudStack version 4.21.

CKS Enhancements

Ability to choose the hypervisor type for CKS clusters

CloudStack 4.21 allows Users to explicitly select the hypervisor type for their CloudStack Kubernetes Service (CKS) cluster nodes, enhancing the flexibility of Kubernetes cluster management within CloudStack environments.

When creating a new Kubernetes cluster, Users will find this option within a new “Advanced settings” section of the Kubernetes cluster creation window. Enabling these advanced settings gives Users more control over node creation, including the selection of the hypervisor type. Once a hypervisor type is selected, CloudStack ensures that the CKS cluster nodes are deployed exclusively on hosts of that type.
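Conceptually, this placement behaviour amounts to filtering the candidate hosts by hypervisor type before the usual allocation logic runs. The sketch below is a simplified illustration of that rule (with an invented host data model), not CloudStack's actual allocator code:

```python
def hosts_for_hypervisor(hosts, hypervisor):
    """Keep only the hosts whose hypervisor matches the type chosen
    for the CKS cluster (illustrative data model, not CloudStack's)."""
    return [h for h in hosts if h["hypervisor"].lower() == hypervisor.lower()]

hosts = [
    {"name": "kvm-host-1", "hypervisor": "KVM"},
    {"name": "vmware-host-1", "hypervisor": "VMware"},
]
candidates = hosts_for_hypervisor(hosts, "KVM")  # only kvm-host-1 remains
```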

Dedicate specific Hosts/Clusters to a specific Domain for CKS cluster deployments

Previously, while CloudStack supported dedicating Hosts to Domains and Accounts, CKS deployments did not adhere to this policy; CKS nodes would deploy on any available Host, regardless of Host dedication. Now, with this enhancement, Administrators can explicitly dedicate Hosts to a specific Domain or Account, and CloudStack will respect these dedications when deploying Kubernetes clusters.

This means the deployment logic for CKS cluster nodes is as follows:

  • When there are no hosts dedicated to the domain/account the user belongs to, then the nodes will be deployed on any host.
  • When there are hosts dedicated to the domain/account the user belongs to, then the nodes will be deployed on the dedicated hosts.
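The two rules above can be sketched as a one-line selection function; this is a simplified illustration of the stated behaviour, not the actual allocation code:

```python
def candidate_hosts(all_hosts, dedicated_hosts):
    """CKS node placement rule: use the hosts dedicated to the caller's
    domain/account when any exist, otherwise fall back to any host."""
    return dedicated_hosts if dedicated_hosts else all_hosts

all_hosts = ["host-1", "host-2", "host-3"]
assert candidate_hosts(all_hosts, []) == all_hosts           # no dedication: any host
assert candidate_hosts(all_hosts, ["host-2"]) == ["host-2"]  # dedicated hosts win
```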

The primary purpose of this feature is to enable better resource management and allocation. It offers several benefits, including resource isolation, which leads to improved performance and security; predictable performance for consistent workload execution; and simplified management through easier tracking of resource allocation.

Ability to choose different Templates for CKS cluster nodes

Now, users can register custom Templates and mark them as CKS-compatible. This capability serves a crucial purpose: it allows users to pre-configure their Templates with specific resource allocations and software stacks. This means that once a CKS cluster is created, all necessary applications and software are immediately available, ensuring the Kubernetes cluster is up and running with the desired environment from the start. This flexibility is key to building truly heterogeneous clusters, where different node types can be deployed with templates optimised for their unique responsibilities.

To register a CKS-compatible template, simply mark the option ‘For CKS’ on the Template registration form:
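At the API level, this corresponds to an extra flag on the registerTemplate call. The sketch below builds an illustrative query string; the ‘forcks’ parameter name is an assumption based on the UI’s ‘For CKS’ toggle, and all other values are placeholders, so verify the exact names against the 4.21 API reference:

```python
# Sketch of a registerTemplate API call that marks the Template as
# CKS-compatible. 'forcks' is an assumed parameter name; the template
# name, URL and UUIDs below are hypothetical placeholders.
params = {
    "command": "registerTemplate",
    "name": "ubuntu-22.04-cks",                    # hypothetical name
    "displaytext": "Ubuntu 22.04 (CKS-ready)",
    "url": "http://example.com/ubuntu-cks.qcow2",  # hypothetical URL
    "format": "QCOW2",
    "hypervisor": "KVM",
    "zoneid": "<zone-uuid>",
    "ostypeid": "<os-type-uuid>",
    "forcks": "true",
}
query = "&".join(f"{k}={v}" for k, v in sorted(params.items()))
```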

[Screenshot: the ‘For CKS’ option on the Template registration form]

 

CNI Configuration for CKS clusters

CloudStack 4.21 introduces a separate, dedicated section for CNI user data configuration located under Instances -> CNI Configuration. This new interface allows users to register CNI configurations, which can then be appended during the creation of CKS clusters under the “Advanced settings” section.

The primary purpose of this feature is to allow users to pass runtime parameters (user data) to configure CNI plugins. For example, when registering a CNI configuration, users can define inputs such as ‘peer_ip_address’ and ‘peer_as_number’. When a registered CNI configuration is selected during CKS cluster deployment, these parameters can be specified at runtime, allowing for dynamic customisation of the CNI setup.
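As an illustration of such runtime substitution, the snippet below renders a hypothetical BGP-peering fragment from the two inputs named above. The YAML structure and placeholder syntax here are invented for the example; the format CloudStack actually expects may differ:

```python
from string import Template

# Hypothetical CNI user-data fragment with two runtime inputs.
# The structure and placeholder syntax are illustrative only.
cni_userdata = Template(
    "bgpPeers:\n"
    "  - peerIP: $peer_ip_address\n"
    "    asNumber: $peer_as_number\n"
)

rendered = cni_userdata.substitute(
    peer_ip_address="203.0.113.10",  # example values supplied at deploy time
    peer_as_number="64512",
)
```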

This approach offers a valuable alternative to the method introduced in CloudStack 4.20, which supported Calico and Cilium CNI plugins but often required building CKS ISOs with the relevant CNI images already bundled. By enabling CNI configuration via user data, CloudStack 4.21 streamlines the process and provides more on-the-fly customisation options for network settings in Kubernetes clusters.

For a detailed example of CNI configuration, users should consult the official documentation for CloudStack 4.21.

CKS clusters advanced settings

By default, CloudStack uses the last registered system VM Template to deploy the Kubernetes cluster nodes and uses a single Service Offering for every cluster node.

Now, when deploying new Kubernetes clusters, Users have more flexibility in node creation. The Kubernetes cluster creation form contains a new Advanced Settings section, which is disabled by default. When it is enabled, the following options are available:

  • Service Offering selection for Control node
  • Template selection for Control node
  • Service Offering selection for Worker nodes
  • Template selection for Worker nodes
  • Option to run the etcd service on dedicated nodes, separate from the Control node. In that case:
    • Service Offering selection for etcd nodes
    • Template selection for etcd nodes
  • CNI configuration selection
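Put together, an advanced createKubernetesCluster call might carry parameters along the following lines. This is a sketch only: the UUIDs are placeholders, and the parameter names for etcd nodes and CNI configuration are assumptions inferred from the UI options above, to be verified against the 4.21 API reference:

```python
# Sketch of createKubernetesCluster parameters using the advanced options.
# 'etcdnodes' and 'cniconfigurationid' are assumed parameter names; all
# UUIDs are placeholders.
params = {
    "command": "createKubernetesCluster",
    "name": "prod-cluster",                          # hypothetical name
    "kubernetesversionid": "<version-uuid>",
    "serviceofferingid": "<default-offering-uuid>",  # fallback for all node types
    "size": 3,            # worker node count
    "controlnodes": 2,
    "etcdnodes": 3,       # a non-zero value requests dedicated etcd nodes
    "cniconfigurationid": "<cni-config-uuid>",
}
```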

 

[Screenshot: the Advanced settings section of the Kubernetes cluster creation form]

For the Template selection, Users are presented with the Templates marked as ‘For CKS’. If no Template is selected for a node type, CloudStack will use the System VM Template for those nodes.

In a similar way, Users are presented with different Service Offerings per node type. If no Service Offering is selected for a node type, then the global Service Offering for the Kubernetes cluster will be used.

[Screenshot: Template and Service Offering selection per node type]

 

When Service Offerings are selected per node type, they are displayed in the left-side details view of the Kubernetes cluster.

Additionally, when scaling a Kubernetes cluster, Users can also update the Service Offerings of the individual node types:

[Screenshot: updating Service Offerings per node type when scaling a Kubernetes cluster]

Separate etcd nodes from control nodes of the Kubernetes clusters

By default, the etcd service is included on the Kubernetes cluster Control node as a pod. The service can be moved to dedicated nodes by specifying an etcd node count of at least one during Kubernetes cluster creation.

[Screenshot: specifying dedicated etcd nodes on Kubernetes cluster creation]

Note: dedicating etcd nodes requires a Kubernetes version whose ISO includes the etcd binaries. Example ISOs can be found at: https://download.cloudstack.org/testing/cks/custom_templates/iso-etcd/

 

Adding Pre-created Instances as Worker Nodes

CloudStack 4.21 introduces the capability to add and remove pre-created instances as worker nodes to an existing Kubernetes cluster, enhancing flexibility in managing CKS clusters.

The minimum requirements for a VM Instance to be added as a worker node to a Kubernetes cluster are:

  • At least 8GB ROOT disk size, 2 CPU cores and 2GB RAM
  • The VM Instance must have a NIC on the Kubernetes cluster network
  • The Management Server’s SSH public key must be added to the ‘cloud’ user’s ~/.ssh/authorized_keys file
  • The required packages must be pre-installed; the full list can be verified in the official CloudStack documentation
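A pre-flight check for these requirements could look like the sketch below. The field names are illustrative, and the package check is omitted:

```python
MIN_ROOT_DISK_GB = 8
MIN_CPU_CORES = 2
MIN_RAM_MB = 2048

def meets_node_requirements(vm):
    """Return True when a VM meets the minimum requirements to join a
    CKS cluster as a worker node (illustrative field names)."""
    return (
        vm["root_disk_gb"] >= MIN_ROOT_DISK_GB
        and vm["cpu_cores"] >= MIN_CPU_CORES
        and vm["ram_mb"] >= MIN_RAM_MB
        and vm["has_cluster_network_nic"]
    )

vm = {"root_disk_gb": 10, "cpu_cores": 2, "ram_mb": 4096, "has_cluster_network_nic": True}
assert meets_node_requirements(vm)
```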

[Screenshot: adding nodes to a Kubernetes cluster]

When adding nodes to a Kubernetes cluster, CloudStack will list all the VMs that are in the same network as the Kubernetes cluster and the user can select multiple VMs to be added as worker nodes.

Additionally, the following parameters can be set when adding nodes to a Kubernetes cluster:

  • Use CKS packages from Virtual Router: VMware only. Uses the CKS cluster network VR to mount the Kubernetes ISO instead of attaching it to the cluster nodes
  • Mark nodes for manual upgrade: Disabled by default. Marks the nodes for manual upgrade, excluding them from the Kubernetes cluster upgrade operation.

The process of adding a node to a Kubernetes cluster has the following stages:

  • Validation: the external node(s) are checked to ensure that all the above-mentioned prerequisites are met
  • Port-forwarding and firewall rules are added (for isolated networks)
  • The VM is rebooted with the Kubernetes configuration passed as user data
  • The ISO is attached either to the node or to the VR, depending on the value of ‘Use CKS packages from Virtual Router’ (VMware only)
  • The cluster enters Importing state until all the nodes are successfully added, and the number of Ready nodes is equal to the expected number of nodes to be added.
  • The process timeout is set by the setting: ‘cloud.kubernetes.cluster.add.node.timeout’.
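The Importing-state wait boils down to polling the Ready-node count with a deadline; here is a minimal, generic sketch of that loop, with the timeout value taken from the ‘cloud.kubernetes.cluster.add.node.timeout’ setting:

```python
import time

def wait_until_ready(ready_count, expected, timeout_s, poll_s=1.0):
    """Poll until the number of Ready nodes equals the expected count,
    or give up once the timeout elapses. 'ready_count' is any callable
    returning the current Ready-node count."""
    deadline = time.monotonic() + timeout_s
    while True:
        if ready_count() == expected:
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(poll_s)
```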

Similarly, previously added nodes can be removed from a Kubernetes cluster.

The removal process has the following stages:

  • On the control node, drain the specific node before it can be removed
  • Reset the corresponding worker node
  • Delete the worker node from the cluster on the control node
  • Remove the port-forwarding and firewall rules (for isolated networks) for the nodes being removed
  • The cluster enters RemovingNodes state until all the nodes are successfully removed, and the number of Ready nodes is equal to the expected number of nodes
  • The process timeout is set by the setting: ‘cloud.kubernetes.cluster.remove.node.timeout’.

Conclusion

CloudStack 4.21 extends the Kubernetes integration with a set of practical improvements to the CloudStack Kubernetes Service. These changes give operators finer control over cluster composition, resource allocation, and network configuration, while also enabling new scenarios such as adding pre-created or bare-metal nodes. By introducing options for hypervisor selection, host dedication, custom templates, flexible CNIs, dedicated etcd nodes, and node import/export, CKS clusters become easier to adapt to diverse operational requirements.

Together, these enhancements make CKS deployments in CloudStack more consistent, scalable, and aligned with production-grade Kubernetes practices.
