
As most people know, Apache CloudStack has gained a reputation as a solid, low-maintenance, dependable cloud orchestration platform. That’s why in last year’s Gartner Magic Quadrant so many leaders and challengers were organisations underpinning their services with Apache CloudStack. However, version upgrades – whilst much simpler than in many competing technologies – have long been a pain point for CloudStack operators. The irony is that upgrading CloudStack itself is usually relatively painless, but upgrading its distributed networking Virtual Routers often results in network downtime for users lasting a number of minutes, requiring user maintenance windows.

At ShapeBlue we have a vision that CloudStack based clouds – whatever their size and complexity – should be upgradable with zero downtime. No maintenance windows, no service interruptions: zero downtime. Achieving this will allow all CloudStack users and operators to benefit from the vast array of new functionality added by the CloudStack community in every release.

We set out on the journey towards zero downtime a number of months ago and have been working hard with the CloudStack community on the first steps (it is important to note that “we” includes the many people in the CloudStack community who have contributed to this work). Below, I set out the detail of what we’ve achieved so far and what we hope to achieve in the future, but if readers just want the headline: CloudStack 4.11.1 reduces network downtime during upgrades by up to 80% or more compared to CloudStack 4.9.3, and downtime is all but eliminated when using redundant VRs.

What’s the problem when upgrading?

During upgrades, CloudStack’s virtual routers (VRs) have to be restarted, and usually destroyed and recreated (this sometimes also has to be done during day-to-day operations, but is most apparent during upgrades). These restarts usually lead to downtime for users – in some cases up to several minutes. Whilst Redundant Virtual Routers can mitigate this, they have some limitations with regard to backward compatibility and are therefore not always a solution to the problem.

Downtime reductions in CloudStack 4.11.1

With the changes made in 4.11.1 (described below) we have managed to achieve significant reductions in network downtime during VR restarts. Please note these improvements will vary from one environment to another, depending on hypervisor type and version, storage backend and network bandwidth, so we suggest testing in your own environment to determine the benefits. We’ve tested with a typical VR configuration in our virtualised lab environment.

The testing setup used is as follows:

  • CloudStack 4.9.3 and 4.11.1 environments were built in parallel, using the same hypervisor versions across both tests:
    • VMware vSphere 5.5
    • KVM on CentOS7
    • XenServer 7.0
  • Environment configuration: in each test we built a simple isolated network with:
    • 10 VMs
    • 10 IP addresses
    • Firewall rules configured on all IP addresses

Downtime was measured as follows:

  • For egress traffic we measured the total amount of time an outbound ping from a hosted VM would fail during the restart process.
  • For ingress traffic we assumed a hosted service on a CloudStack VM and measured the amount of time SSH was unavailable during the restart process.
  • In all tests we carried out a “restart network with cleanup” operation and measured the above times. Note – with the new parallel VR restart process (see below) we no longer care how long the overall process takes – we are only interested in how long the network is impacted. As a result we simply measured the total time services were unavailable (note this may in some cases be the sum of multiple downtime periods).
  • Tests were repeated multiple times and the average downtime in seconds calculated for ingress and egress across tests for each hypervisor. To illustrate our best case scenarios we’ve also included the shortest measured downtime figure. (A simple measurement sketch follows this list.)
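For readers who want to reproduce these figures, the sketch below shows one way to time the ingress gap. This is a minimal illustration rather than our exact test harness, and the target IP address is a placeholder:

    #!/bin/bash
    # Count consecutive seconds for which SSH (ingress) is unreachable on a
    # guest VM while a "restart network with cleanup" runs. For egress, run
    # an equivalent ping loop from inside the VM instead.
    TARGET=203.0.113.10   # placeholder: public IP of the hosted service
    DOWN=0
    while true; do
      if timeout 1 bash -c "cat < /dev/null > /dev/tcp/$TARGET/22" 2>/dev/null; then
        [ "$DOWN" -gt 0 ] && echo "SSH unavailable for ${DOWN}s" && DOWN=0
      else
        DOWN=$((DOWN + 1))   # accumulate one downtime period
      fi
      sleep 1
    done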

Results are as follows:

Environment       ACS 4.9.3 avg    ACS 4.11.1 avg (lowest)    Reduction avg (highest)
VMware 5.5        119s             21s (12s)                  82% (90%)
KVM / CentOS7     44s              26s (9s)                   40% (80%)
XenServer 7.0     181s             33s (15s)                  82% (92%)

How these results were achieved

Existing improvements made in CloudStack 4.11

A number of changes were made in CloudStack 4.11 designed to improve VR restart performance:

  • The system VM has been upgraded from Debian 7 (init based) to Debian 9 (systemd based).
  • The patching process and boot times have been improved, and reboots after patching have been eliminated.
  • The system VM disk size has been reduced, leading to faster deployment times.
  • The VPN backend in the VR has been upgraded to strongSwan, which provides improved VPN performance.
  • The redundant VR (RVR) mechanisms have been improved.
  • The code base has been refactored, making the VR code easier to build and maintain.
  • A number of stability improvements have been made.

Changes in CloudStack 4.11.1 – Parallel VR restarts

CloudStack 4.11.1 will ship with a new feature, Parallel VR Restarts, which changes the behaviour of the “restart network with cleanup” option. In previous CloudStack versions this was a serial process: the original VR was stopped and destroyed, and only then was a new VR started. In CloudStack 4.11.1 this has been changed to a parallel process, where a “restart with cleanup” means:

  • A new VR is started in the background while the old one is still running and providing network services.
  • Once the new VR is up and has checked in to CloudStack management, the old VR is simply stopped and destroyed.
  • This is followed by a final configuration step in which the ARP caches of neighbours are updated.

With this method there is no negotiation between the old and new VR; CloudStack simply orchestrates the parallel startup of the new VR. As a result, this method has no prerequisites around the version of the original VR – meaning it can be used for VR restarts after upgrades to 4.11.1 from considerably older CloudStack versions.
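For reference, the operation we timed can be triggered from CloudMonkey as shown below (the network UUID is a placeholder):

    restart network id=<network-uuid> cleanup=true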

It is worth noting that this 4.11.1 feature does not greatly reduce the VR processing time itself – however, with the parallel startup this time no longer affects network downtime, which is now tied mainly to the final handover of network processing from the old VR to the new one.

In addition to the considerable reduction in normal VR restart downtime, this feature also introduces a much improved redundant VR restart. This comes close to eliminating network downtime when redundant VR networks are restarted, although it does require the old and new VRs to be version compatible. In our own testing, downtime for redundant VR networks was all but eliminated.

Coming in future versions

Advanced parallel restarts

The next step on the journey is to add further handshaking between the old and new VR:

  • The new VR will be started in parallel with the old one, but with some network services and/or network interfaces disabled.
  • Once the new VR is up, CloudStack management will perform an external handover from the old VR to the new one, i.e. handle VR connectivity via the hypervisor.

Fully negotiated redundant VR restarts

The last step on the journey aims for a fully negotiated redundant handover from the old VR to the new one:

  • In this final step the end goal is to make all VRs redundancy capable, which will reduce same-version restart times as well as future upgrade restart times.
  • The new VR will again be started in parallel with the old one, but will be configured with the redundancy options currently used in the RVR.
  • Once the new VR is up, the old and new VRs will internally negotiate the handover of all network connectivity and services before the old VR is shut down.

– – –

During this journey there are a number of tasks that need to be carried out – both to make the VR’s internal processing more efficient and to improve the backend network restart mechanisms:

  • General speedup of iptables rules application
  • Fixes and improvements to the DNS/DHCP configuration, eliminating repeated processing steps and cutting down on processing time
  • Further improvements to the redundant Virtual Router: VRRPv2 configuration, and/or a move to a different VR HA solution
  • A move to make all VRs redundancy capable by default
  • A move from Python 2 to Python 3
  • Consideration of a move from iptables to nftables
  • Convergence and more flexible network topologies, refactoring and merging of the VPC and non-VPC code bases

Conclusion

With the changes implemented in 4.11.1 we have already made a huge step forward in reducing network downtime as part of VR restarts – whether during day-to-day operations or as part of a CloudStack version upgrade. With downtime reduced by up to 80% and more, and average figures of around 30 seconds or less, this is a considerable improvement – and it is only the first step on the journey.

We have not yet achieved our goal of “zero downtime upgrades”, but it is worth noting that the network interruptions CloudStack now incurs during an upgrade will be shorter than the timeouts of many applications and protocols.

In the coming CloudStack versions we hope to continue this development and further reduce the figures, working towards the ultimate goal of “zero downtime upgrades”. 

About The Author

Dag Sonstebo is a Cloud Architect at ShapeBlue, The Cloud Specialists. Dag spends his time designing, implementing and automating IaaS solutions based around Apache CloudStack.

 

Introduction

ShapeBlue have been working on a new feature for Apache CloudStack 4.11.1 that allows users to bypass secondary storage with KVM. The feature introduces a new way to use templates and ISOs, allowing administrators to use them without caching them on secondary storage. With this approach CloudStack administrators no longer need to provision massive secondary storage, since it is simply bypassed – there won’t be any templates sitting there waiting. The SSVM is bypassed too, since the download task is carried out not by the SSVM but by the KVM agent itself, so administrators don’t need to set aside resources for the SSVM and can use them for commercial workloads instead. The usual process of virtual machine deployment stays as before.

Overview

This feature adds a new field to the vm_template table called ‘direct_download’. The field determines whether a template is downloaded by the SSVM (value ‘0’) or directly on the host when deploying the VM (value ‘1’). CloudStack administrators can set this field through the UI or an API call, as described in the following examples:

From the UI:

From Cloudmonkey:

register template zoneid=3e80c1e6-0710-4018-9062-194d6b3bab97 ostypeid=6f232c75-5370-11e8-afb9-06354a01076f hypervisor=KVM url=http://dl.openvm.eu/cloudstack/macchinina/x86_64/macchinina-kvm.qcow2.bz2 format=QCOW2 displaytext=TestMachina name=TestMachina directdownload=true

The same feature applies to ISOs as well – they don’t need to be cached on secondary storage but can be downloaded directly by the host. CloudStack admins have this option available on the API call when registering ISOs, and through the UI form as well.
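As an illustrative sketch (the zone and OS type IDs are reused from the template example above, and the URL is a placeholder), registering an ISO for direct download from CloudMonkey follows the same pattern:

    register iso zoneid=3e80c1e6-0710-4018-9062-194d6b3bab97 ostypeid=6f232c75-5370-11e8-afb9-06354a01076f name=TestISO displaytext=TestISO url=http://dl.openvm.eu/cloudstack/iso/test.iso directdownload=true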

Whenever a VM deployment starts, the template is downloaded to primary storage. The feature first checks whether the template/ISO has already been downloaded to the pool by consulting the template_spool_ref table. If there is an entry in the table matching the pool ID and the template ID, it won’t be downloaded again. The same applies if a running VM requires the template again (e.g. when reinstalling). Please note that, due to the direct download nature of this feature, the uniqueness of templates across primary storage pools is the responsibility of the CloudStack operator: CloudStack itself cannot detect whether the files behind a template download URL have changed.
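To illustrate the bookkeeping involved, the check against template_spool_ref looks roughly like the query below (a hypothetical sketch against the standard ‘cloud’ database; the IDs are placeholders):

    mysql -u cloud -p cloud -e "SELECT pool_id, template_id, download_state, install_path \
        FROM template_spool_ref WHERE pool_id = 1 AND template_id = 201;"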

Metalinks are also supported by this feature, and give administrators more flexibility in managing their templates, as they can set priorities and location preferences in the metalink file. Metalinks are effectively XML files that provide URLs for downloading a file. The duplicate download locations provide reliability in case one source fails, and some clients also achieve faster download speeds by fetching different chunks/segments of a file from multiple sources at the same time. Please see the following example:
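A minimal illustrative metalink (the mirror URLs are hypothetical) looks like this:

    <?xml version="1.0" encoding="UTF-8"?>
    <metalink xmlns="urn:ietf:params:xml:ns:metalink">
      <file name="macchinina-kvm.qcow2.bz2">
        <!-- a lower priority value means a more preferred source -->
        <url location="de" priority="1">http://mirror-de.example.com/macchinina-kvm.qcow2.bz2</url>
        <url location="us" priority="2">http://mirror-us.example.com/macchinina-kvm.qcow2.bz2</url>
      </file>
    </metalink>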

As the example shows, CloudStack administrators can set location preference and priority, which are taken into account upon VM deployment. The deployment logic itself introduces a retry mechanism for two failure cases: VM deployment failure and template download failure.

VM deployment retry logic: this initiates the deployment on a suitable host and tries to deploy the VM there (which includes the template download itself). If the deployment fails for some reason, it retries the deployment on another suitable host.

Template download retry logic: this is part of the VM deployment and tries to download the template/ISO directly on the host. If a download fails for some reason (e.g. the URL is not available), it iterates through the provided list of priorities and locations. Once a download completes, checksum validation is executed (if a checksum was provided); if validation fails, the file is downloaded again, up to three attempts in total. If all three attempts are unsuccessful, a deployment failure is returned and control passes back to the VM deployment logic.
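A rough sketch of that download-and-verify loop is shown below. This is illustrative only – the real logic lives in the KVM agent – and the mirror URLs, checksum and destination path are placeholders:

    #!/bin/bash
    # Try each mirror in preference order; verify the checksum; allow three attempts.
    URLS="http://mirror-de.example.com/t.qcow2.bz2 http://mirror-us.example.com/t.qcow2.bz2"
    CHECKSUM="d41d8cd98f00b204e9800998ecf8427e"   # placeholder MD5
    DEST=/var/lib/libvirt/images/t.qcow2.bz2
    for attempt in 1 2 3; do
      for url in $URLS; do
        wget -q -O "$DEST" "$url" || continue                    # URL unavailable: next mirror
        echo "$CHECKSUM  $DEST" | md5sum -c --quiet - && exit 0  # checksum OK: done
      done
    done
    exit 1   # all attempts failed: hand a failure back to the VM deployment logic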

Please see the following simplified picture of the deployment logic:

Since the download task has been delegated to the KVM agent instead of the SSVM, this feature is available only for KVM templates.

About the author

Boris Stoyanov is a Software Engineer in Testing at ShapeBlue, The Cloud Specialists. Bobby spends his time testing features for the Apache CloudStack Community and for our ShapeBlue clients.

Citrix have announced that as of XenServer 7.3, the free version (including the ‘opensource’ version packaged by Citrix) will no longer have feature parity with the paid-for Standard version. Of the features which are restricted in the free version, a maximum pool size of 3 hosts and the removal of Xen Storage Motion are going to make the use of free XenServer to run production clouds pretty much untenable.

Version 4.11 of Apache CloudStack has been released with some exciting new features and a long list of improvements and fixes. It includes more than 400 commits and 220 pull requests, and fixes more than 250 issues. This version has been worked on for 8 months and is the first of the 4.11 LTS releases, which will be supported until 1 July 2019.

We’ve been heavily involved in this release at ShapeBlue; our engineering team has contributed a number of the major new features and our own Rohit Yadav has been the 4.11 Release Manager.

As well as some really interesting new features, CloudStack 4.11 has significant performance and reliability improvements to the Virtual Router.

This is far from an exhaustive list, but here are the headline items that we think are most significant.

New Features and Improvements

  • Support for XenServer 7.1 and 7.2, and improved support for VMware 6.5.
  • Host-HA framework and HA provider for KVM hosts with NFS as primary storage, and a new background polling task manager.
  • Secure agent communication: a new certificate authority framework and a default built-in root CA provider.
  • New network type – L2.
  • CloudStack metrics exporter for Prometheus.
  • Cloudian Hyperstore connector for CloudStack.
  • Annotation feature for CloudStack entities such as hosts.
  • Separation of volume snapshot creation on primary storage and backing operation on secondary storage.
  • Limit admin access from specified CIDRs.
  • Expansion of Management IP Range.
  • Dedication of public IPs to SSVM and CPVM.
  • Support for separate subnet for SSVM and CPVM.
  • Bypass secondary storage template copy/transfer for KVM.
  • Support for multi-disk OVA template for VMware.
  • Storage overprovisioning for local storage.
  • LDAP mapping with domain scope, and mapping of LDAP group to an account.
  • Move user across accounts.
  • Support for “VSD managed” networks with Nuage Networks.
  • Extend config drive support for user data, metadata, and password (Nuage networks).
  • Nuage domain template selection per VPC and support for network migration.
  • Managed storage enhancements.
  • Support for watchdog timer to KVM Instances.
  • Support for Secondary IPv6 Addresses and Subnets.
  • IPv6 Prefix Delegation support in basic networking.
  • Ability to specify a MAC address while deploying a VM or adding a NIC to a VM.
  • VMware dvSwitch security policies configuration in network offering.
  • Allow more than 7 NICs to be added to a VMware VM.
  • Network rate usage for guest offering for VRs.
  • Usage metrics for VM snapshot on primary storage.
  • Enable Netscaler inline mode.
  • NCC integration in CloudStack.
  • The retirement of the Midonet network plugin.

UI Improvements

  • High precision of metrics percentage in the dashboard.
  • Event timeline – filter related events.
  • Navigation improvements between related entities.
  • Bulk operation support for stopping and destroying VMs (note: minor known issue where a manual refresh is required afterwards).
  • List view improvements and additional columns with state icons.

Structural Improvements

  • Embedded Jetty and improved CloudStack management server configuration.
  • Improved support for Java 8 for building artifacts/modules, packaging, and in the systemvm template.
  • New Debian 9 based systemvm template:
    • Patches system VMs without reboot, reducing VR/system VM startup time to a few tens of seconds.
    • Faster console proxy startup and service availability.
    • Improved support for redundant virtual routers, conntrackd and keepalived.
    • Improved strongSwan-provided VPN (site-to-site and remote access).
    • Packer based systemvm template generation and reduced disk size.
    • Several optimizations and improvements.

Documentation and Downloads

The official installation, administration and API documentation can be found below:
http://docs.cloudstack.apache.org 

The 4.11.0.0 release notes can be found at:
http://docs.cloudstack.apache.org/projects/cloudstack-release-notes/en/4.11.0.0 

Instructions and links for using the ShapeBlue-provided (noredist) packages repository can be found at:
https://www.shapeblue.com/packages 

The November EU user group was held in the lovely city of Leipzig in Saxony, Germany. First of all a great thanks to Sven Vogel and his team at Kupper Computing who hosted this event, which had a good turnout from the German CloudStack user community.

Sven started off the afternoon with welcome and introductions, before handing over to the first speaker, Thomas Heil from Terminal Consulting. Thomas gave a very interesting talk on the use of HashiCorp’s Terraform to build infrastructure in CloudStack. He started with an introduction to the basics of Terraform and how it is configured to work with CloudStack as the cloud backend, before braving a live demo of building a full VPC hosting a LAMP stack with front-end web server load balancing, all using Terraform. A very useful topic covered in great detail – and well done for the successful live demo. We will update this blog post with Thomas’ slide deck in due course.

Following a break – during which a number of CloudStack features and challenges were discussed over drinks and sandwiches – Dag Sonstebo gave his talk on the CloudStack usage service. This service tracks all consumption of resources in CloudStack for reporting and billing purposes. Dag went through how the service is installed and configured, then dived deeper into how it processes data from the CloudStack database into the different usage types (VMs, network usage, storage, etc.) and aggregates these into billable units or time slices in the cloud_usage database. He followed up with a number of examples of how to query and report on this usage data, before looking at general maintenance and troubleshooting of the service. All in all a useful topic showing how resource usage can be tracked back to CloudStack accounts. More information in Dag’s slide deck below – also watch out for an accompanying ShapeBlue blog post going into more detail on this topic in the coming weeks.
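To give a flavour of the kind of query demonstrated, the sketch below aggregates raw usage per usage type for one account (a hypothetical example against the standard cloud_usage schema; the account ID is a placeholder):

    mysql -u cloud -p cloud_usage -e "SELECT usage_type, SUM(raw_usage) AS total_raw_usage \
        FROM cloud_usage WHERE account_id = 42 GROUP BY usage_type;"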

Finishing off the evening was Sebastian Bretschneider from BIT Group GmbH (part of Itelligence), who gave a very interesting talk on providing end user VM build capability in CloudStack using Ansible. For a larger service provider, creating templates for every compute role the end user may want is a huge overhead, and Itelligence are trying to overcome this by using fewer base templates with an automation framework on top which builds the different compute roles, e.g. a web server, a DB server, and so on. Sebastian talked us through how they have overcome the challenges of providing Ansible connectivity into isolated user networks, and how Ansible playbooks are used on demand to automate and build end users’ infrastructure. He also discussed how the solution is being integrated into their custom CloudStack portal to provide the end user with a service catalogue for builds. As always – a very interesting talk from BIT Group, and we are looking forward to seeing more of how their solution works for their customers.

The user group meeting was finished off with an informal discussion on various CloudStack topics – especially the new features in CloudStack 4.10 and upcoming 4.11. We continued the discussion with some good German hospitality in a local pub.

All in all a very successful user group – and we are looking forward to the next one (which may be in Frankfurt in the first part of 2018 – to be confirmed). Again thanks to Sven and Kupper Computing for organising the meetup, for hosting us and providing food and drink.

Many people find it challenging to get started with CloudStack’s networking. There are some basic concepts which, although not overly complicated, are not especially obvious either. This blog will try to explain these underlying concepts, in order to make getting started with CloudStack networking models much easier.

A number of security flaws were recently found in the dnsmasq tool. This tool is used by many systems to provide DNS and DHCP services, including the CloudStack system VMs.
This advisory explains their effect on CloudStack and how to patch CloudStack against these flaws.