Tag Archive for: Cloudstack Networking

CloudStack’s multi-tenant virtualised networking model is one of its strongest features. Abstracting complex networking concepts and allowing simple UI/API configuration of networks is something users of CloudStack clouds love. But as an operator or administrator of a CloudStack cloud, you’ll almost certainly have had to troubleshoot network problems – and that means troubleshooting CloudStack’s Virtual Routers (VRs).

Turning it off and on again – or, in CloudStack language, restarting the VR – can often resolve issues with a VR. But if that doesn’t work, administrators need to troubleshoot the VR. In this article, I will discuss some of the common approaches to such troubleshooting and look at some new CloudStack features that have been added to make this process much easier for administrators.

The problem

Now, what could be wrong with our VR? The first and most obvious causes are mistakes when defining the IP range, netmask and gateway for our networks, or general human errors when building out infrastructure elements. CloudStack does provide some validation of the input you give, but it’s not guaranteed to pick up every mistake.

Then, most commonly, we could have connectivity issues. The list is endless here – from inside the router itself to the Internet, other VPCs/networks, Private Gateways and so on. Resolving such issues requires serious networking skills and time to investigate, and most of the time the cause turns out to be one tiny bit of configuration that we’ve missed or misconfigured.

Solution

To get to the bottom of these issues, it is often necessary to dig through all the configuration and run connectivity tests in and out of the machine, which can be painful and time consuming. CloudStack now offers two new features which make life easier and VR troubleshooting far less painful for the root admin. Let me introduce you to the “Run Diagnostics” and “Get Diagnostics” features. Using these two features an administrator can get valuable information from the virtual router without even logging in to it. Furthermore, they can operate from inside the router and execute scripts, commands and so on to determine what’s wrong, or even fix it.

Let’s dive into details of these features.

Run diagnostics

The new “Run Diagnostics” feature allows root administrators to execute connectivity diagnostics commands from the VR to a given target, which could be a host on the internet or some other internal element of our infrastructure that we need to make sure is reachable. This lets us trace how traffic flows in and out of the VR. The available commands are ping, arping and traceroute. Admins can execute them with any standard option and argument supported by the VR operating system (Debian). CloudStack effectively logs in to the VR, executes the exact command with the given options and parameters, and then displays the response back in the management console UI.

Here’s a simple example of a ping command being sent to google.com:

From the Infrastructure section in the GUI, we select the VR and click on Run Diagnostics:

Then select the command you want to execute, add the destination and any extra arguments, and click OK.

This comes back with the VR response:

Likewise, admins can do traceroute and arping.

There’s also a new API for this which can be used with CloudMonkey. Here’s an example of how to do that:

(localcloud) SBCM5> > run diagnostics targetid=f08d92b6-4839-4ca5-8924-bb2c59ce14c2 ipaddress=google.com type=ping params='-c 2'
{
"diagnostics": {
"exitcode": "0",
"stderr": "",
"stdout": "PING google.com (216.58.198.174): 56 data bytes\n64 bytes from 216.58.198.174: icmp_seq=0 ttl=50 time=7.664 ms\n64 bytes from 216.58.198.174: icmp_seq=1 ttl=50 time=7.645 ms\n--- google.com ping statistics ---\n2 packets transmitted, 2 packets received, 0% packet loss\nround-trip min/avg/max/stddev = 7.645/7.654/7.664/0.000 ms"
}
}

…where targetid is the ID of the VR under test.
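
As a further illustration, here’s a hypothetical traceroute run via the same API – the target address and the '-m 10' maximum hop count are purely illustrative:

(localcloud) SBCM5> > run diagnostics targetid=f08d92b6-4839-4ca5-8924-bb2c59ce14c2 ipaddress=8.8.8.8 type=traceroute params='-m 10'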

Run Diagnostics is available as of the 4.12 release and is hypervisor agnostic.

Get Diagnostics Data

This feature, intended for root administrators, provides a way to retrieve any file from system VMs if the file path is known and specified as input. By default, the API gathers logs and configuration/property files and sends them as a compressed tarball to a secondary storage pool within the same zone as the target system VM. On successful file retrieval a download URL is returned to the operator, allowing them to download the archive to their local machine.

The API can be executed against all three types of system VMs, each of which has a separate default list of files and configurations it will gather. Here’s a list of the defaults for each system VM type:

• VR – ‘diagnostics.data.vr.defaults’ global setting:

“[IPTABLES], [IFCONFIG], [ROUTE], /etc/dnsmasq.conf, /etc/resolv.conf, /etc/haproxy.conf, /etc/hosts.conf, /etc/dnsmasq-resolv.conf, /var/log/cloud.log, /var/log/routerServiceMonitor.log, /var/log/dnsmasq.log”

• CPVM – ‘diagnostics.data.cpvm.defaults’ global setting

“[IPTABLES], [IFCONFIG], [ROUTE], /usr/local/cloud/systemvm/conf/agent.properties, /usr/local/cloud/systemvm/conf/consoleproxy.properties, /var/log/cloud.log”

• SSVM – ‘diagnostics.data.ssvm.defaults’ global setting

“[IPTABLES], [IFCONFIG], [ROUTE], /usr/local/cloud/systemvm/conf/agent.properties, /usr/local/cloud/systemvm/conf/consoleproxy.properties, /var/log/cloud.log”

Please note that one can change the defaults to include custom files/scripts. To get the defaults from a system VM the admin simply calls the API, giving only a target. To call a custom script the root administrator also has to make sure the script is present in the /usr/bin directory on the system VM and is executable. Once there, its name needs to be passed in square brackets, like this: [script]. The parameter also accepts a comma-separated list of values, as shown in the example below.
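
For example, assuming a hypothetical script called vrhealth.sh has been placed in /usr/bin on the VR, it can be requested together with an additional file in a single call (reusing the VR ID from the CloudMonkey example below):

(localcloud) SBCM5> > get diagnosticsdata targetid=1ce6de39-b4ed-412f-aced-3a421924c477 files=[vrhealth.sh],/etc/resolv.conf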

Here’s where to find it in the CloudStack Console:

Pick a VR and expand the quick view options, then you’ll be able to see ‘Get Diagnostics Data’ button and click on it:

After that, a pop-up appears, taking one argument, ‘Files’. Leave it blank to get the defaults, or fill in absolute paths to files or commands in [brackets] as custom values.
Defaults:

or a command:

Once executed, the SSVM will gather all the content into an archive and you’ll be given a URL to download it from:

And here are the contents of the default tar archive for a Virtual Router:

Here’s an example of how to use it by calling the API directly from CloudMonkey:

(localcloud) SBCM5> > get diagnosticsdata targetid=1ce6de39-b4ed-412f-aced-3a421924c477 files=[ifconfig]
{
"diagnostics": {
"url": "https://10-1-36-2.sbcloud.uk/userdata/0ff2e4ae-8b55-49a0-815e-22185e45a7d1.tar"
}
}

The following global settings were introduced with the Get Diagnostics feature. They let you control and configure the feature, and specifically how it uses the secondary storage of your datacenter. It’s good practice to keep the garbage collector enabled and running so you don’t end up with secondary storage filled with diagnostics archives. Below is the list of settings that the admin can use.

• diagnostics.data.gc.enable – Enables the garbage collector background task that deletes old files from secondary storage. Requires a management server restart. Default: true
• diagnostics.data.gc.interval – The interval, in seconds, at which the garbage collector background task runs. Requires a management server restart. Default: 86400 (once a day)
• diagnostics.data.retrieval.timeout – Overall system VM script execution timeout in seconds. Does not require a management server restart. Default: 3600
• diagnostics.data.max.file.age – The maximum time, in seconds, a file can stay in secondary storage before it is deleted. Default: 86400 (1 day)
• diagnostics.data.disable.threshold – The secondary storage disk utilisation threshold used when looking for a suitable secondary storage with enough space for file retrieval; an exception is thrown when no suitable secondary store is found. Default: 0.95 (95%)
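
These are ordinary global settings, so they can be changed from the UI or via the updateConfiguration API. As a sketch, extending the file retention period to two days from CloudMonkey (the value is only an example) could look like this:

(localcloud) SBCM5> > update configuration name=diagnostics.data.max.file.age value=172800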

Conclusion

Get Diagnostics (cloudstack-#3350) has been submitted against master and the 4.13 milestone, so hopefully it will make it into the next LTS release. Run Diagnostics (cloudstack-#2833) has been merged as of the 4.12 release. Both are available from the UI and the API, independent of the hypervisor used in CloudStack. They are handy, neat, and can save tons of time accessing VRs and finding what you need from them. Furthermore, they can be used to monitor and automate some of the processes on the VR if required.

About the author

Boris Stoyanov is a Software Engineer in testing at ShapeBlue, the Cloud Specialists. Bobby spends his time testing features for the Apache CloudStack community and for ShapeBlue clients.

Blog by Ivet Petrova, StorPool.

On June 13th, StorPool had the honour and privilege to host and organize the European Cloud Infrastructure and CloudStack User Group together with its partner ShapeBlue. The event was a get-together of local IT infrastructure experts and CloudStack users. The main focus was on talks presenting best practices and useful information on how to build an efficient public or private infrastructure. In addition, some of the world’s leading experts and contributors to the open-source Apache CloudStack project presented its latest functionality and updates.

What is CloudStack? Key features and use cases

CloudStack is a scalable cloud orchestration platform for delivering turnkey infrastructure-as-a-service clouds. As it is relatively easy to deploy and manage, it attracts the attention of people considering which cloud management system to use. Firstly, its architecture is highly scalable and reliable: the largest known production CloudStack cloud reached approximately 35,000 physical hosts and ran smoothly. Secondly, CloudStack is hypervisor agnostic – it supports KVM, Xen, VMware, Hyper-V, OVM and more. It also presents a REST API and is used for cloud infrastructure as a service, containers as a service, and many more use cases in which enterprises need a reliable solution to manage complex infrastructure and virtualization.

CloudStack supports different storage options, and StorPool has its own CloudStack integration. You can read more about the story of building StorPool’s CloudStack integration here.

CloudStack Market Growth

The European Cloud Infrastructure and CloudStack User Day started with a keynote session by Giles Sirett, CEO of ShapeBlue and a widely recognized contributor to the Apache project. Giles talked us through the history of CloudStack, its main advantages, and the value it can bring to companies. After that, he gave an overview of interesting use cases and shared information about releases and user communities. According to him, the most significant value of CloudStack is that it is a user-driven project and community, which makes it vibrant and rapidly developed. In conclusion, Giles also shared that CloudStack adoption is growing quickly and that it is now used by some of the biggest companies globally.

Achieving the ultimate performance with KVM

Next to the stage was Boyan Krosnov, CPO of StorPool. In his session, he discussed a private cloud setup with KVM achieving 1M IOPS per hyper-converged (storage + compute) node, and answered the question: what is the optimum architecture and configuration for performance and efficiency? His session was a deep technical dive into the ways of building an efficient and high-performance cloud infrastructure. Furthermore, Boyan explained why performance matters and how many companies do not even realise they are struggling with performance issues – until the moment their customers notify them of it.

During the presentation, the CPO of StorPool covered essential aspects of building cloud infrastructure, part of which were:

  • why the same hardware can bring you 10 times better performance than before
  • how hardware, compute and networking affect the performance
  • tips and tricks for getting ultimate KVM performance
  • …and many more

Boyan advised all participants in the event to pay attention to their cloud performance, apply possible optimizations to accelerate it, and monitor it closely.

CloudStack: A Service Manager’s Perspective

After a short break, we welcomed Maria Barta from Itelligence Global Managed Services GmbH. Maria presented a different perspective on CloudStack – “A Service Manager’s Perspective”. Agile business processes are becoming increasingly important in successful IT services, and Itelligence GmbH provides many different ultra-flexible and highly adaptable cloud solutions. To ensure customer/user satisfaction (i.e. availability, data security and product transparency) and simultaneously facilitate effective agile product development within their team, the role of the service manager is steadily evolving. In conclusion, the talk provided an insight into the benefits and limitations of CloudStack in relation to the service manager’s objectives, and Maria’s attempt to overcome these in her specific internal IaaS solution.

What’s new in CloudStack 4.13

Paul Angus, VP Technology at ShapeBlue and current VP of CloudStack, was one of the most eagerly awaited speakers at the event, mainly because he is the most experienced person in the community, with exceptional knowledge of CloudStack. His session focused on the new release of CloudStack: the 4.13 version is due for release this summer. With hundreds of updates and new features, Paul went through the user features, and also talked about operator features and integrations, demonstrating just how much work and development is going into CloudStack.

Paul also shared that version 4.14 will most probably arrive at the end of 2019 / beginning of 2020. He enjoyed great attention from the European CloudStack community and managed to give valuable advice to admins dealing with complex cloud issues.

Challenges with high-density networks 

Last but not least, Marian Marinov from the SiteGround web hosting company shared his experience with the problems of managing high-density networks. In cloud environments, people consider the network a given and almost limitless resource: you get an interface and you are told its bandwidth capacity. From the perspective of the client this is true, but from the perspective of the provider it is far from the truth. In his talk, Marian looked at some datacenter network designs and at the technologies and protocols used to battle the problems of high-density clouds. All participants in the event had the chance to learn about VXLAN and “L3” switching.

After the final official talk, we organized a great networking event between the speakers and the event attendees – one more opportunity to learn new things about cloud infrastructure and about building a cloud with StorPool and CloudStack.

For StorPool’s team, it was a pleasure to host and co-organize the event and to mark the beginning of a new CloudStack community in Bulgaria.

Our presenters’ slides can be found here:

Giles Sirett – CloudStack EU User Group 13 june 2019 – Sofia

Boyan Krosnov – Achieving the ultimate performance with KVM

Maria Barta – CS Day Sofia_ CS – A service manager perspective_20190613

Paul Angus – CSEUG19-What’s coming in CloudStack

Marian Marinov – Challenges with high-density networks

 

There was a definite feel of Christmas in the air in London as we made our way to last Thursday’s (December 13) winter meetup of the CloudStack European User Group (CSEUG), and that only increased as we arrived at the BT Centre near St. Paul’s and saw the big Christmas tree in reception!

A great turnout for this, the last meetup of 2018, and a great representation of the CloudStack community in Europe with people travelling from Germany, Serbia, Glasgow, Switzerland and Latvia to name but a few. After a quick lunch we took our seats, and Giles Sirett (chairman of the user group) welcomed everyone and got the event started with introductions and CloudStack news.

Firstly, Giles spoke about software updates and new releases. CloudStack 4.11 is an LTS (long term support) release and included more than 250 new capabilities and a big step towards zero downtime upgrades, 4.11.2 has just been released (including 71 fixes), 4.11.3 is coming soon and 4.12 is in planning. Giles then mentioned CloudStack events starting with the recent CloudStack Collaboration Conference in September (Montreal), and events for 2019 – the next CSEUG in March (London), and the next Collaboration Conference in September (Las Vegas). During Giles’ presentation, Maurice Nettisheim (Head of Cloud Compute for BT) took to the stage to say a few words about BT’s ongoing use of CloudStack in their IaaS platform and their continued support and involvement in the CloudStack community.

Giles’ slides contain much more information:

After Giles, Paul Angus gave us an update on ShapeBlue’s CloudStack Container Service (CCS), giving us a walkthrough of the recently released update. This update brings CCS bang up to date by running the latest version of Kubernetes (v1.11.3) on the latest version of Container Linux. CCS also now makes use of CloudStack’s new CA framework to automatically secure the Kubernetes environments it creates. Paul’s talks and slides are always packed with detail:

Olivier Lambert of XCP-ng & Xen Orchestra took the floor next to tell us about the current state of the project. For those who are not familiar, XCP-ng is an open-source, community-powered hypervisor based on Xen. It is easy to upgrade from XenServer (keeping all VMs, settings etc.), 100% API compatible, requires no license and has no feature restrictions.

Please take a look through Olivier’s slides for much more on this fascinating subject:

After a short break, we welcomed Ingo Jochim and Andre Walter (itelligence) with their talk entitled ‘How our cloud works’. They talked through full automation with Ansible for all infrastructure components of the cloud with CloudStack, check_mk, LDAP and more, with all functionality available through a customer portal, also covering how the setup is fully scalable for larger landscapes.
Ingo and Andre’s slides right here:

Next up was Adam Dagnall (Cloudian) with ‘Advanced S3 compatible storage integration in CloudStack’. To provide tighter integration between the S3 compatible object store and CloudStack, Cloudian has developed a connector to allow users and their applications to utilize the object store directly from within the CloudStack platform in a single sign-on manner with self-service provisioning. Additionally, CloudStack templates and snapshots are centrally stored within the object store and managed through the CloudStack service. The object store offers protection of these templates and snapshots across data centres using replication or erasure coding. Adam went into the feature-set in great detail, and his slides provide much more information:

Last talk of the day, and the honours fell to Andrija Panic (Hiag Data) with ‘CloudStack – 5 years in production’. Andrija shared real world experience of designing, deploying and managing a CloudStack public cloud, explaining how high availability for the CloudStack management components was implemented and discussing the different storage technologies and networking models used, as well as the challenges faced. Andrija also presented alternative methods for deploying CloudStack with regard to regions / zones / pods, and touched on physical networking, finally looking at the different CloudStack guest networking models available (from Basic Zone / Shared Networks to all the Advanced Zone’s networking models) and when to use each of them.
Andrija went into a lot of detail and I encourage you to look through his slides:

After Andrija had finished answering questions, Giles wrapped things up and we moved to a local pub, where I am pleased to say that conversation and collaboration continued into the night, with what rapidly became the unofficial ‘CloudStack Christmas Party’! Huge thanks to BT for providing a first-rate venue and lunch, and to all our speakers, who make these events so interesting and such a success.

The next CloudStack User Group meetup will be on Thursday, March 14, and will be hosted by our friends at Ticketmaster here in London. Please register here!

All the talks were recorded and will be made available shortly on the ShapeBlue YouTube channel.

Introduction

This blog describes a new feature to be introduced in the CloudStack 4.12 release (already in the current master branch of the CloudStack repository). This feature will provide support for the Data Plane Development Kit (DPDK) in conjunction with Open vSwitch (OVS) for guest VMs and is targeted at the KVM hypervisor.

The Data Plane Development Kit (https://www.dpdk.org/) is a set of libraries and NIC drivers for fast packet processing in userspace. Using DPDK along with OVS brings benefits to networking performance on VMs and networking appliances. In this blog, we will introduce how DPDK can be used on guest VMs once the feature is released.

Please note – DPDK support in CloudStack requires that the KVM hypervisor is running on DPDK compatible hardware.

Enable DPDK support

This feature extends the Open vSwitch feature in CloudStack with DPDK integration. As a prerequisite, Open vSwitch needs to be installed on KVM hosts and enabled in CloudStack. In addition, administrators need to install DPDK libraries on KVM hosts before configuring the CloudStack agents, and I will go into the configuration in detail.

KVM Agent Configuration

An administrator can follow this guide to enable DPDK on a KVM host:

Prerequisites

  • Install OVS on the target KVM host
  • Configure the CloudStack agent by editing the /etc/cloudstack/agent/agent.properties file:
    • network.bridge.type=openvswitch
      libvirt.vif.driver=com.cloud.hypervisor.kvm.resource.OvsVifDriver
  • Install DPDK. The installation guide can be found at: http://docs.openvswitch.org/en/latest/intro/install/dpdk/.

Configuration

Edit the /etc/cloudstack/agent/agent.properties file, where <OVS_PATH> is the path in which your OVS ports are created, typically /var/run/openvswitch/:

  • openvswitch.dpdk.enable=true
    openvswitch.dpdk.ovs.path=<OVS_PATH>

Restart CloudStack agent so that changes take effect:

# systemctl restart cloudstack-agent

DPDK inside guest VMs

Now that CloudStack agents have been configured, users are able to deploy their guest VMs using DPDK. In order to achieve this, they will need to pass extra configurations to enable DPDK:

  • Enable “HugePages” on the VM
  • NUMA node configuration

As of 4.12, passing extra configuration to VM deployments is allowed. In the case of KVM, the extra configuration is added to the VM XML domain. The CloudStack API methods deployVirtualMachine and updateVirtualMachine will support the new optional parameter extraconfig and will work in the following way:

 
# deploy virtualmachine ... extraconfig=<URL_UTF-8_ENCODED_CONFIGS>

CloudStack expects a URL UTF-8 encoded string which can contain multiple extra configurations. For example, if a user wants to enable DPDK, they will need to pass the two extra configurations mentioned above. An example of both configurations is the following:

 
dpdk-hugepages:
<memoryBacking> 
   <hugePages/> 
</memoryBacking> 

dpdk-numa: 
<cpu mode='host-passthrough'>
   <numa>
      <cell id='0' cpus='0' memory='9437184' unit='KiB' memAccess='shared'/>
   </numa> 
</cpu>

…which becomes this URL UTF-8 encoded string, and is the one that CloudStack will expect on VM deployments:

 
dpdk-hugepages%3A%20%3CmemoryBacking%3E%20%3ChugePages%2F%3E%20%3C%2FmemoryBacking%3E%20dpdk-numa%3A%20%3Ccpu%20mode%3D%22host-passthrough%22%3E%20%3Cnuma%3E%20%3Ccell%20id%3D%220%22%20cpus%3D%220%22%20memory%3D%229437184%22%20unit%3D%22KiB%22%20memAccess%3D%22shared%22%2F%3E%20%3C%2Fnuma%3E%20%3C%2Fcpu%3E
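
The encoding itself can be produced in many ways. One possible approach – assuming the two configuration snippets above have been saved to a local file called dpdk-extraconfig.txt – is a Python one-liner run from the shell; the ' '.join(...split()) part simply collapses the whitespace so the output matches the single-line format shown above:

# python3 -c "import urllib.parse; print(urllib.parse.quote(' '.join(open('dpdk-extraconfig.txt').read().split()), safe=''))"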

KVM networking verification

Administrators can verify how OVS ports are created with DPDK support on DPDK enabled hosts on which users have deployed DPDK enabled guest VMs. These port names start with “csdpdk”:

 

# ovs-vsctl show
....
Port "csdpdk-1"
   tag: 30
   Interface "csdpdk-1"
      type: dpdkvhostuser
Port "csdpdk-4"
   tag: 30
   Interface "csdpdk-4"
      type: dpdkvhostuser

About the author

Nicolas Vazquez is a Senior Software Engineer at ShapeBlue, the Cloud Specialists, and is a committer in the Apache CloudStack project. Nicolas spends his time designing and implementing features in Apache CloudStack.

Introduction

We published the original blog post on KVM networking in 2016 – but in the meantime we have moved on a generation in CentOS and Ubuntu operating systems, and some of the original information is therefore out of date. In this revisit of the original blog post we cover new configuration options for CentOS 7.x as well as Ubuntu 18.04, both of which are now supported hypervisor operating systems in CloudStack 4.11. Ubuntu 18.04 has replaced the legacy networking model with the new Netplan implementation, and this means different configuration for both linux bridge setups and OpenVswitch.

KVM hypervisor networking for CloudStack can sometimes be a challenge, considering KVM doesn’t quite have the same mature guest networking model found in the likes of VMware vSphere and Citrix XenServer. In this blog post we look at the options for networking KVM hosts using bridges and VLANs, and dive a bit deeper into the configuration for these options. Installation of the hypervisor and CloudStack agent is pretty well covered in the CloudStack installation guide, so we’ll not spend too much time on this.

Network bridges

On a linux KVM host guest networking is accomplished using network bridges. These are similar to vSwitches on a VMware ESXi host or networks on a XenServer host (in fact networking on a XenServer host is also accomplished using bridges).

A KVM network bridge is a Layer-2 software device which allows traffic to be forwarded between ports internally on the bridge and the physical network uplinks. The traffic flow is controlled by MAC address tables maintained by the bridge itself, which determine which hosts are connected to which bridge port. The bridges allow for traffic segregation using traditional Layer-2 VLANs as well as SDN Layer-3 overlay networks.
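
If you ever need to see which MAC addresses a bridge has learned on which port, the bridge MAC table can be dumped with brctl (the bridge name here is just an example):

# brctl showmacs cloudbr0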


Linux bridges vs OpenVswitch

The bridging on a KVM host can be accomplished using traditional linux bridge networking or by adopting the OpenVswitch back end. Traditional linux bridges have been implemented in the linux kernel since version 2.2, and have been maintained through the 2.x and 3.x kernels. Linux bridges provide all the basic Layer-2 networking required for a KVM hypervisor back end, but they lack some automation options and are configured on a per-host basis.

OpenVswitch was developed to address this, and provides additional automation in addition to new networking capabilities like Software Defined Networking (SDN). OpenVswitch allows for centralised control and distribution across physical hypervisor hosts, similar to distributed vSwitches in VMware vSphere. Distributed switch control does require additional controller infrastructure like OpenDaylight, Nicira, VMware NSX, etc. – which we won’t cover in this article as it’s not a requirement for CloudStack.

It is also worth noting Citrix started using the OpenVswitch backend in XenServer 6.0.

Network configuration overview

For this example we will configure the following networking model, assuming a linux host with four network interfaces which are bonded for resilience. We also assume all switch ports are trunk ports:

  • Network interfaces eth0 + eth1 are bonded as bond0.
  • Network interfaces eth2 + eth3 are bonded as bond1.
  • Bond0 provides the physical uplink for the bridge “cloudbr0”. This bridge carries the untagged host network interface / IP address, and will also be used for the VLAN tagged guest networks.
  • Bond1 provides the physical uplink for the bridge “cloudbr1”. This bridge handles the VLAN tagged public traffic.

The CloudStack zone networks will then be configured as follows:

  • Management and guest traffic is configured to use KVM traffic label “cloudbr0”.
  • Public traffic is configured to use KVM traffic label “cloudbr1”.

In addition to the above it’s important to remember CloudStack itself requires internal connectivity from the hypervisor host to system VMs (Virtual Routers, SSVM and CPVM) over the link local 169.254.0.0/16 subnet. This is done over a host-only bridge “cloud0”, which is created by CloudStack when the host is added to a CloudStack zone.

 


Linux bridge configuration – CentOS

In the following CentOS example we have changed the NIC naming convention back to the legacy “eth0” format rather than the new “eno16777728” format. This is a personal preference – and is generally done to make automation of configuration settings easier. The configuration suggested throughout this blog post can also be implemented using the new NIC naming format.

Across all CentOS versions the “NetworkManager” service is also generally disabled, since this has been found to complicate KVM network configuration and cause unwanted behaviour:

 
# systemctl stop NetworkManager
# systemctl disable NetworkManager

To enable bonding and bridging CentOS 7.x requires the modules installed / loaded:

 
# modprobe --first-time bonding
# yum -y install bridge-utils

If IPv6 isn’t required we also add the following lines to /etc/sysctl.conf:

net.ipv6.conf.all.disable_ipv6 = 1 
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
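
The new values take effect at boot; to apply them immediately without a reboot, the sysctl configuration can be reloaded:

# sysctl -p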

In CentOS the linux bridge configuration is done with configuration files in /etc/sysconfig/network-scripts/. Each of the four individual NIC interfaces is configured as follows (eth0 / eth1 / eth2 / eth3 are all configured the same way). Note there is no IP configuration against the NICs themselves – these purely point to the respective bonds:

# vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
NAME=eth0
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
HWADDR=00:0C:12:xx:xx:xx
NM_CONTROLLED=no

The bond configurations are specified in the equivalent ifcfg-bond scripts and specify bonding options as well as the upstream bridge name. In this case we’re just setting a basic active-passive bond (mode=1) with up/down delays of zero and status monitoring every 100ms (miimon=100). Note there are a multitude of bonding options – please refer to the CentOS / RedHat official documentation to tune these to your specific use case.

# vi /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
NAME=bond0
TYPE=Bond
BRIDGE=cloudbr0
ONBOOT=yes
NM_CONTROLLED=no
BONDING_OPTS="mode=active-backup miimon=100 updelay=0 downdelay=0"

The same goes for bond1:

# vi /etc/sysconfig/network-scripts/ifcfg-bond1
DEVICE=bond1
NAME=bond1
TYPE=Bond
BRIDGE=cloudbr1
ONBOOT=yes
NM_CONTROLLED=no
BONDING_OPTS="mode=active-backup miimon=100 updelay=0 downdelay=0"

Cloudbr0 is configured in the ifcfg-cloudbr0 script. In addition to the bridge configuration we also specify the host IP address, which is tied directly to the bridge since it is on an untagged VLAN:

# vi /etc/sysconfig/network-scripts/ifcfg-cloudbr0
DEVICE=cloudbr0
ONBOOT=yes
TYPE=Bridge
IPADDR=192.168.100.20
NETMASK=255.255.255.0
GATEWAY=192.168.100.1
NM_CONTROLLED=no
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=no
DELAY=0

Cloudbr1 does not have an IP address configured hence the configuration is simpler:

# vi /etc/sysconfig/network-scripts/ifcfg-cloudbr1
DEVICE=cloudbr1
ONBOOT=yes
TYPE=Bridge
BOOTPROTO=none
NM_CONTROLLED=no
DELAY=0
DEFROUTE=no
IPV4_FAILURE_FATAL=no
IPV6INIT=no

Optional tagged interface for storage traffic

If a dedicated VLAN tagged IP interface is required for e.g. storage traffic, this can be accomplished by creating a VLAN on top of the bond and tying this to a dedicated bridge. In this case we create a new bridge on bond0 using VLAN 100:

# vi /etc/sysconfig/network-scripts/ifcfg-bond0.100
DEVICE=bond0.100
VLAN=yes
BOOTPROTO=none
ONBOOT=yes
TYPE=Unknown
BRIDGE=cloudbr100

The bridge can now be configured with the desired IP address for storage connectivity:

# vi /etc/sysconfig/network-scripts/ifcfg-cloudbr100
DEVICE=cloudbr100
ONBOOT=yes
TYPE=Bridge
VLAN=yes
IPADDR=10.0.100.20
NETMASK=255.255.255.0
NM_CONTROLLED=no
DELAY=0

Internal bridge cloud0

When using linux bridge networking there is no requirement to configure the internal “cloud0” bridge, this is all handled by CloudStack.

Network startup

Note – once all network startup scripts are in place and the network service is restarted you may lose connectivity to the host if there are any configuration errors in the files, hence make sure you have console access to rectify any issues.

To make the configuration live restart the network service:

# systemctl restart network

To check the bridges use the brctl command:

# brctl show
bridge name bridge id STP enabled interfaces
cloudbr0 8000.000c29b55932 no bond0
cloudbr1 8000.000c29b45956 no bond1

The bonds can be checked with:

# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:xx:xx:xx:xx
Slave queue ID: 0

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:xx:xx:xx:xx
Slave queue ID: 0

Linux bridge configuration – Ubuntu

With the 18.04 “Bionic Beaver” release Ubuntu have retired the legacy way of configuring networking through /etc/network/interfaces in favour of Netplan – https://netplan.io/reference. This changes how networking is configured – although the principles around bridge configuration are the same as in previous Ubuntu versions.

First of all ensure correct hostname and FQDN are set in /etc/hostname and /etc/hosts respectively.

To stop network bridge traffic from traversing IPtables / ARPtables on the host, add the following lines to /etc/sysctl.conf:

# vi /etc/sysctl.conf
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0

Ubuntu 18.04 installs the “bridge-utils” and bridge/bonding kernel options by default, and the corresponding modules are also loaded by default, hence there are no requirements to add anything to /etc/modules.

In Ubuntu 18.04 all interface, bond and bridge configuration is done using cloud-init and the Netplan configuration in /etc/netplan/XX-cloud-init.yaml. As for CentOS, we are configuring basic active-passive bonds (mode=1) with status monitoring every 100ms (miimon=100), and configuring bridges on top of these. As before, the host IP address is tied to cloudbr0:

# vi /etc/netplan/50-cloud-init.yaml
network:
    ethernets:
        eth0:
            dhcp4: no
        eth1:
            dhcp4: no
        eth2:
            dhcp4: no
        eth3:
            dhcp4: no
    bonds:
        bond0:
            dhcp4: no
            interfaces:
                - eth0
                - eth1
            parameters:
                mode: active-backup
                primary: eth0
        bond1:
            dhcp4: no
            interfaces:
                - eth2
                - eth3
            parameters:
                mode: active-backup
                primary: eth2
    bridges:
        cloudbr0:
            addresses:
                - 192.168.100.20/24
            gateway4: 192.168.100.1
            nameservers:
                search: [mycloud.local]
                addresses: [192.168.100.5,192.168.100.6]
            interfaces:
                - bond0
        cloudbr1:
            dhcp4: no
            interfaces:
                - bond1
    version: 2

Optional tagged interface for storage traffic

To add an optional VLAN tagged interface for storage traffic, add a VLAN and a new bridge to the above configuration:

# vi /etc/netplan/50-cloud-init.yaml
    vlans:
        bond100:
            id: 100
            link: bond0
            dhcp4: no
    bridges:
        cloudbr100:
            addresses:
               - 10.0.100.20/24
            interfaces:
               - bond100

Internal bridge cloud0

When using linux bridge networking the internal “cloud0” bridge is again handled by CloudStack, i.e. there’s no need for specific configuration to be specified for this.

Network startup

Note – once all network startup scripts are in place and the network service is restarted you may lose connectivity to the host if there are any configuration errors in the files, hence make sure you have console access to rectify any issues.

To apply the configuration, reload Netplan with:

# netplan apply

To check the bridges use the brctl command:

# brctl show
bridge name	bridge id		STP enabled	interfaces
cloud0		8000.000000000000	no
cloudbr0	8000.52664b74c6a7	no		bond0
cloudbr1	8000.2e13dfd92f96	no		bond1
cloudbr100	8000.02684d6541db	no		bond100

To check the VLANs and bonds:

# cat /proc/net/vlan/config
VLAN Dev name | VLAN ID
Name-Type: VLAN_NAME_TYPE_RAW_PLUS_VID_NO_PAD
bond100 | 100 | bond0
# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth1
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 10
Permanent HW addr: 00:0c:xx:xx:xx:xx
Slave queue ID: 0

Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 10
Permanent HW addr: 00:0c:xx:xx:xx:xx
Slave queue ID: 0

 

OpenVswitch bridge configuration – CentOS

The OpenVswitch version in the standard CentOS repositories is relatively old (version 2.0). To install a newer version, either locate and install it from a third-party CentOS/Fedora/RedHat repository, or download and compile the packages from the OVS website http://www.openvswitch.org/download/ (notes on how to compile the packages can be found at http://docs.openvswitch.org/en/latest/intro/install/fedora/).

Once packages are available install and enable OVS with

# yum localinstall openvswitch-<version>.rpm
# systemctl start openvswitch
# systemctl enable openvswitch

In addition to this the bridge module should be blacklisted. Experience has shown that even blacklisting this module does not prevent it from being loaded. To force this, set the module install command to /bin/false. Please note the CloudStack agent install depends on the bridge module being in place, hence this step should be carried out after the agent install.

echo "install bridge /bin/false" > /etc/modprobe.d/bridge-blacklist.conf

As with linux bridging above, the following examples assume IPv6 has been disabled and legacy ethX network interface names are used. In addition the hostname has been set in /etc/sysconfig/network and /etc/hosts.

Add the initial OVS bridges using the ovs-vsctl toolset:

# ovs-vsctl add-br cloudbr0
# ovs-vsctl add-br cloudbr1
# ovs-vsctl add-bond cloudbr0 bond0 eth0 eth1
# ovs-vsctl add-bond cloudbr1 bond1 eth2 eth3

This will configure the bridges in the OVS database, but the settings will not be persistent. To make the settings persistent we need to configure the network configuration scripts in /etc/sysconfig/network-scripts/, similar to when using linux bridges.

Each individual network interface has a generic configuration – note there is no reference to bonds at this stage. The following ifcfg-eth script applies to all interfaces:

# vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
BOOTPROTO=none
NAME=eth0
ONBOOT=yes
NM_CONTROLLED=no
HOTPLUG=no
HWADDR=00:0C:xx:xx:xx:xx

The bonds reference the interfaces as well as the upstream bridge. In addition the bond configuration specifies the OVS specific settings for the bond (active-backup, no LACP, 100ms status monitoring):

# vi /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
ONBOOT=yes
DEVICETYPE=ovs
TYPE=OVSBond
OVS_BRIDGE=cloudbr0
BOOTPROTO=none
BOND_IFACES="eth0 eth1"
OVS_OPTIONS="bond_mode=active-backup lacp=off other_config:bond-detect-mode=miimon other_config:bond-miimon-interval=100"
HOTPLUG=no
# vi /etc/sysconfig/network-scripts/ifcfg-bond1
DEVICE=bond1
ONBOOT=yes
DEVICETYPE=ovs
TYPE=OVSBond
OVS_BRIDGE=cloudbr1
BOOTPROTO=none
BOND_IFACES="eth2 eth3"
OVS_OPTIONS="bond_mode=active-backup lacp=off other_config:bond-detect-mode=miimon other_config:bond-miimon-interval=100"
HOTPLUG=no

The bridges are now configured as follows. The host IP address is specified on the untagged cloudbr0 bridge:

# vi /etc/sysconfig/network-scripts/ifcfg-cloudbr0
DEVICE=cloudbr0
ONBOOT=yes
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
IPADDR=192.168.100.20
NETMASK=255.255.255.0
GATEWAY=192.168.100.1
HOTPLUG=no

Cloudbr1 is configured without an IP address:

# vi /etc/sysconfig/network-scripts/ifcfg-cloudbr1
DEVICE=cloudbr1
ONBOOT=yes
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=none
HOTPLUG=no

Internal bridge cloud0

Under CentOS 7.x and CloudStack 4.11 the cloud0 bridge is automatically configured, hence no additional configuration steps are required.

Optional tagged interface for storage traffic

If a dedicated VLAN tagged IP interface is required for e.g. storage traffic this is accomplished by creating a VLAN tagged fake bridge on top of one of the cloud bridges. In this case we add it to cloudbr0 with VLAN 100:

# ovs-vsctl add-br cloudbr100 cloudbr0 100
# vi /etc/sysconfig/network-scripts/ifcfg-cloudbr100
DEVICE=cloudbr100
ONBOOT=yes
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
IPADDR=10.0.100.20
NETMASK=255.255.255.0
OVS_OPTIONS="cloudbr0 100"
HOTPLUG=no

Additional OVS network settings

To finish off the OVS network configuration specify the hostname, gateway and IPv6 settings:

# vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=kvmhost1.mylab.local
GATEWAY=192.168.100.1
NETWORKING_IPV6=no
IPV6INIT=no
IPV6_AUTOCONF=no

VLAN problems when using OVS

Kernel versions older than 3.3 had some issues with VLAN traffic propagating between KVM hosts. This has not been observed in CentOS 7.5 (kernel version 3.10) – however if this issue is encountered look up the OVS VLAN splinter workaround.

Network startup

Note – as mentioned for linux bridge networking – once all network startup scripts are in place and the network service is restarted you may lose connectivity to the host if there are any configuration errors in the files, hence make sure you have console access to rectify any issues.

To make the configuration live restart the network service:

# systemctl restart network

To check the bridges use the ovs-vsctl command. The following shows the optional cloudbr100 on VLAN 100:

# ovs-vsctl show
49cba0db-a529-48e3-9f23-4999e27a7f72
    Bridge "cloudbr0";
        Port "cloudbr0";
            Interface "cloudbr0"
                type: internal
        Port "cloudbr100"
            tag: 100
            Interface "cloudbr100"
                type: internal
        Port "bond0"
            Interface "veth0";
            Interface "eth0"
    Bridge "cloudbr1"
        Port "bond1"
            Interface "eth1"
            Interface "veth1"
        Port "cloudbr1"
            Interface "cloudbr1"
                type: internal
    Bridge "cloud0"
        Port "cloud0"
            Interface "cloud0"
                type: internal
    ovs_version: "2.9.2"

The bond status can be checked with the ovs-appctl command:

ovs-appctl bond/show bond0
---- bond0 ----
bond_mode: active-backup
bond may use recirculation: no, Recirc-ID : -1
bond-hash-basis: 0
updelay: 0 ms
downdelay: 0 ms
lacp_status: off
active slave mac: 00:0c:xx:xx:xx:xx(eth0)

slave eth0: enabled
active slave
may_enable: true

slave eth1: enabled
may_enable: true

To ensure that only OVS bridges are used also check that linux bridge control returns no bridges:

# brctl show
bridge name	bridge id		STP enabled	interfaces

As a final note – the CloudStack agent also requires the following two lines added to /etc/cloudstack/agent/agent.properties after install:

network.bridge.type=openvswitch
libvirt.vif.driver=com.cloud.hypervisor.kvm.resource.OvsVifDriver
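
As with any agent.properties change, the CloudStack agent needs to be restarted for the new settings to take effect:

# systemctl restart cloudstack-agent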

OpenVswitch bridge configuration – Ubuntu

As discussed earlier in this blog post Ubuntu 18.04 introduced Netplan as a replacement to the legacy “/etc/network/interfaces” network configuration. Unfortunately Netplan does not support OVS, hence the first challenge is to revert Ubuntu to the legacy configuration method.

To disable Netplan first of all add “netcfg/do_not_use_netplan=true” to the GRUB_CMDLINE_LINUX option in /etc/default/grub. The following example also shows the use of legacy interface names as well as IPv6 being disabled:

GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0 ipv6.disable=1 netcfg/do_not_use_netplan=true"

Then rebuild GRUB and reboot the server:

grub-mkconfig -o /boot/grub/grub.cfg

To set the hostname first of all edit “/etc/cloud/cloud.cfg” and set this to preserve the system hostname:

preserve_hostname: true

Thereafter set the hostname with hostnamectl:

hostnamectl set-hostname --static --transient --pretty <hostname>

Now remove Netplan, and install OVS from the Ubuntu repositories as well as the “ifupdown” package to get standard network functionality back:

apt-get purge nplan netplan.io
apt-get install openvswitch-switch
apt-get install ifupdown

As for CentOS we need to blacklist the bridge module to prevent standard bridges being created. Please note the CloudStack agent install depends on the bridge module being in place, hence this step should be carried out after agent install.

echo "install bridge /bin/false" > /etc/modprobe.d/bridge-blacklist.conf

To stop network bridge traffic from traversing IPtables / ARPtables also add the following lines to /etc/sysctl.conf:

# vi /etc/sysctl.conf
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0

As for CentOS, we first add the OVS bridges and bonds from the command line using the ovs-vsctl command line tool. In this case we also add the additional tagged fake bridge cloudbr100 on VLAN 100:

# ovs-vsctl add-br cloudbr0
# ovs-vsctl add-br cloudbr1
# ovs-vsctl add-bond cloudbr0 bond0 eth0 eth1 bond_mode=active-backup other_config:bond-detect-mode=miimon other_config:bond-miimon-interval=100
# ovs-vsctl add-bond cloudbr1 bond1 eth2 eth3 bond_mode=active-backup other_config:bond-detect-mode=miimon other_config:bond-miimon-interval=100
# ovs-vsctl add-br cloudbr100 cloudbr0 100

As for linux bridge all network configuration is applied in “/etc/network/interfaces”:

# vi /etc/network/interfaces
# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
iface eth0 inet manual
iface eth1 inet manual
iface eth2 inet manual
iface eth3 inet manual

auto cloudbr0
allow-ovs cloudbr0
iface cloudbr0 inet static
  address 192.168.100.20
  netmask 255.255.255.0
  gateway 192.168.100.100
  dns-nameserver 192.168.100.5
  ovs_type OVSBridge
  ovs_ports bond0

allow-cloudbr0 bond0 
iface bond0 inet manual 
  ovs_bridge cloudbr0 
  ovs_type OVSBond 
  ovs_bonds eth0 eth1 
  ovs_option bond_mode=active-backup other_config:miimon=100

auto cloudbr1
allow-ovs cloudbr1
iface cloudbr1 inet manual

allow-cloudbr1 bond1 
iface bond1 inet manual 
  ovs_bridge cloudbr1 
  ovs_type OVSBond 
  ovs_bonds eth2 eth3 
  ovs_option bond_mode=active-backup other_config:miimon=100

Network startup

Since Ubuntu 14.04 the bridges have started automatically without any requirement for additional startup scripts. Since OVS uses the same toolset across both CentOS and Ubuntu the same processes as described earlier in this blog post can be utilised.

# ovs-appctl bond/show bond0
# ovs-vsctl show

To ensure that only OVS bridges are used also check that linux bridge control returns no bridges:

# brctl show
bridge name	bridge id		STP enabled	interfaces

As mentioned earlier, the following also needs to be added to the /etc/cloudstack/agent/agent.properties file:

network.bridge.type=openvswitch
libvirt.vif.driver=com.cloud.hypervisor.kvm.resource.OvsVifDriver

Internal bridge cloud0

In Ubuntu there is no requirement to add additional configuration for the internal cloud0 bridge, CloudStack manages this.

Optional tagged interface for storage traffic

Additional VLAN tagged interfaces are again accomplished by creating a VLAN tagged fake bridge on top of one of the cloud bridges. In this case we add it to cloudbr0 with VLAN 100 at the end of the interfaces file:

# ovs-vsctl add-br cloudbr100 cloudbr0 100
# vi /etc/network/interfaces
auto cloudbr100
allow-cloudbr0 cloudbr100
iface cloudbr100 inet static
  address 10.0.100.20
  netmask 255.255.255.0
  ovs_type OVSIntPort
  ovs_bridge cloudbr0
  ovs_options tag=100

Conclusion

As KVM is becoming more stable and mature, more people are going to start looking at using it rather than the more traditional XenServer or vSphere solutions, and we hope this article will assist in configuring host networking. As always we’re happy to receive feedback, so please get in touch with any comments, questions or suggestions.

About The Author

Dag Sonstebo is a Cloud Architect at ShapeBlue, The Cloud Specialists. Dag spends most of his time designing, implementing and automating IaaS solutions based on Apache CloudStack.

As most people know, Apache CloudStack has gained a reputation as a solid, low maintenance, dependable cloud orchestration platform. That’s why in last year’s Gartner Magic Quadrant so many leaders and challengers were organisations underpinning their services with Apache CloudStack. However, version upgrades – whilst much simpler than with many competing technologies – have always been the pain point for CloudStack operators. The irony is that upgrading CloudStack itself is usually relatively painless, but upgrading its distributed networking Virtual Routers often results in several minutes of network downtime for users, requiring user maintenance windows.

At ShapeBlue we have a vision that CloudStack based clouds – whatever their size and complexity – should be able to be upgraded with zero downtime. No maintenance windows, no service interruptions: zero downtime. Achieving this will allow all CloudStack users/operators to benefit from the vast array of new functionality being added by the CloudStack community in every release.

We set out on the journey towards zero downtime a number of months ago and have been working hard with the CloudStack community on the first steps (it is important to note that “we” includes many people in the CloudStack community who have contributed to this work). Below, I set out the detail of what we’ve achieved so far and what we hope to be achieving in the future, but if readers just want the headline: CloudStack 4.11.1 has up to an 80%+ reduction in network downtime during upgrades compared to CloudStack 4.9.3, and downtime is near eliminated when using redundant VRs.

What’s the problem when upgrading?

During upgrades, CloudStack’s virtual routers (VRs) have to be restarted and usually destroyed and recreated (this also sometimes has to be done during day-to-day operations, but is most apparent during upgrades). These restarts usually lead to downtime for users – in some cases up to several minutes. Whilst redundant Virtual Routers can mitigate this, they do have some limitations with regards to backward compatibility and are therefore not always a solution to the problem.

Downtime reductions in CloudStack 4.11.1

With the changes made in 4.11.1 (described below) we have managed to achieve significant reductions in network downtime during VR restarts. Please note these improvements will vary from one environment to another, and will be dependent on hypervisor type, hypervisor version, storage backend as well as network bandwidth, so we suggest testing in your own environment to determine the benefits. We’ve tested with a typical VR configuration in our virtualised lab environment.

The testing setup used is as follows:

  • CloudStack 4.9.3 and 4.11.1 environments built in parallel. To maintain the same hypervisor versions across both tests the following hypervisor versions were used:
    • VMware vSphere 5.5
    • KVM on CentOS7
    • XenServer 7.0
  • Environment configuration: In each test we build a simple isolated network with:
    • 10 VMs
    • 10 IP addresses
    • Firewall rules configured on all IP addresses

Downtime was measured as follows:

  • For egress traffic we measured the total amount of time an outbound ping would fail from a hosted VM during the restart process (a simple measurement sketch is shown after this list).
  • For ingress traffic we assumed a hosted service on a CloudStack VM and measured the amount of time SSH would be unavailable for during the restart process.
  • In all tests we carried out a “restart network with cleanup” operation and measured the above times. Note – with the new parallel VR restart process (see below) we no longer care how long the overall process takes – we are only interested in how long the network is impacted for. As a result we’ve simply measured the sum of time services were unavailable for (note this time may in some cases be a sum of multiple downtime periods).
  • Tests were repeated multiple times and average number of seconds calculated for ingress and egress downtime across tests for each hypervisor. To illustrate our best case scenarios we’ve also included the shortest measured downtime figure.
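
As a rough sketch of the kind of egress measurement described above (not the exact tooling used in these tests), a simple shell loop on a hosted VM can log every second in which an outbound ping fails; the number of logged lines then approximates the downtime in seconds (the 8.8.8.8 target is illustrative):

# while true; do ping -c1 -W1 8.8.8.8 >/dev/null 2>&1 || date +%s; sleep 1; done | tee egress-downtime.log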

Results are as follows:

Environment    | ACS 4.9.3 avg | ACS 4.11.1 avg (lowest) | Reduction avg (highest)
VMware 5.5     | 119s          | 21s (12s)               | 82% (90%)
KVM / CentOS7  | 44s           | 26s (9s)                | 40% (80%)
XenServer 7.0  | 181s          | 33s (15s)               | 82% (92%)

 

How these results were achieved

Existing improvements made in CloudStack 4.11

A number of changes were made in CloudStack 4.11 designed to improve VR restart performance:

  • The system VM has been upgraded from Debian 7 (init based) to Debian 9 (systemd based)
  • The patching process and boot times have been improved, and we have also eliminated reboots after patching
  • The system VM disk size has been reduced, leading to faster deployment time.
  • The VPN backend in the VR has been upgraded to Strongswan, which provides improved VPN performance
  • The redundant VR (RVR) mechanisms have been improved.
  • Code base has been refactored, and it is now easier to build and maintain VR code
  • A number of stability improvements have been made

Changes in CloudStack 4.11.1 – Parallel VR restarts

CloudStack 4.11.1 will ship with a new feature, Parallel VR Restarts, which changes the behaviour of the “restart network with cleanup” option. In previous CloudStack versions this was a serial action where the original VR would be stopped and destroyed and then a new VR started. In CloudStack 4.11.1 this has been changed to a parallel process where a “restart with cleanup” means:

  • A new VR is started in the background while the old one is still running and providing networking services.
  • Once the new VR is up and has checked in to CloudStack management the old VR is simply stopped and destroyed.
  • This is followed by a last configuration step where ARP caches at neighbours are updated.

With this method there is no negotiation between the old and new VR; CloudStack simply orchestrates the parallel startup of the new VR. As a result this method does not have any prerequisites around the version of the original VR – meaning it can be used for VR restarts after upgrades from considerably older CloudStack versions to 4.11.1.
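
For reference, such a restart with cleanup can be triggered from the API as well as from the UI – a minimal sketch from the CloudMonkey prompt (the network ID is a placeholder) would be:

restart network id=<network-uuid> cleanup=true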

It is worth noting that this 4.11.1 feature does not make large reductions in the actual VR processing time itself – however with the parallel startup this no longer affects network downtime, and the remaining downtime is mostly down to the final handover of network processing from the old to the new VR.

In addition to the considerable reduction in normal VR restart downtime, this feature also introduces a much improved redundant VR restart – this comes close to eliminating network downtime when redundant VR networks are restarted, but does obviously mean the old and new VRs need to be version compatible. In our own testing we have seen downtime for redundant VR networks near eliminated.

Coming in future versions

Advanced parallel restarts

The next step on the journey is to add further handshaking between old and new VR:

  • New VR will be started in parallel to old, but with some network services and / or network interfaces disabled.
  • Once new VR is up CloudStack management will do an external handover from old VR to new, i.e. handle VR connectivity via the hypervisor.

Fully negotiated redundant VR restarts

The last step on the journey will be aiming towards a fully redundant handover from old to new VR:

  • In this final step the end goal is to make all VRs redundant capable, which will reduce same version restart times as well as future upgrade restart times.
  • New VR will again be started in parallel to old, but will be configured with the redundancy options currently used in the RVR.
  • Once new VR is up the old and new VRs will internally negotiate the handover of all networking connectivity and services, before the old VR is shut down.

– – –

During this journey there are a number of tasks that need to be carried out – both to make the VR internal processing more efficient and to improve the backend network restart mechanisms:

  • General speedup of IPtables rules application
  • Fix and improvement of the DNS / DHCP configuration to eliminate repetition of processing steps and cut down on processing time
  • Further improvements to the redundant Virtual Router: VRRP2 configuration, and/or a move to a different VR HA solution
  • A move to make all VRs redundant capable by default
  • Move from python2 to python3
  • Consider a move from IPtables to NFtables
  • Converge and make network topologies flexible, refactor and merge VPC and non-VPC code base

Conclusion

With the changes implemented in 4.11.1 we have already made a huge step forward in reducing network downtime as part of VR restarts – whether this is during day to day operation or as part of a CloudStack version upgrade. With downtime reduced by up to 80% and average figures of less than 30 seconds this is a considerable improvement – and this is only the first step on the journey.

We have not yet achieved our goal of “zero downtime upgrades” but it is worth considering that the network interruptions that CloudStack can now achieve during an upgrade will be less than the timeouts for many applications and protocols.

In the coming CloudStack versions we hope to continue this development and further reduce the figures, working towards the ultimate goal of “zero downtime upgrades”. 

About The Author

Dag Sonstebo is a Cloud Architect at ShapeBlue, The Cloud Specialists. Dag spends his time designing, implementing and automating IaaS solutions based around Apache CloudStack.

 

Many people find it challenging to get started with CloudStack’s networking. There are some basic concepts, which although not overly complicated, are not especially obvious either. This blog will try to explain these underlying concepts, in order to make getting started with CloudStack networking models much easier.

A number of security flaws were recently found in the DNSMasq tool. This tool is used by many systems to provide DNS and DHCP services, including the CloudStack System VMs.
This advisory explains their effect on CloudStack and how to patch CloudStack against these flaws.