Tag Archive for: CloudStack How-To

A lot of work has gone into the CloudStack UI recently, and it is now a modern, role-based UI that not only gives a fresh look to CloudStack but also makes development and customisation much easier. In this blog, I provide guidance on how to customise the UI, and have classified customisation into two categories – basic and advanced.

Basic Customisations

Users can customise the UI by editing the configuration file /etc/cloudstack/management/config.json to modify the theme, logos, etc. as required. These changes can be made while the CloudStack management server is running, and they are visible immediately after a browser refresh.

The configuration file provides the following properties for basic customisation:

Property    Description
apiBase     Changes the suffix for the API endpoint
docBase     Changes the base URL for the documentation
appTitle    Changes the title of the portal
footer      Changes the footer text
logo        Changes the top-left logo image
banner      Changes the login banner image
error.404   Changes the image for the "Page not found" error page
error.403   Changes the image for the "Forbidden" error page
error.500   Changes the image for the "Internal Server Error" error page

 

To change the logo, login banner, error page icon, documentation base URL, etc. the following details can be edited in config.json:

"apiBase": "/client/api",
"docBase": "http://docs.cloudstack.apache.org/en/latest",
"appTitle": "CloudStack",
"footer": "Licensed under the <a href='http://www.apache.org/licenses/' target='_blank'>Apache License</a>, Version 2.0.",
"logo": "assets/logo.svg",
"banner": "assets/banner.svg",
"error": {
    "404": "assets/404.png",
    "403": "assets/403.png",
    "500": "assets/500.png"
}

Theme Customisation

The customisation of themes is also possible, such as modifying banner width or general color. This can be done by editing the “theme” section of the config.json file. This section provides the following properties for customisation:

Property                        Description
@logo-background-color          Changes the logo background color
@project-nav-text-color         Changes the text color of the navigation menu in the project view
@project-nav-background-color   Changes the background color of the navigation menu in the project view
@navigation-background-color    Changes the navigation menu background color
@primary-color                  Changes the main background color of the page (button background, icon hover, etc.)
@link-color                     Changes the link color
@link-hover-color               Changes the link hover color
@loading-color                  Changes the loading message color and the page loading bar at the top of the page
@success-color                  Changes the success state color
@processing-color               Changes the processing state color (e.g. progress status)
@warning-color                  Changes the warning state color
@error-color                    Changes the error state color
@heading-color                  Changes the table header color
@text-color                     Changes the major text color
@text-color-secondary           Changes the secondary text color (e.g. breadcrumb icons)
@disabled-color                 Changes the disabled state color (disabled buttons, switches, etc.)
@border-color-base              Changes the major border color
@logo-width                     Changes the width of the top-left logo
@logo-height                    Changes the height of the top-left logo
@banner-width                   Changes the width of the login banner
@banner-height                  Changes the height of the login banner
@error-width                    Changes the width of the error image
@error-height                   Changes the height of the error image

 

Some example theme colors:

  • Blue: #1890FF
  • Red: #F5222D
  • Yellow: #FAAD14
  • Cyan: #13C2C2
  • Green: #52C41A
  • Purple: #722ED1

This example shows the configuration changes necessary in /etc/cloudstack/management/config.json to customise logo and colors:

{
  "apiBase": "/client/api",
  "docBase": "http://docs.cloudstack.apache.org/en/latest",
  "appTitle": "Shapeblue Cloud",
  ...
  "logo": "assets/customlogo.svg",
  ...
  "theme": {
    ...
    "@primary-": "#dd55ff",
    ...
    "@warning-color": "#ff2a7f",
    ...
    "@text-color": "#37c8ab
",

...


Links to Contextual Help

The UI provides support for showing links to contextual help in pages and forms. By default, the links point to the official CloudStack documentation. For each section item (the menu items in the left pane of the UI, such as Instances, Volumes, Templates, etc.) or UI form, a suffix is defined in the JavaScript file for that section as the docHelp property. This suffix is appended to the docBase property defined in the configuration file, and the resulting URL is set as the link for the contextual help button.

The docHelpMappings property can be used to provide a list of override mappings for different suffix values: to change a particular help URL, add a mapping in the configuration using the suffix part of the URL. By default, docHelpMappings lists all existing documentation URL suffixes used in the code, each mapped to itself, in the configuration file. This list of documentation URL suffixes can also be found in the CloudStack documentation.

In the example below, we change the docBase and docHelpMappings values to show a custom link for contextual help. By default, docBase is set to http://docs.cloudstack.apache.org/en/latest and contextual help on Instances page links to http://docs.cloudstack.apache.org/en/latest/adminguide/virtual_machines.html.

To make Instances page link to http://mycustomwebsite.com/custom_vm_page.html, docBase can be set to http://mycustomwebsite.com and a docHelpMapping can be added for adminguide/virtual_machines.html as custom_vm_page.html.

Changes in /etc/cloudstack/management/config.json:

{
  ...
  "docBase": http://mycustomwebsite.com,
  ...
  "docHelpMappings": {
    "adminguide/virtual_machines.html": "custom_vm_page.html",
    ...
  },
  ...
}

 

Plugin support

The CloudStack UI also supports custom plugins. A list of custom plugins can be defined in /etc/cloudstack/management/config.json, and each plugin is shown in the UI within an iframe. Custom HTML pages can be used to show static content to users, while an HTTP service running on an internally deployed web server, or an external website, can be used to show dynamic content.

The example below adds two custom plugins to the UI, each with its own navigation section. The first plugin shows a custom HTML file. The second plugin shows the CloudStack website within the UI.

...
  "plugins": [
    {
      "name": "ExamplePlugin",
      "icon": "appstore",
      "path": "example.html"
    },
    {
      "name": "ExamplePlugin1",
      "icon": "fire",
      "path": "https://cloudstack.apache.org/"
    }
  ]
}


An icon for the plugin can be chosen from Ant Design icons listed at Icon – Ant Design Vue.

For displaying custom HTML in a plugin, an HTML file can be stored in the CloudStack management server's web application directory (i.e. /usr/share/cloudstack-management/webapp) and path can be set to the name of the file. For displaying an HTTP service or a web page, the URL can be set as the path of the plugin.
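For example, placing a simple static page for the first plugin in the sample configuration above might look like the following sketch (the file name example.html matches the path used in that configuration; the page content is only a placeholder):

# Copy a static page into the management server webapp directory (run on the management server)
cat > /usr/share/cloudstack-management/webapp/example.html <<'EOF'
<html>
  <body>
    <h1>Example plugin</h1>
    <p>Static content rendered inside the CloudStack UI iframe.</p>
  </body>
</html>
EOF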

 

Advanced Customisation

Advanced UI customisation is possible by changing the source code files that define rules for different elements in the UI, and requires building CloudStack from source (available at github.com/apache/cloudstack). This requires some experience with JavaScript, Vue.js and Node.js. Also, the UI is built using Ant Design components, so knowledge of ant-design-vue and its principles helps greatly. More information can be found on the Ant Design Vue website.

The source code can be obtained either from the CloudStack website as a tarball or from the Apache CloudStack GitHub repository. For example, using git, the repository can be cloned locally:

git clone https://github.com/apache/cloudstack.git
# To check out a specific release tag
cd cloudstack
git fetch --tags
git checkout TAG
# CloudStack 4.15.0.0 is tagged 4.15.0.0 on GitHub; to check it out:
git checkout 4.15.0.0

After obtaining the CloudStack source code, the UI code can be found in the ui sub-directory. For different customisations, changes can be made in the code and npm can then be used to build a static web application. Finally, the built UI is copied to the webapp directory on the management server host. Building the UI requires installing dependencies for Node.js, npm and Vue.js. The exact steps for building the UI from source depend on the host operating system and can be found in the CloudStack UI development documentation. The instructions below have been tested on Ubuntu 20.04.

Install dependencies:

sudo apt-get install npm nodejs
# Install system-wide dev tools
sudo npm install -g @vue/cli npm-check-updates

 

Fetch npm package dependencies and build:

cd ui
npm install
npm run build

 

Copy built UI to webapp directory on the management server host:

cd dist
scp -rp ./ {user-on-management-server}@{management-server}:/usr/share/cloudstack-management/webapp/
# Access UI at {management-server}:8080/client in browser

 

Alternatively, packages can be rebuilt for the desired platform; the UI will be packaged in the cloudstack-ui package. Refer to the CloudStack packaging documentation for more details. For testing changes during development, the UI can be served with npm without a full build:

cd ui
npm install
npm run serve
# Or run: npm start

Examples of advanced customisations can be seen below.

Icon changes

Custom icons can be added to the directory cloudstack/ui/src/assets/icons.

Once a new icon file (preferably an SVG file) is placed in the directory, it can be imported in the JavaScript (.js) file for the corresponding section item (the menu items in the left pane of the UI).

A list of available Ant Design icons can be found at https://www.antdv.com/components/icon/

The example below shows changing the icons for the Compute menu and the Instances sub-menu:

New files named customcompute.svg and custominstances.svg are added to the cloudstack/ui/src/assets/icons/ directory:

⇒ ls cloudstack/ui/src/assets/icons/ -l
total 36
-rw-rw-r-- 1 shwstppr shwstppr 3008 Feb 16 15:55 cloudian.svg
-rw-r--r-- 1 shwstppr shwstppr 483 Oct 26 1985 customcompute.svg
-rw-r--r-- 1 shwstppr shwstppr 652 Oct 26 1985 custominstances.svg
-rw-rw-r-- 1 shwstppr shwstppr 10775 Feb 16 15:55 debian.svg
-rw-rw-r-- 1 shwstppr shwstppr 10001 Feb 16 15:55 kubernetes.svg

cloudstack/ui/src/config/section/compute.js is updated to import and set the new icons for the menu items:

...
import kubernetes from '@/assets/icons/kubernetes.svg?inline'
import store from '@/store'
+import customcompute from '@/assets/icons/customcompute.svg?inline'
+import custominstances from '@/assets/icons/custominstances.svg?inline'

export default {
   name: 'compute',
   title: 'label.compute',
-  icon: 'cloud',
+  icon: 'customcompute',
   children: [
     {
       name: 'vm',
       title: 'label.instances',
-      icon: 'desktop',
+      icon: 'custominstances',
       docHelp: 'adminguide/virtual_machines.html',
       permission: ['listVirtualMachinesMetrics'],

...

After rebuilding and installing the new packages, the UI will show the new icon(s).

Localization

Language translation files for the text in the UI are placed in cloudstack/ui/public/locales/. A copy of the file cloudstack/ui/public/locales/en.json can be made in the same directory to include all translation keys, following the naming convention for locales (for example, el for Greek, with el_CY and el_GR as possible regional variants).
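For example, a new locale file can be seeded from the English strings (using the dummy zz locale shown in the example below):

cd cloudstack/ui/public/locales
cp en.json zz.json
# edit zz.json and translate the values, keeping the keys unchanged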

Once string keys are translated, changes can be made in the file cloudstack/ui/src/components/header/TranslationMenu.vue for the new language to be displayed as an option in the Languages dropdown in the UI. This example shows a dummy locale being added in the UI.

 

New file added in cloudstack/ui/public/locales/ with name zz.json:

⇒ ls cloudstack/ui/public/locales/ -lr
total 3112
-rw-rw-r-- 1 shwstppr shwstppr 196471 Feb 25 11:42 zz.json
-rw-rw-r-- 1 shwstppr shwstppr 186117 Feb 16 15:55 zh_CN.json
-rw-rw-r-- 1 shwstppr shwstppr 354705 Feb 16 15:55 ru_RU.json
...

Changes necessary in cloudstack/ui/src/components/header/TranslationMenu.vue to add the new language with the above translation file:

...
       :selectedKeys="[language]"
       @click="onClick">
       <a-menu-item key="en" value="enUS">English</a-menu-item>
+      <a-menu-item key="zz" value="hi">New Language</a-menu-item>
       <a-menu-item key="hi" value="hi">हिन्दी</a-menu-item>
       <a-menu-item key="ja_JP" value="jpJP">日本語</a-menu-item>
       <a-menu-item key="ko_KR" value="koKR">한국어</a-menu-item>
...

Upon rebuilding and installing the new packages, the UI will show the newly added language in the Languages dropdown.

Other Modifications

There could be several use-cases that require tweaking the UI to enable/disable functionality or to hide or show different elements in the UI. For such modifications, a thorough understanding of Javascript and Vue.js will be required. The development section on the CloudStack repository can be referred to for making such advanced changes.

The following example shows hiding the Register Template from URL action in the Templates view of the UI for the User role:

The templates sub-menu is defined in the Images section file cloudstack/ui/src/config/section/image.js, and a list of actions can be defined for each child of the section item. The Register Template from URL action can be found in the actions property with the label value label.action.register.template. To hide the action for User role accounts, we can use the show property of the action. It can be set as follows:

...
        actions: [
          {
            api: 'registerTemplate',
            icon: 'plus',
            label: 'label.action.register.template',
            docHelp: 'adminguide/templates.html#uploading-templates-from-a-remote-http-server',
            listView: true,
            popup: true,
+           show: (record, store) => {
+             return (['Admin', 'DomainAdmin'].includes(store.userInfo.roletype))
+           },
            component: () => import('@/views/image/RegisterOrUploadTemplate.vue')
          },
...

After making these changes, the Register Template from URL action is shown only for Admin and Domain Admin accounts.


It should be noted that removing elements from the UI does NOT restrict a user's ability to access the functionality through the CloudStack API and should, therefore, only be used in a usability context, not a security context. CloudStack's role-based security model should be used if a user is to be prohibited from accessing functionality.

 

Conclusion

The UI is no longer part of the core CloudStack Management server code (giving a much more modular and flexible approach) and is highly customisable, with even advanced changes possible with some knowledge of JS and Vue.js. The UI was designed keeping simplicity and user experience in mind.

The UI is designed to work across all browsers, tablets and phones. From a developer perspective, the codebase should be about a quarter the size of the old UI and, most importantly, the Vue.js framework is far easier for developers to work with.

 

 

IoT devices have gained a lot of interest in recent times. In this article, Rohit Yadav, Principal Engineer at ShapeBlue, explores and shares his personal experience of setting up an Apache CloudStack based IaaS cloud on the Raspberry Pi 4, a popular single-board ARM64 IoT computer that can run a GNU/Linux kernel with KVM. The article presents the use case of Apache CloudStack on the Raspberry Pi 4 with Ubuntu 20.04 and KVM.

CloudStack support for ARM64 / Raspberry Pi 4 is available from version 4.13.1.0 onwards. This guide uses a custom CloudStack 4.16 repository that was created and tested specifically against the Raspberry Pi 4 and Ubuntu 20.04 arm64 to set up an IaaS cloud computing platform.


Getting Started

By default, KVM is not enabled in the pre-built arm64 images. I found this during my research, where I built a Linux kernel with KVM enabled on the ARM64 RPi4 board, shared my findings with the Ubuntu kernel team and influenced the build team to enable KVM by default. With these changes, Ubuntu 19.10+ ARM64 builds now have KVM enabled.

To get started you need the following:

  • RPi4 board (Cortex-A72 quad-core 1.5GHz processor), 4GB/8GB RAM model
  • Ubuntu 20.04.2 arm64 image installed on a Samsung EVO+ 128GB microSD card (any 4GB+ class 10 U3/V30 SD card will do)
  • (Optional) An external USB-based SSD with high IOPS for storage

Flash the image to your microSD card:

$ xzcat ubuntu-20.04.2-preinstalled-server-arm64+raspi.img.xz | sudo dd bs=4M of=/dev/mmcblk0
0+381791 records in
0+381791 records out
3259499520 bytes (3.3 GB, 3.0 GiB) copied, 131.749 s, 24.7 MB/s

Eject and insert the microSD card again to initiate volume mounts, then create an empty /boot/ssh file to enable headless ssh:

# find the mount point
mount -l | grep /dev/mmcblk0
# cd to the writable mount point, for example:
cd /media/rohit/writable
# create an empty ssh file
sudo touch boot/ssh

Next, check and ensure that 64-bit mode is enabled:

cd /media/rohit/system-boot

# Edit config.txt to have this:
[all]
arm_64bit=1
device_tree_address=0x03000000
dtoverlay=vc4-fkms-v3d
enable_gic=1

# Save file and unmount to safely eject the microSD card
sync
sudo umount /dev/mmcblk0p1
sudo umount /dev/mmcblk0p2

Next, eject the microSD card, insert it into your Raspberry Pi 4 and power it on. Find the device via your router's DHCP client list and ssh into it using the username ubuntu and password ubuntu.

ssh ubuntu@<ip>

Allow the root user ssh access with a password: edit /etc/ssh/sshd_config, set PermitRootLogin yes and restart ssh using systemctl restart ssh. Then change and remember the root password:

passwd root
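For reference, a minimal sketch of the sshd_config change described above, assuming the stock Ubuntu 20.04 configuration:

# Permit root login with a password and restart the ssh service (lab/home setup only)
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
systemctl restart ssh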

Next, as the root user, install basic packages and set up the time:

apt-get update
apt-get install ntpdate openssh-server sudo vim htop tar iotop
ntpdate time.nist.gov # update time
hostnamectl set-hostname cloudstack-mgmt

Ensure that KVM is available at /dev/kvm or by running kvm-ok:

# apt install cpu-checker

# kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used

Disable automatic upgrades and unnecessary packages:

apt-get remove --purge unattended-upgrades snapd cloud-init
# Edit the files at /etc/apt/apt.conf.d/* with the following
APT::Periodic::Update-Package-Lists "0";
APT::Periodic::Unattended-Upgrade "1";

Tip: If you suspect IO load, you can reduce the load on the Raspberry Pi 4 microSD card by changing the filesystem commit interval, for example in /etc/fstab (note that this adds a risk of potential data loss):

LABEL=writable  /        ext4   defaults,commit=60      0 0
LABEL=system-boot       /boot/firmware  vfat    defaults        0       1

Network Setup

Next, set up host networking using Linux bridges that can handle CloudStack’s public, guest, management and storage traffic. For simplicity, a single bridge cloudbr0 is used for all traffic types on the same physical network. Install the bridge utilities:

apt-get install bridge-utils

Note: This part assumes that you’re in a 192.168.1.0/24 home network which is a typical RFC1918 private network.

Admins can now use netplan to configure networking on Ubuntu 20.04. The default installation creates a file at /etc/netplan/50-cloud-init.yaml, which you can comment out, and you can create a file at /etc/netplan/01-netcfg.yaml with your network-specific changes:

 network:
   version: 2
   renderer: networkd
   ethernets:
     eth0:
       dhcp4: false
       dhcp6: false
       optional: true
   bridges:
     cloudbr0:
       addresses: [192.168.1.10/24]
       gateway4: 192.168.1.1
       nameservers:
         addresses: [8.8.8.8]
       interfaces: [eth0]
       dhcp4: false
       dhcp6: false
       parameters:
         stp: false
         forward-delay: 0

Tip: If you want to use VXLAN based traffic isolation, make sure to increase the MTU setting of the physical nics by 50 bytes (because VXLAN header size is 50 bytes). For example:

  ethernets:
    enp2s0:
      match:
        macaddress: 00:01:2e:4f:f7:d0
      mtu: 1550
      dhcp4: false
      dhcp6: false
    enp3s0:
      mtu: 1550

Save the file, apply the network config and finally reboot:

netplan generate
netplan apply
reboot

Management Server Setup

Install MySQL server: (run as root)

apt-get install mysql-server

Make a note of the MySQL server’s root user password. Configure InnoDB settings in /etc/mysql/mysql.conf.d/mysqld.cnf:

[mysqld]

server_id = 1
sql-mode="STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION,ERROR_FOR_DIVISION_BY_ZERO,NO_ZERO_DATE,NO_ZERO_IN_DATE,NO_ENGINE_SUBSTITUTION"
innodb_rollback_on_timeout=1
innodb_lock_wait_timeout=600
max_connections=1000
log-bin=mysql-bin
binlog-format = 'ROW'

default-authentication-plugin=mysql_native_password

Restart database:

systemctl restart mysql

Installing the management server may give dependency errors, so download and manually install a few packages as follows:

apt-get install python2
wget http://mirrors.kernel.org/ubuntu/pool/universe/m/mysql-connector-python/python-mysql.connector_2.1.6-1_all.deb
dpkg -i python-mysql.connector_2.1.6-1_all.deb

# Install management server
echo deb [trusted=yes] https://download.cloudstack.org/rpi4/4.16 / > /etc/apt/sources.list.d/cloudstack.list
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 484248210EE3D884
apt-get update
apt-get install cloudstack-management cloudstack-usage

# Stop the automatic start after install
systemctl stop cloudstack-management cloudstack-usage

Setup database:

cloudstack-setup-databases cloud:cloud@localhost --deploy-as=root: -i 192.168.1.10

Storage Setup

In my setup, I’m using an external USB SSD for storage. I plug in the USB SSD with its partition formatted as ext4; here’s the /etc/fstab:

LABEL=writable  /        ext4   defaults        0 0
LABEL=system-boot       /boot/firmware  vfat    defaults        0       1
UUID="91175b3a-ee2c-47a7-a1e5-f4528e127523" /export ext4 defaults 0 0

Then mount the storage as:

mkdir -p /export
mount -a

Install NFS server:

apt-get install nfs-kernel-server quota

Create exports:

echo "/export  *(rw,async,no_root_squash,no_subtree_check)" > /etc/exports
mkdir -p /export/primary /export/secondary
exportfs -a

Configure and restart the NFS server:

sed -i -e 's/^RPCMOUNTDOPTS="--manage-gids"$/RPCMOUNTDOPTS="-p 892 --manage-gids"/g' /etc/default/nfs-kernel-server
sed -i -e 's/^STATDOPTS=$/STATDOPTS="--port 662 --outgoing-port 2020"/g' /etc/default/nfs-common
echo "NEED_STATD=yes" >> /etc/default/nfs-common
sed -i -e 's/^RPCRQUOTADOPTS=$/RPCRQUOTADOPTS="-p 875"/g' /etc/default/quota
service nfs-kernel-server restart

Seed systemvm template from the management server:

wget https://download.cloudstack.org/rpi4/systemvmtemplate/4.16/systemvmtemplate-4.16.0-kvm-arm64.qcow2
/usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt \
          -m /export/secondary -f systemvmtemplate-4.16.0-kvm-arm64.qcow2 -h kvm \
          -o localhost -r cloud -d cloud

KVM Host Setup

Install KVM and CloudStack agent, configure libvirt:

echo deb [trusted=yes] https://download.cloudstack.org/rpi4/4.16 / > /etc/apt/sources.list.d/cloudstack.list
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 484248210EE3D884
apt-get update
apt-get install qemu-kvm cloudstack-agent
systemctl stop cloudstack-agent

Enable VNC for console proxy:

sed -i -e 's/\#vnc_listen.*$/vnc_listen = "0.0.0.0"/g' /etc/libvirt/qemu.conf

Fix security driver issue:

echo 'security_driver = "none"' >> /etc/libvirt/qemu.conf

Enable libvirtd in listen mode:

sed -i -e 's/.*libvirtd_opts.*/libvirtd_opts="-l"/' /etc/default/libvirtd

Configure default libvirtd config:

echo 'listen_tls=0' >> /etc/libvirt/libvirtd.conf
echo 'listen_tcp=1' >> /etc/libvirt/libvirtd.conf
echo 'tcp_port = "16509"' >> /etc/libvirt/libvirtd.conf
echo 'mdns_adv = 0' >> /etc/libvirt/libvirtd.conf
echo 'auth_tcp = "none"' >> /etc/libvirt/libvirtd.conf

The traditional socket/listen based configuration may not be supported; we can get the old behaviour back as follows:

systemctl mask libvirtd.socket libvirtd-ro.socket libvirtd-admin.socket libvirtd-tls.socket libvirtd-tcp.socket
systemctl restart libvirtd

Ensure the following options are set in /etc/cloudstack/agent/agent.properties:

guest.cpu.arch=aarch64
guest.cpu.mode=host-passthrough
host.reserved.mem.mb=640

Configure Firewall

Configure firewall:

# configure iptables
NETWORK=192.168.1.0/24
iptables -A INPUT -s $NETWORK -m state --state NEW -p udp --dport 111 -j ACCEPT
iptables -A INPUT -s $NETWORK -m state --state NEW -p tcp --dport 111 -j ACCEPT
iptables -A INPUT -s $NETWORK -m state --state NEW -p tcp --dport 2049 -j ACCEPT
iptables -A INPUT -s $NETWORK -m state --state NEW -p tcp --dport 32803 -j ACCEPT
iptables -A INPUT -s $NETWORK -m state --state NEW -p udp --dport 32769 -j ACCEPT
iptables -A INPUT -s $NETWORK -m state --state NEW -p tcp --dport 892 -j ACCEPT
iptables -A INPUT -s $NETWORK -m state --state NEW -p tcp --dport 875 -j ACCEPT
iptables -A INPUT -s $NETWORK -m state --state NEW -p tcp --dport 662 -j ACCEPT
iptables -A INPUT -s $NETWORK -m state --state NEW -p tcp --dport 8250 -j ACCEPT
iptables -A INPUT -s $NETWORK -m state --state NEW -p tcp --dport 8080 -j ACCEPT
iptables -A INPUT -s $NETWORK -m state --state NEW -p tcp --dport 9090 -j ACCEPT
iptables -A INPUT -s $NETWORK -m state --state NEW -p tcp --dport 16514 -j ACCEPT

apt-get install iptables-persistent

# Disable apparmour on libvirtd
ln -s /etc/apparmor.d/usr.sbin.libvirtd /etc/apparmor.d/disable/
ln -s /etc/apparmor.d/usr.lib.libvirt.virt-aa-helper /etc/apparmor.d/disable/
apparmor_parser -R /etc/apparmor.d/usr.sbin.libvirtd
apparmor_parser -R /etc/apparmor.d/usr.lib.libvirt.virt-aa-helper

Launch Cloud

Start your cloud:

cloudstack-setup-management
systemctl status cloudstack-management
tail -f /var/log/cloudstack/management/management-server.log

After the management server is up, proceed to http://192.168.1.10:8080/client/primate (change the IP suitably) and log in using the default credentials – username admin and password password.


NOTE: I found a DB issue while deploying a zone; until the bug is fixed upstream, please check/run the following using the MySQL client:

use cloud;
ALTER TABLE nics MODIFY COLUMN update_time timestamp NULL;

Example Setup

The following is an example of how you can set up an advanced zone in the 192.168.1.0/24 network.

Setup Zone

Go to Infrastructure > Zones, click the add zone button, select advanced zone and provide the following configuration:

Name - any name
Public DNS 1 - 8.8.8.8
Internal DNS1 - 192.168.1.1
Hypervisor - KVM

Setup Network

Use the default VLAN isolation method on a single physical NIC (on the host) that will carry all traffic types (management, public, guest, etc.).

Note: If you have iproute2 installed and the host’s physical NIC MTUs configured, you can use VXLAN as well.

Public traffic configuration:

Gateway - 192.168.1.1
Netmask - 255.255.255.0
VLAN/VNI - (leave blank for vlan://untagged or in case of VXLAN use vxlan://untagged)
Start IP - 192.168.1.20
End IP - 192.168.1.50

Pod Configuration:

Name - any name
Gateway - 192.168.1.1
Start/end reserved system IPs - 192.168.1.51 - 192.168.1.80

Guest traffic:

VLAN/VNI range: 500-800

Add Resources

Create a cluster with the following:

Name - any name
Hypervisor - Choose KVM

Add your default/first host:

Hostname - 192.168.1.10
Username - root
Password - <password for root user, please enable root user ssh-access by password on the KVM host>

Note: root user ssh-access is disabled by default, please enable it.

Add primary storage:

Name - any name
Scope - zone-wide
Protocol - NFS
Server - 192.168.1.10
Path - /export/primary

Add secondary storage:

Provider - NFS
Name - any name
Server - 192.168.1.10
Path - /export/secondary

Next, click Launch Zone which will perform the following actions:

Create Zone
Create Physical networks:
  - Add various traffic types to the physical network
  - Update and enable the physical network
  - Configure, enable and update various network provider and elements such as the virtual network element
Create Pod
Configure public traffic
Configure guest traffic (vlan range for physical network)
Create Cluster
Add host
Create primary storage (also mounts it on the KVM host)
Create secondary storage
Complete zone creation

Finally, confirm and enable the zone. Wait for the system VMs to come up, then you can proceed with your IaaS usage.

You can build your own arm64 guest templates or deploy VMs using these guest templates at: https://download.cloudstack.org/rpi4/templates

About the Author

Rohit Yadav is a Principal Engineer at ShapeBlue. He has been involved with CloudStack since 2012, when he was first made a committer on the Apache CloudStack project, and has made more code commits to CloudStack than any other developer in its history. He has been a PMC member of the project since 2015. Rohit is the author and maintainer of the CloudStack CloudMonkey project and has been instrumental in the development of many of CloudStack’s flagship features.

Rohit spends most of his time designing and implementing features in Apache CloudStack. In his free time, he has a passion for renewable energy technology and is always willing to help people passionate about open-source technologies.

Read more articles from Rohit on his personal blog.

 

Introduction

Upgrading CloudStack can sometimes be a little daunting – but as the 5P’s proverb goes – Proper Planning Prevents Poor Performance. With planning, testing and the right strategy upgrades will have a high chance of success and have minimal impact on your CloudStack end users.

The CloudStack upgrade process is documented in the release notes for each CloudStack version (e.g. the version 4.6 to 4.9 upgrade guide). These upgrade notes should always be adhered to in detail since they cover both the base CloudStack upgrade as well as the required hypervisor upgrade changes. This blog article does, however, aim to add some advice and best practices to make the overall upgrade process easier.

The following outlines some best practice steps which can – and should – be carried out in advance of the actual CloudStack upgrade. This ensures all steps are done accurately and without the pressure of an upgrade change window.

Strategy – in-place upgrade vs. parallel builds

The official CloudStack upgrade guides are based on an in-place upgrade process. This process works absolutely fine – it does, however, have some drawbacks:

  • It doesn’t easily lend itself to technology refreshes – if you want to carry out hardware or OS level upgrades or configuration changes to your CloudStack management and MySQL servers at the time of upgrade this adds complexity and time.
  • To ensure you have a rollback point in case of problems with the upgrade, you need to either take VM snapshots of the infrastructure (if your management and MySQL servers are virtual machines) or otherwise back up all configurations.
  • In case of a rollback scenario you also have to ensure the CloudStack logs from the upgraded systems, as well as the database, are backed up before reverting to any snapshots / backups – such that a post-mortem investigation can be carried out.

To get around these issues it is easier and less risky to simply prepare and build new CloudStack management and MySQL servers in advance of the actual upgrade:

  • This means you can carry out the upgrade on brand new servers, whilst leaving the old servers offline for a much quicker rollback.
  • This also means in the case of a rollback scenario the newly built servers can simply be disabled and left for further investigation whilst the original infrastructure is brought back online.
  • Since most people run their CloudStack management infrastructure as Virtual Machines the cost of adding a handful of new VMs for this purpose is negligible compared to the benefits of a more efficient and less risky upgrade process.

By using this parallel build approach the upgrade will include a few more steps – these will be covered later in this article:

  • Upgrade starts with stopping and disabling CloudStack management on the original infrastructure.
  • The original database is then copied to the new MySQL server(s), before MySQL is disabled.
  • Once completed the new CloudStack management servers are configured to point to the new MySQL server(s).
  • The upgrade is then run on the new CloudStack management servers, pointing to the new MySQL server(s).
  • As an additional step this does require the CloudStack global “host” to be reconfigured.

New CloudStack management infrastructure build

Following the standard documented build procedure for the new / upgrade version of CloudStack:

Build new management servers but without carrying out the following steps:

  • Do not carry out the steps for seeding system templates.
  • Do not carry out the cloudstack-setup-databases step.
  • Do not run cloudstack-setup-management.

Build new MySQL servers:

  • Ensure the cloud username / password is configured.
  • Create new empty databases for cloud / cloud_usage / cloudbridge (a hedged example is shown after this list).
  • If possible configure the MySQL master-slave configuration.
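The database preparation mentioned above might look something like the sketch below – the cloud user name, password and grants are placeholders and should match what you later pass to cloudstack-setup-databases and your own security standards:

# Create the empty databases and the cloud user on the new MySQL server
mysql -u root -p -e "CREATE DATABASE cloud; CREATE DATABASE cloud_usage; CREATE DATABASE cloudbridge;"
mysql -u root -p -e "CREATE USER 'cloud'@'%' IDENTIFIED BY 'secret';"
mysql -u root -p -e "GRANT ALL PRIVILEGES ON *.* TO 'cloud'@'%' WITH GRANT OPTION;"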

Upgrade lab testing

As with any technical project, testing the upgrade before carrying it out is important. If you don’t have a production-equal lab to do this in, it is worth the effort to build one.

A couple of things to keep in mind:

  • Try to make the upgrade lab as close to production as possible.
  • Make sure you use the same OSes, the same patch level and the same software versions (especially Java and Tomcat).
  • Use the same hardware model for the hypervisors if possible. These don’t have to have the same CPU / memory spec, but should otherwise match as closely as possible.
  • Use the same storage back-ends for primary and secondary storage. If not possible, at least use the same protocols (NFS, iSCSI, FC, etc.).
  • Try to match your production cloud infrastructure with regards to number of zones.
  • If your production infrastructure has multiple clusters or pods also try to match this.

Once the lab has been built make sure it is prepared with a test workload similar to production:

  • Configure a selection of test domains, accounts and users.
  • Create VM infrastructure similar to production – e.g. configure isolated and shared networks, VPCs, VPN connections etc.
  • Create a number of VMs across multiple guest OS’es – at the very minimum try to have both Windows and Linux VMs running.
  • Make sure you also have VMs with additional disks attached.
  • Keep in mind your post-upgrade testing may be destructive – i.e. you may delete or otherwise break VMs. To cover for this it is good practice to prepare a lot more VMs than you think you may need.

For the test upgrade:

  • Ensure you have rollback points – you may want to run the test upgrade multiple times. An easy way is to take VM snapshots and SAN/NAS snapshots.
  • Using the parallel build process create a new management server and carry out the upgrade (see below).
  • Investigate and fix any problems encountered during the upgrade process and add this to the production upgrade plan.

Once upgraded, carry out thorough functional, regression and user acceptance testing. The nature of these depends very much on the type, complexity and nature of the production cloud, but should at the very minimum include:

User actions
  • Logins with passwords created prior to the upgrade.
  • Creation of new domains / accounts / users.
  • Deletion of old and new domains / accounts / users.
All VM lifecycle actions
  • Create / delete new VMs as well as deletion of existing VMs.
  • Start / stop of new and existing VMs.
  • Additions and removals of disks.
  • Additions and removals of networks to VMs.
Network lifecycle actions
  • Creation of new isolated / shared / VPC networks.
  • Deletion of existing as well as new isolated / shared / VPC networks.
  • Connectivity testing from VMs across all network types.
Storage lifecycle actions
  • Additions and deletions of disks.
Other
  • Test any integrated systems – billing portals, custom GUIs / portals, etc.

If any problems are encountered these need to be investigated and addressed prior to the production upgrade.

Production DB upgrade test

The majority of the CloudStack upgrade is done against the underlying databases. As a result the lab testing discussed above will not highlight any issues resulting from:

  • Schema changes
  • Database or table sizes
  • Internal database constraints

To test for this it is good practice to carry out the following test steps. This process should be carried out as close in time to the actual upgrade as possible:

  • Build a completely network isolated management server using the new upgrade version of CloudStack. For this test simply use the same server for management and MySQL.
  • Import a copy of the production databases.
  • Start the upgrade by running the standard documented cloudstack-setup-databases configuration command as well as cloudstack-setup-management. The latter will complete the server configuration and start the cloudstack-management service.
  • Monitor the CloudStack logs (/var/log/cloudstack/management/management-server.log).

This will highlight any issues with the underlying SQL upgrade scripts. If any problems are encountered these need to be investigated thoroughly and fixed.

Please note – if any problems are found and subsequently fixed, the original database needs to be imported again and the full upgrade restarted; otherwise the upgrade process will attempt to carry out upgrade steps which have already been completed, which in most cases will lead to additional errors.

Once all issues have been resolved and the upgrade completes without errors, note the time taken for the upgrade. The database upgrade process can take from minutes up to multiple hours depending on the size of the production databases – something which needs to be taken into account when planning the production upgrade. If the upgrade takes longer than expected, it may also be prudent to consider database housekeeping of the event and usage tables to cut down on their size and thereby speed up the upgrade process.
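As an illustration only, and assuming the database has already been backed up, housekeeping of old event records might look like the sketch below. The 90-day retention window is an arbitrary example and should be agreed against your own operational and audit requirements:

# Hedged example: purge events older than 90 days from the cloud database
mysql -u root -p -e "DELETE FROM cloud.event WHERE created < DATE_SUB(NOW(), INTERVAL 90 DAY);"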

To prevent re-occurrence of problems during the production upgrade it is key that all database upgrade fixes and workarounds are documented and added to the production upgrade plan.

Install system VM templates

The official upgrade documentation specifies which system VM templates to upload prior to the upgrade.

Please note – whether you do the upgrade in place or on a parallel build, the system VM templates need to be uploaded on the original CloudStack infrastructure (i.e. it cannot be done on the new infrastructure when using parallel builds).

In other words – the template upload can and should be done in the days prior to the upgrade. Uploading these will not have any adverse effect on the existing CloudStack instance – since the new templates are simply uploaded in a similar fashion to any other user template upload. Carrying this out in advance will also ensure the templates should be fully uploaded by the time the production upgrade is carried out, thereby cutting down the upgrade process time.

Other things to consider prior to upgrade

  • Always make sure the production CloudStack infrastructure is healthy prior to the upgrade – an upgrade will very seldom fix underlying issues; these will, however, quickly cause problems during the upgrade and lead to extended downtime and rollbacks.
  • Once the upgrade has been completed it is very difficult to backport any post-upgrade user changes back to the original database in case of a rollback scenario. In other words – it is prudent to disable user access to the CloudStack infrastructure until the upgrade has been deemed a success.
  • It is good practice to at the very minimum have the CloudStack MySQL servers configured in a master-slave configuration. If this has not been done on the original infrastructure it can be configured on the newly built MySQL servers prior to any database imports, again cutting down on processing time during the actual production upgrade.

Upgrade

Step 1 – review all documentation

The official upgrade documentation is updated for each version of CloudStack (e.g. CloudStack 4.6 to 4.9 upgrade guide) – and should always be taken into consideration during an upgrade.

Step 2 – confirm all system VM templates are in place

In the CloudStack GUI make sure the previously uploaded system VM templates are successfully uploaded:

  • Status: Download complete
  • Ready: Yes

Step 3 – stop all running CloudStack services

On all existing CloudStack servers stop and disable the cloudstack-management service:

# service cloudstack-management stop;
# service cloudstack-usage stop;
# chkconfig cloudstack-management off;
# chkconfig cloudstack-usage off;

Step 4 – back up existing databases and disable MySQL

On the existing MySQL servers back up all CloudStack databases:

# mysqldump -u root -p cloud > /root/cloud-backup.sql;
# mysqldump -u root -p cloud_usage > /root/cloud_usage-backup.sql;
# mysqldump -u root -p cloudbridge > /root/cloudbridge-backup.sql;

# service mysqld stop;
# chkconfig mysqld off;

Step 5 – copy and import databases on the new MySQL master server

# scp root@<original MySQL server IP>:/root/cloud*.sql .;
# mysql -u root -p cloud < cloud-backup.sql;
# mysql -u root -p cloud_usage < cloud_usage-backup.sql;
# mysql -u root -p cloudbridge < cloudbridge-backup.sql;

Step 6 – update the “host” global setting

The parallel build method introduces new management servers whilst disabling the old ones. If no load balancers are being used, the “host” global setting needs to be updated to specify which new management server the hypervisors should check into:

# mysql -p;
# update cloud.configuration set value="<new management server IP address | new loadbalancer VIP>" where name="host";

Step 7 – carry out all hypervisor specific upgrade steps

Following the official upgrade documentation upgrade all hypervisors as specified.

Step 8 – configure and start the first management server

On the first management server only, carry out the following steps to configure connectivity to the new MySQL (master) server and start the management service. Note the “cloudstack-setup-databases” command is executed without the “--deploy-as” option:

# cloudstack-setup-databases <cloud db username>:<cloud db password>@<cloud db host>; 
# cloudstack-setup-management;
# service cloudstack-management status;

Step 9 – monitor startup

  • Monitor the CloudStack service startup:
    # tail -f /var/log/cloudstack/management/management-server.log
  • Open the CloudStack GUI and check and confirm all hypervisors check in OK.

Step 10 – update system VMs

  • Destroy the CPVM and make sure it is recreated with the new system VM template.
  • Destroy the SSVM and again make sure it comes back online.
  • Restart guest networks “with cleanup” to ensure all Virtual Routers are recreated with the new template.
  • Checking the versions of system VMs / VRs can be done by either (see the example query after this list):
    • Checking /etc/cloudstack-release locally on each appliance.
    • Checking the vm_instance / vm_template tables in the cloud database.
    • Updating the “minreq.sysvmtemplate.version” global setting to the new system VM template version – which will show which VRs require updating in the CloudStack GUI.
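A hedged sketch of the database check mentioned above (table and column names as per the standard cloud schema; adjust to your CloudStack version):

# List system VMs / routers together with the template they are currently running from
mysql -u root -p -e "SELECT vi.id, vi.name, vi.type, vi.state, vt.name AS template \
  FROM cloud.vm_instance vi JOIN cloud.vm_template vt ON vi.vm_template_id = vt.id \
  WHERE vi.type IN ('DomainRouter','ConsoleProxy','SecondaryStorageVm') AND vi.removed IS NULL;"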

Step 11 – configure and start additional management servers

Following the same steps as in step 8 above configure and start the cloudstack-management service on any additional management servers.

Rollback

Rollbacks of upgrades should only be carried out in worst-case scenarios. A rollback is not an exact science and may require additional steps not covered in this article. In addition, any rollback should be carried out as soon as possible after the original upgrade – otherwise the new infrastructure will simply have too many changes to VMs, networks, storage, etc. which cannot be easily backported to the original database.

Step 1 – disable new CloudStack management / MySQL servers

On the new CloudStack management servers stop and disable the management service:

# service cloudstack-management stop;
# chkconfig cloudstack-management off;

 

On the new MySQL server stop and disable MySQL:

# service mysqld stop;
# chkconfig mysqld off;

Step 2 – tidy up new VMs on the hypervisor infrastructure

Using hypervisor specific tools identify all VMs (system VMs and VRs) created since the upgrade. These instances need to be deleted since the old management infrastructure and databases have no knowledge of them.

Step 3 – enable the old CloudStack management / MySQL servers:

On the original MySQL servers:

# service mysqld start;
# chkconfig mysqld on;

 

On the original CloudStack management servers:

# service cloudstack-management start;
# chkconfig cloudstack-management on;

Step 4 – restart system VMs and VRs

System VMs and VRs may restart automatically (since the original instances are gone). If they do not come online, follow the procedure described in upgrade step 10 to recreate them.

Conclusion

Many CloudStack users are reluctant to upgrade their platforms – however, not upgrading and falling behind on technology and security tends to be a greater risk in the long run. Following these best practices should reduce the overall workload, risk and stress of upgrades, both by allowing a lot of work to be carried out upfront and by providing a relatively straightforward rollback procedure.

As always, we’re happy to receive feedback, so please get in touch with any comments, questions or suggestions.

About The Author

Dag Sonstebo is a Cloud Architect at ShapeBlue, The Cloud Specialists. Dag spends most of his time designing, implementing and automating IaaS solutions based on Apache CloudStack.

While, in my opinion, VMware’s vSphere is the best-performing and most stable hypervisor available, vCenter obstinately remains a single point of failure when using vSphere, and it’s no different when leveraging vCenter in a CloudStack environment. Therefore, very occasionally there is a requirement to rebuild a vCenter server which was previously running in your CloudStack environment. Working with one of our clients, we found out (the hard way) how to do this.

Replacing a Pre-Existing vCenter Instance

Recreating a vCenter Instance

The recovery steps below apply to both a Windows-based vCenter server and a vCenter appliance.

The first step, which I won’t cover in detail here, is to build a new vCenter server and attach the hosts to it. Follow VMware’s standard procedure when building your new vCenter server. To save some additional steps later on, reuse the host name and IP address of the previous vCenter server. Ensure that the permissions have been reapplied, allowing CloudStack to connect at the vCenter level with full administrative privileges, and that datacenter and cluster names have been recreated accurately. When you re-add the hosts, the VMs running on those hosts will automatically be pulled into the vCenter inventory.

Networking

The environment that we were working on was using standard vSwitches; therefore the configuration of those switches was held on each of the hosts independently and pulled into the vCenter inventory when the hosts were re-added to vCenter.  Distributed vSwitches (dvSwitches) have their configuration held in the vCenter.  We did not have to deal with this complication and so the recreation of dvSwitches is not covered here.

Roll-back

The changes that are to be made can easily be undone, as they make no changes to, and trigger no changes in, the physical environment. However, it is never a bad idea to have a backup: you shouldn’t ever need to roll back the whole database, but it’s good to have a record of your before/after states.
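For example, a hedged sketch of backing up just the tables this procedure touches (table names as per the standard cloud schema), so that before/after states are easy to compare:

# Dump only the tables edited later in this article
mysqldump -u root -p cloud host host_details cluster_details vmware_data_center > cloud-vcenter-rebuild-backup.sql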

Update vCenter Password in CloudStack

If you have changed the vCenter password, it’ll need updating in the CloudStack database in its encrypted form.  The following instructions are taken from the CloudStack documentation:

  • Generate the encrypted equivalent of your vCenter password:
$ java -classpath /usr/share/cloudstack-common/lib/jasypt-1.9.0.jar org.jasypt.intf.cli.JasyptPBEStringEncryptionCLI encrypt.sh input="_your_vCenter_password_" password="`cat /etc/cloudstack/management/key`" verbose=false
  • Store the output from this step; we need to add it to the cluster_details and vmware_data_center tables in place of the plain text password.
  • Find the ID of the row of cluster_details table that you have to update:
$ mysql -u <USERNAME> -p<PASSWORD>
select * from cloud.cluster_details;
  • Update the password with the new encrypted one
update cloud.cluster_details set value = '_ciphertext_from_step_1_' where id = _id_from_step_2_;
  • Confirm that the table is updated:
select * from cloud.cluster_details;
  • Find the ID of the correct row of vmware_data_center that you want to update
select * from cloud.vmware_data_center;
  • Update the plain text password with the encrypted one:
update cloud.vmware_data_center set password = '_ciphertext_from_step_1_' where id = _id_from_step_5_;
  • Confirm that the table is updated:
select * from cloud.vmware_data_center;

Reallocating the VMware Datacenters to CloudStack

The next step is to recreate the cloud.zone property within vSphere, which is used to track VMware datacenters that have been connected to CloudStack. Adding a VMware datacenter to CloudStack creates this custom attribute and sets it to “true”. CloudStack checks for this attribute and value, and shows an error in the management log if it attempts to connect to a vCenter server which does not have this attribute set. While the property can be edited using the full vSphere client, it must be created using the PowerCLI utility.

Using the PowerCLI utility do the following:

Connect-VIServer <VCENTER_SERVER_IP>

(you’ll be prompted for a username and password with sufficient permissions for the vCenter Server)

New-CustomAttribute -Name "cloud.zone" -TargetType Datacenter

Get-Datacenter <DATACENTER_NAME> | Set-Annotation -CustomAttribute "cloud.zone" -Value true

Reconnecting the ESXi Hosts to CloudStack

CloudStack will now be connected to the vCenter server, but the hosts will probably still all be marked as down. This is because vCenter uses an internal identifier for hosts, so that names and IP addresses can change while it still keeps track of which host is which.

This identifier appears in two places in CloudStack’s database: the ‘guid’ column in the cloud.host table, and the row against the ‘guid’ name in the cloud.host_details table. The host table can be queried as follows:

SELECT id,name,status,guid FROM cloud.host WHERE hypervisor_type = 'VMware' AND status != 'Removed';

The guid takes the form:

HostSystem:<HOST_ID>@<VCENTER_NAME>

You can find the new host ID either through PowerCLI again, or via a utility like Robware’s RVTools. RVTools is a very lightweight way of viewing information from a vCenter; it also allows export to CSV, so it is very handy for manual reporting and analysis. The screen grab below shows a view called vHosts; note the Object ID. This is vCenter’s internal ID of the host.

RVTools vHosts view (note the Object ID column)

Through PowerCLI you would use:

Connect-VIServer <VCENTER_SERVER_IP>;

Get-VMHost | ft -Property Name,Id -autosize

Now you can update the CloudStack tables with the new host ids for each of your hosts.

cloud.host.guid for host ‘10.0.1.32’ on vCenter ‘10.0.1.11’ might previously have read:

HostSystem:host-99@10.0.1.11

should be updated to:

HostSystem:host-10@10.0.1.11

Remember to update both the cloud.host and cloud.host_details tables.
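A hedged example of that update for the host above, using the example IDs shown; run it while the cloudstack-management service is still stopped and substitute your own old and new values:

# Update the stored guid in both tables (values taken from the example above)
mysql -u root -p -e "UPDATE cloud.host SET guid = 'HostSystem:host-10@10.0.1.11' WHERE guid = 'HostSystem:host-99@10.0.1.11';"
mysql -u root -p -e "UPDATE cloud.host_details SET value = 'HostSystem:host-10@10.0.1.11' WHERE name = 'guid' AND value = 'HostSystem:host-99@10.0.1.11';"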

Once you have updated the entries for all of the active hosts in the zone, you can start the management service again, and all of the hosts should reconnect, as their pointers have been fixed in the CloudStack database.

Summary

This article explains how to re-insert a lost or failed vCenter server into CloudStack, and describes the use of PowerCLI to both set and retrieve values from vCenter.

This process could be used to migrate an appliance-based vCenter to a Windows-based environment in order to support additional capacity, as well as to replace a broken vCenter installation.

About The Author

Paul Angus is VP Technology & Cloud Architect at ShapeBlue, The Cloud Specialists. He has designed and implemented numerous CloudStack environments for customers across 4 continents, based on Apache CloudStack, Citrix CloudPlatform and Citrix CloudPortal.

When not building Clouds, Paul likes to create Ansible playbooks that build clouds

We recently came across a very unusual issue where a client had a major security breach on their network. As well as lots of other damage, their CloudStack infrastructure was maliciously damaged beyond recovery. Luckily the hackers hadn’t managed to damage the backend XenServer hypervisors, so they were quite happily still running user VMs and Virtual Routers, just not under CloudStack control. The client had a good backup regime, so the CloudStack database was available to recover some key CloudStack information from. As building a new CloudStack instance was the only option, the challenge was how to recover all the user VMs to such a state that they could be imported into the new CloudStack instance under the original accounts.

Mapping out user VMs and VHD disks

The first challenge is to map out all the VMs and work out which VMs belong to which CloudStack account, and which VHD disks belong to which VMs. To do this, first recover the original CloudStack database and then query the vm_instance, service_offering, account and domain tables.

In short we are interested in:

  • VM instance ID and names
  • VM instance owner account ID and account name
  • VM instance owner domain ID and domain name
  • VM service offering, which determines the VM CPU / memory spec
  • VM volume ID, name, size, path, type (root or data disk) and state for all VM disks.

At the same time we are not interested in:

  • System VMs
  • User VMs in state “Expunging”, “Expunged”, “Destroyed” or “Error”. The “Error” state would indicate the VM was not healthy on the original infrastructure.
  • VM disk volumes which are in state “Expunged” or “Expunging”. Both of these would indicate the VM was in the process of being deleted on the original CloudStack instance.

From a SQL point of view we do this as follows:

SELECT
cloud.vm_instance.id as vmid,
cloud.vm_instance.name as vmname,
cloud.vm_instance.instance_name as vminstname,
cloud.vm_instance.display_name as vmdispname,
cloud.vm_instance.account_id as vmacctid,
cloud.account.account_name as vmacctname,
cloud.vm_instance.domain_id as vmdomainid,
cloud.domain.name as vmdomname,
cloud.vm_instance.service_offering_id as vmofferingid,
cloud.service_offering.speed as vmspeed,
cloud.service_offering.ram_size as vmmem,
cloud.volumes.id as volid,
cloud.volumes.name as volname,
cloud.volumes.size as volsize,
cloud.volumes.path as volpath,
cloud.volumes.volume_type as voltype,
cloud.volumes.state as volstate
FROM cloud.vm_instance
right join cloud.service_offering on (cloud.vm_instance.service_offering_id=cloud.service_offering.id)
right join cloud.volumes on (cloud.vm_instance.id=cloud.volumes.instance_id)
right join cloud.account on (cloud.vm_instance.account_id=cloud.account.id)
right join cloud.domain on (cloud.vm_instance.domain_id=cloud.domain.id)
where
cloud.vm_instance.type='User'
and not (cloud.vm_instance.state='Expunging' or cloud.vm_instance.state='Destroyed' or cloud.vm_instance.state='Error')
and not (cloud.volumes.state='Expunged' or cloud.volumes.state='Expunging')
order by cloud.vm_instance.id;

This will return a list of VMs and disks like the following:

vmid | vmname  | vminstname | vmdispname | vmacctid | vmacctname  | vmdomainid | vmdomname | vmofferingid | vmspeed | vmmem | volid | volname   | volsize     | volpath                              | voltype  | volstate
24   | rootvm1 | i-2-24-VM  | rootvm1    | 2        | admin       | 1          | ROOT      | 1            | 500     | 512   | 30    | ROOT-24   | 21474836480 | 34c8b964-4ecb-4463-9535-40afc0bd2117 | ROOT     | Ready
25   | ppvm2   | i-5-25-VM  | ppvm2      | 5        | peterparker | 2          | SPDM Inc  | 1            | 500     | 512   | 31    | ROOT-25   | 21474836480 | 10c12a4f-7bf6-45c4-9a4e-c1806c5dd54a | ROOT     | Ready
26   | ppvm3   | i-5-26-VM  | ppvm3      | 5        | peterparker | 2          | SPDM Inc  | 1            | 500     | 512   | 32    | ROOT-26   | 21474836480 | 7046409c-f1c2-49db-ad33-b2ba03a1c257 | ROOT     | Ready
26   | ppvm3   | i-5-26-VM  | ppvm3      | 5        | peterparker | 2          | SPDM Inc  | 1            | 500     | 512   | 36    | ppdatavol | 5368709120  | b9a51f4a-3eb4-4d17-a36d-5359333a5d71 | DATADISK | Ready

This now gives us all the information required to import the VM into the right account once the VHD disk file has been recovered from the original primary storage pool.

Recovering VHD files

Using the information from the database query above we now know that e.g. the VM “ppvm3”:

  • Is owned by the account “peterparker” in domain “SPDM Inc”.
  • Used to have 1 vCPU @ 500MHz and 512MB vRAM.
  • Had two disks:
    • A root disk with ID 7046409c-f1c2-49db-ad33-b2ba03a1c257.
    • A data disk with ID b9a51f4a-3eb4-4d17-a36d-5359333a5d71.

If we now check the original primary storage repository we can see these disks:

-rw-r--r-- 1 root root 502782464 Nov 17 12:08 7046409c-f1c2-49db-ad33-b2ba03a1c257.vhd
-rw-r--r-- 1 root root 13312 Nov 17 11:49 b9a51f4a-3eb4-4d17-a36d-5359333a5d71.vhd

This should in theory make recovery easy. Unfortunately, due to the nature of XenServer VHD disk chains, it's not that straightforward. If we tried to import the root VHD disk as a template this would succeed, but as soon as we try to spin up a new VM from this template we get the "insufficient resources" error from CloudStack. If we trace this back in the CloudStack management log or XenServer SMlog we will most likely find an error along the lines of "Got exception SR_BACKEND_FAILURE_65 ; Failed to load VDI". The root cause is that we have imported a VHD differencing disk, or in common terms a delta or "child" disk, which references a parent VHD disk that we have not yet recovered.

To fully recover healthy VHD images we have two options:

  1. If we have access to the original storage repository from a running XenServer we can use the “xe” command line tools to export each VDI image. This method is preferable as it involves less copy operations and less manual work.
  2. If we have no access from a running XenServer we can copy the disk images and use the “vhd-util” utility to merge files.

Recovery using XenServer

VHD file export using the built-in XenServer tools is relatively straightforward. The "xe vdi-export" tool can be used to export and merge the disk in a single operation. The first step in the process is to map an external storage repository to the XenServer (normally the same repository which is used for upload of the VHD images to CloudStack later on), e.g. an external NFS share.
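A minimal sketch of mounting such an NFS share on the XenServer host so the exported files can be written straight to it, assuming a hypothetical NFS server 10.0.0.5 exporting /recovery:

# mkdir -p /mnt/recovery
# mount -t nfs 10.0.0.5:/recovery /mnt/recovery
# cd /mnt/recovery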

We now use the vdi-export option as follows:

# xe vdi-export uuid=7046409c-f1c2-49db-ad33-b2ba03a1c257 format=vhd filename=ppvm3root.vhd --progress
[|] ######################################################> (100% ETA 00:00:00)
Total time: 00:01:12
# xe vdi-export uuid=b9a51f4a-3eb4-4d17-a36d-5359333a5d71 format=vhd filename=ppvm3data.vhd --progress
[\] ######################################################> (100% ETA 00:00:00)
Total time: 00:00:00
# ll
total 43788816
-rw------- 1 root root 12800 Nov 18 2015 ppvm3data.vhd
-rw------- 1 root root 1890038784 Nov 18 2015 ppvm3root.vhd

If we now utilise vhd-util to scan the disks we see they are both dynamic disks with no parent:

# vhd-util read -p -n ppvm3root.vhd
VHD Footer Summary:
-------------------
Cookie : conectix
Features : (0x00000002) <RESV>
File format version : Major: 1, Minor: 0
Data offset : 512
Timestamp : Sat Jan 1 00:00:00 2000
Creator Application : 'caml'
Creator version : Major: 0, Minor: 1
Creator OS : Unknown!
Original disk size : 20480 MB (21474836480 Bytes)
Current disk size : 20480 MB (21474836480 Bytes)
Geometry : Cyl: 41610, Hds: 16, Sctrs: 63
: = 20479 MB (21474754560 Bytes)
Disk type : Dynamic hard disk
Checksum : 0xffffefb4|0xffffefb4 (Good!)
UUID : 4fc66aa3-ad5e-44e6-a4e2-b7e90ae9c192
Saved state : No
Hidden : 0

VHD Header Summary:
-------------------
Cookie : cxsparse
Data offset (unusd) : 18446744073709
Table offset : 2048
Header version : 0x00010000
Max BAT size : 10240
Block size : 2097152 (2 MB)
Parent name :
Parent UUID : 00000000-0000-0000-0000-000000000000
Parent timestamp : Sat Jan 1 00:00:00 2000
Checksum : 0xfffff44d|0xfffff44d (Good!)

# vhd-util read -p -n ppvm3data.vhd
VHD Footer Summary:
-------------------
Cookie : conectix
Features : (0x00000002) <RESV>
File format version : Major: 1, Minor: 0
Data offset : 512
Timestamp : Sat Jan 1 00:00:00 2000
Creator Application : 'caml'
Creator version : Major: 0, Minor: 1
Creator OS : Unknown!
Original disk size : 5120 MB (5368709120 Bytes)
Current disk size : 5120 MB (5368709120 Bytes)
Geometry : Cyl: 10402, Hds: 16, Sctrs: 63
: = 5119 MB (5368430592 Bytes)
Disk type : Dynamic hard disk
Checksum : 0xfffff16b|0xfffff16b (Good!)
UUID : 03fd60a4-d9d9-44a0-ab5d-3508d0731db7
Saved state : No
Hidden : 0

VHD Header Summary:
-------------------
Cookie : cxsparse
Data offset (unusd) : 18446744073709
Table offset : 2048
Header version : 0x00010000
Max BAT size : 2560
Block size : 2097152 (2 MB)
Parent name :
Parent UUID : 00000000-0000-0000-0000-000000000000
Parent timestamp : Sat Jan 1 00:00:00 2000
Checksum : 0xfffff46b|0xfffff46b (Good!)

# vhd-util scan -f -m'*.vhd' -p
vhd=ppvm3data.vhd capacity=5368709120 size=12800 hidden=0 parent=none
vhd=ppvm3root.vhd capacity=21474836480 size=1890038784 hidden=0 parent=none

These files are now ready for upload to the new CloudStack instance.

Note: using the Xen API it is also in theory possible to download / upload a VDI image straight from XenServer, using the “export_raw_vdi” API call. This can be achieved using a URL like:

https://<account>:<password>@<XenServer IP or hostname>/export_raw_vdi?vdi=<VDI UUID>&format=vhd

At the moment this method unfortunately doesn't download the VHD file as a sparse disk image, hence the VHD image is downloaded at its full original disk size, which makes this a very space-hungry method. It is also a relatively new addition to the Xen API and is marked as experimental. More information can be found on http://xapi-project.github.io/xen-api/snapshots.html.

Recovery using vhd-util

If all we have access to is the original XenServer storage repository we can utilise the "vhd-util" binary, which can be downloaded from http://download.cloud.com.s3.amazonaws.com/tools/vhd-util (note this is a slightly different version to the one built into XenServer).
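For example, downloading the utility and making it executable on the machine holding the copied VHD files:

# wget http://download.cloud.com.s3.amazonaws.com/tools/vhd-util
# chmod 755 vhd-util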

If we run this with the “read” option we can find out more information about what kind of disk this is and if it has a parent. For the root disk this results in the following information:

# vhd-util read -p -n 7046409c-f1c2-49db-ad33-b2ba03a1c257.vhd
VHD Footer Summary:
-------------------
Cookie : conectix
Features : (0x00000002) <RESV>
File format version : Major: 1, Minor: 0
Data offset : 512
Timestamp : Sun Nov 15 23:06:44 2015
Creator Application : 'tap'
Creator version : Major: 1, Minor: 3
Creator OS : Unknown!
Original disk size : 20480 MB (21474836480 Bytes)
Current disk size : 20480 MB (21474836480 Bytes)
Geometry : Cyl: 41610, Hds: 16, Sctrs: 63
: = 20479 MB (21474754560 Bytes)
Disk type : Differencing hard disk
Checksum : 0xffffefe6|0xffffefe6 (Good!)
UUID : 2a2cb4fb-1945-4bad-9682-6ea059e64598
Saved state : No
Hidden : 0

VHD Header Summary:
-------------------
Cookie : cxsparse
Data offset (unusd) : 18446744073709
Table offset : 1536
Header version : 0x00010000
Max BAT size : 10240
Block size : 2097152 (2 MB)
Parent name : cfe206b7-cacb-4291-a1c0-f248ff53ed4b.vhd
Parent UUID : f6f05652-20fa-4f5f-9784-d41734489b32
Parent timestamp : Fri Nov 13 12:16:48 2015
Checksum : 0xffffd82b|0xffffd82b (Good!)

From the above we notice two things about the root disk:

  • Disk type : Differencing hard disk
  • Parent name : cfe206b7-cacb-4291-a1c0-f248ff53ed4b.vhd

I.e. the VM root disk is a delta disk which relies on parent VHD disk cfe206b7-cacb-4291-a1c0-f248ff53ed4b.vhd.

If we run this against the data disk the story is slightly different:

# vhd-util read -p -n b9a51f4a-3eb4-4d17-a36d-5359333a5d71.vhd
VHD Footer Summary:
-------------------
Cookie : conectix
Features : (0x00000002) <RESV>
File format version : Major: 1, Minor: 0
Data offset : 512
Timestamp : Mon Nov 16 01:52:51 2015
Creator Application : 'tap'
Creator version : Major: 1, Minor: 3
Creator OS : Unknown!
Original disk size : 5120 MB (5368709120 Bytes)
Current disk size : 5120 MB (5368709120 Bytes)
Geometry : Cyl: 10402, Hds: 16, Sctrs: 63
: = 5119 MB (5368430592 Bytes)
Disk type : Dynamic hard disk
Checksum : 0xfffff158|0xfffff158 (Good!)
UUID : 7013e511-b839-4504-ba88-269b2c97394e
Saved state : No
Hidden : 0

VHD Header Summary:
-------------------
Cookie : cxsparse
Data offset (unusd) : 18446744073709
Table offset : 1536
Header version : 0x00010000
Max BAT size : 2560
Block size : 2097152 (2 MB)
Parent name :
Parent UUID : 00000000-0000-0000-0000-000000000000
Parent timestamp : Sat Jan 1 00:00:00 2000
Checksum : 0xfffff46d|0xfffff46d (Good!)

In other words the data disk is showing up with:

  • Disk type : Dynamic hard disk
  • Parent name : <blank>

This behaviour is typical for VHD disk chains. The root disk is created from an original template file, hence it has a parent disk, whilst the data disk was just created as a raw storage disk, hence has no parent.

Before moving forward with the recovery it is very important to make copies of both the differencing disks and parent disk to a separate location for further processing. 

The full recovery of the VM instance root disk relies on the differencing disk being coalesced, or merged, into the parent disk. However, since the parent disk was a template disk used by a number of differencing disks, the coalesce process will change this parent disk and render any other differencing disks unrecoverable.
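For example, for the ppvm3 root disk and its parent identified above, with /mnt/recovery/ppvm3 as a placeholder destination:

# mkdir -p /mnt/recovery/ppvm3
# cp 7046409c-f1c2-49db-ad33-b2ba03a1c257.vhd cfe206b7-cacb-4291-a1c0-f248ff53ed4b.vhd /mnt/recovery/ppvm3/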

Once we have copied the root VHD disk and its parent disk to a separate location, we use the vhd-util "scan" option to verify we have all disks in the disk chain. The "scan" option will show an indented list of disks which gives a tree-like view of disks and parent disks.

Please note if the original VM had a number of snapshots there might be more than two disks in the chain. If so use the process above to identify all the differencing disks and download them to the same folder.

# vhd-util scan -f -m'*.vhd' -p
vhd=cfe206b7-cacb-4291-a1c0-f248ff53ed4b.vhd capacity=21474836480 size=1758786048 hidden=1 parent=none
vhd=7046409c-f1c2-49db-ad33-b2ba03a1c257.vhd capacity=21474836480 size=507038208 hidden=0 parent=cfe206b7-cacb-4291-a1c0-f248ff53ed4b.vhd

Once all differencing disks have been copied we can now use the vhd-util “coalesce” option to merge the child difference disk(s) into the parent disk:

# ls -l
-rw-r--r-- 1 root root 507038208 Nov 17 12:45 7046409c-f1c2-49db-ad33-b2ba03a1c257.vhd
-rw-r--r-- 1 root root 1758786048 Nov 17 12:47 cfe206b7-cacb-4291-a1c0-f248ff53ed4b.vhd
# vhd-util coalesce -n 7046409c-f1c2-49db-ad33-b2ba03a1c257.vhd
# ls -l
-rw-r--r-- 1 root root 507038208 Nov 17 12:45 7046409c-f1c2-49db-ad33-b2ba03a1c257.vhd
-rw-r--r-- 1 root root 1863848448 Nov 17 13:36 cfe206b7-cacb-4291-a1c0-f248ff53ed4b.vhd

Note the vhd-util coalesce option has no output. Also note the size change of the parent disk cfe206b7-cacb-4291-a1c0-f248ff53ed4b.vhd.

Now the root disk has been merged it can be uploaded as a template to CloudStack to allow build of the original VM.

Import of VM into new CloudStack instance

We now have all details for the original VM:

  • Owner account details
  • VM virtual hardware specification
  • Merged root disk
  • Data disk

The import process is now relatively straight forward. For each VM:

  1. Ensure the account is created.
  2. In the context of the account (either via GUI or API):
    1. Import the root disk as a new template.
    2. Import the data disk as a new volume.
  3. Create a new instance from the uploaded template.
  4. Once the new VM instance is online attach the uploaded data disk to the VM.

In a larger CloudStack estate the above process is obviously both time consuming and resource intensive, but it can to a certain degree be automated. As long as the VHD files were healthy to start off with, it will allow for successful recovery of XenServer based VMs between CloudStack instances.
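As an illustration of how the import could be scripted, here is a rough sketch for a single VM using CloudMonkey against the new CloudStack instance. CloudMonkey, the placeholder UUIDs and the http://fileserver.example.com URLs are not part of the original procedure and would need substituting with your own values:

# Register the merged root disk as a template for the original account
cloudmonkey register template name=ppvm3-root displaytext=ppvm3-root format=VHD \
  hypervisor=XenServer zoneid=<zone-uuid> ostypeid=<ostype-uuid> \
  url=http://fileserver.example.com/ppvm3root.vhd account=peterparker domainid=<domain-uuid>
# Upload the data disk as a volume for the same account
cloudmonkey upload volume name=ppdatavol format=VHD zoneid=<zone-uuid> \
  url=http://fileserver.example.com/ppvm3data.vhd account=peterparker domainid=<domain-uuid>
# Deploy the VM from the new template, then attach the uploaded volume
cloudmonkey deploy virtualmachine name=ppvm3 templateid=<template-uuid> \
  serviceofferingid=<offering-uuid> zoneid=<zone-uuid> account=peterparker domainid=<domain-uuid>
cloudmonkey attach volume id=<volume-uuid> virtualmachineid=<vm-uuid>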

About The Author

Dag Sonstebo is a Cloud Architect at ShapeBlue, The Cloud Specialists. Dag spends most of his time designing and implementing IaaS solutions based on Apache CloudStack.

 

Recently we’ve seen a few clients being tripped up by the System VM upgrades during the CloudStack upgrades in multi-zone deployments.

The issue occurs when the System VM template is pre-deployed individually to multiple zones, rather than being a single template deployed to multiple zones. This may be done in error or because the specific version of CloudStack/CloudPlatform which you are upgrading from does not allow you to provision the template as zone-wide.

The result is that only the system VMs in one zone recreate properly after the upgrade and the other zones do not recreate the SSVM or CPVM when they are destroyed.

The vm_template table will have entries like this for a given System VM template:

 SELECT id,name,type,cross_zones,state FROM cloud.vm_template WHERE name like '%systemvm-xenserver%' AND removed IS NULL; 

id | name | type | cross_zones | state
634 | systemvm-xenserver-4.5 | SYSTEM | 0 | Active
635 | systemvm-xenserver-4.5 | SYSTEM | 0 | Active
636 | systemvm-xenserver-4.5 | SYSTEM | 0 | Active

And template_store_ref:

 SELECT template_store_ref.id,template_store_ref.store_id,template_store_ref.template_id,image_store.data_center_id,image_store.name FROM template_store_ref JOIN image_store ON template_store_ref.store_id=image_store.id WHERE template_store_ref.destroyed=0; 

id | store_id | template_id | data_center_id | name
2 | 1 | 634 | 1 |
3 | 2 | 635 | 2 |
4 | 3 | 636 | 3 |

The tables should actually show the same template ID in each store:

id | name | type | cross_zones | state
634 | systemvm-xenserver-4.5 | SYSTEM | 1 | Active

and template_store_ref:

id | store_id | template_id | data_center_id | name
2 | 1 | 634 | 1 |
3 | 2 | 634 | 2 |
4 | 3 | 634 | 3 |

In the case of the first table, CloudStack will take the template with id 636 as being THE region-wide System VM template, will not be able to find it in zones 1 and 2, and therefore will not be able to recreate System VMs in those zones. The management server log will report that the zone is not ready as it can't find the template.

The above example assumes that there are 3 zones, with 1 secondary storage pool in each zone. If there are multiple secondary storage pools, then only one in each zone will have initially received the template.

You could edit the path in the template_store_ref to point to the location where the template has been downloaded to on the secondary storage, however, this will leave the system in a non-standard state.

The safest way forward is to set one of the templates to cross_zones=1 and then coax CloudStack into creating an SSVM in each zone (one at a time), which will then download that cross-zone template.

Remediation

In this example I’ll use template 634 to be the ultimate region-wide template.

In order to download the cross_zones template we need to make one template the region-wide template at a time. So, assuming that the zone relating to secondary storage pool 3 upgraded properly, we first make template 635 the region-wide template by changing the database as follows:

UPDATE cloud.vm_template SET state='Inactive', cross_zones=1 WHERE id=634;
UPDATE cloud.vm_template SET state='Active' WHERE id=635;
UPDATE cloud.vm_template SET state='Inactive' WHERE id=636;

id | name | type | cross_zones | state
634 | systemvm-xenserver-4.5 | SYSTEM | 1 | Inactive
635 | systemvm-xenserver-4.5 | SYSTEM | 0 | Active
636 | systemvm-xenserver-4.5 | SYSTEM | 0 | Inactive

Zone 2 will now be able to recreate the CPVM and SSVM, while the zone 3 CPVM and SSVM are already running so are unaffected.
Once the CPVM and SSVM in zone 2 are up, you can make template 634 the region-wide system VM template by editing the database like this:

UPDATE cloud.vm_template SET state='Active' WHERE id=634;
UPDATE cloud.vm_template SET state='Inactive' WHERE id=635;

id | name | type | cross_zones | state
634 | systemvm-xenserver-4.5 | SYSTEM | 1 | Active
635 | systemvm-xenserver-4.5 | SYSTEM | 0 | Inactive
636 | systemvm-xenserver-4.5 | SYSTEM | 0 | Inactive

The CPVM and SSVM in zone 1 will now be able to start.

The SSVMs in zones 2 and 3 will need their cloud service restarted in order to prompt a re-scan of the templates table. This can be done by restarting, or stopping then starting, the cloud service on the SSVM. DO NOT DESTROY the SSVMs in zones 2 or 3 as they don't have the correct template yet. The SSVMs in zones 2 and 3 will then start downloading template id 634 to their secondary storage pools.
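For XenServer based system VMs this can be done over the SSVM's link-local address from the hypervisor host it is running on; a sketch, where the link-local IP is a placeholder taken from the CloudStack UI:

# From the XenServer host running the SSVM
ssh -i /root/.ssh/id_rsa.cloud -p 3922 root@<ssvm-link-local-ip>
# Then, inside the SSVM, restart the agent service
service cloud restart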

Once template 634 has downloaded to zones 2 and 3 (it was already in zone 1), CloudStack will be able to recreate system VMs in any zone.

Summary

This article explains how to recover from the upgrade failure scenario when system VMs will not recreate in zones in a multi-zone deployment.

About The Author

Paul Angus is VP Technology & Cloud Architect at ShapeBlue, The Cloud Specialists. He has designed and implemented numerous CloudStack environments for customers across 4 continents, based on Apache CloudStack, Citrix CloudPlatform and Citrix CloudPortal.

When not building Clouds, Paul likes to create Ansible playbooks that build clouds

As a cloud infrastructure scales to hundreds or thousands of servers, high availability becomes a key requirement of the production environments supporting multiple applications and services. Since the management servers use a MySQL database to store the state of all its objects, the database could become a single point of failure. The CloudStack manual recommends MySQL replication with manual failover in the event of database loss.

We have worked with Severalnines to produce what we believe is a better way.

In this blog post, we’ll show you how to deploy redundant CloudStack management servers with MariaDB Galera Cluster on CentOS 6.5 64bit. We will have two load balancer nodes fronting the management servers and the database servers. Since CloudStack relies on MySQL’s GET_LOCK and RELEASE_LOCK functions, which are not supported by Galera, we will redirect all database requests to only one MariaDB node and automatically failover to another node in case the former goes down. So, we’re effectively getting the HA benefits of Galera clustering (auto-failover, full consistency between DB nodes, no slave lag), while avoiding the Galera limitations as we’re not concurrently accessing all the nodes. We will deploy a two-node Galera Cluster (plus an arbitrator on a separate ClusterControl node).

Our setup will look like this:

Note that this blog post does not cover the installation of hypervisor and storage hosts. Our setup consists of 4 servers:

  • lb1: HAproxy + keepalived (master)
  • lb2: HAproxy + keepalived (backup) + ClusterControl + garbd
  • mgm1: CloudStack Management + database server
  • mgm2: CloudStack Management + database server

 

Our main steps would be:

  1. Prepare 4 hosts
  2. Deploy MariaDB Galera Cluster 10.x with garbd onto mgm1, mgm2 and lb2 from lb2
  3. Configure Keepalived and HAProxy for database and CloudStack load balancing
  4. Install CloudStack Management #1
  5. Install CloudStack Management #2

 

Preparing Hosts

1. Add the following hosts definition in /etc/hosts of all nodes:

192.168.1.10		virtual-ip mgm.cloudstack.local
192.168.1.11		lb1 haproxy1
192.168.1.12		lb2 haproxy2 clustercontrol
192.168.1.21		mgm1.cloudstack.local mgm1 mysql1
192.168.1.22		mgm2.cloudstack.local mgm2 mysql2

2. Install NTP daemon:

$ yum -y install ntp
$ chkconfig ntpd on
$ service ntpd start

3. Ensure each host is using a valid FQDN, for example on mgm1:

$ hostname --fqdn
mgm1.cloudstack.local

Deploying MariaDB Galera Cluster

** The deployment of the database cluster will be done from lb2, i.e., the ClusterControl node.

1. To set up MariaDB Galera Cluster, go to the Severalnines Galera Configurator to generate a deployment package. In the wizard, we used the following values when configuring our database cluster (take note that we specified one of the DB nodes twice under Database Servers’ textbox):

Vendor                   : MariaDB
MySQL Version            : 10.x
Infrastructure           : none/on-premises 
Operating System         : RHEL6 - Redhat 6.4/Fedora/Centos 6.4/OLN 6.4/Amazon AMI 
Number of Galera Servers : 3
Max connections	     	 : 350
OS user                  : root
ClusterControl Server    : 192.168.1.12
Database Servers         : 192.168.1.21 192.168.1.22 192.168.1.22

At the end of the wizard, a deployment package will be generated and emailed to you.

2. Download and extract the deployment package:

$ wget http://www.severalnines.com/galera-configurator3/tmp/wb06494200669221809/s9s-galera-mariadb-3.5.0-rpm.tar.gz
$ tar -xzf s9s-galera-mariadb-3.5.0-rpm.tar.gz

3. Before we proceed with the deployment, we need to perform some customization to fit the CloudStack database environment. Go to the deployment script’s MySQL configuration file at ~/s9s-galera-mariadb-3.5.0-rpm/mysql/config/my.cnf and ensure the following options exist under the [MYSQLD] section:

innodb_rollback_on_timeout=1
innodb_lock_wait_timeout=600
log-bin=mysql-bin

4. Then, go to ~/s9s-galera-mariadb-3.5.0-rpm/mysql/config/cmon.cnf.controller and remove the repeated value on mysql_server_addresses so it becomes as below:

mysql_server_addresses=192.168.1.21,192.168.1.22

5. Now we are ready to start the deployment:

$ cd ~/s9s-galera-mariadb-3.5.0-rpm/mysql/scripts/install/
$ bash ./deploy.sh 2>&1 | tee cc.log

6. The DB cluster deployment will take about 15 minutes, and once completed, the ClusterControl UI is accessible at https://192.168.1.12/clustercontrol .

7. It is recommended to run Galera on at least three nodes. So, install garbd, a lightweight arbitrator daemon for Galera on the ClusterControl node from the ClusterControl UI. Go to Manage > Load Balancer > Install Garbd > choose the ClusterControl node IP address from the dropdown > Install Garbd.

You will now see your MariaDB Galera Cluster with garbd installed and binlog enabled (master) as per below:

 

Load Balancer and Virtual IP

1. Before we start to deploy the load balancers, make sure lb1 is accessible using passwordless SSH from ClusterControl/lb2. On lb2, copy the SSH keys to 192.168.1.11:

$ ssh-copy-id -i ~/.ssh/id_rsa root@192.168.1.11

2. Login to ClusterControl, drill down to the database cluster and click Add Load Balancer button. Deploy HAProxy on lb1 and lb2 similar to below:

** Take note that for RHEL, ensure you check Build from source? to install HAProxy from source. This will install the latest version of HAProxy.

3. Install Keepalived on lb1(master) and lb2(backup) with 192.168.1.10 as virtual IP:

4. The load balancer nodes have now been installed, and are integrated with ClusterControl. You can verify this by checking out the ClusterControl summary bar:

5. By default, our script will configure the MySQL reverse proxy service to listen on port 33306 in active-active mode. We need to change this to active-passive multi-master mode by declaring the second Galera node as a backup. On lb1 and lb2, open /etc/haproxy/haproxy.cfg and append the word ‘backup’ to the last line:

	server 192.168.1.21 192.168.1.21:3306 check
	server 192.168.1.22 192.168.1.22:3306 check backup

6. We also need to add the load balancing definition for CloudStack. According to the documentation, we need to load balance ports 8080 and 8250. To allow session stickiness, we will use the source load balancing algorithm, where the same source address will be forwarded to the same management server unless it fails. On lb1 and lb2, open /etc/haproxy/haproxy.cfg and add the following lines:

listen cloudstack_ui_8080
        bind *:8080
        mode http
        option httpchk OPTIONS /client
        option forwardfor
        option httplog
        balance source
        server mgm1.cloudstack.local 192.168.1.21:8080 maxconn 32 check inter 5000
        server mgm2.cloudstack.local 192.168.1.22:8080 maxconn 32 check inter 5000
listen cloudstack_systemvm_8250
        bind *:8250
        mode tcp
        balance source
        server mgm1.cloudstack.local 192.168.1.21:8250 maxconn 32 check
        server mgm2.cloudstack.local 192.168.1.22:8250 maxconn 32 check

7. Restart HAProxy to apply the changes:

$ service haproxy restart

Or, you can just kill the haproxy process and let ClusterControl recover it.

8. Configure iptables to allow connections to the ports configured in HAProxy and Keepalived. Add the following rules:

$ iptables -I INPUT -m tcp -p tcp --dport 33306 -j ACCEPT
$ iptables -I INPUT -m tcp -p tcp --dport 8080 -j ACCEPT
$ iptables -I INPUT -m tcp -p tcp --dport 8250 -j ACCEPT
$ iptables -I INPUT -m tcp -p tcp --dport 80 -j ACCEPT
$ iptables -I INPUT -m tcp -p tcp --dport 443 -j ACCEPT
$ iptables -I INPUT -m tcp -p tcp --dport 9600 -j ACCEPT
$ iptables -I INPUT -i eth0 -d 224.0.0.0/8 -j ACCEPT
$ iptables -I INPUT -p 112 -i eth0 -j ACCEPT
$ iptables -I OUTPUT -p 112 -o eth0 -j ACCEPT

Save the iptables rules:

$ service iptables save

Installing CloudStack Management Server #1

** The following steps should be performed on mgm1.

1. Add the CloudStack repository, create /etc/yum.repos.d/cloudstack.repo and insert the following information.

[cloudstack]
name=cloudstack
baseurl=http://packages.shapeblue.com/cloudstack/upstream/centos/4.4/
enabled=1
gpgcheck=0

2. Install the CloudStack Management server:

$ yum -y install cloudstack-management

3. Create a root user with wildcard host for the CloudStack database setup. On mgm1, run the following statement:

$ mysql -u root -p

And execute the following statements:

MariaDB [(none)]> GRANT ALL PRIVILEGES ON *.* TO root@'%' IDENTIFIED BY 'password' WITH GRANT OPTION;
MariaDB [(none)]> FLUSH PRIVILEGES;

4. On mgm1, run the following command to configure the CloudStack databases:

$ cloudstack-setup-databases cloud:cloudpassword@192.168.1.10:33306 --deploy-as=root:password

5. Setup the CloudStack management application:

$ cloudstack-setup-management

** Allow some time for the CloudStack application to bootstrap on each startup. You can monitor the process at /var/log/cloudstack/management/catalina.out.

6. Open the CloudStack management UI at virtual IP, http://192.168.1.10:8080/client/ with default user ‘admin’ and password ‘password’. Configure your CloudStack environment by following the deployment wizard and let CloudStack build the infrastructure:

If completed successfully, you should then be redirected to the CloudStack Dashboard:

The installation of the first management server is now complete. We’ll now proceed with the second management server.

 

Installing CloudStack Management Server #2

** The following steps should be performed on mgm2

1. Add the CloudStack repository, create /etc/yum.repos.d/cloudstack.repo and insert the following information.

[cloudstack]
name=cloudstack
baseurl=http://cloudstack.apt-get.eu/rhel/4.4/
enabled=1
gpgcheck=0

2. Install the CloudStack Management server:

$ yum -y install cloudstack-management

3. Run the following command to setup the CloudStack database (note the absence of –deploy-as argument):

$ cloudstack-setup-databases cloud:cloudpassword@192.168.1.10:33306

4. Setup the CloudStack management application:

$ cloudstack-setup-management

** Allow some time for the CloudStack application to bootstrap on each startup. You can monitor the process at /var/log/cloudstack/management/catalina.out. At this point, this management server will automatically discover the other management server and form a cluster. Both management servers are load balanced and accessible via the virtual IP, 192.168.1.10.

Lastly, change the management host IP address on every agent host at /etc/cloudstack/agent/agent.properties to the virtual IP address, similar to below:

host=192.168.1.10

Restart the cloudstack agent service to apply the change:

$ service cloudstack-agent restart

Verify the Setup

1. Check the HAProxy statistics by logging into the HAProxy admin page at lb1 host port 9600. The default username/password is admin/admin. You should see the status of nodes from the HAProxy point-of-view. Our Galera cluster is in master-standby mode, while the CloudStack management servers are load balanced:

2. Check and observe the traffic on your database cluster from the ClusterControl overview page at https://192.168.1.12/clustercontrol
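A quick command line sanity check of the load balanced endpoints is also possible, assuming curl and a MySQL client are available; for example, verifying that the CloudStack UI answers on the virtual IP and that MySQL is reachable through HAProxy on port 33306 (use the cloud user password given to cloudstack-setup-databases earlier):

$ curl -I http://192.168.1.10:8080/client/
$ mysql -h 192.168.1.10 -P 33306 -u cloud -p -e "SELECT 1;"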

Reproduced with the kind permission of Severalnines
http://www.severalnines.com/blog/how-deploy-high-availability-cloudstack-mariadb-galera-cluster

 

Introduction

If you are new to Apache CloudStack and want to learn the concepts but do not have all the equipment required to stand up a test environment, why not use your existing PC and VirtualBox?

VirtualBox is a cross platform virtualisation application which runs on OSX, Windows, Linux and Solaris, meaning no matter what OS you are running, you should be able to run VirtualBox.

The aim of this exercise is to build an Apache CloudStack environment which is as close to a Production deployment as possible, within the obvious constraints of running it all on a laptop. This deployment will support the following key functions of Apache CloudStack:

Production Grade Hypervisor: Citrix XenServer 6.2 with full VLAN support
Apache CloudStack on CentOS 6.5
NFS for Primary and Secondary Storage – each on a dedicated VLAN
Console Proxy and Secondary Storage VM
All Advanced Networking features such as Firewall, NAT, Port Forwarding, Load Balancing, VPC

To achieve all of this we need to deploy two VMs on VirtualBox, a CentOS VM for Apache CloudStack, and a Citrix XenServer VM for our Hypervisor. The CloudStack VM will also act as our MySQL Server and NFS Server.

appliance

A key requirement of this test environment is to keep it completely self-contained so it can be used for training and demos etc. To achieve this, and maintain the ability to deploy a new Zone and download the example CentOS Template to use in the system, we simulate the CloudStack Public Network and host the Default CentOS Template on the CloudStack Management Server VM using NGINX.

VirtualBox Configuration

Download and install the appropriate version from https://www.virtualbox.org/wiki/Downloads

Once VirtualBox is installed we need to configure it ready for this environment. The defaults are used where possible, but if you have been using VirtualBox already, you may have different settings which need to be adjusted.

We will be using three ‘Host Only’ networks, one ‘NAT’ network, and an ‘Internal’ network. By default VirtualBox has only one ‘Host Only’ network so we need to create two more.

  1. From the ‘file’ menu (windows) or VirtualBox menu (OSX), select ‘Preferences’ then ‘Network’ then ‘Host-only Networks’
  2. Add two more networks so you have at least 3 which we can use
  3. Set up the IP schema for the first two networks as follows:

The naming conventions for Host Only Networks differ depending on the Host OS, so I will simply refer to these as ‘Host Only Network 1’, 2 and 3 etc. Please refer to the following comparison matrix to identify the correct network.

This Guide | Windows | OSX
Host Only Network 1 | VirtualBox Host Only Ethernet Adapter | vboxnet0
Host Only Network 2 | VirtualBox Host Only Ethernet Adapter #2 | vboxnet1
Host Only Network 3 | VirtualBox Host Only Ethernet Adapter #3 | vboxnet2

Host Only Network 1:

IPv4 Address: 192.168.56.1
IPv4 Network Mask: 255.255.255.0

DHCP Server is optional as we don’t use it, but ensure the range does not clash with the static IPs we will be using which are 192.168.56.11 & 192.168.56.101

Host Only Network 2:

IPv4 Address: 172.30.0.1
IPv4 Network Mask: 255.255.255.0
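The same host-only networks can also be created and addressed from the command line if preferred; a sketch using VBoxManage (the interface names shown are the OSX/Linux ones, on Windows refer to the adapter names in the table above):

# The default host-only network already exists, so create two more
VBoxManage hostonlyif create
VBoxManage hostonlyif create
# Assign the IP schemas used in this guide
VBoxManage hostonlyif ipconfig vboxnet0 --ip 192.168.56.1 --netmask 255.255.255.0
VBoxManage hostonlyif ipconfig vboxnet1 --ip 172.30.0.1 --netmask 255.255.255.0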

By setting up these IP ranges, we ensure our host laptop has an IP on these Networks so we can access the VMs connected to them. We don’t need an IP on ‘Host Only Network 3’ as this will be used for storage and will also be running VLANs.

We use a NAT Network so that we can connect the CloudStack Management VM to the internet to enable the installation of the various packages we will be using.

Configure the VirtualBox ‘NatNetwork’ to use the following settings:

Network Name: NatNetwork
Network CIDR: 10.0.2.0/24

We disable DHCP as we cannot control the range to exclude our statically assigned IPs on our VMs.

Whilst this article focuses on creating a single CloudStack Management Server, you can easily add a second. I have found that the DHCP allocated IPs from the NAT Network can change randomly, which makes setting up NAT rules problematic, hence I always use statically assigned IPs.
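If you prefer, the NAT network can also be created from the command line; a sketch of the equivalent VBoxManage call with DHCP disabled:

VBoxManage natnetwork add --netname NatNetwork --network "10.0.2.0/24" --enable --dhcp off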

The ‘Internal’ Network requires no configuration.

CloudStack VM

Create a VM for CloudStack Manager using the following Settings:

Name: CSMAN 4.4.1
Type: Linux
Version: Red Hat (64 bit)
RAM: 2048 (you cannot go lower than this for initial setup)
Hard Drive: VDI – Dynamic – 64 GB (we allocate this much as it will act as NFS Storage)

Note: VirtualBox seems to mix up the networks if you add them all at the same time so we add the 1st Network and install CentOS, then once fully installed, we add the additional networks, rebooting in-between. This appears to be a bug in the latest versions of VirtualBox (4.3.18 at the time of writing)

Modify the settings and assign ONLY the 1st network adapter to the correct network as follows:

csman-adapter-1

Install CentOS 6.5 64-bit minimal, set the Hostname to CSMAN, and IP address to 192.168.56.11/24 with a gateway of 192.168.56.1, and ensure the network is set to start on boot. Set DNS to public servers such as 8.8.8.8 & 8.8.4.4

Once the install is completed reboot the VM and confirm eth0 is active, then shutdown the VM and add the 2nd Network Adapter

csman-adapter-2

Boot the VM so it detects the NIC, then shut down and add the 3rd Adapter

csman-adapter-3

Boot the VM so it detects the NIC, then shut down and add the 4th Adapter

csman-adapter-4

Finally, boot the VM so it detects the last adapter and then we can configure the various interfaces with the correct IP schemas.

ifcfg-eth0

DEVICE=eth0
TYPE=Ethernet
IPADDR=192.168.56.11
PREFIX=24
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME=MGMT

ifcfg-eth1

DEVICE=eth1
TYPE=Ethernet
IPADDR=10.0.2.11
GATEWAY=10.0.2.1
PREFIX=24
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
DEFROUTE=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME=NAT

ifcfg-eth2

DEVICE=eth2
TYPE=Ethernet
IPADDR=172.30.0.11
PREFIX=24
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME=PUBLIC 

ifcfg-eth3

DEVICE=eth3
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
MTU=9000
VLAN=yes
USERCTL=no

ifcfg-eth3.100

DEVICE=eth3.100
TYPE=Ethernet
IPADDR=10.10.100.11
PREFIX=24
ONBOOT=yes
BOOTPROTO=none
NAME=PRI-STOR
VLAN=yes
USERCTL=no
MTU=9000 

ifcfg-eth3.101

DEVICE=eth3.101
TYPE=Ethernet
IPADDR=10.10.101.11
PREFIX=24
ONBOOT=yes
BOOTPROTO=none
NAME=SEC-STOR
VLAN=yes
USERCTL=no
MTU=9000

Restart networking to apply the new settings, then apply all the latest updates

service network restart
yum update -y

You can now connect via SSH using PuTTY to continue the rest of the configuration, so you can copy and paste commands and settings etc.

Installation and Configuration

With the base VM built we now need to install Apache CloudStack and all the other services this VM will be hosting. First we need to ensure the VM has the correct configuration.

Selinux

Selinux needs to be set to ‘permissive’, we can achieve this by running the following commands:

setenforce permissive
sed -i "/SELINUX=enforcing/ c\SELINUX=permissive" /etc/selinux/config

Hostname

The CloudStack Management Server should return its FQDN when you run hostname --fqdn, but as we do not have a working DNS installation it will probably return ‘unknown-host’. To resolve this we simply add an entry into the hosts file, and while we are there, we may as well add one for the XenServer as well. Update /etc/hosts with the following, then reboot for it to take effect.

127.0.0.1 localhost localhost.cstack.local
192.168.56.11 csman.cstack.local csman
192.168.56.101 xenserver.cstack.local xenserver

 

Speed up SSH Connections

As you will want to use SSH to connect to the CloudStack VM, it’s worth turning off the DNS check to speed up the connection. Run the following commands:

sed -i "/#UseDNS yes/ c\UseDNS no" /etc/ssh/sshd_config
service sshd restart

 

NTP

It’s always a good idea to install NTP so let’s add it now, and set it to start on boot (you can always configure this VM to act as the NTP Server for the XenServer, but that’s out of scope for this article)

yum install -y ntp
chkconfig ntpd on
service ntpd start

 

CloudStack Repo

Setup the CloudStack repo by running the following command:

echo "[cloudstack]
name=cloudstack
baseurl=http://packages.shapeblue.com/cloudstack/main/centos/4.4
enabled=1
gpgcheck=1" > /etc/yum.repos.d/cloudstack.repo

 

Import the ShapeBlue gpg release key: (Key ID 584DF93F, Key fingerprint = 7203 0CA1 18C1 A275 68B1 37C4 BDF0 E176 584D F93F)

yum install wget -y
wget http://packages.shapeblue.com/release.asc
sudo rpm --import release.asc

 

Install CloudStack and MySQL

Now we can install CloudStack and MySQL Server

yum install -y cloudstack-management mysql-server

 

Setup NFS Server

As the CSMAN VM will also be acting as the NFS Server we need to setup the NFS environment. Run the following commands to create the folders for Primary and Secondary Storage and then export them to the appropriate IP ranges.

mkdir /exports
mkdir -p /exports/primary
mkdir -p /exports/secondary
chmod 777 -R /exports
echo "/exports/primary 10.10.100.0/24(rw,async,no_root_squash)" > /etc/exports
echo "/exports/secondary 10.10.101.0/24(rw,async,no_root_squash)" >> /etc/exports
exportfs -a

 

We now need to update /etc/sysconfig/nfs with the settings to activate the NFS Server. Run the following command to update the required settings

sed -i -e '/#MOUNTD_NFS_V3="no"/ c\MOUNTD_NFS_V3="yes"' -e '/#RQUOTAD_PORT=875/ c\RQUOTAD_PORT=875' -e '/#LOCKD_TCPPORT=32803/ c\LOCKD_TCPPORT=32803' -e '/#LOCKD_UDPPORT=32769/ c\LOCKD_UDPPORT=32769' -e '/#MOUNTD_PORT=892/ c\MOUNTD_PORT=892' -e '/#STATD_PORT=662/ c\STATD_PORT=662' -e '/#STATD_OUTGOING_PORT=2020/ c\STATD_OUTGOING_PORT=2020' /etc/sysconfig/nfs

 

We also need to update the firewall settings to allow the XenServer to access the NFS exports so run the following to setup the required settings

sed -i -e "/:OUTPUT/ a\-A INPUT -p tcp -m tcp --dport 111 -j ACCEPT" /etc/sysconfig/iptables
sed -i -e "/:OUTPUT/ a\-A INPUT -p udp -m udp --dport 111 -j ACCEPT" /etc/sysconfig/iptables
sed -i -e "/:OUTPUT/ a\-A INPUT -p tcp -m tcp --dport 2049 -j ACCEPT" /etc/sysconfig/iptables
sed -i -e "/:OUTPUT/ a\-A INPUT -p udp -m udp --dport 2049 -j ACCEPT" /etc/sysconfig/iptables
sed -i -e "/:OUTPUT/ a\-A INPUT -p tcp -m tcp --dport 2020 -j ACCEPT" /etc/sysconfig/iptables
sed -i -e "/:OUTPUT/ a\-A INPUT -p tcp -m tcp --dport 32803 -j ACCEPT" /etc/sysconfig/iptables
sed -i -e "/:OUTPUT/ a\-A INPUT -p udp -m udp --dport 32769 -j ACCEPT" /etc/sysconfig/iptables
sed -i -e "/:OUTPUT/ a\-A INPUT -p tcp -m tcp --dport 892 -j ACCEPT" /etc/sysconfig/iptables
sed -i -e "/:OUTPUT/ a\-A INPUT -p udp -m udp --dport 892 -j ACCEPT" /etc/sysconfig/iptables
sed -i -e "/:OUTPUT/ a\-A INPUT -p tcp -m tcp --dport 875 -j ACCEPT" /etc/sysconfig/iptables
sed -i -e "/:OUTPUT/ a\-A INPUT -p udp -m udp --dport 875 -j ACCEPT" /etc/sysconfig/iptables
sed -i -e "/:OUTPUT/ a\-A INPUT -p tcp -m tcp --dport 662 -j ACCEPT" /etc/sysconfig/iptables
sed -i -e "/:OUTPUT/ a\-A INPUT -p udp -m udp --dport 662 -j ACCEPT" /etc/sysconfig/iptables
service iptables restart

 

Then we set the nfs service to autostart on boot, and also start it now

chkconfig nfs on
service nfs start

 

Setup MySQL Server

The following command will adjust the MySQL Configuration for this environment

sed -i -e '/datadir/ a\innodb_rollback_on_timeout=1' -e '/datadir/ a\innodb_lock_wait_timeout=600' -e '/datadir/ a\max_connections=350' -e '/datadir/ a\log-bin=mysql-bin' -e "/datadir/ a\binlog-format = 'ROW'" -e "/datadir/ a\bind-address = 0.0.0.0" /etc/my.cnf

 

Then we set the mysqld service to autostart on boot, and also start it now

chkconfig mysqld on
service mysqld start

 

It’s always a good idea to secure a default install of MySQL and there is a handy utility to do this for you. Run the following command, setting a new password when prompted, (the current password will be blank) and accept all of the defaults to remove the anonymous user, test database and disable remote access etc.

mysql_secure_installation

 

Now we will log in to MySQL and assign all privileges to the root account; this is so it can be used to create the ‘cloud’ account in a later step.

mysql -u root -p  (enter password when prompted)
mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION;
mysql> quit

 

Setup Databases

With MySQL configured we can now setup the CloudStack Databases by running the following two commands, substituting your root password you setup earlier

cloudstack-setup-databases cloud:<password>@127.0.0.1 --deploy-as=root:<password>
cloudstack-setup-management

 

Nginx

There is a default example template which gets downloaded from the cloud.com web servers, but as this test system has no real public internet access we need to provide a way for the Secondary Storage VM to download this template. We achieve this by installing NGINX on the CSMAN VM, and use it to host the Template on our simulated ‘Public’ network.

First create the NGINX repo by running the following command:

echo "[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/\$releasever/\$basearch/
gpgcheck=0
enabled=1" > /etc/yum.repos.d/nginx.repo

 

Then install NGINX by running the following command

yum install nginx -y

 

Now we download the example CentOS Template for XenServer by running the following two commands

cd /usr/share/nginx/html
wget -nc http://download.cloud.com/templates/builtin/centos56-x86_64.vhd.bz2

We need to add a firewall rule to allow access via port 80 so run the following two commands

sed -i -e "/:OUTPUT/ a\-A INPUT -p tcp -m tcp --dport 80 -j ACCEPT" /etc/sysconfig/iptables
service iptables restart

 

Finally we start the nginx service, then test it by accessing http://192.168.56.11/ from the Host laptop

service nginx start

nginx

XenServer vhd-util

As we will be using Citrix XenServer as our Hypervisor we need to download a special utility which will get copied to every XenServer when it is added to the system. Run the following lines to download the file and update the permissions.

cd /usr/share/cloudstack-common/scripts/vm/hypervisor/xenserver/
wget http://download.cloud.com.s3.amazonaws.com/tools/vhd-util
chmod 755 /usr/share/cloudstack-common/scripts/vm/hypervisor/xenserver/vhd-util

 

Seed the CloudStack Default System VM Template

Now we need to seed the Secondary Storage with the XenServer System VM Template, so run the following command:

/usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt -m /exports/secondary -u http://packages.shapeblue.com/systemvmtemplate/4.4/4.4.1/systemvm64template-4.4.1-7-xen.vhd.bz2 -h xenserver -F

 

CloudStack Usage Server

An optional step is to install the CloudStack Usage Service. To do so, run the following commands:

yum install cloudstack-usage -y
service cloudstack-usage start

 

Customise the Configuration

For this test system to work within the limited resources available on a 4GB RAM Laptop, we need to make a number of modifications to the configuration.

Firstly we need to enable the use of a non-HVM-enabled XenServer. When you install XenServer on VirtualBox it warns you that it will only support PV and not HVM. To get around this we run the following SQL command to add a new row to the configuration table in the cloud database (remember to substitute your own MySQL cloud password you used when you set up the CloudStack database):

mysql -p<password> cloud -e \ "INSERT INTO cloud.configuration (category, instance, component, name, value, description) VALUES ('Advanced', 'DEFAULT', 'management-server', 'xen.check.hvm', 'false', 'Shoud we allow only the XenServers support HVM');"

 

The following MySQL commands update various global settings, and change the resources allocated to the system VMs so they will work within the limited resources available.

mysql -u cloud -p<password>
UPDATE cloud.configuration SET value='8096' WHERE name='integration.api.port';
UPDATE cloud.configuration SET value='60' WHERE name='expunge.delay';
UPDATE cloud.configuration SET value='60' WHERE name='expunge.interval';
UPDATE cloud.configuration SET value='60' WHERE name='account.cleanup.interval';
UPDATE cloud.configuration SET value='60' WHERE name='capacity.skipcounting.hours';
UPDATE cloud.configuration SET value='0.99' WHERE name='cluster.cpu.allocated.capacity.disablethreshold';
UPDATE cloud.configuration SET value='0.99' WHERE name='cluster.memory.allocated.capacity.disablethreshold';
UPDATE cloud.configuration SET value='0.99' WHERE name='pool.storage.capacity.disablethreshold';
UPDATE cloud.configuration SET value='0.99' WHERE name='pool.storage.allocated.capacity.disablethreshold';
UPDATE cloud.configuration SET value='60000' WHERE name='capacity.check.period';
UPDATE cloud.configuration SET value='1' WHERE name='event.purge.delay';
UPDATE cloud.configuration SET value='60' WHERE name='network.gc.interval';
UPDATE cloud.configuration SET value='60' WHERE name='network.gc.wait';
UPDATE cloud.configuration SET value='600' WHERE name='vm.op.cleanup.interval';
UPDATE cloud.configuration SET value='60' WHERE name='vm.op.cleanup.wait';
UPDATE cloud.configuration SET value='600' WHERE name='vm.tranisition.wait.interval';
UPDATE cloud.configuration SET value='60' WHERE name='vpc.cleanup.interval';
UPDATE cloud.configuration SET value='4' WHERE name='cpu.overprovisioning.factor';
UPDATE cloud.configuration SET value='8' WHERE name='storage.overprovisioning.factor';
UPDATE cloud.configuration SET value='192.168.56.11/32' WHERE name='secstorage.allowed.internal.sites';
UPDATE cloud.configuration SET value='192.168.56.0/24' WHERE name='management.network.cidr';
UPDATE cloud.configuration SET value='192.168.56.11' WHERE name='host';
UPDATE cloud.configuration SET value='false' WHERE name='check.pod.cidrs';
UPDATE cloud.configuration SET value='0' WHERE name='network.throttling.rate';
UPDATE cloud.configuration SET value='0' WHERE name='vm.network.throttling.rate';
UPDATE cloud.configuration SET value='GMT' WHERE name='usage.execution.timezone';
UPDATE cloud.configuration SET value='16:00' WHERE name='usage.stats.job.exec.time';
UPDATE cloud.configuration SET value='true' WHERE name='enable.dynamic.scale.vm';
UPDATE cloud.configuration SET value='9000' WHERE name='secstorage.vm.mtu.size';
UPDATE cloud.configuration SET value='60' WHERE name='alert.wait';
UPDATE cloud.service_offering SET ram_size='128', speed='128' WHERE vm_type='domainrouter';
UPDATE cloud.service_offering SET ram_size='128', speed='128' WHERE vm_type='elasticloadbalancervm';
UPDATE cloud.service_offering SET ram_size='128', speed='128' WHERE vm_type='secondarystoragevm';
UPDATE cloud.service_offering SET ram_size='128', speed='128' WHERE vm_type='internalloadbalancervm';
UPDATE cloud.service_offering SET ram_size='128', speed='128' WHERE vm_type='consoleproxy';
UPDATE cloud.vm_template SET removed=now() WHERE id='2';
UPDATE cloud.vm_template SET url='http://192.168.56.11/centos56-x86_64.vhd.bz2' WHERE unique_name='centos56-x86_64-xen';
quit
service cloudstack-management restart

 

To enable access to the unauthenticated API, which we have enabled on the default port of 8096, we need to add a firewall rule. Run the following commands to allow port 8096 through the firewall:

sed -i -e "/:OUTPUT/ a\-A INPUT -p tcp -m tcp --dport 8096 -j ACCEPT" /etc/sysconfig/iptables
service iptables restart

 

Test the UI

Allow 1-2 minutes for the cloudstack-management service to fully restart, then log in to the UI, which should be accessible from the host laptop at http://192.168.56.11:8080/client/

The default credentials are

Username: admin
Password: password
Domain: <blank>

logon
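As a further check, the unauthenticated API port (8096) we enabled earlier can be queried from the host laptop, assuming curl is available; a valid JSON response confirms the integration API is up:

curl "http://192.168.56.11:8096/client/api?command=listZones&response=json"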

Create Compute Offering

The default Compute Offerings are not suitable for this limited environment so we need to create a new compute offering using the following settings:

Name: Ultra Tiny
Description: Ultra Tiny – 1vCPU, 128MB RAM
Storage Type: Shared
Custom: No
# of CPU Cores: 1
CPU (in MHz): 500
Memory (in MB): 128
Network Rate (Mb/s): null
QoS Type: null
Offer HA: Yes
Storage Tags: null
Host Tags: null
CPU Cap: No
Public: Yes
Volatile: No
Deployment Planner: null
Planner mode: null
GPU: null
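If you prefer, the same offering can be created via the API rather than the UI; a sketch using the unauthenticated API port enabled earlier (parameters follow the createServiceOffering API call):

curl "http://192.168.56.11:8096/client/api?command=createServiceOffering&name=Ultra%20Tiny&displaytext=Ultra%20Tiny&cpunumber=1&cpuspeed=500&memory=128&offerha=true&storagetype=shared&response=json"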

Reduce the amount of RAM

Following a successful login to the UI, the Databases will be fully deployed so now we can reduce the RAM to 1GB to free up memory for our XenServer VM. Shutdown the VM and change the settings to 1024 MB of RAM.

XenServer VM

To configure the XenServer you will need XenCenter running on your local Host if you are running Windows, but if your Host is running OSX or Linux, then you need to add a Windows VM which can run XenCenter. You can download XenCenter from http://downloadns.citrix.com.edgesuite.net/akdlm/8160/XenServer-6.2.0-XenCenter.msi

Create a VM for XenServer using the following settings:

Name: XenServer
Type: Linux
Version: Red Hat (64 bit)
vCPU: 2
RAM: 1536 (If your host has 8GB of RAM, consider allocating 3072)
Hard Drive: VDI – Dynamic – 24 GB

Note: VirtualBox seems to mix up the networks if you add them all at the same time so we add the 1st Network and install XenServer, then once fully installed, we add the additional networks, rebooting in-between. This appears to be a bug in the latest versions of VirtualBox (4.3.18 at the time of writing)

Modify the settings and assign ONLY the 1st network adapter to the correct network as follows:

xenserver-adapter-1

Note how we have set the ‘Promiscuous Mode’ to ‘Allow All’

Now install XenServer 6.2 by downloading the ISO from http://downloadns.citrix.com.edgesuite.net/akdlm/8159/XenServer-6.2.0-install-cd.iso and booting the VM.

The XenServer installation wizard is straightforward, but you will get a warning about the lack of hardware virtualisation support; this is expected as VirtualBox does not support it. Accept the warning and continue.

Choose the appropriate regional settings and enter the following details when prompted (we enter the IP of the CSMAN VM for DNS and NTP; whilst this guide does not cover setting up these services on the CSMAN VM, this gives you the option of doing so at a later date):

Enable Thin Provisioning: Yes
Install source: Local media
Supplemental Packs: No
Verification: Skip
Password: <password>
Static IP: 192.168.56.101/24 (no gateway required)
Hostname: xenserver
DNS: 192.168.56.11
NTP: 192.168.56.11

Once the XenServer Installation has finished, detach the ISO and reboot the VM.

We now need to change the amount of RAM allocated to Dom0 to its minimum recommended amount, which is 400MB. We do this by running the following command on the XenServer console:

 /opt/xensource/libexec/xen-cmdline --set-xen dom0_mem=400M,max:400M 

XenServer Patches

It’s important to install XenServer patches, and whilst XenCenter will inform you of the required patches, as we are using the open source version of XenServer we have to install them via the command line. Fortunately there are a number of ways of automating this process.

Personally I always use PXE to deploy XenServer and the installation of patches is built into my deployment process. However that is out of scope for this article, but Tim Mackey has produced a great blog article on how to do this: http://xenserver.org/discuss-virtualization/virtualization-blog/entry/patching-xenserver-at-scale.html

Whilst Tim’s method of rebooting after every patch install is best practice, it can take a long time to install all patches, so an alternative approach I use in these non-production test environments is detailed at https://github.com/amesserl/xs_patcher. This installs all patches and requires only a single reboot.

The configuration file ‘clearwater’ is now a little out of date, and should contain the following (and the cache folder should contain the associated patch files):

XS62E014|78251ea4-e4e7-4d72-85bd-b22bc137e20b|downloadns.citrix.com.edgesuite.net/8736/XS62E014.zip|support.citrix.com/article/CTX140052

XS62ESP1|0850b186-4d47-11e3-a720-001b2151a503|downloadns.citrix.com.edgesuite.net/8707/XS62ESP1.zip|support.citrix.com/article/CTX139788

XS62ESP1003|c208dc56-36c2-4e91-b8d7-0246575b1828|downloadns.citrix.com.edgesuite.net/9031/XS62ESP1003.zip|support.citrix.com/article/CTX140416

XS62ESP1005|1c952800-c030-481c-a0c1-d1b45aa19fcc|downloadns.citrix.com.edgesuite.net/9058/XS62ESP1005.zip|support.citrix.com/article/CTX140553

XS62ESP1009|a24d94e1-326b-4eaa-8611-548a1b5f8bd5|downloadns.citrix.com.edgesuite.net/9617/XS62ESP1009.zip|support.citrix.com/article/CTX141191

XS62ESP1013|b22d6335-823d-43a6-ba26-28793717125b|downloadns.citrix.com.edgesuite.net/9703/XS62ESP1013.zip|support.citrix.com/article/CTX141480

XS62ESP1014|4fc82e62-b938-407d-a2c6-68c8922f3ec2|downloadns.citrix.com.edgesuite.net/9708/XS62ESP1014.zip|support.citrix.com/article/CTX141486

Once you have your XenServer fully patched shut it down and then add the 2nd Adapter, again note how we have set the ‘Promiscuous Mode’ to ‘Allow All’

xenserver-adapter-2

Boot the VM and then using XenCenter perform a ‘Rescan’ on the NICs to detect this new NIC, then shutdown and add the 3rd Adapter, again note how we have set the ‘Promiscuous Mode’ to ‘Allow All’

xenserver-adapter-3

Boot the VM and then using XenCenter perform a ‘Rescan’ on the NICs to detect this new NIC, then shutdown and add the 4th Adapter, again note how we have set the ‘Promiscuous Mode’ to ‘Allow All’

xenserver-adapter-4

Boot the VM and then using XenCenter perform a ‘Rescan’ on the NICs to detect this final NIC, then one final reboot to make sure they are all activated and connected.

Configure XenServer Networks

Now we are ready to configure the XenServer Networks. We should have the following four networks present, and it’s worth just checking the MACs line up with the Adapters in VirtualBox.

xenserver-networks-1
We need to rename the networks using a more logical naming convention, and also create the two Storage Networks, and assign their VLANs etc.

First of all, start by renaming them all, setting the MTU of the Storage Network to 9000 (the rest remain at the default of 1500):

Network 0 – MGMT
Network 1 – GUEST
Network 2 – PUBLIC
Network 3 – STORAGE (and MTU of 9000)

xenserver-networks-2
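If you would rather do the renaming from the XenServer console than in XenCenter, the xe CLI can be used; a sketch with placeholder UUIDs taken from the xe network-list output:

xe network-list
xe network-param-set uuid=<management-network-uuid> name-label=MGMT
xe network-param-set uuid=<guest-network-uuid> name-label=GUEST
xe network-param-set uuid=<public-network-uuid> name-label=PUBLIC
xe network-param-set uuid=<storage-network-uuid> name-label=STORAGE MTU=9000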

Next we add the Primary Storage Network using the following settings:

Type: External Network
Name: PRI-STORAGE
NIC: NIC 3
VLAN: 100
MTU: 9000

Then the Secondary Storage Network:

Type: External Network
Name: SEC-STORAGE
NIC: NIC 3
VLAN: 101
MTU: 9000

xenserver-storage-networks

Finally we add the IP addresses for the Primary and Secondary Storage Networks so the XenServer can access them

Name: PRI-STOR
Network: PRI-STORAGE
IP address: 10.10.100.101
Subnet mask: 255.255.255.0
Gateway: <blank>

Name: SEC-STOR
Network: SEC-STORAGE
IP address: 10.10.101.101
Subnet mask: 255.255.255.0
Gateway: <blank>

xenserver-ips

That is all the configuration required for XenServer so now we can proceed with deploying our first Zone. However before we do, it’s worth taking a snapshot of both of the VMs so you can roll back and start again if required.

Zone Deployment

We now add an Advanced Zone by going to ‘Infrastructure/Zones/Add Zone’ and creating a new Zone of type ‘Advanced’ without Security Groups

Zone Name – Test
IPv4 DNS1 – 8.8.8.8
Internal DNS 1 – 192.168.56.11
Hypervisor – XenServer
Guest CIDR – 10.1.1.0/24

Next we need to setup the XenServer Traffic Labels to match the names we allocated to each Network on our XenServer, and we also need to add the optional Storage Network by dragging it onto the Physical Network.

xenserver-physical-networks

Edit each Traffic Type and set the following Labels:

Management Network – MGMT
Public Network – PUBLIC
Guest Network – GUEST
Storage Network – SEC-STORAGE

Then continue through the add zone wizard using the following settings

Public Traffic

Gateway – 172.30.0.1
Netmask – 255.255.255.0
VLAN – <blank>
Start IP – 172.30.0.21
End IP -172.30.0.30

POD Settings

POD Name – POD1
Reserved System Gateway – 192.168.56.1
Reserved System Netmask – 255.255.255.0
Start Reserved System IP – 192.168.56.21
End Reserved System IP – 192.168.56.30

Guest Traffic

VLAN Range – 600 – 699

Storage Traffic

Gateway – 10.10.101.1
Netmask – 255.255.255.0
VLAN – <blank>
Start IP – 10.10.101.21
End IP – 10.10.101.30

Cluster Settings

Hypervisor – XenServer
Cluster Name – CLU1

Host Settings

Host Name – 192.168.56.101
Username – root
Password – <password>

Primary Storage Settings

Name – PRI1
Scope – Cluster
Protocol – nfs
Server – 10.10.100.11
Path – /exports/primary
Provider: DefaultPrimary
Storage Tags: <BLANK>

Secondary Storage Settings

Provider – NFS
Name – SEC1
Server – 10.10.101.11
Path – /exports/secondary

At the end of the wizard, activate the Zone, then allow approx. 5 minutes for the System VMs to deploy and the default CentOS Template to be ‘downloaded’ into the system. You are now ready to deploy your first Guest VM.

So you have a Cluster of Citrix XenServers and you want to upgrade them to a new version, for example to go from XenServer 6.0.2 to XenServer 6.2, or simply apply the latest Hotfixes.  As this is a cluster that is being managed by CloudStack it is not as simple as using the Rolling Pool Upgrade feature in XenCenter – in fact this is the LAST thing you want to do, and WILL result in a broken Cluster.

This article walks you through the steps required to perform the upgrade, but as always you must test this yourself in your own test environment before attempting on a production system.

We need to change the default behaviour of CloudStack with respect to how it manages XenServer Clusters before continuing.  Edit /etc/cloudstack/management/environment.properties and add the following line:

# vi /etc/cloudstack/management/environment.properties

Add > manage.xenserver.pool.master=false

Now restart the CloudStack Management Service

# service cloudstack-management restart

Repeat for all CloudStack Management servers
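
If you prefer to script this rather than edit the file by hand, the same change on each management server is just an append and a restart (a sketch):

echo "manage.xenserver.pool.master=false" >> /etc/cloudstack/management/environment.properties
service cloudstack-management restart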

It is vital that you upgrade the XenServer Pool Master first before any of the Slaves.  To do so you need to empty the Pool Master of all CloudStack VMs, and you do this by putting the Host into Maintenance Mode within CloudStack to trigger a live migration of all VMs to alternate Hosts (do not place the Host into Maintenance Mode using XenCenter as this will cause a new Master to be elected and we do not want that). 

Next you need to ‘Unmanage’ the Cluster. As this prevents users from being able to interact with (stop/start) their VMs, you will need to arrange a ‘Maintenance Window’, but it only needs to be long enough to update the Pool Master. All Customer VMs will continue to run during the upgrade process unless you are using Local Storage, in which case VMs on the Hosts being upgraded will have to be shut down. After ‘Unmanaging’ the Cluster, all Hosts will go into a ‘Disconnected’ state; this is expected and is not a cause for concern.

Now you can upgrade your Pool Master, either upgrading to a newer version, or simply applying XenServer Hotfixes as required.  Once the Pool Master has been fully upgraded re-manage the Cluster and then wait for all of the Hosts in the Cluster to come back online within CloudStack. 

Monitor the status of your NFS Storage via XenCenter and wait for all Volumes to reconnect on the upgraded Host.  Once storage has reconnected and all Hosts are back online, take the Pool Master you just upgraded out of CloudStack Maintenance Mode.

Edit /etc/cloudstack/management/environment.properties and remove the following line which you added earlier:

# vi /etc/cloudstack/management/environment.properties

Delete > manage.xenserver.pool.master=false

Now restart the CloudStack Management Service

# service cloudstack-management restart

Repeat for all CloudStack Management servers
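
Again, this can be scripted on each management server (a sketch):

sed -i '/^manage.xenserver.pool.master=false$/d' /etc/cloudstack/management/environment.properties
service cloudstack-management restart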

You can now upgrade each Slave by simply placing it into Maintenance Mode in CloudStack, applying the upgrade / Hotfixes and, when completed, bringing it out of Maintenance Mode before starting on the next Host.

About the Author

Geoff Higginbottom is CTO of ShapeBlue, the strategic cloud consultancy, and an Apache CloudStack Committer. Geoff spends most of his time designing private & public cloud infrastructures for telcos, ISPs and enterprises based on CloudStack.

In this post Rohit Yadav, Software Architect at ShapeBlue, talks about setting up an Apache CloudStack (ACS) cloud on a single host with KVM and basic networking. This can be done on a VM or a physical host. Such a deployment can be useful for evaluating CloudStack locally and can be done in less than 30 minutes.

Note: this should work for ACS 4.3.0 and above. This how-to post may get outdated in future, so please also read the latest docs and/or the latest docs on KVM host installation.

First install Ubuntu 14.04 LTS x86_64 on a bare-metal host or a VM that has at least 2GB RAM (preferably 4GB) and a real or virtual 64-bit CPU with Intel VT-x or AMD-V enabled. I personally use VMware Fusion, which can provide VMs with a 64-bit CPU with Intel VT-x. Such a CPU is needed by KVM for HVM or full virtualization. Too bad VirtualBox cannot do this yet, or one can say KVM cannot do paravirtualization like Xen can.

Next, we need to do a bunch of things:

  • Setup networking, IPs, create bridge
  • Install cloudstack-management and cloudstack-common
  • Install and setup MySQL server
  • Setup NFS for primary and secondary storages
  • Preseed systemvm templates
  • Prepare KVM host and install cloudstack-agent
  • Configure Firewall
  • Start your cloud!

Let’s start by installing some basic packages, assuming you’re root or have sudo powers:

apt-get install openntpd openssh-server sudo vim htop tar build-essential

Make sure root is able to SSH in using a password; fix this in /etc/ssh/sshd_config.
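
On a stock Ubuntu 14.04 install that typically means the following two settings (only sensible on a throwaway test box), followed by a service ssh restart:

# in /etc/ssh/sshd_config
PermitRootLogin yes
PasswordAuthentication yes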

Reset root password and remember this password:

passwd root

Networking

Next, we'll be setting up bridges. CloudStack requires that KVM hosts have two bridges, cloudbr0 and cloudbr1, because these names are hard-coded in the code, and because on the KVM host we need a way to let VMs communicate with the host, with each other, and with the outside world. Add network rules and configure IPs as applicable.

apt-get install bridge-utils
cat /etc/network/interfaces # an example bridge configuration

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

# Public network
auto cloudbr0
iface cloudbr0 inet static
    address 172.16.154.10
    netmask 255.255.255.0
    gateway 172.16.154.2
    dns-nameservers 172.16.154.2 8.8.8.8
    bridge_ports eth0
    bridge_fd 5
    bridge_stp off
    bridge_maxwait 1

# Private network
auto cloudbr1
iface cloudbr1 inet manual
    bridge_ports none
    bridge_fd 5
    bridge_stp off
    bridge_maxwait 1

Notice that we're not actually using cloudbr1, because the intention is to set up a basic zone with basic networking, so all traffic goes through one bridge only.

We're done with setting up networking; just note the cloudbr0 IP. In my case it was 172.16.154.10. You may notice that we're not configuring eth0 at all: that's because we now have a bridge, and we expose this bridge to the outside network using cloudbr0's IP. By not configuring eth0 (static or DHCP), we get Ubuntu to use cloudbr0 as its default interface and cloudbr0's gateway as its default gateway and route. You need to reboot your VM or host now.
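
After the reboot, it's worth a quick sanity check that the bridge exists, owns the IP and provides the default route:

brctl show
ip addr show cloudbr0
ip route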

Management server and MySQL

Set up the CloudStack repo; you may use one that I host (the link is unreliable, so let me know if it stops working for you). You may use any other Debian repo as well, or build from source and host your own repository.

We need to install the CloudStack management server, MySQL server and setup the management server database:

echo deb http://packages.bhaisaab.org/cloudstack/upstream/debian/4.3 ./ >> /etc/apt/sources.list.d/acs.list
apt-get update -y
apt-get install cloudstack-management cloudstack-common mysql-server
# pick any suitable root password for MySQL server

You don't need to explicitly install cloudstack-common because the management package depends on it; it is mentioned here to point out that many tools and scripts can be found in this package, such as the tools to set up the database, preseed the systemvm template, etc.

You may put the following settings in your /etc/mysql/my.cnf; they are mostly there to tune InnoDB and to have MySQL use the “ROW” bin-log format, which can be useful for replication etc. Since we're only doing a test setup we could skip this. The CloudStack docs say to set only these, but I think on production systems you may need to configure many more options (perhaps 400 of them).

[mysqld]
innodb_rollback_on_timeout=1
innodb_lock_wait_timeout=600
max_connections=350
log-bin=mysql-bin
binlog-format = 'ROW'

Now, let's set up the management server database:

service mysql restart
cloudstack-setup-databases cloud:cloudpassword@localhost --deploy-as=root:passwordOfRoot -i <stick your cloudbr0 IP here>
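
If you want to confirm that step worked, the databases created above should now be visible with the credentials we just configured (a quick check):

mysql -u cloud -pcloudpassword -e "SHOW DATABASES LIKE 'cloud%';"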

Storage

We'll set up NFS and preseed the systemvm template.

mkdir -p /export/primary /export/secondary
apt-get install nfs-kernel-server quota
echo "/export  *(rw,async,no_root_squash,no_subtree_check)" > /etc/exports
exportfs -a
sed -i -e 's/^RPCMOUNTDOPTS=--manage-gids$/RPCMOUNTDOPTS="-p 892 --manage-gids"/g' /etc/default/nfs-kernel-server
sed -i -e 's/^NEED_STATD=$/NEED_STATD=yes/g' /etc/default/nfs-common
sed -i -e 's/^STATDOPTS=$/STATDOPTS="--port 662 --outgoing-port 2020"/g' /etc/default/nfs-common
sed -i -e 's/^RPCRQUOTADOPTS=$/RPCRQUOTADOPTS="-p 875"/g' /etc/default/quota
service nfs-kernel-server restart
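
A quick check that the export is actually being served:

showmount -e localhost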

I prefer to download the systemvm first and then preseed it:

wget http://people.apache.org/~bhaisaab/cloudstack/systemvmtemplates/systemvm64template-2014-09-11-4.3-kvm.qcow2.bz2
/usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt \
          -m /export/secondary -f systemvm64template-2014-09-11-4.3-kvm.qcow2.bz2 -h kvm \
          -o localhost -r cloud -d cloudpassword

KVM and agent setup

Time to set up cloudstack-agent, libvirt and KVM:

apt-get install qemu-kvm cloudstack-agent
sed -i -e 's/listen_tls = 1/listen_tls = 0/g' /etc/libvirt/libvirtd.conf
echo 'listen_tcp=1' >> /etc/libvirt/libvirtd.conf
echo 'tcp_port = "16509"' >> /etc/libvirt/libvirtd.conf
echo 'mdns_adv = 0' >> /etc/libvirt/libvirtd.conf
echo 'auth_tcp = "none"' >> /etc/libvirt/libvirtd.conf
sed -i -e 's/\# vnc_listen.*$/vnc_listen = "0.0.0.0"/g' /etc/libvirt/qemu.conf
sed -i -e 's/libvirtd_opts="-d"/libvirtd_opts="-d -l"/' /etc/init/libvirt-bin.conf
service libvirt-bin restart
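
To confirm libvirt is running and reachable over TCP as configured above (a quick check):

virsh -c qemu:///system version
virsh -c qemu+tcp://127.0.0.1/system version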

Firewall

Finally, punch holes in the firewall; substitute your network in the following:

# configure iptables
NETWORK=172.16.154.0/24
iptables -A INPUT -s $NETWORK -m state --state NEW -p udp --dport 111 -j ACCEPT
iptables -A INPUT -s $NETWORK -m state --state NEW -p tcp --dport 111 -j ACCEPT
iptables -A INPUT -s $NETWORK -m state --state NEW -p tcp --dport 2049 -j ACCEPT
iptables -A INPUT -s $NETWORK -m state --state NEW -p tcp --dport 32803 -j ACCEPT
iptables -A INPUT -s $NETWORK -m state --state NEW -p udp --dport 32769 -j ACCEPT
iptables -A INPUT -s $NETWORK -m state --state NEW -p tcp --dport 892 -j ACCEPT
iptables -A INPUT -s $NETWORK -m state --state NEW -p udp --dport 892 -j ACCEPT
iptables -A INPUT -s $NETWORK -m state --state NEW -p tcp --dport 875 -j ACCEPT
iptables -A INPUT -s $NETWORK -m state --state NEW -p udp --dport 875 -j ACCEPT
iptables -A INPUT -s $NETWORK -m state --state NEW -p tcp --dport 662 -j ACCEPT
iptables -A INPUT -s $NETWORK -m state --state NEW -p udp --dport 662 -j ACCEPT

apt-get install iptables-persistent

# Disable AppArmor on libvirtd
ln -s /etc/apparmor.d/usr.sbin.libvirtd /etc/apparmor.d/disable/
ln -s /etc/apparmor.d/usr.lib.libvirt.virt-aa-helper /etc/apparmor.d/disable/
apparmor_parser -R /etc/apparmor.d/usr.sbin.libvirtd
apparmor_parser -R /etc/apparmor.d/usr.lib.libvirt.virt-aa-helper

# Configure ufw
ufw allow mysql
ufw allow proto tcp from any to any port 22
ufw allow proto tcp from any to any port 1798
ufw allow proto tcp from any to any port 16509
ufw allow proto tcp from any to any port 5900:6100
ufw allow proto tcp from any to any port 49152:49216

Launch Cloud

All set! Make sure Tomcat is not running, then start the agent and the management server:

/etc/init.d/tomcat6 stop
/etc/init.d/cloudstack-agent start
/etc/init.d/cloudstack-management start

If all goes well, open http://cloudbr0-IP:8080/client and you'll see the ACS login page. Use username admin and password password to log in. Now set up a basic zone; in the following steps, change the IPs as applicable:

  • Pick zone name, DNS 172.16.154.2, External DNS 8.8.8.8, basic zone + SG
  • Pick pod name, gateway 172.16.154.2, netmask 255.255.255.0, IP range 172.16.154.200-250
  • Add guest network, gateway 172.16.154.2, netmask 255.255.255.0, IP range 172.16.154.100-199
  • Pick cluster name, hypervisor KVM
  • Add the KVM host, IP 172.16.154.10, user root, password whatever-the-root-password-is
  • Add primary NFS storage, IP 172.16.154.10, path /export/primary
  • Add secondary NFS storage, IP 172.16.154.10, path /export/secondary
  • Hit launch and, if everything goes well, launch your zone!

Keep an eye on your /var/log/cloudstack/management/management-server.log and /var/log/cloudstack/agent/agent.log for possible issues. Read the admin docs for more cloudy admin tasks. Have fun playing with your CloudStack cloud.