
Introduction

If you are new to Apache CloudStack and want to learn the concepts but do not have all the equipment required to stand up a test environment, why not use your existing PC and VirtualBox?

VirtualBox is a cross-platform virtualisation application which runs on OSX, Windows, Linux and Solaris, meaning no matter what OS you are running, you should be able to run VirtualBox.

The aim of this exercise is to build an Apache CloudStack environment which is as close to a Production deployment as possible, within the obvious constraints of running it all on a laptop. This deployment will support the following key functions of Apache CloudStack:

Production Grade Hypervisor: Citrix XenServer 6.2 with full VLAN support
Apache CloudStack on CentOS 6.5
NFS for Primary and Secondary Storage – each on a dedicated VLAN
Console Proxy and Secondary Storage VM
All Advanced Networking features such as Firewall, NAT, Port Forwarding, Load Balancing, VPC

To achieve all of this we need to deploy two VMs on VirtualBox: a CentOS VM for Apache CloudStack, and a Citrix XenServer VM for our Hypervisor. The CloudStack VM will also act as our MySQL Server and NFS Server.

[Image: appliance]

A key requirement of this test environment is to keep it completely self-contained so it can be used for training and demos etc. To achieve this, and maintain the ability to deploy a new Zone and download the example CentOS Template to use in the system, we simulate the CloudStack Public Network and host the Default CentOS Template on the CloudStack Management Server VM using NGINX.

VirtualBox Configuration

Download and install the appropriate version from https://www.virtualbox.org/wiki/Downloads

Once VirtualBox is installed we need to configure it ready for this environment. The defaults are used where possible, but if you have been using VirtualBox already, you may have different settings which need to be adjusted.

We will be using three ‘Host Only’ networks, one ‘NAT’ network, and an ‘Internal’ network. By default VirtualBox has only one ‘Host Only’ network so we need to create two more.

  1. From the ‘file’ menu (windows) or VirtualBox menu (OSX), select ‘Preferences’ then ‘Network’ then ‘Host-only Networks’
  2. Add two more networks so you have at least 3 which we can use
  3. Set up the IP schema for the first two networks as follows:

The naming conventions for Host Only Networks differ depending on the Host OS. I will simply refer to these as ‘Host Only Network 1’, 2 and 3 etc, so please refer to the following comparison matrix to identify the correct Network.

This Guide            Windows                                     OSX
Host Only Network 1   VirtualBox Host Only Ethernet Adapter       vboxnet0
Host Only Network 2   VirtualBox Host Only Ethernet Adapter #2    vboxnet1
Host Only Network 3   VirtualBox Host Only Ethernet Adapter #3    vboxnet2

Host Only Network 1:

IPv4 Address: 192.168.56.1
IPv4 Network Mask: 255.255.255.0

The DHCP Server is optional as we don’t use it, but ensure the range does not clash with the static IPs we will be using, which are 192.168.56.11 & 192.168.56.101.

Host Only Network 2:

IPv4 Address: 172.30.0.1
IPv4 Network Mask: 255.255.255.0

By setting up these IP ranges, we ensure our host laptop has an IP on these Networks so we can access the VMs connected to them. We don’t need an IP on ‘Host Only Network 3’ as this will be used for storage and will also be running VLANs.
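If you prefer to script this step, the same setup can be done with VBoxManage; a minimal sketch, assuming the new interfaces are created as vboxnet1 and vboxnet2 (interface names vary by host OS, so substitute the names from the comparison matrix above):

# create two additional host-only interfaces
VBoxManage hostonlyif create
VBoxManage hostonlyif create

# apply the IP schemas for Host Only Networks 1 and 2
VBoxManage hostonlyif ipconfig vboxnet0 --ip 192.168.56.1 --netmask 255.255.255.0
VBoxManage hostonlyif ipconfig vboxnet1 --ip 172.30.0.1 --netmask 255.255.255.0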

We use a NAT Network so that we can connect the CloudStack Management VM to the internet to enable the installation of the various packages we will be using.

Configure the VirtualBox ‘NatNetwork’ to use the following settings:

Network Name: NatNetwork
Network CIDR: 10.0.2.0/24

We disable DHCP as we cannot control the range to exclude our statically assigned IPs on our VMs.

Whilst this article focuses on creating a single CloudStack Management Server, you can easily add a second. I have found that the DHCP-allocated IPs from the NAT Network can change randomly, so setting up NAT Rules can be problematic, hence I always use statically assigned IPs.
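The NAT Network can also be created from the command line; a sketch using VBoxManage (available in VirtualBox 4.3+), matching the settings above with DHCP disabled:

VBoxManage natnetwork add --netname NatNetwork --network "10.0.2.0/24" --enable
VBoxManage natnetwork modify --netname NatNetwork --dhcp off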

The ‘Internal’ Network requires no configuration.

CloudStack VM

Create a VM for CloudStack Manager using the following Settings:

Name: CSMAN 4.4.1
Type: Linux
Version: Red Hat (64 bit)
RAM: 2048 (you cannot go lower than this for initial setup)
Hard Drive: VDI – Dynamic – 64 GB (we allocate this much as it will act as NFS Storage)

Note: VirtualBox seems to mix up the networks if you add them all at the same time so we add the 1st Network and install CentOS, then once fully installed, we add the additional networks, rebooting in-between. This appears to be a bug in the latest versions of VirtualBox (4.3.18 at the time of writing)

Modify the settings and assign ONLY the 1st Network Adapter to the correct network as follows:

[Screenshot: csman-adapter-1]

Install CentOS 6.5 64-bit minimal, set the Hostname to CSMAN, and IP address to 192.168.56.11/24 with a gateway of 192.168.56.1, and ensure the network is set to start on boot. Set DNS to public servers such as 8.8.8.8 & 8.8.4.4

Once the install is complete, reboot the VM and confirm eth0 is active, then shut down the VM and add the 2nd Network Adapter

[Screenshot: csman-adapter-2]

Boot the VM so it detects the NIC, then shut down and add the 3rd Adapter

[Screenshot: csman-adapter-3]

Boot the VM so it detects the NIC, then shut down and add the 4th Adapter

[Screenshot: csman-adapter-4]

Finally, boot the VM so it detects the last adapter and then we can configure the various interfaces with the correct IP schemas.

ifcfg-eth0

DEVICE=eth0
TYPE=Ethernet
IPADDR=192.168.56.11
PREFIX=24
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME=MGMT

ifcfg-eth1

DEVICE=eth1
TYPE=Ethernet
IPADDR=10.0.2.11
GATEWAY=10.0.2.1
PREFIX=24
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
DEFROUTE=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME=NAT

ifcfg-eth2

DEVICE=eth2
TYPE=Ethernet
IPADDR=172.30.0.11
PREFIX=24
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME=PUBLIC 

ifcfg-eth3

DEVICE=eth3
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
USERCTL=no
MTU=9000

ifcfg-eth3.100

DEVICE=eth3.100
TYPE=Ethernet
IPADDR=10.10.100.11
PREFIX=24
ONBOOT=yes
BOOTPROTO=none
NAME=PRI-STOR
VLAN=yes
USERCTL=no
MTU=9000 

ifcfg-eth3.101

DEVICE=eth3.101
TYPE=Ethernet
IPADDR=10.10.101.11
PREFIX=24
ONBOOT=yes
BOOTPROTO=none
NAME=SEC-STOR
VLAN=yes
USERCTL=no
MTU=9000

Restart networking to apply the new settings, then apply all the latest updates

service network restart
yum update -y

You can now connect via SSH using PuTTY to continue the rest of the configuration, so you can copy and paste commands and settings etc.

Installation and Configuration

With the base VM built we now need to install Apache CloudStack and all the other services this VM will be hosting. First we need to ensure the VM has the correct configuration.

SELinux

SELinux needs to be set to ‘permissive’; we can achieve this by running the following commands:

setenforce permissive
sed -i "/SELINUX=enforcing/ c\SELINUX=permissive" /etc/selinux/config

Hostname

The CloudStack Management Server should return its FQDN when you run hostname --fqdn, but as we do not have a working DNS installation it will probably return ‘unknown-host’. To resolve this we simply add an entry into the Hosts file, and while we are there, we may as well add one for the XenServer as well. Update /etc/hosts with the following, then reboot for it to take effect.

127.0.0.1 localhost localhost.cstack.local
192.168.56.11 csman.cstack.local csman
192.168.56.101 xenserver.cstack.local xenserver


Speed up SSH Connections

As you will want to use SSH to connect to the CloudStack VM, it’s worth turning off the DNS check to speed up the connection. Run the following commands:

sed -i "/#UseDNS yes/ c\UseDNS no" /etc/ssh/sshd_config
service sshd restart


NTP

It’s always a good idea to install NTP so let’s add it now, and set it to start on boot (you can always configure this VM to act as the NTP Server for the XenServer, but that’s out of scope for this article)

yum install -y ntp
chkconfig ntpd on
service ntpd start


CloudStack Repo

Setup the CloudStack repo by running the following command:

echo "[cloudstack]
name=cloudstack
baseurl=http://packages.shapeblue.com/cloudstack/main/centos/4.4
enabled=1
gpgcheck=1" > /etc/yum.repos.d/cloudstack.repo


Import the ShapeBlue gpg release key: (Key ID 584DF93F, Key fingerprint = 7203 0CA1 18C1 A275 68B1 37C4 BDF0 E176 584D F93F)

yum install wget -y
wget http://packages.shapeblue.com/release.asc
sudo rpm --import release.asc


Install CloudStack and MySQL

Now we can install CloudStack and MySQL Server

yum install -y cloudstack-management mysql-server


Setup NFS Server

As the CSMAN VM will also be acting as the NFS Server we need to setup the NFS environment. Run the following commands to create the folders for Primary and Secondary Storage and then export them to the appropriate IP ranges.

mkdir /exports
mkdir -p /exports/primary
mkdir -p /exports/secondary
chmod 777 -R /exports
echo "/exports/primary 10.10.100.0/24(rw,async,no_root_squash)" > /etc/exports
echo "/exports/secondary 10.10.101.0/24(rw,async,no_root_squash)" >> /etc/exports
exportfs -a


We now need to update /etc/sysconfig/nfs with the settings to activate the NFS Server. Run the following command to update the required settings

sed -i -e '/#MOUNTD_NFS_V3="no"/ c\MOUNTD_NFS_V3="yes"' -e '/#RQUOTAD_PORT=875/ c\RQUOTAD_PORT=875' -e '/#LOCKD_TCPPORT=32803/ c\LOCKD_TCPPORT=32803' -e '/#LOCKD_UDPPORT=32769/ c\LOCKD_UDPPORT=32769' -e '/#MOUNTD_PORT=892/ c\MOUNTD_PORT=892' -e '/#STATD_PORT=662/ c\STATD_PORT=662' -e '/#STATD_OUTGOING_PORT=2020/ c\STATD_OUTGOING_PORT=2020' /etc/sysconfig/nfs


We also need to update the firewall settings to allow the XenServer to access the NFS exports so run the following to setup the required settings

sed -i -e "/:OUTPUT/ a\-A INPUT -p tcp -m tcp --dport 111 -j ACCEPT" /etc/sysconfig/iptables
sed -i -e "/:OUTPUT/ a\-A INPUT -p udp -m udp --dport 111 -j ACCEPT" /etc/sysconfig/iptables
sed -i -e "/:OUTPUT/ a\-A INPUT -p tcp -m tcp --dport 2049 -j ACCEPT" /etc/sysconfig/iptables
sed -i -e "/:OUTPUT/ a\-A INPUT -p udp -m udp --dport 2049 -j ACCEPT" /etc/sysconfig/iptables
sed -i -e "/:OUTPUT/ a\-A INPUT -p tcp -m tcp --dport 2020 -j ACCEPT" /etc/sysconfig/iptables
sed -i -e "/:OUTPUT/ a\-A INPUT -p tcp -m tcp --dport 32803 -j ACCEPT" /etc/sysconfig/iptables
sed -i -e "/:OUTPUT/ a\-A INPUT -p udp -m udp --dport 32769 -j ACCEPT" /etc/sysconfig/iptables
sed -i -e "/:OUTPUT/ a\-A INPUT -p tcp -m tcp --dport 892 -j ACCEPT" /etc/sysconfig/iptables
sed -i -e "/:OUTPUT/ a\-A INPUT -p udp -m udp --dport 892 -j ACCEPT" /etc/sysconfig/iptables
sed -i -e "/:OUTPUT/ a\-A INPUT -p tcp -m tcp --dport 875 -j ACCEPT" /etc/sysconfig/iptables
sed -i -e "/:OUTPUT/ a\-A INPUT -p udp -m udp --dport 875 -j ACCEPT" /etc/sysconfig/iptables
sed -i -e "/:OUTPUT/ a\-A INPUT -p tcp -m tcp --dport 662 -j ACCEPT" /etc/sysconfig/iptables
sed -i -e "/:OUTPUT/ a\-A INPUT -p udp -m udp --dport 662 -j ACCEPT" /etc/sysconfig/iptables
service iptables restart


Then we set the nfs service to autostart on boot, and also start it now

chkconfig nfs on
service nfs start
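To confirm the exports are being served, you can query the NFS server locally (showmount ships with the nfs-utils package pulled in with the NFS server); it should list /exports/primary and /exports/secondary with their allowed networks:

showmount -e localhost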


Setup MySQL Server

The following command will adjust the MySQL Configuration for this environment

sed -i -e '/datadir/ a\innodb_rollback_on_timeout=1' -e '/datadir/ a\innodb_lock_wait_timeout=600' -e '/datadir/ a\max_connections=350' -e '/datadir/ a\log-bin=mysql-bin' -e "/datadir/ a\binlog-format = 'ROW'" -e "/datadir/ a\bind-address = 0.0.0.0" /etc/my.cnf


Then we set the mysqld service to autostart on boot, and also start it now

chkconfig mysqld on
service mysqld start


It’s always a good idea to secure a default install of MySQL and there is a handy utility to do this for you. Run the following command, setting a new password when prompted, (the current password will be blank) and accept all of the defaults to remove the anonymous user, test database and disable remote access etc.

mysql_secure_installation


Now we will log in to MySQL and assign all privileges to the root account; this is so it can be used to create the ‘cloud’ account in a later step.

mysql -u root -p  (enter password when prompted)
mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION;
mysql> quit


Setup Databases

With MySQL configured we can now set up the CloudStack Databases by running the following two commands, substituting the root password you set up earlier:

cloudstack-setup-databases cloud:<password>@127.0.0.1 --deploy-as=root:<password>
cloudstack-setup-management


Nginx

There is a default example template which gets downloaded from the cloud.com web servers, but as this test system has no real public internet access we need to provide a way for the Secondary Storage VM to download this template. We achieve this by installing NGINX on the CSMAN VM and using it to host the Template on our simulated ‘Public’ network.

First create the NGINX repo by running the following command:

echo "[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/\$releasever/\$basearch/
gpgcheck=0
enabled=1" > /etc/yum.repos.d/nginx.repo


Then install NGINX by running the following command

yum install nginx -y


Now we download the example CentOS Template for XenServer by running the following two commands

cd /usr/share/nginx/html
wget -nc http://download.cloud.com/templates/builtin/centos56-x86_64.vhd.bz2

We need to add a firewall rule to allow access via port 80 so run the following two commands

sed -i -e "/:OUTPUT/ a\-A INPUT -p tcp -m tcp --dport 80 -j ACCEPT" /etc/sysconfig/iptables
service iptables restart


Finally we start the nginx service, then test it by accessing http://192.168.56.11/ from the Host laptop

service nginx start

[Screenshot: nginx]

XenServer vhd-util

As we will be using Citrix XenServer as our Hypervisor we need to download a special utility which will get copied to every XenServer when it is added to the system. Run the following lines to download the file and update the permissions.

cd /usr/share/cloudstack-common/scripts/vm/hypervisor/xenserver/
wget http://download.cloud.com.s3.amazonaws.com/tools/vhd-util
chmod 755 /usr/share/cloudstack-common/scripts/vm/hypervisor/xenserver/vhd-util


Seed the CloudStack Default System VM Template

Now we need to seed the Secondary Storage with the XenServer System VM Template, so run the following command

/usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt -m /exports/secondary -u http://packages.shapeblue.com/systemvmtemplate/4.4/4.4.1/systemvm64template-4.4.1-7-xen.vhd.bz2 -h xenserver -F


CloudStack Usage Server

An optional step is to install the CloudStack Usage Server; to do so, run the following commands

yum install cloudstack-usage -y
service cloudstack-usage start


Customise the Configuration

For this test system to work within the limited resources available on a 4GB RAM Laptop, we need to make a number of modifications to the configuration.

Firstly we need to enable the use of a non-HVM-enabled XenServer. When you install XenServer on VirtualBox it warns you that it will only support PV and not HVM. To get around this we run the following SQL command to add a new line to the Configuration table in the Cloud Database (remember to substitute your own MySQL password you used when you set up the CloudStack Database)

mysql -p<password> cloud -e "INSERT INTO cloud.configuration (category, instance, component, name, value, description) VALUES ('Advanced', 'DEFAULT', 'management-server', 'xen.check.hvm', 'false', 'Should we allow only XenServers which support HVM');"


The following MySQL commands update various global settings, and change the resources allocated to the system VMs so they will work within the limited resources available.

mysql -u cloud -p<password>
UPDATE cloud.configuration SET value='8096' WHERE name='integration.api.port';
UPDATE cloud.configuration SET value='60' WHERE name='expunge.delay';
UPDATE cloud.configuration SET value='60' WHERE name='expunge.interval';
UPDATE cloud.configuration SET value='60' WHERE name='account.cleanup.interval';
UPDATE cloud.configuration SET value='60' WHERE name='capacity.skipcounting.hours';
UPDATE cloud.configuration SET value='0.99' WHERE name='cluster.cpu.allocated.capacity.disablethreshold';
UPDATE cloud.configuration SET value='0.99' WHERE name='cluster.memory.allocated.capacity.disablethreshold';
UPDATE cloud.configuration SET value='0.99' WHERE name='pool.storage.capacity.disablethreshold';
UPDATE cloud.configuration SET value='0.99' WHERE name='pool.storage.allocated.capacity.disablethreshold';
UPDATE cloud.configuration SET value='60000' WHERE name='capacity.check.period';
UPDATE cloud.configuration SET value='1' WHERE name='event.purge.delay';
UPDATE cloud.configuration SET value='60' WHERE name='network.gc.interval';
UPDATE cloud.configuration SET value='60' WHERE name='network.gc.wait';
UPDATE cloud.configuration SET value='600' WHERE name='vm.op.cleanup.interval';
UPDATE cloud.configuration SET value='60' WHERE name='vm.op.cleanup.wait';
UPDATE cloud.configuration SET value='600' WHERE name='vm.tranisition.wait.interval';
UPDATE cloud.configuration SET value='60' WHERE name='vpc.cleanup.interval';
UPDATE cloud.configuration SET value='4' WHERE name='cpu.overprovisioning.factor';
UPDATE cloud.configuration SET value='8' WHERE name='storage.overprovisioning.factor';
UPDATE cloud.configuration SET value='192.168.56.11/32' WHERE name='secstorage.allowed.internal.sites';
UPDATE cloud.configuration SET value='192.168.56.0/24' WHERE name='management.network.cidr';
UPDATE cloud.configuration SET value='192.168.56.11' WHERE name='host';
UPDATE cloud.configuration SET value='false' WHERE name='check.pod.cidrs';
UPDATE cloud.configuration SET value='0' WHERE name='network.throttling.rate';
UPDATE cloud.configuration SET value='0' WHERE name='vm.network.throttling.rate';
UPDATE cloud.configuration SET value='GMT' WHERE name='usage.execution.timezone';
UPDATE cloud.configuration SET value='16:00' WHERE name='usage.stats.job.exec.time';
UPDATE cloud.configuration SET value='true' WHERE name='enable.dynamic.scale.vm';
UPDATE cloud.configuration SET value='9000' WHERE name='secstorage.vm.mtu.size';
UPDATE cloud.configuration SET value='60' WHERE name='alert.wait';
UPDATE cloud.service_offering SET ram_size='128', speed='128' WHERE vm_type='domainrouter';
UPDATE cloud.service_offering SET ram_size='128', speed='128' WHERE vm_type='elasticloadbalancervm';
UPDATE cloud.service_offering SET ram_size='128', speed='128' WHERE vm_type='secondarystoragevm';
UPDATE cloud.service_offering SET ram_size='128', speed='128' WHERE vm_type='internalloadbalancervm';
UPDATE cloud.service_offering SET ram_size='128', speed='128' WHERE vm_type='consoleproxy';
UPDATE cloud.vm_template SET removed=now() WHERE id='2';
UPDATE cloud.vm_template SET url='http://192.168.56.11/centos56-x86_64.vhd.bz2' WHERE unique_name='centos56-x86_64-xen';
quit
service cloudstack-management restart


To enable access to the unauthenticated API, which we have enabled on the default port of 8096, we need to add a firewall rule. Run the following commands to allow port 8096 through the firewall

sed -i -e "/:OUTPUT/ a\-A INPUT -p tcp -m tcp --dport 8096 -j ACCEPT" /etc/sysconfig/iptables
service iptables restart
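With the rule in place you can sanity-check the unauthenticated API from the Host laptop; for example, listCapabilities is a harmless read-only call:

curl "http://192.168.56.11:8096/client/api?command=listCapabilities"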


Test the UI

Allow 1-2 mins for the cloudstack-management service to fully restart, then log in to the UI, which should be accessible from the Host Laptop at http://192.168.56.11:8080/client/

The default credentials are

Username: admin
Password: password
Domain: <blank>

[Screenshot: logon]

Create Compute Offering

The default Compute Offerings are not suitable for this limited environment so we need to create a new compute offering using the following settings:

Name: Ultra Tiny
Description: Ultra Tiny – 1vCPU, 128MB RAM
Storage Type: Shared
Custom: No
# of CPU Cores: 1
CPU (in MHz): 500
Memory (in MB): 128
Network Rate (Mb/s): null
QoS Type: null
Offer HA: Yes
Storage Tags: null
Host Tags: null
CPU Cap: No
Public: Yes
Volatile: No
Deployment Planner: null
Planner mode: null
GPU: null

Reduce the amount of RAM

Following a successful login to the UI, the Databases will be fully deployed, so now we can reduce the RAM to 1GB to free up memory for our XenServer VM. Shut down the VM and change the settings to 1024 MB of RAM.

XenServer VM

To configure the XenServer you will need XenCenter running on your local Host if you are running Windows, but if your Host is running OSX or Linux, then you need to add a Windows VM which can run XenCenter. You can download XenCenter from http://downloadns.citrix.com.edgesuite.net/akdlm/8160/XenServer-6.2.0-XenCenter.msi

Create a VM for XenServer using the following settings:

Name: XenServer
Type: Linux
Version: Red Hat (64 bit)
vCPU: 2
RAM: 1536 (If your host has 8GB of RAM, consider allocating 3072)
Hard Drive: VDI – Dynamic – 24 GB

Note: VirtualBox seems to mix up the networks if you add them all at the same time so we add the 1st Network and install XenServer, then once fully installed, we add the additional networks, rebooting in-between. This appears to be a bug in the latest versions of VirtualBox (4.3.18 at the time of writing)

Modify the settings and assign ONLY the 1st Network Adapter to the correct network as follows:

[Screenshot: xenserver-adapter-1]

Note how we have set the ‘Promiscuous Mode’ to ‘Allow All’

Now install XenServer 6.2 by downloading the ISO from http://downloadns.citrix.com.edgesuite.net/akdlm/8159/XenServer-6.2.0-install-cd.iso and booting the VM.

The XenServer installation wizard is straightforward, but you will get a warning about the lack of hardware virtualisation support; this is expected as VirtualBox does not support this. Accept the warning and continue.

Choose the appropriate regional settings and enter the following details when prompted (we enter the IP of the CSMAN VM for DNS and NTP; whilst this guide does not cover setting up these services on the CSMAN VM, this gives you the option of doing so at a later date):

Enable Thin Provisioning: Yes
Install source: Local media
Supplemental Packs: No
Verification: Skip
Password: <password>
Static IP: 192.168.56.101/24 (no gateway required)
Hostname: xenserver
DNS: 192.168.56.11
NTP: 192.168.56.11

Once the XenServer Installation has finished, detach the ISO and reboot the VM.

We now need to change the amount of RAM allocated to Dom0 to its minimum recommended amount, which is 400MB. We do this by running the following command on the XenServer console

 /opt/xensource/libexec/xen-cmdline --set-xen dom0_mem=400M,max:400M 

XenServer Patches

It’s important to install XenServer patches, and whilst XenCenter will inform you of the required patches, as we are using the open-source version of XenServer we have to install them via the command line. Fortunately there are a number of ways of automating this process.

Personally I always use PXE to deploy XenServer and the installation of patches is built into my deployment process. However, that is out of scope for this article; Tim Mackey has produced a great blog article on how to do this: http://xenserver.org/discuss-virtualization/virtualization-blog/entry/patching-xenserver-at-scale.html

Whilst Tim’s method of rebooting after every patch install is best practice, it can take a long time to install all patches, so an alternative approach I use in these non-production test environments is detailed at https://github.com/amesserl/xs_patcher. This installs all patches and requires only a single reboot.

The configuration file ‘clearwater’ is now a little out of date, and should contain the following (and the cache folder should contain the associated patch files):

XS62E014|78251ea4-e4e7-4d72-85bd-b22bc137e20b|downloadns.citrix.com.edgesuite.net/8736/XS62E014.zip|support.citrix.com/article/CTX140052

XS62ESP1|0850b186-4d47-11e3-a720-001b2151a503|downloadns.citrix.com.edgesuite.net/8707/XS62ESP1.zip|support.citrix.com/article/CTX139788

XS62ESP1003|c208dc56-36c2-4e91-b8d7-0246575b1828|downloadns.citrix.com.edgesuite.net/9031/XS62ESP1003.zip|support.citrix.com/article/CTX140416

XS62ESP1005|1c952800-c030-481c-a0c1-d1b45aa19fcc|downloadns.citrix.com.edgesuite.net/9058/XS62ESP1005.zip|support.citrix.com/article/CTX140553

XS62ESP1009|a24d94e1-326b-4eaa-8611-548a1b5f8bd5|downloadns.citrix.com.edgesuite.net/9617/XS62ESP1009.zip|support.citrix.com/article/CTX141191

XS62ESP1013|b22d6335-823d-43a6-ba26-28793717125b|downloadns.citrix.com.edgesuite.net/9703/XS62ESP1013.zip|support.citrix.com/article/CTX141480

XS62ESP1014|4fc82e62-b938-407d-a2c6-68c8922f3ec2|downloadns.citrix.com.edgesuite.net/9708/XS62ESP1014.zip|support.citrix.com/article/CTX141486

Once you have your XenServer fully patched, shut it down and then add the 2nd Adapter; again, note how we have set the ‘Promiscuous Mode’ to ‘Allow All’

[Screenshot: xenserver-adapter-2]

Boot the VM and then, using XenCenter, perform a ‘Rescan’ on the NICs to detect this new NIC, then shut down and add the 3rd Adapter; again, note how we have set the ‘Promiscuous Mode’ to ‘Allow All’

[Screenshot: xenserver-adapter-3]

Boot the VM and then, using XenCenter, perform a ‘Rescan’ on the NICs to detect this new NIC, then shut down and add the 4th Adapter; again, note how we have set the ‘Promiscuous Mode’ to ‘Allow All’

[Screenshot: xenserver-adapter-4]

Boot the VM and then using XenCenter perform a ‘Rescan’ on the NICs to detect this final NIC, then one final reboot to make sure they are all activated and connected.

Configure XenServer Networks

Now we are ready to configure the XenServer Networks. We should have the following four networks present, and it’s worth just checking the MACs line up with the Adapters in VirtualBox.

[Screenshot: xenserver-networks-1]
We need to rename the networks using a more logical naming convention, and also create the two Storage Networks, and assign their VLANs etc.

First of all, rename them all, setting the MTU of the Storage Network to 9000 (the rest remain at the default of 1500):

Network 0 – MGMT
Network 1 – GUEST
Network 2 – PUBLIC
Network 3 – STORAGE (and MTU of 9000)

[Screenshot: xenserver-networks-2]

Next we add the Primary Storage Network using the following settings:

Type: External Network
Name: PRI-STORAGE
NIC: NIC 3
VLAN: 100
MTU: 9000

Then the Secondary Storage Network:

Type: External Network
Name: SEC-STORAGE
NIC: NIC 3
VLAN: 101
MTU: 9000
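If you prefer the XenServer CLI to XenCenter, a sketch of the equivalent commands, run on the XenServer console (eth3 is the storage NIC as above; the UUIDs are looked up on your own host):

# find the physical (non-VLAN) PIF for NIC 3
PIF=$(xe pif-list device=eth3 VLAN=-1 --minimal)

# primary storage network on VLAN 100
NET=$(xe network-create name-label=PRI-STORAGE MTU=9000)
xe vlan-create network-uuid=$NET pif-uuid=$PIF vlan=100

# secondary storage network on VLAN 101
NET=$(xe network-create name-label=SEC-STORAGE MTU=9000)
xe vlan-create network-uuid=$NET pif-uuid=$PIF vlan=101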

[Screenshot: xenserver-storage-networks]

Finally we add the IP addresses for the Primary and Secondary Storage Networks so the XenServer can access them

Name: PRI-STOR
Network: PRI-STORAGE
IP address: 10.10.100.101
Subnet mask: 255.255.255.0
Gateway: <blank>

Name: SEC-STOR
Network: SEC-STORAGE
IP address: 10.10.101.101
Subnet mask: 255.255.255.0
Gateway: <blank>
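Alternatively, these IPs can be assigned from the XenServer console with pif-reconfigure-ip; a sketch, where the placeholder UUIDs are those of the PIFs created on VLANs 100 and 101:

xe pif-reconfigure-ip uuid=<pri-stor-pif-uuid> mode=static IP=10.10.100.101 netmask=255.255.255.0
xe pif-reconfigure-ip uuid=<sec-stor-pif-uuid> mode=static IP=10.10.101.101 netmask=255.255.255.0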

[Screenshot: xenserver-ips]

That is all the configuration required for XenServer so now we can proceed with deploying our first Zone. However before we do, it’s worth taking a snapshot of both of the VMs so you can roll back and start again if required.

Zone Deployment

We now add an Advanced Zone by going to ‘Infrastructure/Zones/Add Zone’ and creating a new Zone of type ‘Advanced’ without Security Groups

Zone Name – Test
IPv4 DNS1 – 8.8.8.8
Internal DNS 1 – 192.168.56.11
Hypervisor – XenServer
Guest CIDR – 10.1.1.0/24

Next we need to setup the XenServer Traffic Labels to match the names we allocated to each Network on our XenServer, and we also need to add the optional Storage Network by dragging it onto the Physical Network.

[Screenshot: xenserver-physical-networks]

Edit each Traffic Type and set the following Labels:

Management Network – MGMT
Public Network – PUBLIC
Guest Network – GUEST
Storage Network – SEC-STORAGE

Then continue through the add zone wizard using the following settings

Public Traffic

Gateway – 172.30.0.1
Netmask – 255.255.255.0
VLAN – <blank>
Start IP – 172.30.0.21
End IP – 172.30.0.30

POD Settings

POD Name – POD1
Reserved System Gateway – 192.168.56.1
Reserved System Netmask – 255.255.255.0
Start Reserved System IP – 192.168.56.21
End Reserved System IP – 192.168.56.30

Guest Traffic

VLAN Range – 600 – 699

Storage Traffic

Gateway – 10.10.101.1
Netmask – 255.255.255.0
VLAN – <blank>
Start IP – 10.10.101.21
End IP – 10.10.101.30

Cluster Settings

Hypervisor – XenServer
Cluster Name – CLU1

Host Settings

Host Name – 192.168.56.101
Username – root
Password – <password>

Primary Storage Settings

Name – PRI1
Scope – Cluster
Protocol – nfs
Server – 10.10.100.11
Path – /exports/primary
Provider: DefaultPrimary
Storage Tags: <BLANK>

Secondary Storage Settings

Provider – NFS
Name – SEC1
Server – 10.10.101.11
Path – /exports/secondary

At the end of the wizard, activate the Zone, then allow approx. 5 minutes for the System VMs to deploy and the default CentOS Template to be ‘downloaded’ into the system. You are now ready to deploy your first Guest VM.


So you have a Cluster of Citrix XenServers and you want to upgrade them to a new version, for example to go from XenServer 6.0.2 to XenServer 6.2, or simply apply the latest Hotfixes.  As this is a cluster that is being managed by CloudStack it is not as simple as using the Rolling Pool Upgrade feature in XenCenter – in fact this is the LAST thing you want to do, and WILL result in a broken Cluster.

This article walks you through the steps required to perform the upgrade, but as always you must test this yourself in your own test environment before attempting on a production system.

We need to change the default behaviour of CloudStack with respect to how it manages XenServer Clusters before continuing.  Edit /etc/cloudstack/management/environment.properties and add the following line:

# vi /etc/cloudstack/management/environment.properties

Add > manage.xenserver.pool.master=false

Now restart the CloudStack Management Service

# service cloudstack-management restart

Repeat for all CloudStack Management servers

It is vital that you upgrade the XenServer Pool Master first before any of the Slaves.  To do so you need to empty the Pool Master of all CloudStack VMs, and you do this by putting the Host into Maintenance Mode within CloudStack to trigger a live migration of all VMs to alternate Hosts (do not place the Host into Maintenance Mode using XenCenter as this will cause a new Master to be elected and we do not want that). 
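If you are scripting these steps, the same action is available via the API; a sketch using the unauthenticated API port, assuming you have that port enabled and have obtained the host UUID from listHosts:

curl "http://<management-server>:8096/client/api?command=prepareHostForMaintenance&id=<host-uuid>"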

Next you need to ‘Unmanage’ the Cluster. As this prevents users from being able to interact with (stop/start) their VMs, you will need to arrange a ‘Maintenance Window’, but only long enough to update the Pool Master.  All Customer VMs will continue to run during the upgrade process unless you are using Local Storage, in which case VMs on the Hosts being upgraded will have to be shut down.  After ‘Unmanaging’ the Cluster, all Hosts will go into a ‘Disconnected’ state; this is expected and is not a cause for concern.

Now you can upgrade your Pool Master, either upgrading to a newer version, or simply applying XenServer Hotfixes as required.  Once the Pool Master has been fully upgraded re-manage the Cluster and then wait for all of the Hosts in the Cluster to come back online within CloudStack. 

Monitor the status of your NFS Storage via XenCenter and wait for all Volumes to reconnect on the upgraded Host.  Once storage has reconnected and all Hosts are back online, take the Pool Master you just upgraded out of CloudStack Maintenance Mode.

Edit /etc/cloudstack/management/environment.properties and remove the following line which you added earlier:

# vi /etc/cloudstack/management/environment.properties

Delete > manage.xenserver.pool.master=false

Now restart the CloudStack Management Service

# service cloudstack-management restart

Repeat for all CloudStack Management servers

You can now upgrade each Slave by simply placing it into Maintenance Mode in CloudStack, applying the upgrade / Hotfixes and, when completed, bringing it out of Maintenance Mode before starting on the next Host.

About the Author

Geoff Higginbottom is CTO of ShapeBlue, the strategic cloud consultancy and an Apache CloudStack Committer. Geoff spends most of his time designing private & public cloud infrastructures for telco’s, ISP’s and enterprises based on CloudStack.

In this post Rohit Yadav, Software Architect at ShapeBlue, talks about setting up an Apache CloudStack (ACS) cloud on a single host with KVM and basic networking. This can be done on a VM or a physical host. Such a deployment can be useful in evaluating CloudStack locally and can be done in less than 30 minutes.

Note: this should work for ACS 4.3.0 and above. This how-to post may get outdated in future, so please read the latest docs and/or the latest docs on KVM host installation.

First install Ubuntu 14.04 LTS x86_64 on a baremetal host or a VM that has at least 2GB RAM (preferably 4GB) and a real or virtual 64-bit CPU that has Intel VT-x or AMD-V enabled. I personally use VMware Fusion, which can provide VMs with a 64-bit CPU with Intel VT-x. Such a CPU is needed by KVM for HVM or full virtualization. Too bad VirtualBox cannot do this yet, or one can say KVM cannot do paravirtualization like Xen can.

Next, we need to do a bunch of things:

  • Setup networking, IPs, create bridge
  • Install cloudstack-management and cloudstack-common
  • Install and setup MySQL server
  • Setup NFS for primary and secondary storages
  • Preseed systemvm templates
  • Prepare KVM host and install cloudstack-agent
  • Configure Firewall
  • Start your cloud!


Let’s start by installing some basic packages, assuming you’re root or have sudo powers:

apt-get install openntpd openssh-server sudo vim htop tar build-essential

Make sure root is able to SSH in using a password; fix this in /etc/ssh/sshd_config (PermitRootLogin yes).

Reset root password and remember this password:

passwd root

Networking

Next, we’ll be setting up bridges. CloudStack requires that KVM hosts have two bridges, cloudbr0 and cloudbr1, because these names are hard-coded, and on the KVM hosts we need a way to let VMs communicate with the host, with each other, and with the outside world. Add network rules and configure IPs as applicable.

apt-get install bridge-utils
cat /etc/network/interfaces # an example bridge configuration

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

# Public network
auto cloudbr0
iface cloudbr0 inet static
    address 172.16.154.10
    netmask 255.255.255.0
    gateway 172.16.154.2
    dns-nameservers 172.16.154.2 8.8.8.8
    bridge_ports eth0
    bridge_fd 5
    bridge_stp off
    bridge_maxwait 1

# Private network
auto cloudbr1
iface cloudbr1 inet manual
    bridge_ports none
    bridge_fd 5
    bridge_stp off
    bridge_maxwait 1

Notice we’re not actually using cloudbr1, because the intention is to set up a basic zone with basic networking, so all traffic goes through one bridge only.

We’re done with setting up networking; just note the cloudbr0 IP. In my case, it was 172.16.154.10. You may notice that we’re not configuring eth0 at all. That’s because we have a bridge now and we expose this bridge to the outside network using cloudbr0’s IP. By not configuring eth0 (static or dhcp), we get Ubuntu to use cloudbr0 as its default interface and use cloudbr0’s gateway as its default gateway and route. You need to reboot your VM or host now.
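After the reboot you can verify the bridge came up as expected (brctl is part of the bridge-utils package installed earlier):

brctl show            # cloudbr0 should list eth0 as a member
ip addr show cloudbr0 # should show the static IP, 172.16.154.10 in this example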

Management server and MySQL

Set up the CloudStack repo; you may use something that I host (the link is unreliable, let me know if it stops working for you). You may use any other Debian repo as well. One can also build from source and host their own repositories.

We need to install the CloudStack management server, MySQL server and setup the management server database:

echo deb http://packages.bhaisaab.org/cloudstack/upstream/debian/4.3 ./ >> /etc/apt/sources.list.d/acs.list
apt-get update -y
apt-get install cloudstack-management cloudstack-common mysql-server
# pick any suitable root password for MySQL server

You don’t need to explicitly install cloudstack-common because the management package depends on it; it’s mentioned to point out that many tools and scripts can be found in this package, such as tools to set up the database, preseed the systemvm template etc.

You may put the following settings in your /etc/mysql/my.cnf; they mostly configure InnoDB settings and have MySQL use the bin-log ‘ROW’ format, which can be useful for replication etc. Since we’re only doing a test setup we may skip this; the CloudStack docs say to put only these, but I think on production systems you may need to configure many more options (perhaps 400 of those).

[mysqld]
innodb_rollback_on_timeout=1
innodb_lock_wait_timeout=600
max_connections=350
log-bin=mysql-bin
binlog-format = 'ROW'

Now, let’s set up the management server database:

service mysql restart
cloudstack-setup-databases cloud:cloudpassword@localhost --deploy-as=root:passwordOfRoot -i <stick your cloudbr0 IP here>

Storage

We’ll set up NFS and preseed the systemvm template.

mkdir -p /export/primary /export/secondary
apt-get install nfs-kernel-server quota
echo "/export  *(rw,async,no_root_squash,no_subtree_check)" > /etc/exports
exportfs -a
sed -i -e 's/^RPCMOUNTDOPTS=--manage-gids$/RPCMOUNTDOPTS="-p 892 --manage-gids"/g' /etc/default/nfs-kernel-server
sed -i -e 's/^NEED_STATD=$/NEED_STATD=yes/g' /etc/default/nfs-common
sed -i -e 's/^STATDOPTS=$/STATDOPTS="--port 662 --outgoing-port 2020"/g' /etc/default/nfs-common
sed -i -e 's/^RPCRQUOTADOPTS=$/RPCRQUOTADOPTS="-p 875"/g' /etc/default/quota
service nfs-kernel-server restart

I prefer to download the systemvm first and then preseed it:

wget http://people.apache.org/~bhaisaab/cloudstack/systemvmtemplates/systemvm64template-2014-09-11-4.3-kvm.qcow2.bz2
/usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt \
          -m /export/secondary -f systemvm64template-2014-09-11-4.3-kvm.qcow2.bz2 -h kvm \
          -o localhost -r cloud -d cloudpassword

KVM and agent setup

Time to setup cloudstack-agent, libvirt and KVM:

apt-get install qemu-kvm cloudstack-agent
sed -i -e 's/listen_tls = 1/listen_tls = 0/g' /etc/libvirt/libvirtd.conf
echo 'listen_tcp=1' >> /etc/libvirt/libvirtd.conf
echo 'tcp_port = "16509"' >> /etc/libvirt/libvirtd.conf
echo 'mdns_adv = 0' >> /etc/libvirt/libvirtd.conf
echo 'auth_tcp = "none"' >> /etc/libvirt/libvirtd.conf
sed -i -e 's/\# vnc_listen.*$/vnc_listen = "0.0.0.0"/g' /etc/libvirt/qemu.conf
sed -i -e 's/libvirtd_opts="-d"/libvirtd_opts="-d -l"/' /etc/init/libvirt-bin.conf
service libvirt-bin restart
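A quick check that libvirt is now listening on TCP as the CloudStack agent expects; a sketch, noting that auth_tcp is set to none above so no credentials are needed, and qemu+tcp uses port 16509 by default:

virsh -c qemu+tcp://127.0.0.1/system version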

Firewall

Finally, punch holes in the firewall; substitute your network in the following:

# configure iptables
NETWORK=172.16.154.0/24
iptables -A INPUT -s $NETWORK -m state --state NEW -p udp --dport 111 -j ACCEPT
iptables -A INPUT -s $NETWORK -m state --state NEW -p tcp --dport 111 -j ACCEPT
iptables -A INPUT -s $NETWORK -m state --state NEW -p tcp --dport 2049 -j ACCEPT
iptables -A INPUT -s $NETWORK -m state --state NEW -p tcp --dport 32803 -j ACCEPT
iptables -A INPUT -s $NETWORK -m state --state NEW -p udp --dport 32769 -j ACCEPT
iptables -A INPUT -s $NETWORK -m state --state NEW -p tcp --dport 892 -j ACCEPT
iptables -A INPUT -s $NETWORK -m state --state NEW -p udp --dport 892 -j ACCEPT
iptables -A INPUT -s $NETWORK -m state --state NEW -p tcp --dport 875 -j ACCEPT
iptables -A INPUT -s $NETWORK -m state --state NEW -p udp --dport 875 -j ACCEPT
iptables -A INPUT -s $NETWORK -m state --state NEW -p tcp --dport 662 -j ACCEPT
iptables -A INPUT -s $NETWORK -m state --state NEW -p udp --dport 662 -j ACCEPT

apt-get install iptables-persistent

# Disable AppArmor on libvirtd
ln -s /etc/apparmor.d/usr.sbin.libvirtd /etc/apparmor.d/disable/
ln -s /etc/apparmor.d/usr.lib.libvirt.virt-aa-helper /etc/apparmor.d/disable/
apparmor_parser -R /etc/apparmor.d/usr.sbin.libvirtd
apparmor_parser -R /etc/apparmor.d/usr.lib.libvirt.virt-aa-helper

# Configure ufw
ufw allow mysql
ufw allow proto tcp from any to any port 22
ufw allow proto tcp from any to any port 1798
ufw allow proto tcp from any to any port 16509
ufw allow proto tcp from any to any port 5900:6100
ufw allow proto tcp from any to any port 49152:49216

Launch Cloud

All set! Make sure Tomcat is not running, then start the agent and management server:

/etc/init.d/tomcat6 stop
/etc/init.d/cloudstack-agent start
/etc/init.d/cloudstack-management start

If all goes well, open http://cloudbr0-IP:8080/client and you’ll see the ACS login page. Use username admin and password password to log in. Now setup a basic zone, in the following steps change the IPs as applicable:

  • Pick zone name, DNS 172.16.154.2, External DNS 8.8.8.8, basic zone + SG
  • Pick pod name, gateway 172.16.154.2, netmask 255.255.255.0, IP range 172.16.154.200-250
  • Add guest network, gateway 172.16.154.2, netmask 255.255.255.0, IP range 172.16.154.100-199
  • Pick cluster name, hypervisor KVM
  • Add the KVM host, IP 172.16.154.10, user root, password whatever-the-root-password-is
  • Add primary NFS storage, IP 172.16.154.10, path /export/primary
  • Add secondary NFS storage, IP 172.16.154.10, path /export/secondary
  • Hit launch, if everything goes well launch your zone!

Keep an eye on your /var/log/cloudstack/management/management-server.log and /var/log/cloudstack/agent/agent.log for possible issues. Read the admin docs for more cloudy admin tasks. Have fun playing with your CloudStack cloud.
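A convenient way to watch both logs at once:

tail -f /var/log/cloudstack/management/management-server.log /var/log/cloudstack/agent/agent.log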

CloudStack 4.3 provided further enhancements to the LDAP integration, and in this article we will look at how you configure CloudStack to authenticate against a Microsoft Active Directory Server.

Enable AD Integration

The first step is to tell CloudStack about your Active Directory Servers (yes, we can now have more than one)

Go to Global Settings then choose ‘LDAP Configuration’ from the Select View Dropdown, then click the ‘ + Configure LDAP’ button on the top right

[Screenshot: LDAP-001]

Populate the form with the details of your LDAP Server(s) – I will be adding just the one today

[Screenshot: LDAP-002]

LDAP Global Settings

Now go back to Global Settings and filter for ‘LDAP’ using the search box in the top right. These are the settings we need to configure in order to use LDAP.

[Screenshot: LDAP-003]

These are the settings I used in our Lab

ldap.basedn           DC=sbdemo1,DC=local

The following two settings specify a normal AD User Account which is used to query the list of users within AD, it does not require Domain Admin rights. Note how you need to use its ‘distinguishedName’ to identify it.

ldap.bind.password           xxxxxxxxx

ldap.bind.principal          CN=cloudstack-ldap,CN=Users,DC=sbdemo1,DC=local

ldap.email.attribute          mail          (default)

ldap.firstname.attribute          givenname          (default)

ldap.group.object          groupOfUniqueNames          (default)

ldap.group.user.uniquemember          uniquemember          (default)

ldap.lastname.attribute          sn          (default)

The following setting is used by the Add Account UI element to filter the list of Users in the selection list so it only shows accounts which belong to the specified Group. In my case the Group is called CloudStack, but you need to use the ‘distinguishedName’ value to identify it.

ldap.search.group.principle          CN=CloudStack,CN=Users,DC=sbdemo1,DC=local

ldap.truststore          (blank)

ldap.truststore.password          (blank)

ldap.user.object          user          (default was inetOrgPerson)

ldap.username.attribute          samaccountname          (default was uid)

[Screenshot: LDAP-004]

After updating the various settings (adjusting them for your environment), restart the CloudStack Management Service to activate the settings.
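It can also save debugging time to confirm the bind account and basedn actually work from the management server; a sketch using OpenLDAP’s ldapsearch (the openldap-clients package on CentOS), where the server address and the user jsmith are placeholders for your own values:

ldapsearch -H ldap://<ad-server> -x \
  -D "CN=cloudstack-ldap,CN=Users,DC=sbdemo1,DC=local" -w '<password>' \
  -b "DC=sbdemo1,DC=local" "(sAMAccountName=jsmith)" mail givenName sn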

Adding LDAP Accounts

To add a new LDAP Account, go to the Accounts Tab, then click the ‘Add LDAP Account’ button at the top right

[Screenshot: LDAP-007]

CloudStack will then list all LDAP Accounts which have not yet been added to CloudStack and are in the Group specified in the ‘ldap.search.group.principle’ Global Setting

Choose the AD User you wish to create the new Account for, then select the appropriate Domain.

[Screenshot: LDAP-005]

Password Management

Any accounts which were already configured in CloudStack will still use local CloudStack authentication; however, you will not be able to change the user’s password using the CloudStack UI once LDAP is enabled (dual authentication is coming in release 4.5).

[Screenshot: LDAP-006]

You can still change the user password using the ‘updateUser’ API call.

Users with LDAP Accounts will no longer need to change their password via CloudStack, as their password will be managed by Windows AD.

Bulk Import

If you want to Bulk Import all of the users within LDAP who have not yet been added to CloudStack, you can do so by using the ‘importLdapUsers’ API command.

An example of the command using the unauthenticated API port would be:

http://192.168.0.3:8096/client/api?command=importLdapUsers
&accounttype=0
&domainid=b7e70c6f%2D8619%2D5641%2Dcd41%2Bafbd8147b438

This will import all users from AD, who are not currently in CloudStack, creating a new Account for each user, and adding them to the Domain specified by the domainid parameter. Both the Account Name and User Name will be the same as the AD ‘User Logon Name’

Summary

LDAP Integration has become even easier with CloudStack 4.3, bringing the ability to bulk import multiple users and create unique accounts for each user.  The API is still required for some features, such as Bulk Import, or Password Resets of CloudStack Local Accounts etc, but each release brings further improvements.

About the Author

Geoff Higginbottom is CTO of ShapeBlue, the strategic cloud consultancy. Geoff spends most of his time designing private & public cloud infrastructures for telco’s, ISP’s and enterprises based on CloudStack.


In this article, Paul Angus, Cloud Architect at ShapeBlue, looks into a few interesting settings when using CloudStack to orchestrate VMware environments.

Working with CloudStack 4.3 and VMware 5.5 in our lab recently, I came across some very interesting global settings which renewed a project I had on the back burner…

vm.instancename.flag

This global setting changes the VM name as seen in vCenter or on the ESXi host. Instead of the VM appearing as i-34-1234-VM, which is the account ID followed by the sequential VM number, the VM will appear with the name given to it when creating the instance i.e. SB-TestVC01. In a public cloud this could be a bit of a nightmare as each name has to be unique, but in a private cloud it makes a lot more sense to see VMs with the same naming convention as in the rest of the environment.
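Like any global setting, this can be changed in the UI under Global Settings or via the API; a sketch using the unauthenticated API port, assuming that port is enabled (a management server restart is needed afterwards for the change to take effect):

curl "http://<management-server>:8096/client/api?command=updateConfiguration&name=vm.instancename.flag&value=true"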

vmware.create.full.clone

The biggest thing to note about this global setting is that the default is ‘true’ meaning that when using VMware, guest instances created from templates are full copies of the original template, not simply difference disks (deltas), as opposed to XenServer which isn’t currently configurable and always creates linked clones of the template.

If the speed of deployment and primary storage usage are your main concerns then you may want to change this setting to ‘false’ as less data is written to disks.

However, there are potential issues with linked clones:

1. As the original template is a parent of all instances based upon that template, you have a single point of failure which, if it becomes corrupted, will make all of the VMs based on that template corrupt as well.

2. There is a performance loss when the hypervisor has to figure out which (parent) file a disk read has to be performed on.

3. Rescuing VM disks from messed up storage becomes extremely tricky because the vDisk is not in one handy piece.

If these are a concern, then leave the global variable as it is.

vmware.nested.virtualization

This setting is very exciting for those of us who build a lot of testing and teaching environments. For the uninitiated, nested virtualisation is the ability to run fully functional hypervisors as virtual instances. It requires certain features on the processor and chipset, and is dependent on the version of ESXi/vCenter you’re running, but if you have those features you are able to deploy virtual KVM, XenServer, ESXi and Hyper-V instances.

Using a ‘parent’ CloudStack, you can then deploy these hypervisors and virtualised CloudStack management farms (with a bit of configuration management wizardry) from templates, all within self-contained VLANs on these ESXi ‘super-hosts’. Deployment of environments is now a whole lot quicker and easier…

However we’re still missing an element, and that is being able to create interfaces which allow packets which have been VLAN tagged by a virtual host to pass through to other virtual hosts. This is required if we want guests on different hypervisor hosts to be able to communicate with each other.

So what we want is our parent CloudStack to set the port group on a vSwitch of the ESXi super-hosts to trunk VLANs between virtual hosts. As it happens, ESXi uses the VLAN ID 4095 as the ‘code’ for ‘trunk all tagged VLAN packets’. So if you create a shared network with a VLAN ID of 4095, then connect the guest interfaces on your virtual hosts to that network, they’ll pass your virtual guest traffic.

About the author

Paul Angus is a Senior Consultant & Cloud Architect at ShapeBlue, the leading independent CloudStack integrator & consultancy. He has designed numerous CloudStack environments for customers across 4 continents, based on Apache CloudStack, Citrix CloudPlatform and Citrix CloudPortal. Paul has spoken at all Apache CloudStack collaboration conferences and is an active contributor to the CloudStack community. When not building Clouds, Paul likes to create scripts that build clouds… and he very occasionally can be seen trying to hit a golf ball.


UPDATE: 09-Apr-2014 – The proper upgrade command is “apt-get install openssl libssl1.0.0”. If you’ve just updated openssl, please go back and update libssl as well.

UPDATE: 10-Apr-2014 – Added detailed verification steps / Apache CloudStack 4.0 – 4.1 are not vulnerable, they use older Debian/openssl.

Thanks to all involved for helping to put together and update this information


Earlier this week, a security vulnerability was disclosed in OpenSSL, one of the software libraries that Apache CloudStack uses to encrypt data sent over network connections. As the vulnerability has existed in OpenSSL since early 2012, System VMs in Apache CloudStack versions 4.0.0-incubating through 4.3 are running software using vulnerable versions of OpenSSL. This includes CloudStack’s Virtual Router VMs, Console Proxy VMs, and Secondary Storage VMs.

The CloudStack community are actively working on creating updated System VM templates for each recent version of Apache CloudStack, and for each of the hypervisor platforms which Apache CloudStack supports. Due to testing and QA processes, this will take several days. In the meantime, a temporary workaround is available for currently running System VMs.

If you are running Apache CloudStack 4.0.0-incubating through the recent 4.3 release, the following steps will help ensure the security of your cloud infrastructure until an updated version of the System VM template is available:

Log on to each Secondary Storage VM, Console Proxy VM and Virtual Router and update OpenSSL:

XenServer & KVM

  1. Use the GUI to identify the Link Local IP and Host of the VM
  2. Connect to the Hypervisor Host using SSH
  3. From the Host, connect to the VM using the following command, replacing n.n.n.n with the Link Local IP identified in step 1.
    • ssh -i /root/.ssh/id_rsa.cloud -p 3922 root@n.n.n.n
  4. On the System VM:
    • run apt-get update
    • then run apt-get install openssl libssl1.0.0
    • If a dialog appears asking to restart programs, accept its request
    • When updating Secondary Storage VMs, also run /etc/init.d/apache2 restart
  5. Log out of the System VM and host server
  6. Repeat for all Secondary Storage, Console Proxy and Virtual Router VMs

VMware

  1. Use the GUI to identify the Management / Private IP of the VM
  2. SSH onto a CloudStack Management Server
  3. From the Management Server, connect to the VM using the following command, replacing n.n.n.n with the Management/Private IP identified in step 1.
    • ssh -i /var/lib/cloud/management/.ssh/id_rsa -p 3922 root@n.n.n.n
  4. On the System VM:
    • run apt-get update
    • then run apt-get install openssl libssl1.0.0
    • If a dialog appears asking to restart programs, accept its request
    • When updating Secondary Storage VMs, also run /etc/init.d/apache2 restart
  5. Log out of the System VM and host server
  6. Repeat for all Secondary Storage, Console Proxy and Virtual Router VMs


Verification

On each System VM, you can test if it has non-vulnerable openssl packages installed by listing installed packages and looking at the installed versions of openssl and libssl. As in the example below, for a system to be non-vulnerable, the packages need to be at or above version 1.0.1e-2+deb7u6:

root@v-14-VM:~# dpkg -l|grep ssl
ii  libssl1.0.0:i386                1.0.1e-2+deb7u6                  i386         SSL shared libraries
ii  openssl                              1.0.1e-2+deb7u6                  i386         Secure Socket Layer (SSL) binary and related cryptographic tools

We realise that for larger installations where System VMs are being actively created and destroyed based on customer demand, this is a very rough stop-gap. The Apache CloudStack security team is actively working on a more permanent fix and will be releasing that to the community as soon as possible.

For Apache CloudStack installations that secure the web-based user interface with SSL, these may also be vulnerable to Heartbleed, but that is outside the scope of this blog post. We recommend testing your installation with [1] to determine whether you need to patch/upgrade the SSL library used by any web servers (or other SSL-based services) you use.

Information originally posted on https://blogs.apache.org/cloudstack/entry/how_to_mitigate_openssl_heartbleed

1: http://filippo.io/Heartbleed/

About the Author

Geoff Higginbottom is CTO of ShapeBlue, the strategic cloud consultancy and an Apache CloudStack Committer. Geoff spends most of his time designing private & public cloud infrastructures for telco’s, ISP’s and enterprises based on CloudStack.


In this article, Paul Angus Cloud Architect at ShapeBlue takes a look at using Ansible to Deploy an Apache CloudStack cloud.

What is Ansible

Ansible is a deployment and configuration management tool similar in intent to Chef and Puppet. It allows (usually) DevOps teams to orchestrate the deployment and configuration of their environments without having to re-write custom scripts to make changes.

Like Chef and Puppet, Ansible is designed to be idempotent; this means that you determine the state you want a host to be in and Ansible will decide if it needs to act in order to achieve that state.

There’s already Chef and Puppet, so what’s the fuss about Ansible?

Let’s take it as a given that configuration management makes life much easier (and is quite cool); Ansible only needs an SSH connection to the hosts that you’re going to manage to get started. While Ansible requires Python 2.4 or greater on the host you’re going to manage in order to leverage the vast majority of its functionality, it is able to connect to hosts which don’t have Python installed in order to then install Python, so it’s not really a problem. This greatly simplifies the deployment procedure for hosts, avoiding the need to pre-install agents onto the clients before the configuration management can take over.

Ansible will allow you to connect as any user to a managed host (with that user’s privileges) or by using public/private keys – allowing fully automated management.

There also doesn’t need to be a central server to run everything, as long as your playbooks and inventories are in-sync you can create as many Ansible servers as you need (generally a bit of Git pushing and pulling will do the trick).

Finally – its structure and language is pretty simple and clean. I’ve found it a bit tricky to get the syntax correct for variables in some circumstances, but otherwise I’ve found it one of the easier tools to get my head around.

So let’s see something

For this example we’re going to create an Ansible server which will then deploy a CloudStack server. Both of these servers will be CentOS 6.4 virtual machines.

Installing Ansible

Installing Ansible is blessedly easy. We generally prefer to use CentOS so to install Ansible you run the following commands on the Ansible server.

# rpm -ivh http://www.mirrorservice.org/sites/dl.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
# yum install -y ansible

And that’s it.

(There is a commercial version which has more features such as callback to request configurations and a RESTful API and also support. The installation of this is different)

By default Ansible uses /etc/ansible to store your playbooks; I tend to move it, but there’s no real problem with using the default location. Create yourself a little directory structure to get started with. The documentation recommends something like this:

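A minimal sketch of such a layout, using the role and file names we will create later in this article:

/etc/ansible/
├── hosts                      (inventory of managed hosts)
├── deploy-cloudstack.yml      (top-level playbook)
└── roles/
    ├── mysql/
    │   └── tasks/
    │       └── main.yml
    └── cloudstack-manager/
        ├── tasks/
        │   ├── main.yml
        │   ├── setupdb.yml
        │   └── seedstorage.yml
        └── templates/
            └── cloudstack.repo.j2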

Playbooks

Ansible uses playbooks to specify the state you wish the target host to be in so that it can accomplish its role. Ansible playbooks are written in YAML format.

Modules

To get Ansible to do things you specify the hosts a playbook will act upon and then call modules and supply arguments which determine what Ansible will do to those hosts.

To keep things simple, this example is a cut-down version of a full deployment. This example creates a single management server with a local MySQL server and assumes you have your secondary storage already provisioned somewhere. For this example I’m also not going to include securing the MySQL server, configuring NTP or using Ansible to configure the networking on the hosts, although normally we’d use Ansible to do exactly that.

The pre-requisites to this CloudStack build are:

  • A CentOS 6.4 host to install CloudStack on
  • An IP address already assigned on the ACS management host
  • The ACS management host should have a resolvable FQDN (either through DNS or the host file on the ACS management host)
  • Internet connectivity on the ACS management host

Planning

The first step I use is to list all of the tasks I think I’ll need and group them or split them into logical blocks. So for this deployment of CloudStack I’d start with:

  • Configure selinux
  • (libselinux-python required for Ansible to work with selinux enabled hosts)
  • Install and configure MySQL
  • (Python MySQL-DB required for Ansible MySQL module)
  • Install cloud-client
  • Seed secondary storage

Ansible is built around the idea of hosts having roles, so generally you would group or manage your hosts by their roles. Now to create some roles for these tasks.

I’ve created:

  • cloudstack-manager
  • mysql

First up we need to tell Ansible where to find our CloudStack management host. In the root Ansible directory there is a file called ‘hosts’ (/etc/ansible/hosts); add a section like this:

[acs-manager]
xxx.xxx.xxx.xxx

where xxx.xxx.xxx.xxx is the IP address of your ACS management host.
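Before going any further, it is worth checking that Ansible can actually reach the host. The ping module is ideal for this, and shows a module being invoked ad hoc from the command line (the -k flag prompts for the SSH password):

# test connectivity to every host in the acs-manager group
ansible acs-manager -m ping -k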

MySQL

So let’s start with the MySQL server.  We’ll need to create a task within the mysql role directory called main.yml. The ‘task’ in this case is to have MySQL running and configured on the target host. The contents of the file will look like this:

- name: Ensure mysql server is installed
  yum: name=mysql-server state=present

- name: Ensure mysql python is installed
  yum: name=MySQL-python state=present

- name: Ensure selinux python bindings are installed
  yum: name=libselinux-python state=present

- name: Ensure cloudstack specific my.cnf lines are present
  lineinfile: dest=/etc/my.cnf regexp='{{ item }}' insertafter='symbolic-links=0' line='{{ item }}'
  with_items:
    - skip-name-resolve
    - default-time-zone='+00:00'
    - innodb_rollback_on_timeout=1
    - innodb_lock_wait_timeout=600
    - max_connections=350
    - log-bin=mysql-bin
    - binlog-format = 'ROW'

- name: Ensure MySQL service is started
  service: name=mysqld state=started

- name: Ensure MySQL service is enabled at boot
  service: name=mysqld enabled=yes

- name: Ensure root password is set
  mysql_user: user=root password={{ mysql_root_password }} host=localhost
  ignore_errors: true

- name: Ensure root has sufficient privileges
  mysql_user: login_user=root login_password={{ mysql_root_password }} user=root host=% password={{ mysql_root_password }} priv=*.*:GRANT,ALL state=present

This needs to be saved as /etc/ansible/roles/mysql/tasks/main.yml
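On a fresh installation the role directories will not exist yet, so create them before saving the files; the paths below match the ones used throughout this article:

mkdir -p /etc/ansible/roles/mysql/tasks
mkdir -p /etc/ansible/roles/cloudstack-manager/tasks
mkdir -p /etc/ansible/roles/cloudstack-manager/templates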

As explained earlier, this playbook in fact describes the state of the host rather than setting out commands to be run. For instance, we specify certain lines which must be in the my.cnf file and allow Ansible to decide whether or not it needs to add them.

Most of the modules are self-explanatory once you see them, but to run through them briefly:

The ‘yum’ module is used to specify which packages are required, the ‘service’ module controls the running of services, and the ‘mysql_user’ module controls MySQL user configuration. The ‘lineinfile’ module controls the contents of a file.

We have a couple of variables which need declaring.  You could do that within this playbook or its ‘parent’ playbook, or as a higher level variable. I’m going to declare them in a higher level playbook. More on this later.

That’s enough to provision a MySQL server. Now for the management server.

 

CloudStack Management server service

For the management server role we create a main.yml task like this:

- name: Ensure selinux python bindings are installed
  yum: name=libselinux-python state=present

- name: Ensure the Apache CloudStack repo file exists as per template
  template: src=cloudstack.repo.j2 dest=/etc/yum.repos.d/cloudstack.repo

- name: Ensure selinux is in permissive mode
  command: setenforce permissive

- name: Ensure selinux is set permanently
  selinux: policy=targeted state=permissive

- name: Ensure CloudStack packages are installed
  yum: name=cloud-client state=present

- name: Ensure vhd-util is in the correct location
  get_url: url=http://download.cloud.com.s3.amazonaws.com/tools/vhd-util dest=/usr/share/cloudstack-common/scripts/vm/hypervisor/xenserver/vhd-util mode=0755

 

Save this as /etc/ansible/roles/cloudstack-manager/tasks/main.yml (matching the ‘cloudstack-manager’ role we declared earlier).

Now we have some new elements to deal with. The Ansible template module uses Jinja2 based templating.  As we’re doing a simplified example here, the Jinja template for the cloudstack.repo won’t have any variables in it, so it would simply look like this:

 

[cloudstack]
name=cloudstack
baseurl=http://cloudstack.apt-get.eu/rhel/4.2/
enabled=1
gpgcheck=0

This is saved in /etc/ansible/roles/cloudstack-manager/templates/cloudstack.repo.j2
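If you did want to parameterise the repo later, a variable drops straight into the Jinja2 template. As a sketch, assuming a cloudstack_version variable declared alongside the others:

[cloudstack]
name=cloudstack
baseurl=http://cloudstack.apt-get.eu/rhel/{{ cloudstack_version }}/
enabled=1
gpgcheck=0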

That gives us the packages installed; next we need to set up the database. To do this I’ve created a separate task file called setupdb.yml:

- name: cloudstack-setup-databases
  command: /usr/bin/cloudstack-setup-databases cloud:{{ mysql_cloud_password }}@localhost --deploy-as=root:{{ mysql_root_password }}

- name: Setup CloudStack manager
  command: /usr/bin/cloudstack-setup-management

 

Save this as: /etc/ansible/roles/cloudstack-manager/tasks/setupdb.yml

As there isn’t (as yet) a CloudStack module, Ansible doesn’t inherently know whether or not the databases have already been provisioned, so this step is not currently idempotent and will overwrite any previously provisioned databases.
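One way to guard against accidental re-runs is the command module’s creates argument, which skips the task when the named path already exists. A sketch, assuming the setup script creates the ‘cloud’ database directory under /var/lib/mysql:

- name: cloudstack-setup-databases
  command: /usr/bin/cloudstack-setup-databases cloud:{{ mysql_cloud_password }}@localhost --deploy-as=root:{{ mysql_root_password }} creates=/var/lib/mysql/cloud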

There are some more variables here for us to declare later.

 

System VM Templates

 

Finally we would want to seed the system VM templates into the secondary storage.  The playbook for this would look as follows:

- name: Ensure secondary storage mount exists
  file: path={{ tmp_nfs_path }} state=directory

- name: Ensure NFS storage is mounted
  mount: name={{ tmp_nfs_path }} src={{ sec_nfs_ip }}:{{ sec_nfs_path }} fstype=nfs state=mounted opts=nolock

- name: Seed KVM system VM template
  command: /usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt -m {{ tmp_nfs_path }} -u http://download.cloud.com/templates/4.2/systemvmtemplate-2013-06-12-master-kvm.qcow2.bz2 -h kvm -F

- name: Seed XenServer system VM template
  command: /usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt -m {{ tmp_nfs_path }} -u http://download.cloud.com/templates/4.2/systemvmtemplate-2013-07-12-master-xen.vhd.bz2 -h xenserver -F

- name: Seed VMware system VM template
  command: /usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt -m {{ tmp_nfs_path }} -u http://download.cloud.com/templates/4.2/systemvmtemplate-4.2-vh7.ova -h vmware -F

 

Save this as: /etc/ansible/roles/cloudstack-manager/tasks/seedstorage.yml

Again, there isn’t a CloudStack module so Ansible will always run this even if the secondary storage already has the templates in it.
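The same creates trick can approximate idempotency here too. A sketch for one of the tasks, assuming the installer unpacks templates under a template directory on the secondary storage mount (verify the actual path on your storage before relying on it):

- name: Seed XenServer system VM template
  command: /usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt -m {{ tmp_nfs_path }} -u http://download.cloud.com/templates/4.2/systemvmtemplate-2013-07-12-master-xen.vhd.bz2 -h xenserver -F creates={{ tmp_nfs_path }}/template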

 

Bringing it all together

Ansible can use playbooks which run other playbooks; this allows us to group these playbooks together and declare variables across all of the individual playbooks. So, in the Ansible playbook directory, create a file called deploy-cloudstack.yml, which would look like this:

- hosts: acs-manager

  vars:
    mysql_root_password: Cl0ud5tack
    mysql_cloud_password: Cl0ud5tack
    tmp_nfs_path: /mnt/secondary
    sec_nfs_ip: IP_OF_YOUR_SECONDARY_STORAGE
    sec_nfs_path: PATH_TO_YOUR_SECONDARY_STORAGE_MOUNT

  roles:
    - mysql
    - cloudstack-manager

  tasks:
    - include: /etc/ansible/roles/cloudstack-manager/tasks/setupdb.yml
    - include: /etc/ansible/roles/cloudstack-manager/tasks/seedstorage.yml

 

Save this as /etc/ansible/deploy-cloudstack.yml, inserting the IP address and path for your secondary storage and changing the passwords if you wish.

 

To run this, go to the Ansible directory (cd /etc/ansible) and run:

# ansible-playbook deploy-cloudstack.yml -k

The ‘-k’ flag tells Ansible to prompt for the password it should use to connect to the remote host (the root password in this case).
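If you would rather not be prompted at all, push an SSH public key to the management host first and drop the -k flag. A sketch using standard OpenSSH tooling:

# create a key pair on the Ansible server (skip if one exists)
ssh-keygen -t rsa

# copy the public key to the ACS management host
ssh-copy-id root@xxx.xxx.xxx.xxx

# the playbook now runs without any password prompt
ansible-playbook deploy-cloudstack.yml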

Now log in to the CloudStack UI on the new management server.

 

How is this example different from a production deployment?

In a production deployment, the Ansible playbooks would configure multiple management servers connected to master/slave replicating MySQL databases along with any other infrastructure components required and deploy and configure the hypervisor hosts. We would also have a dedicated file describing the hosts in the environment and a dedicated file containing variables which describe the environment.

The advantage of using a configuration management tool such as Ansible is that we can specify components like the MySQL database VIP once and use it multiple times when configuring the MySQL server itself and other components which need to use that information.
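As a sketch of how that looks in practice (the file path and variable name here are illustrative), a value defined once in a group_vars file is available to every play and role applied to that group:

# /etc/ansible/group_vars/all
mysql_vip: 192.168.100.10

Any task or template can then reference {{ mysql_vip }}, so changing the VIP means editing a single line.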

 

Acknowledgements

Thanks to Shanker Balan for introducing me to Ansible and a load of handy hints along the way.

 

Summary

In this blog we have covered the basic principles of Ansible and gone through a simple example which will build a CloudStack management server including a MySQL server instance with the CloudStack databases deployed on it.

About the Author

Paul Angus is a Senior Consultant & Cloud Architect at ShapeBlue, The Cloud Specialists. He has designed numerous CloudStack environments for customers across 4 continents, based on Apache CloudStack, Citrix CloudPlatform and Citrix CloudPortal.

When not building Clouds, Paul likes to create scripts that build clouds… and he can very occasionally be seen trying to hit a golf ball.

 

What Is Apache CloudStack™

Apache CloudStack™ is an open source software platform that pools computing resources to build Public, Private, and Hybrid Infrastructure as a Service (IaaS) Clouds. Apache CloudStack manages the Network, Storage, and Compute nodes that make up a Cloud infrastructure.

The Story So Far

CloudStack started life as VMOps, a company founded in 2008 with product development spearheaded by Sheng Liang, who developed the Java Virtual Machine at Sun.  Whilst early versions were very much focused on the Xen Hypervisor, the team realised the benefits of multi-hypervisor support.  In early 2010, the company achieved a massive marketing win when they acquired the domain name cloud.com and formally launched CloudStack, which was 98% open source.  In July 2011, CloudStack was acquired by Citrix Systems, who released the remaining code as open source under GPLv3.

The big news came in April 2012, when Citrix donated CloudStack to the Apache Software Foundation, where it was accepted into the Apache Incubator.  At the same time Citrix also ceased their involvement in the OpenStack Initiative.  Apache CloudStack has now been promoted to a Top-Level Project of the Apache Software Foundation, a measure of the maturity of the code and its community.

Cloud Types

CloudStack works within multiple enterprise strategies and mandates, as well as supporting multiple cloud strategies from a service provider perspective.  As an initial step beyond traditional server virtualization, many organizations are looking to private cloud implementations as a means to satisfy flexibility while still retaining control over service delivery.  The private cloud may be hosted by the IT organization itself, or sourced from a managed service provider, but the net goals of total control and security without compromising SLAs are achieved.

For some organizations, the managed service model is stepped up one level with all resources sourced from a hosted solution.  SLA guarantees and security concerns often dictate the types of providers an enterprise will look towards.  At the far end of the spectrum are public cloud providers with pay as you go pricing structures and elastic scaling.  Since public clouds often abstract details such as network topology, a hybrid cloud strategy allows IT to retain control over key aspects of their operations such as data, while leveraging the benefits of elastic public cloud capacity.

Open Flexible Platform

Multiple Hypervisor Support

CloudStack works with a variety of hypervisors, and a single cloud deployment can contain multiple hypervisor implementations. The current release of CloudStack supports pre-packaged enterprise solutions like Citrix XenServer and VMware vSphere, as well as OVM and KVM or Xen running on Ubuntu or CentOS.  Support for Hyper-V is currently being developed and should be available in a future release.

Massively Scalable Infrastructure Management

CloudStack can manage tens of thousands of host servers installed in multiple geographically distributed datacentres. The centralized management server scales linearly, eliminating the need for intermediate cluster-level management servers. No single component failure can cause a cloud-wide outage. Periodic maintenance of the management server can be performed without affecting the functioning of virtual machines running in the cloud.

Automatic Configuration Management

CloudStack automatically configures each guest virtual machine’s networking and storage settings.  CloudStack internally manages a pool of virtual appliances to support the cloud itself. These appliances offer services such as firewalling, routing, DHCP, VPN, console access, storage access, and storage replication. The extensive use of virtual appliances simplifies the installation, configuration, and ongoing management of a CloudStack deployment.

Graphical User Interface

CloudStack offers an administrator’s Web interface, used for provisioning and managing the cloud, as well as an end-user’s Web interface, used for running VMs and managing VM templates.  The UI can be customized to reflect the desired service provider or enterprise look and feel.

API and Extensibility

CloudStack provides an API that gives programmatic access to all the management features available in the UI. This API enables the creation of command line tools and new user interfaces to suit particular needs. The CloudStack pluggable allocation architecture allows the creation of new types of allocators for the selection of storage and Hosts.

CloudStack can translate Amazon Web Services (AWS) EC2 & S3 API calls to native CloudStack API calls so that users can continue using existing AWS-compatible tools. CloudMonkey is a Command Line Interface (CLI) for CloudStack written in Python.  CloudMonkey brings the ability to easily create scripts to automate complex or repetitive admin and management tasks from simply adding multiple users, to deploying a complete CloudStack architecture.

More information on CloudMonkey can be found at http://goo.gl/ESp8ha

Access to the API, either directly or by using CloudMonkey, is protected by a combination of API & Secret Keys and a Signature Hash.  Users can re-generate new random API & Secret Keys (as well as their UI Password) at any time, providing maximum security and peace of mind.
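As a quick illustrative session (CloudMonkey is installed from the Python Package Index; the key values are placeholders which you generate in the UI, and the exact configuration parameters vary slightly between CloudMonkey versions):

# install CloudMonkey
pip install cloudmonkey

# start an interactive session, supply your keys, then call the API
cloudmonkey
> set apikey <your-api-key>
> set secretkey <your-secret-key>
> list zones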

High Availability

CloudStack Multi-Node Deployment

CloudStack has a number of features to increase the availability of the system. The Management Server itself may be deployed in a multi-node installation where the servers are load balanced across data centres.  MySQL may be configured to use replication to provide for a failover in the event of database loss. For the hosts, CloudStack supports NIC bonding and the use of separate networks for storage as well as iSCSI Multipath.

CloudStack Deployment Architecture

Deployment Architecture

CloudStack has 6 key Building Blocks:

Regions are very similar to an AWS Region, and are the 1st and largest unit of scale for a CloudStack Cloud.  A Region consists of multiple Availability Zones, which are the 2nd largest unit of scale. Typically there is one Zone per Data Centre, and each Zone contains PODs, Clusters, Hosts and Storage.  A Cloud can contain multiple Regions, and even if one region should go offline, VMs in other Regions are still accessible as each Region has dedicated Management Servers, located in one or more of its Zones.

PODs are the 3rd unit of scale, and are often a single rack which houses Networking, Compute and Storage.  PODs also have logical as well as physical properties, with components such as IP Addressing and VM allocations being influenced by the PODs within a Zone.

Clusters are the 4th unit of scale, and are simply groups of homogeneous Compute hardware combined with Primary Storage.  Each Cluster will run a common Hypervisor, but a Zone can consist of combinations of all of the supported Hypervisors.

Hosts are the 5th unit of scale and provide the actual compute layer on which Virtual Machines will run.

Storage is the final building block and there are two key types within CloudStack, Primary and Secondary.  Primary Storage is where Virtual Machines reside, and can be Local Storage within a Compute Host or Shared File/Block Storage using NFS, iSCSI, Fibre Channel etc.

Secondary Storage is where Virtual Machine Templates, ISO images, and Snapshots reside and is currently always presented over NFS.  Swift can also be used to replicate Secondary Storage between Zones, ensuring users always have access to their Snapshots even if a Zone is offline. There is a lot of development work currently underway with regards to Storage and some great new features coming in the next release of CloudStack thanks to a new Storage Subsystem.

Networking

The ‘Glue’ that brings all of the building blocks together is the Network layer.  CloudStack has two principal models for Networking, referred to as Basic and Advanced. Basic Networking is very similar to the model used by AWS, and can be deployed in 3 slightly different ways, with each adding to the features of the previous:

  • A true ‘Flat’ network, where all VMs share a common Network Range with no form of Isolation.
  • Security Groups, which utilise Layer-3 IP Address Filtering to isolate VMs from one another.
  • Elastic IP and Elastic Load Balancing, where a Citrix NetScaler provides Public IP and Load Balancing functionality, completely orchestrated by CloudStack.

All three of these Basic Network models allow massive scale, as the IP range used by VMs is contained within a POD.  The Zone can be scaled horizontally by simply adding more PODs, consisting of Clusters of Hosts and their associated Top of Rack Switching and Primary Storage.

The Advanced Networking model brings a raft of features which place a massive amount of power right into the hands of the end users.  VLANs are the standard method of isolation, but Software Defined Networking (SDN) offerings from Nicira, BigSwitch and soon Midokura bring the possibility of massive scale by overcoming any VLAN limitations.

CloudStack makes excellent use of System Virtual Machines to provide control and automation of Storage and Networking.  One such System VM is the CloudStack Virtual Router.  The key difference over a Basic Network, is that in the Advanced mode, users can create CloudStack Guest Networks, with each Network having a dedicated Virtual Router.

This innocuous sounding VM provides the following features: DNS & DHCP, Firewall, Client IPSEC VPN, Load Balancing, Source / Static NAT and Port Forwarding – all of them configurable by end users from either the GUI or the CloudStack API.

Virtual Router Configuration Options Screen Shot

Virtual Router Static NAT Screen Shot

When a user creates a new Guest Network and then deploys Guest VMs onto that Network, the VMs are attached to a dedicated L2 Broadcast Domain, isolated by VLANs and fronted by a Virtual Router.  Users have full control of all traffic entering and leaving the network, with a direct connection to the Public Internet.

Firewall and Port Forwarding rules enable the mapping of Live IPs to any number of Internal VMs.  Load Balancing functionality with Round-Robin, Least Connections and Source Based Algorithms along with Source Based, App Cookie or LB Cookie Stickiness Policies is available straight out of the box.

Another powerful feature of the Advanced Network model is the Virtual Private Cloud (VPC).  A VPC enables the user to create a multi-tiered network configuration, placing VMs within their own VLAN.  ACLs enable the users to control the flow of traffic between each Network Tier and also the Internet.  A typical VPC may contain 3 Network Tiers, Web, App and DB, with only the Web Tier having Internet Access.

VPCs also bring additional features such as Site-2-Site VPN, enabling a persistent connection with infrastructure running in alternate locations such as other Data Centres or even alternate Clouds.  A VPC Private Gateway is a feature the Cloud Admins can leverage to provide a 2nd Gateway out of the VPC Virtual Router.  The connection can be used to connect the VMs running within the VPC to other infrastructure via, for example, an MPLS Network rather than over the Public Internet.

CloudStack optimises the use of the underlying network architecture within a DC by enabling the Cloud Admins to split up the various types of Network Traffic and map them to different sets of Bonded NICs within each Compute Host.

There are four types of Physical Network which can be configured, and they can be setup to all use a single NIC, or multiple bonds, depending on how many NICs are available in the Host Server.  The four networks are:

Management: Used by the CloudStack Management Servers and various other components within the system, sometimes referred to as the Orchestration Network.

Guest: Used by all Guest VMs when communicating with other Guest VMs or Gateway Devices such as Virtual Routers, Juniper SRX Firewalls, F5 Load Balancers etc.  In an Advanced configuration, multiple Guest Networks can be created, allowing certain NICs to be dedicated to a particular user or function.

Public:  In an Advanced Network configuration the Public Network connects the Virtual Routers to the Public Internet.  It only exists in a Basic Network when a Citrix NetScaler is used to provide Elastic IP and Elastic LB services.

Storage:  Used by the special Secondary Storage System VM and Host Servers when connecting to Secondary Storage devices.  It enables the optimisation of traffic used for deploying new VMs from Templates, and in particular for handling Snapshot traffic which can get network intensive, without negatively impacting the Guest & Management Traffic.

The traffic associated with Primary Storage, where the actual VMs reside, can also be split out onto dedicated NICs or HBAs etc, again allowing for optimal performance and High Availability.

Network Service Providers

In addition to the Virtual Router and VPC Virtual Router, CloudStack can also leverage the power of real hardware, bringing even more functionality and greater scale.  Currently supported devices are Citrix NetScaler, F5 Big IP and Juniper SRX, with many more on the way. Once a device has been integrated by the Cloud Admins, the users have control of the features via the standard GUI or API.  For example, if a Juniper SRX is deployed, when a user configures a Firewall Rule within the CloudStack UI, CloudStack uses the Juniper API to apply that configuration on the physical SRX.

When a Citrix NetScaler is deployed, in addition to Load Balancing, NAT & Port Forwarding it also enables AutoScaling. AutoScaling is a method of monitoring the performance of your existing Guest VMs, and then automatically deploying new VMs as the load increases.  After the load has dropped off the extra VMs can be destroyed, bringing your usage, and costs back down to a base level.  This level of flexibility and scalability is a key driving force in the adoption of cloud computing.

Management

CloudStack is actually quite easy to set up and administer thanks to its great Graphical User Interface, API and CLI tools such as CloudMonkey.  A Wizard takes you through the configuration and deployment of your first Zone, Networking, POD, Cluster, Host and Storage, meaning you can be up and running within a matter of hours.

Admin UI Screen Shot

A simple Role Based Access Control (RBAC) system presents the different levels of users with the features they are entitled to, and the standard allocations can be fine-tuned as required.  The authentication can also be passed off to LDAP enabling integration with Enterprise systems including Open LDAP and MS Active Directory.

Admins setup new User Accounts which are grouped together into Domains, allowing a hierarchical structure to be built up.  By grouping users into Domains, Admins can make certain sub-sets of the infrastructure available to a particular group of users.

A set of system parameters called Global Settings allows the Admins to control all of the features and set up controls like limits and thresholds, SMTP alerts and a whole host of other settings, again all from an easy to use GUI.

Service Offerings enable Admins to setup the parameters which control the end user environment such as number of vCPUs, RAM, Network Bandwidth and Features, Preferred Hardware based on VM Operating System, Tiered Storage and much more.

Admins have full control over the infrastructure, and can initiate the live migration of any VM, between Hosts in the same Cluster.  Stopped VMs can be migrated across different Clusters by moving their associated Volumes to different storage.  Storage devices and Hosts can be taken offline for Maintenance and upgrades, and admins can steer VMs to a particular set of Hosts using either the API or Tags.

User Experience

A big selling point of CloudStack is the well thought out Graphical User Interface.  The majority of the features available to end users are available via the GUI, with just a few of the more advanced, newer features available only via the API.  Because of this easy to learn GUI, new users can get their first VMs up and running within a matter of minutes of their first login.

User UI Screen Shot

The process for creating a new VM is handled by a very intuitive graphical wizard which steps you through the process in 6 easy steps:

  1. Choose the Availability Zone
  2. Choose a pre-built Template, or mount an ISO for a full custom install
  3. Choose the Compute Offering, which controls the amount of CPU, RAM, Network Bandwidth & Storage Tier
  4. Add an additional Data Volume and set its size
  5. Add to an existing Network or a VPC, or if none are available create a new Network automatically
  6. Allocate a name, which will also be used as the VM’s Hostname, then launch the VM

Once the users have their VMs up and running they can then start to explore the other features available to them. Snapshots provide a simple and effective way for a user to protect their VMs by taking instant Snapshots of any Disk Volume, or setting up an automatic schedule, such as Hourly, Daily, Weekly etc.

Custom private Templates can be created from any Root Volume or its associated Snapshot, enabling quick and easy replication of a particular VM should multiple instances be required. Data volumes can easily be un-mounted from one VM, and mounted to another VM in a matter of seconds.

Volumes, Snapshots and Templates can all be exported from the Cloud, and then used to re-create the user environment within another Cloud, alleviating concerns of getting locked in to a particular provider.

Why Choose CloudStack?

CloudStack has a proven track record in both the Enterprise and Service Provider space, with some of the world’s largest Clouds built on its technology.  I have personally been involved in a large number of implementations on 3 different continents, and whilst any large IT Project will hit a few bumps along the road, all the implementations came in on time. This is because of the mature nature of the product, and a set of well-developed design and deployment methodologies.

Unlike other open source Cloud technologies, CloudStack is truly a single Project, with a common set of objectives and goals, driven by a very active and passionate community.  The list of new features being developed is truly staggering; a few examples are:

  • A new Storage Framework – bringing better control over storage, allowing Primary Storage to stretch across a whole DC, and IOPS to be controlled at VM level
  • XenServer XenMotion – enabling live migration of VM Volumes
  • Dedicated Resources – allows a subset of the infrastructure to be dedicated to a particular user, removing all the anti-cloud arguments referring to shared Compute/Network/Storage
  • Support for Cisco Virtual Network Management Center (VNMC)
  • Multiple IPs per Virtual NIC – ideal for Web Server VMs with multiple SSL Certificates
  • S3-backed Secondary Storage – enables Secondary Storage to stretch across a whole Region
  • Dynamic Scaling of CPU & RAM – enables a user to dynamically increase or decrease the amount of CPU & RAM available to a VM
  • Support for Midokura Software Defined Networking
  • Additional Isolation within a VLAN – using either PVLANs (VMware) or Security Groups (Xen and KVM), VMs on a common VLAN can be isolated, enabling multi-tiered Guest Networks to be built on a single VLAN

Strengths of CloudStack

  • Proven Massive Scalability – real Clouds with > 50,000 Hosts already in production
  • Production deployment up and running in a matter of days, not months
  • Excellent Documentation
  • Fully supported upgrade path from all previous versions
  • Polished web-based Graphical User Interface
  • Console Access for VMs
  • Single coherent project with a common vision to build the best IaaS Platform
  • Support for multiple SDNs
  • No need for large teams of DevOps staff to deploy and manage
  • Backed by the Apache Software Foundation
  • AWS Compatibility

 

About the Author

Geoff Higginbottom is an Apache CloudStack Committer and CTO of ShapeBlue, the strategic cloud consultancy. Geoff spends most of his time designing private & public cloud infrastructures for telco’s, ISP’s and enterprises based on Apache CloudStack and Citrix CloudPlatform.

 

CloudStack Logs are known for not being the easiest things to read, and when troubleshooting a difficult problem anything which makes life a little easier is very welcome.

By offloading the Management Log to a Syslog Server, Filters and Tagging can be used to greatly simplify the process of reading the log files.  In addition, depending on your choice of Syslog Server,  alerting rules can be configured to inform you of any problems which the built-in alerting engine may ignore.

The steps required to set up a Syslog Server are in the CloudStack Knowledge Base, but they are not very clear and appear to be out of date.  By following these instructions, you should be able to get a Syslog Server up and running in a matter of minutes.

Using your favourite editor, edit the following file:

/etc/cloud/management/log4j-cloud.xml

Locate the section starting with
<appender name="SYSLOG" class="org.apache.log4j.net.SyslogAppender">

The default settings will look something like this:

<appender name="SYSLOG" class="org.apache.log4j.net.SyslogAppender">
   <param name="Threshold" value="WARN"/>
   <param name="SyslogHost" value="localhost"/>
   <param name="Facility" value="LOCAL6"/>
   <layout class="org.apache.log4j.PatternLayout">
      <param name="ConversionPattern" value="%-5p [%c{3}] (%t:%x) %m%n"/>
   </layout>
</appender>

You need to update this section so that it looks like this, inserting the IP address of your own Syslog Server:

<appender name="SYSLOG" class="org.apache.log4j.net.SyslogAppender">
   <param name="SyslogHost" value="192.168.0.254"/>
   <param name="Facility" value="LOCAL0"/>
   <param name="FacilityPrinting" value="true"/>
   <param name="Threshold" value="DEBUG"/>
   <layout class="org.apache.log4j.EnhancedPatternLayout">
      <param name="ConversionPattern" value="%d{ISO8601} %-5p [%c{3}] (%t:%x) %m%n"/>
   </layout>
</appender>

Then find the section labelled “Setup the Root Category” and change <level value="INFO"/> to <level value="DEBUG"/>.
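For reference, the resulting section would look something like this (a sketch based on a stock log4j-cloud.xml; the appender-ref entries in your file may differ):

<!-- ============================== -->
<!-- Setup the Root category -->
<!-- ============================== -->
<root>
   <level value="DEBUG"/>
   <appender-ref ref="SYSLOG"/>
</root>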

Restart the Cloud-Management service (“service cloud-management restart”) and then start monitoring your Syslog Server.

If you don’t see any log messages on your syslog server, verify that it is properly configured to receive packets over UDP. You may also need to set up a rule on your syslog server to handle log messages matching the “Facility” parameter set above; refer to the documentation of your syslog server for more information.
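For example, with rsyslog a minimal receiving configuration would look like this (a sketch; the output file path is illustrative):

# /etc/rsyslog.conf - load the UDP input module and listen on port 514
$ModLoad imudp
$UDPServerRun 514

# write everything arriving on facility local0 to its own file
local0.*    /var/log/cloudstack-management.log

Restart rsyslog (service rsyslog restart) for the change to take effect.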

 

About the Author

Geoff Higginbottom is CTO of ShapeBlue, the strategic cloud consultancy. Geoff spends most of his time designing private & public cloud infrastructures for telco’s, ISP’s and enterprises based on CloudStack.