
So you have a cluster of Citrix XenServer hosts and you want to upgrade them to a new version, for example from XenServer 6.0.2 to XenServer 6.2, or simply to apply the latest Hotfixes.  As this cluster is managed by CloudStack, it is not as simple as using the Rolling Pool Upgrade feature in XenCenter – in fact this is the LAST thing you want to do, and WILL result in a broken cluster.

This article walks you through the steps required to perform the upgrade, but as always you must test this yourself in your own test environment before attempting on a production system.

We need to change the default behaviour of CloudStack with respect to how it manages XenServer Clusters before continuing.  Edit /etc/cloudstack/management/environment.properties and add the following line:

# vi /etc/cloudstack/management/environment.properties

Add > manage.xenserver.pool.master=false

Now restart the CloudStack Management Service

# service cloudstack-management restart

Repeat for all CloudStack Management servers
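The edit-and-restart above can be sketched as a small idempotent shell snippet. For safety this example works against a temporary copy of the file; on a real management server you would point PROPS at the actual path and then restart the service as shown in the comments.

```shell
# Hedged sketch: add the setting only if it is not already present.
# On a real management server, set:
#   PROPS=/etc/cloudstack/management/environment.properties
# and afterwards run: service cloudstack-management restart
PROPS=$(mktemp)
echo 'cloud-stack-version=4.3.0' > "$PROPS"   # stand-in for existing content
grep -q '^manage.xenserver.pool.master=' "$PROPS" || \
  echo 'manage.xenserver.pool.master=false' >> "$PROPS"
cat "$PROPS"
```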

It is vital that you upgrade the XenServer Pool Master before any of the Slaves.  To do so you need to empty the Pool Master of all CloudStack VMs, and you do this by putting the Host into Maintenance Mode within CloudStack, which triggers a live migration of all VMs to alternate Hosts.  (Do not place the Host into Maintenance Mode using XenCenter, as this will cause a new Master to be elected – and we do not want that.)

Next you need to ‘Unmanage’ the Cluster.  As this prevents users from being able to interact with (stop/start) VMs, you will need to arrange a maintenance window, but only one long enough to update the Pool Master.  All customer VMs will continue to run during the upgrade process unless you are using Local Storage, in which case VMs on the Hosts being upgraded will have to be shut down.  After ‘Unmanaging’ the Cluster, all Hosts will go into a ‘Disconnected’ state; this is expected and is not a cause for concern.

Now you can upgrade your Pool Master, either upgrading to a newer version, or simply applying XenServer Hotfixes as required.  Once the Pool Master has been fully upgraded re-manage the Cluster and then wait for all of the Hosts in the Cluster to come back online within CloudStack. 

Monitor the status of your NFS Storage via XenCenter and wait for all Volumes to reconnect on the upgraded Host.  Once storage has reconnected and all Hosts are back online, take the Pool Master you just upgraded out of CloudStack Maintenance Mode.

Edit /etc/cloudstack/management/environment.properties and remove the following line which you added earlier:

# vi /etc/cloudstack/management/environment.properties

Delete > manage.xenserver.pool.master=false

Now restart the CloudStack Management Service

# service cloudstack-management restart

Repeat for all CloudStack Management servers

You can now upgrade each Slave by simply placing it into Maintenance Mode in CloudStack, applying the upgrade / Hotfixes and, when completed, bringing it out of Maintenance Mode before starting on the next Host.

About the Author

Geoff Higginbottom is CTO of ShapeBlue, the strategic cloud consultancy and an Apache CloudStack Committer. Geoff spends most of his time designing private & public cloud infrastructures for telco’s, ISP’s and enterprises based on CloudStack.

In this post Rohit Yadav, Software Architect at ShapeBlue, talks about setting up an Apache CloudStack (ACS) cloud on a single host with KVM and basic networking. This can be done on a VM or a physical host. Such a deployment can be useful in evaluating CloudStack locally and can be done in less than 30 minutes.

Note: this should work for ACS 4.3.0 and above. This how-to post may get outdated in future, so please also read the latest docs, including those on KVM host installation.

First install Ubuntu 14.04 LTS x86_64 on a bare-metal host or a VM that has at least 2GB RAM (preferably 4GB) and a real or virtual 64-bit CPU with Intel VT-x or AMD-V enabled. I personally use VMware Fusion, which can provide VMs with a 64-bit CPU and Intel VT-x. Such a CPU is needed by KVM for HVM, or full virtualization. Too bad VirtualBox cannot do this yet – or one could say KVM cannot do paravirtualization like Xen can.
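You can verify that the CPU actually exposes the required extensions before going any further; on Linux, a non-zero count of vmx/svm flags in /proc/cpuinfo means VT-x or AMD-V is available:

```shell
# Count CPU flags advertising hardware virtualization
# (vmx = Intel VT-x, svm = AMD-V); 0 means KVM cannot do full virtualization here.
grep -E -c '(vmx|svm)' /proc/cpuinfo || true
```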

Next, we need to do a bunch of things:

  • Setup networking, IPs, create bridge
  • Install cloudstack-management and cloudstack-common
  • Install and setup MySQL server
  • Setup NFS for primary and secondary storages
  • Preseed systemvm templates
  • Prepare KVM host and install cloudstack-agent
  • Configure Firewall
  • Start your cloud!

 

Let’s start by installing some basic packages, assuming you’re root or have sudo powers:

apt-get install openntpd openssh-server sudo vim htop tar build-essential

Make sure root is able to SSH in using a password; fix this in /etc/ssh/sshd_config.
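A minimal sketch of that sshd_config tweak, assuming the stock PermitRootLogin line is present. For safety it is shown against a temporary copy; on the host you would set CFG=/etc/ssh/sshd_config and then restart the ssh service:

```shell
# Flip any existing PermitRootLogin line to 'yes' so root can log in with a password.
# Shown on a temp copy; on the host: CFG=/etc/ssh/sshd_config, then: service ssh restart
CFG=$(mktemp)
echo 'PermitRootLogin without-password' > "$CFG"   # stand-in for the stock line
sed -i 's/^PermitRootLogin.*/PermitRootLogin yes/' "$CFG"
grep '^PermitRootLogin' "$CFG"
```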

Reset root password and remember this password:

passwd root

Networking

Next, we'll set up bridges. CloudStack requires that KVM hosts have two bridges, cloudbr0 and cloudbr1, because these names are hard-coded, and on the KVM hosts we need a way to let VMs communicate with the host, with each other, and with the outside world. Add network rules and configure IPs as applicable.

apt-get install bridge-utils
cat /etc/network/interfaces # an example bridge configuration

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

# Public network
auto cloudbr0
iface cloudbr0 inet static
    address 172.16.154.10
    netmask 255.255.255.0
    gateway 172.16.154.2
    dns-nameservers 172.16.154.2 8.8.8.8
    bridge_ports eth0
    bridge_fd 5
    bridge_stp off
    bridge_maxwait 1

# Private network
auto cloudbr1
iface cloudbr1 inet manual
    bridge_ports none
    bridge_fd 5
    bridge_stp off
    bridge_maxwait 1

Notice that we're not really using cloudbr1, because the intention is to set up a basic zone with basic networking, so all traffic goes through one bridge only.

We're done with setting up networking; just note the cloudbr0 IP. In my case, it was 172.16.154.10. You may notice that we're not configuring eth0 at all; that's because we have a bridge now and we expose it to the outside network using cloudbr0's IP. By not configuring eth0 (static or DHCP), we get Ubuntu to use cloudbr0 as its default interface and cloudbr0's gateway as its default gateway and route. You need to reboot your VM or host now.

Management server and MySQL

Set up the CloudStack repo; you may use the one I host (the link is unreliable – let me know if it stops working for you). You may use any other Debian repo as well, or build from source and host your own repositories.

We need to install the CloudStack management server, MySQL server and setup the management server database:

echo deb http://packages.bhaisaab.org/cloudstack/upstream/debian/4.3 ./ >> /etc/apt/sources.list.d/acs.list
apt-get update -y
apt-get install cloudstack-management cloudstack-common mysql-server
# pick any suitable root password for MySQL server

You don't need to explicitly install cloudstack-common because the management package depends on it; I mention it because many tools and scripts can be found in this package, such as the tools to set up the database, preseed the systemvm template, etc.

You may put the following settings in your /etc/mysql/my.cnf; they mostly configure InnoDB and have MySQL use the ‘ROW’ bin-log format, which can be useful for replication. Since we're only doing a test setup we may skip this. The CloudStack docs suggest only these settings, but on production systems you may need to configure many more options (perhaps 400 of them).

[mysqld]
innodb_rollback_on_timeout=1
innodb_lock_wait_timeout=600
max_connections=350
log-bin=mysql-bin
binlog-format = 'ROW'

Now, let's set up the management server database:

service mysql restart
cloudstack-setup-databases cloud:cloudpassword@localhost --deploy-as=root:passwordOfRoot -i <stick your cloudbr0 IP here>

Storage

We’ll setup NFS and preseed systemvm.

mkdir -p /export/primary /export/secondary
apt-get install nfs-kernel-server quota
echo "/export  *(rw,async,no_root_squash,no_subtree_check)" > /etc/exports
exportfs -a
sed -i -e 's/^RPCMOUNTDOPTS=--manage-gids$/RPCMOUNTDOPTS="-p 892 --manage-gids"/g' /etc/default/nfs-kernel-server
sed -i -e 's/^NEED_STATD=$/NEED_STATD=yes/g' /etc/default/nfs-common
sed -i -e 's/^STATDOPTS=$/STATDOPTS="--port 662 --outgoing-port 2020"/g' /etc/default/nfs-common
sed -i -e 's/^RPCRQUOTADOPTS=$/RPCRQUOTADOPTS="-p 875"/g' /etc/default/quota
service nfs-kernel-server restart

I prefer to download the systemvm first and then preseed it:

wget http://people.apache.org/~bhaisaab/cloudstack/systemvmtemplates/systemvm64template-2014-09-11-4.3-kvm.qcow2.bz2
/usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt \
          -m /export/secondary -f systemvm64template-2014-09-11-4.3-kvm.qcow2.bz2 -h kvm \
          -o localhost -r cloud -d cloudpassword

KVM and agent setup

Time to setup cloudstack-agent, libvirt and KVM:

apt-get install qemu-kvm cloudstack-agent
sed -i -e 's/listen_tls = 1/listen_tls = 0/g' /etc/libvirt/libvirtd.conf
echo 'listen_tcp=1' >> /etc/libvirt/libvirtd.conf
echo 'tcp_port = "16509"' >> /etc/libvirt/libvirtd.conf
echo 'mdns_adv = 0' >> /etc/libvirt/libvirtd.conf
echo 'auth_tcp = "none"' >> /etc/libvirt/libvirtd.conf
sed -i -e 's/\# vnc_listen.*$/vnc_listen = "0.0.0.0"/g' /etc/libvirt/qemu.conf
sed -i -e 's/libvirtd_opts="-d"/libvirtd_opts="-d -l"/' /etc/init/libvirt-bin.conf
service libvirt-bin restart

Firewall

Finally, punch holes in the firewall, substituting your network in the following:

# configure iptables
NETWORK=172.16.154.0/24
iptables -A INPUT -s $NETWORK -m state --state NEW -p udp --dport 111 -j ACCEPT
iptables -A INPUT -s $NETWORK -m state --state NEW -p tcp --dport 111 -j ACCEPT
iptables -A INPUT -s $NETWORK -m state --state NEW -p tcp --dport 2049 -j ACCEPT
iptables -A INPUT -s $NETWORK -m state --state NEW -p tcp --dport 32803 -j ACCEPT
iptables -A INPUT -s $NETWORK -m state --state NEW -p udp --dport 32769 -j ACCEPT
iptables -A INPUT -s $NETWORK -m state --state NEW -p tcp --dport 892 -j ACCEPT
iptables -A INPUT -s $NETWORK -m state --state NEW -p udp --dport 892 -j ACCEPT
iptables -A INPUT -s $NETWORK -m state --state NEW -p tcp --dport 875 -j ACCEPT
iptables -A INPUT -s $NETWORK -m state --state NEW -p udp --dport 875 -j ACCEPT
iptables -A INPUT -s $NETWORK -m state --state NEW -p tcp --dport 662 -j ACCEPT
iptables -A INPUT -s $NETWORK -m state --state NEW -p udp --dport 662 -j ACCEPT

apt-get install iptables-persistent

# Disable apparmor on libvirtd
ln -s /etc/apparmor.d/usr.sbin.libvirtd /etc/apparmor.d/disable/
ln -s /etc/apparmor.d/usr.lib.libvirt.virt-aa-helper /etc/apparmor.d/disable/
apparmor_parser -R /etc/apparmor.d/usr.sbin.libvirtd
apparmor_parser -R /etc/apparmor.d/usr.lib.libvirt.virt-aa-helper

# Configure ufw
ufw allow mysql
ufw allow proto tcp from any to any port 22
ufw allow proto tcp from any to any port 1798
ufw allow proto tcp from any to any port 16509
ufw allow proto tcp from any to any port 5900:6100
ufw allow proto tcp from any to any port 49152:49216

Launch Cloud

All set! Make sure Tomcat is not running, then start the agent and management server:

/etc/init.d/tomcat6 stop
/etc/init.d/cloudstack-agent start
/etc/init.d/cloudstack-management start

If all goes well, open http://cloudbr0-IP:8080/client and you'll see the ACS login page. Use username admin and password password to log in. Now set up a basic zone, changing the IPs as applicable in the following steps:

  • Pick zone name, DNS 172.16.154.2, External DNS 8.8.8.8, basic zone + SG
  • Pick pod name, gateway 172.16.154.2, netmask 255.255.255.0, IP range 172.16.154.200-250
  • Add guest network, gateway 172.16.154.2, netmask 255.255.255.0, IP range 172.16.154.100-199
  • Pick cluster name, hypervisor KVM
  • Add the KVM host, IP 172.16.154.10, user root, password whatever-the-root-password-is
  • Add primary NFS storage, IP 172.16.154.10, path /export/primary
  • Add secondary NFS storage, IP 172.16.154.10, path /export/secondary
  • Hit launch, if everything goes well launch your zone!

Keep an eye on /var/log/cloudstack/management/management-server.log and /var/log/cloudstack/agent/agent.log for possible issues. Read the admin docs for more cloudy admin tasks. Have fun playing with your CloudStack cloud.

CloudStack 4.3 provided further enhancements to the LDAP integration, and in this article we will look at how you configure CloudStack to authenticate against a Microsoft Active Directory Server.

Enable AD Integration

The first step is to tell CloudStack about your Active Directory servers (yes, we can now have more than one).

Go to Global Settings, choose ‘LDAP Configuration’ from the Select View dropdown, then click the ‘+ Configure LDAP’ button at the top right.

LDAP-001

Populate the form with the details of your LDAP Server(s) – I will be adding just the one today

LDAP-002

LDAP Global Settings

Now go back to Global Settings and filter for ‘LDAP’ using the search box in the top right. These are the settings we need to configure in order to use LDAP.

LDAP-003

These are the settings I used in our Lab

ldap.basedn           DC=sbdemo1,DC=local

The following two settings specify a normal AD user account which is used to query the list of users within AD; it does not require Domain Admin rights. Note how you need to use its ‘distinguishedName’ to identify it.

ldap.bind.password           xxxxxxxxx

ldap.bind.principal          CN=cloudstack-ldap,CN=Users,DC=sbdemo1,DC=local

ldap.email.attribute          mail          (default)

ldap.firstname.attribute          givenname          (default)

ldap.group.object          groupOfUniqueNames          (default)

ldap.group.user.uniquemember          uniquemember          (default)

ldap.lastname.attribute          sn          (default)

The following setting is used by the Add Account UI element to filter the list of Users in the selection list so it only shows accounts which belong to the specified Group. In my case the Group is called CloudStack, but you need to use the ‘distinguishedName’ value to identify it.

ldap.search.group.principle          CN=CloudStack,CN=Users,DC=sbdemo1,DC=local

ldap.truststore          (blank)

ldap.truststore.password          (blank)

ldap.user.object          user          (default was inetOrgPerson)

ldap.username.attribute          samaccountname          (default was uid)

LDAP-004

After updating the various settings (adjusting them for your environment), restart the CloudStack Management Service to activate the settings.

Adding LDAP Accounts

To add a new LDAP Account, go to the Accounts tab, then click the ‘Add LDAP Account’ button at the top right.

LDAP-007

CloudStack will then list all LDAP accounts which have not yet been added to CloudStack and are in the group specified in the ‘ldap.search.group.principle’ Global Setting.

Choose the AD user you wish to create the new Account for, then select the appropriate Domain.

LDAP-005

Password Management

Any accounts which were already configured in CloudStack will still use local CloudStack authentication; however, you will not be able to change a user's password using the CloudStack UI once LDAP is enabled (dual authentication is coming in release 4.5).

LDAP-006

You can still change the user password using the ‘updateUser’ API call.
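As an illustration (hedged: the id value is a placeholder for the user's UUID, and 192.168.0.3:8096 is the unauthenticated API port used elsewhere in this post), an updateUser call might look like:

```
http://192.168.0.3:8096/client/api?command=updateUser&id=<user-uuid>&password=<new-password>
```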

Users with LDAP Accounts will no longer need to change their password via CloudStack, as their password will be managed by Windows AD.

Bulk Import

If you want to Bulk Import all of the users within LDAP who have not yet been added to CloudStack, you can do so by using the ‘importLdapUsers’ API command.

An example of the command using the unauthenticated API port would be:

http://192.168.0.3:8096/client/api?command=importLdapUsers&accounttype=0&domainid=b7e70c6f%2D8619%2D5641%2Dcd41%2Bafbd8147b438

This will import all users from AD who are not currently in CloudStack, creating a new Account for each user and adding them to the Domain specified by the domainid parameter. Both the Account Name and User Name will be the same as the AD ‘User Logon Name’.

Summary

LDAP integration has become even easier with CloudStack 4.3, bringing the ability to bulk import multiple users and create unique accounts for each user.  The API is still required for some features, such as bulk import or password resets of CloudStack local accounts, but each release brings further improvements.

About the Author

Geoff Higginbottom is CTO of ShapeBlue, the strategic cloud consultancy. Geoff spends most of his time designing private & public cloud infrastructures for telco’s, ISP’s and enterprises based on CloudStack.

 

In this article, Paul Angus, Cloud Architect at ShapeBlue, looks into a few interesting settings when using CloudStack to orchestrate VMware environments.

Working with CloudStack 4.3 and VMware 5.5 in our lab recently, I came across some very interesting global settings which revived a project I had on the back burner…

vm.instancename.flag

This global setting changes the VM name as seen in vCenter or on the ESXi host. Instead of the VM appearing as i-34-1234-VM (the account ID followed by the sequential VM number), the VM will appear with the name given to it when creating the instance, e.g. SB-TestVC01. In a public cloud this could be a bit of a nightmare, as each name has to be unique, but in a private cloud it makes a lot more sense to see VMs with the same naming convention as the rest of the environment.

vmware.create.full.clone

The biggest thing to note about this global setting is that the default is ‘true’, meaning that when using VMware, guest instances created from templates are full copies of the original template, not simply difference disks (deltas). This is in contrast to XenServer, which isn't currently configurable and always creates linked clones of the template.

If speed of deployment and primary storage usage are your main concerns, then you may want to change this setting to ‘false’, as less data is written to disk.

However, there are potential issues with linked clones:

1. As the original template is a parent of all instances based upon it, you have a single point of failure: if it becomes corrupted, all of the VMs based on that template also become corrupt.

2. There is a performance loss when the hypervisor has to figure out which (parent) file a disk read has to be performed on.

3. Rescuing VM disks from messed up storage becomes extremely tricky because the vDisk is not in one handy piece.

If these are a concern, then leave the global variable as it is.

vmware.nested.virtualization

This setting is very exciting for those of us who build a lot of testing and teaching environments. For the uninitiated, nested virtualisation is the ability to run fully functional hypervisors as virtual instances. It requires certain features on the processor and chipset, and is dependent on the version of ESXi/vCenter you're running, but if you have those features you are able to deploy virtual KVM, XenServer, ESXi and Hyper-V instances.

Using a ‘parent’ CloudStack, you can then deploy these hypervisors and virtualised CloudStack management farms (with a bit of configuration management wizardry) from templates, all within self-contained VLANs on these ESXi ‘super-hosts’. Deployment of environments is now a whole lot quicker and easier…

However we’re still missing an element, and that is being able to create interfaces which allow packets which have been VLAN tagged by a virtual host to pass through to other virtual hosts. This is required if we want guests on different hypervisor hosts to be able to communicate with each other.

So what we want is our parent CloudStack to set the port group on a vSwitch of the ESXi super-hosts to trunk VLANs between virtual hosts. As it happens, ESXi uses VLAN ID 4095 as the ‘code’ for ‘trunk all tagged VLAN packets’. So create a shared network with a VLAN ID of 4095, connect the guest interfaces on your virtual hosts to that network, and it will pass your virtual guest traffic.

About the author

Paul Angus is a Senior Consultant & Cloud Architect at ShapeBlue, the leading independent CloudStack integrator & consultancy. He has designed numerous CloudStack environments for customers across 4 continents, based on Apache CloudStack, Citrix CloudPlatform and Citrix CloudPortal. Paul has spoken at all Apache CloudStack collaboration conferences and is an active contributor to the CloudStack community. When not building clouds, Paul likes to create scripts that build clouds… and he can very occasionally be seen trying to hit a golf ball.

 

 

 

UPDATE: 09-Apr-2014 – The proper upgrade command is “apt-get install openssl libssl1.0.0”. If you’ve just updated openssl, please go back and update libssl as well.

UPDATE: 10-Apr-2014 – Added detailed verification steps / Apache CloudStack 4.0 – 4.1 are not vulnerable, they use older Debian/openssl.

Thanks to all involved for helping to put together and update this information

 

Earlier this week, a security vulnerability was disclosed in OpenSSL, one of the software libraries that Apache CloudStack uses to encrypt data sent over network connections. As the vulnerability has existed in OpenSSL since early 2012, System VMs in Apache CloudStack versions 4.0.0-incubating through 4.3 are running software using vulnerable versions of OpenSSL. This includes CloudStack's Virtual Router VMs, Console Proxy VMs, and Secondary Storage VMs.

The CloudStack community are actively working on creating updated System VM templates for each recent version of Apache CloudStack, and for each of the hypervisor platforms which Apache CloudStack supports. Due to testing and QA processes, this will take several days. In the meantime, a temporary workaround is available for currently running System VMs.

If you are running Apache CloudStack 4.0.0-incubating through the recent 4.3 release, the following steps will help ensure the security of your cloud infrastructure until an updated version of the System VM template is available:

Logon to each Secondary Storage VM, Console Proxy VM and Virtual Router and update openssl

XenServer & KVM

  1. Use the GUI to identify the Link Local IP and Host of the VM
  2. Connect to the Hypervisor Host using SSH
  3. From the Host, connect to the VM using the following command, replacing n.n.n.n with the Link Local IP identified in step 1.
    • ssh -i /root/.ssh/id_rsa.cloud -p 3922 root@n.n.n.n
  4. On the System VM:
    • run apt-get update
    • then run apt-get install openssl libssl1.0.0
    • if a dialog appears asking to restart programs, accept its request
    • when updating Secondary Storage VMs, also run /etc/init.d/apache2 restart
  5. Log out of the System VM and host server
  6. Repeat for all Secondary Storage, Console Proxy and Virtual Router VMs

 

VMware

  1. Use the GUI to identify the Management / Private IP of the VM
  2. SSH onto a CloudStack Management Server
  3. From the Management Server, connect to the VM using the following command, replacing n.n.n.n with the Management/Private IP identified in step 1.
    • ssh -i /var/lib/cloud/management/.ssh/id_rsa -p 3922 root@n.n.n.n
  4. On the System VM:
    • run apt-get update
    • then run apt-get install openssl libssl1.0.0
    • if a dialog appears asking to restart programs, accept its request
    • when updating Secondary Storage VMs, also run /etc/init.d/apache2 restart
  5. Log out of the System VM and host server
  6. Repeat for all Secondary Storage, Console Proxy and Virtual Router VMs

 

Verification

On each System VM, you can test if it has non-vulnerable openssl packages installed by listing installed packages and looking at the installed versions of openssl and libssl. As in the example below, for a system to be non-vulnerable, the packages need to be at or above version 1.0.1e-2+deb7u6:

root@v-14-VM:~# dpkg -l|grep ssl
ii  libssl1.0.0:i386                1.0.1e-2+deb7u6                  i386         SSL shared libraries
ii  openssl                              1.0.1e-2+deb7u6                  i386         Secure Socket Layer (SSL) binary and related cryptographic tools
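If you want to script this check across many System VMs, a rough sketch using sort -V (GNU coreutils' version sort, which approximates Debian version ordering; for exact semantics you could instead use dpkg --compare-versions) might look like this. The dpkg-query line in the comment is how you would capture the real installed version:

```shell
# Compare an installed version string against the first fixed version.
# Real value: installed=$(dpkg-query -W -f='${Version}' openssl)
installed="1.0.1e-2+deb7u6"
fixed="1.0.1e-2+deb7u6"
# If the fixed version sorts first (or equal), the install is at/above the fix.
if [ "$(printf '%s\n%s\n' "$fixed" "$installed" | sort -V | head -n1)" = "$fixed" ]; then
  status="not vulnerable"
else
  status="VULNERABLE - update openssl/libssl"
fi
echo "$status"
```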

We realise that for larger installations where System VMs are being actively created and destroyed based on customer demand, this is a very rough stop-gap. The Apache CloudStack security team is actively working on a more permanent fix and will be releasing that to the community as soon as possible.

Apache CloudStack installations that secure the web-based user interface with SSL may also be vulnerable to Heartbleed, but that is outside the scope of this blog post. We recommend testing your installation with [1] to determine if you need to patch/upgrade the SSL library used by any web servers (or other SSL-based services) you use.

Information originally posted on https://blogs.apache.org/cloudstack/entry/how_to_mitigate_openssl_heartbleed

1: http://filippo.io/Heartbleed/

About the Author

Geoff Higginbottom is CTO of ShapeBlue, the strategic cloud consultancy and an Apache CloudStack Committer. Geoff spends most of his time designing private & public cloud infrastructures for telco’s, ISP’s and enterprises based on CloudStack.

 

In this article, Paul Angus Cloud Architect at ShapeBlue takes a look at using Ansible to Deploy an Apache CloudStack cloud.

What is Ansible?

Ansible is a deployment and configuration management tool similar in intent to Chef and Puppet. It allows (usually) DevOps teams to orchestrate the deployment and configuration of their environments without having to re-write custom scripts to make changes.

Like Chef and Puppet, Ansible is designed to be idempotent: this means that you define the state you want a host to be in, and Ansible decides whether it needs to act in order to achieve that state.

There’s already Chef and Puppet, so what’s the fuss about Ansible?

Let's take it as a given that configuration management makes life much easier (and is quite cool). Ansible only needs an SSH connection to the hosts that you're going to manage to get started. While Ansible requires Python 2.4 or greater on the managed host in order to leverage the vast majority of its functionality, it is able to connect to hosts which don't have Python installed in order to then install Python, so it's not really a problem. This greatly simplifies the deployment procedure for hosts, avoiding the need to pre-install agents onto the clients before the configuration management can take over.

Ansible will allow you to connect as any user to a managed host (with that user’s privileges) or by using public/private keys – allowing fully automated management.

There also doesn’t need to be a central server to run everything, as long as your playbooks and inventories are in-sync you can create as many Ansible servers as you need (generally a bit of Git pushing and pulling will do the trick).

Finally – its structure and language is pretty simple and clean. I’ve found it a bit tricky to get the syntax correct for variables in some circumstances, but otherwise I’ve found it one of the easier tools to get my head around.

So let’s see something

For this example we’re going to create an Ansible server which will then deploy a CloudStack server. Both of these servers will be CentOS 6.4 virtual machines.

Installing Ansible

Installing Ansible is blessedly easy. We generally prefer to use CentOS so to install Ansible you run the following commands on the Ansible server.

# rpm -ivh http://www.mirrorservice.org/sites/dl.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
# yum install -y ansible

And that’s it.

(There is a commercial version which has more features, such as callbacks to request configurations, a RESTful API, and support. The installation of this is different.)

By default Ansible uses /etc/ansible to store your playbooks; I tend to move it, but there's no real problem with using the default location. Create yourself a little directory structure to get started with. The documentation recommends something like this:

hosts                  # inventory file
site.yml               # master playbook
group_vars/
host_vars/
roles/
    common/
        tasks/
        templates/
        files/
        vars/
        handlers/

Playbooks

Ansible uses playbooks to specify the state in which you wish the target host to be in to be able to accomplish its role. Ansible playbooks are written in YAML format.

Modules

To get Ansible to do things you specify the hosts a playbook will act upon and then call modules and supply arguments which determine what Ansible will do to those hosts.

To keep things simple, this example is a cut-down version of a full deployment. It creates a single management server with a local MySQL server and assumes you have your secondary storage already provisioned somewhere. For this example I'm also not going to include securing the MySQL server, configuring NTP, or using Ansible to configure the networking on the hosts, although normally we'd use Ansible to do exactly that.

The pre-requisites to this CloudStack build are:

  • A CentOS 6.4 host to install CloudStack on
  • An IP address already assigned on the ACS management host
  • The ACS management host should have a resolvable FQDN (either through DNS or the host file on the ACS management host)
  • Internet connectivity on the ACS management host

Planning

The first step I use is to list all of the tasks I think I’ll need and group them or split them into logical blocks. So for this deployment of CloudStack I’d start with:

  • Configure selinux
  • (libselinux-python required for Ansible to work with selinux enabled hosts)
  • Install and configure MySQL
  • (Python MySQL-DB required for Ansible MySQL module)
  • Install cloud-client
  • Seed secondary storage

Ansible is built around the idea of hosts having roles, so generally you would group or manage your hosts by their roles. So now to create some roles for these tasks

I’ve created:

  • cloudstack-manager
  • mysql

First up we need to tell Ansible where to find our CloudStack management host. In the root Ansible directory there is a file called ‘hosts’ (/etc/ansible/hosts); add a section like this:

[acs-manager]

xxx.xxx.xxx.xxx

where xxx.xxx.xxx.xxx is the IP address of your ACS management host.

MySQL

So let's start with the MySQL server.  We'll need to create a task within the mysql role directory called main.yml. The ‘task’ in this case is to have MySQL running and configured on the target host. The contents of the file will look like this:

- name: Ensure mysql server is installed
  yum: name=mysql-server state=present

- name: Ensure mysql python is installed
  yum: name=MySQL-python state=present

- name: Ensure selinux python bindings are installed
  yum: name=libselinux-python state=present

- name: Ensure cloudstack specific my.cnf lines are present
  lineinfile: dest=/etc/my.cnf regexp='$item' insertafter="symbolic-links=0" line='$item'
  with_items:
  - skip-name-resolve
  - default-time-zone='+00:00'
  - innodb_rollback_on_timeout=1
  - innodb_lock_wait_timeout=600
  - max_connections=350
  - log-bin=mysql-bin
  - binlog-format = 'ROW'

- name: Ensure MySQL service is started
  service: name=mysqld state=started

- name: Ensure MySQL service is enabled at boot
  service: name=mysqld enabled=yes

- name: Ensure root password is set
  mysql_user: user=root password=$mysql_root_password host=localhost
  ignore_errors: true

- name: Ensure root has sufficient privileges
  mysql_user: login_user=root login_password=$mysql_root_password user=root host=% password=$mysql_root_password priv=*.*:GRANT,ALL state=present

This needs to be saved as /etc/ansible/roles/mysql/tasks/main.yml

As explained earlier, this playbook in fact describes the state of the host rather than setting out commands to be run. For instance, we specify certain lines which must be in the my.cnf file and allow Ansible to decide whether or not it needs to add them.

Most of the modules are self-explanatory once you see them, but to run through them briefly:

The ‘yum’ module is used to specify which packages are required, the ‘service’ module controls the running of services, while the ‘mysql_user’ module controls mysql user configuration. The ‘lineinfile’ module controls the contents in a file.

We have a couple of variables which need declaring.  You could do that within this playbook or its ‘parent’ playbook, or as a higher level variable. I’m going to declare them in a higher level playbook. More on this later.

That’s enough to provision a MySQL server. Now for the management server.

 

CloudStack Management server service

For the management server role we create a main.yml task like this:

- name: Ensure selinux python bindings are installed
  yum: name=libselinux-python state=present

- name: Ensure the Apache CloudStack repo file exists as per template
  template: src=cloudstack.repo.j2 dest=/etc/yum.repos.d/cloudstack.repo

- name: Ensure selinux is in permissive mode
  command: setenforce permissive

- name: Ensure selinux is set permanently
  selinux: policy=targeted state=permissive

- name: Ensure CloudStack packages are installed
  yum: name=cloud-client state=present

- name: Ensure vhd-util is in correct location
  get_url: url=http://download.cloud.com.s3.amazonaws.com/tools/vhd-util dest=/usr/share/cloudstack-common/scripts/vm/hypervisor/xenserver/vhd-util mode=0755

 

Save this as /etc/ansible/roles/cloudstack-manager/tasks/main.yml

Now we have some new elements to deal with. The Ansible template module uses Jinja2 based templating.  As we’re doing a simplified example here, the Jinja template for the cloudstack.repo won’t have any variables in it, so it would simply look like this:

 

[cloudstack]

name=cloudstack

baseurl=http://cloudstack.apt-get.eu/rhel/4.2/

enabled=1

gpgcheck=0

This is saved in /etc/ansible/roles/cloudstack-manager/templates/cloudstack.repo.j2

That gives us the packages installed; next we need to set up the database. To do this I’ve created a separate task called setupdb.yml

- name: cloudstack-setup-databases
  command: /usr/bin/cloudstack-setup-databases cloud:{{ mysql_cloud_password }}@localhost --deploy-as=root:{{ mysql_root_password }}

- name: Setup CloudStack manager
  command: /usr/bin/cloudstack-setup-management

 

Save this as: /etc/ansible/roles/cloudstack-manager/tasks/setupdb.yml

As there isn’t (as yet) a CloudStack module, Ansible doesn’t inherently know whether or not the databases have already been provisioned, therefore this step is not currently idempotent and will overwrite any previously provisioned databases.
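One way to soften that (my own workaround, not from the original post) is the command module’s `creates` argument, which skips the command when a marker path already exists. The marker path below is an assumption about where the deployed databases end up; adjust it for your MySQL layout:

```yaml
- name: cloudstack-setup-databases
  # 'creates' makes Ansible skip the command if the path already exists;
  # the cloud database directory is an assumed marker for "already deployed".
  command: /usr/bin/cloudstack-setup-databases cloud:{{ mysql_cloud_password }}@localhost --deploy-as=root:{{ mysql_root_password }} creates=/var/lib/mysql/cloud
```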

There are some more variables here for us to declare later.

 

System VM Templates:

 

Finally we would want to seed the system VM templates into the secondary storage.  The playbook for this would look as follows:

- name: Ensure secondary storage mount exists
  file: path={{ tmp_nfs_path }} state=directory

- name: Ensure NFS storage is mounted
  mount: name={{ tmp_nfs_path }} src={{ sec_nfs_ip }}:{{ sec_nfs_path }} fstype=nfs state=mounted opts=nolock

- name: Seed the KVM system VM template
  command: /usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt -m {{ tmp_nfs_path }} -u http://download.cloud.com/templates/4.2/systemvmtemplate-2013-06-12-master-kvm.qcow2.bz2 -h kvm -F

- name: Seed the XenServer system VM template
  command: /usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt -m {{ tmp_nfs_path }} -u http://download.cloud.com/templates/4.2/systemvmtemplate-2013-07-12-master-xen.vhd.bz2 -h xenserver -F

- name: Seed the VMware system VM template
  command: /usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt -m {{ tmp_nfs_path }} -u http://download.cloud.com/templates/4.2/systemvmtemplate-4.2-vh7.ova -h vmware -F

 

Save this as: /etc/ansible/roles/cloudstack-manager/tasks/seedstorage.yml

Again, there isn’t a CloudStack module so Ansible will always run this even if the secondary storage already has the templates in it.

 

Bringing it all together

Ansible can use playbooks which run other playbooks; this allows us to group these playbooks together and declare variables across all of the individual playbooks. So in the Ansible playbook directory create a file called deploy-cloudstack.yml, which would look like this:

- hosts: acs-manager
  vars:
    mysql_root_password: Cl0ud5tack
    mysql_cloud_password: Cl0ud5tack
    tmp_nfs_path: /mnt/secondary
    sec_nfs_ip: IP_OF_YOUR_SECONDARY_STORAGE
    sec_nfs_path: PATH_TO_YOUR_SECONDARY_STORAGE_MOUNT

  roles:
    - mysql
    - cloudstack-manager

  tasks:
  - include: /etc/ansible/roles/cloudstack-manager/tasks/setupdb.yml
  - include: /etc/ansible/roles/cloudstack-manager/tasks/seedstorage.yml

 

Save this as: /etc/ansible/deploy-cloudstack.yml, inserting the IP address and path for your secondary storage and changing the passwords if you wish to.

 

To run this go to the Ansible directory (cd /etc/ansible ) and run:

# ansible-playbook deploy-cloudstack.yml -k

‘-k’ tells Ansible to ask you for the root password to connect to the remote host.

Now log in to the CloudStack UI on the new management server.

 

How is this example different from a production deployment?

In a production deployment, the Ansible playbooks would configure multiple management servers connected to master/slave replicating MySQL databases along with any other infrastructure components required and deploy and configure the hypervisor hosts. We would also have a dedicated file describing the hosts in the environment and a dedicated file containing variables which describe the environment.

The advantage of using a configuration management tool such as Ansible is that we can specify components like the MySQL database VIP once and use it multiple times when configuring the MySQL server itself and other components which need to use that information.

 

Acknowledgements

Thanks to Shanker Balan for introducing me to Ansible and a load of handy hints along the way.

 

Summary

In this blog we have covered the basic principles of Ansible and gone through a simple example which will build a CloudStack management server including a MySQL server instance with the CloudStack databases deployed on it.

About the Author

Paul Angus is a Senior Consultant & Cloud Architect at ShapeBlue, The Cloud Specialists. He has designed numerous CloudStack environments for customers across 4 continents, based on Apache CloudStack, Citrix CloudPlatform and Citrix CloudPortal.

When not building Clouds, Paul likes to create scripts that build clouds… and he very occasionally can be seen trying to hit a golf ball.

 

What Is Apache CloudStack™

Apache CloudStack™ is an open source software platform that pools computing resources to build Public, Private, and Hybrid Infrastructure as a Service (IaaS) Clouds. Apache CloudStack manages the Network, Storage, and Compute nodes that make up a Cloud infrastructure.

The Story So Far.

CloudStack started life as VMOps, a company founded in 2008 with product development spearheaded by Sheng Liang, who developed the Java Virtual Machine at Sun.  Whilst early versions were very much focused on the Xen Hypervisor, the team realised the benefits of multi-hypervisor support.  In early 2010, the company achieved a massive marketing win when they acquired the domain name cloud.com and formally launched CloudStack, which was 98% open source.  In July 2011, CloudStack was acquired by Citrix Systems, who released the remaining code as open source under GPLv3.

The big news came in April 2012, when Citrix donated CloudStack to the Apache Software Foundation, where it was accepted into the Apache Incubator.  At the same time Citrix also ceased their involvement in the OpenStack Initiative.  Apache CloudStack has now been promoted to a Top-Level Project of the Apache Software Foundation, a measure of the maturity of the code and its community.

Cloud Types

CloudStack works within multiple enterprise strategies and mandates, as well as supporting multiple cloud strategies from a service provider perspective.  As an initial step beyond traditional server virtualization, many organizations are looking to private cloud implementations as a means to satisfy flexibility while still retaining control over service delivery.  The private cloud may be hosted by the IT organization itself, or sourced from a managed service provider, but the net goals of total control and security without compromising SLAs are achieved.

For some organizations, the managed service model is stepped up one level with all resources sourced from a hosted solution.  SLA guarantees and security concerns often dictate the types of providers an enterprise will look towards.  At the far end of the spectrum are public cloud providers with pay as you go pricing structures and elastic scaling.  Since public clouds often abstract details such as network topology, a hybrid cloud strategy allows IT to retain control over key aspects of their operations such as data, while leveraging the benefits of elastic public cloud capacity.

Open Flexible Platform

Multiple Hypervisor Support

CloudStack works with a variety of hypervisors, and a single cloud deployment can contain multiple hypervisor implementations. The current release of CloudStack supports pre-packaged enterprise solutions like Citrix XenServer and VMware vSphere, as well as OVM and KVM or Xen running on Ubuntu or CentOS.  Support for Hyper-V is currently being developed and should be available in a future release.

Massively Scalable Infrastructure Management

CloudStack can manage tens of thousands of host servers installed in multiple geographically distributed datacentres. The centralized management server scales linearly, eliminating the need for intermediate cluster-level management servers. No single component failure can cause a cloud-wide outage. Periodic maintenance of the management server can be performed without affecting the functioning of virtual machines running in the cloud.

Automatic Configuration Management

CloudStack automatically configures each guest virtual machine’s networking and storage settings.  CloudStack internally manages a pool of virtual appliances to support the cloud itself. These appliances offer services such as firewalling, routing, DHCP, VPN, console access, storage access, and storage replication. The extensive use of virtual appliances simplifies the installation, configuration, and ongoing management of a CloudStack deployment.

Graphical User Interface

CloudStack offers an administrator’s Web interface, used for provisioning and managing the cloud, as well as an end-user’s Web interface, used for running VMs and managing VM templates.  The UI can be customized to reflect the desired service provider or enterprise look and feel.

API and Extensibility

CloudStack provides an API that gives programmatic access to all the management features available in the UI. This API enables the creation of command line tools and new user interfaces to suit particular needs. The CloudStack pluggable allocation architecture allows the creation of new types of allocators for the selection of storage and Hosts.

CloudStack can translate Amazon Web Services (AWS) EC2 & S3 API calls to native CloudStack API calls so that users can continue using existing AWS-compatible tools. CloudMonkey is a Command Line Interface (CLI) for CloudStack written in Python.  CloudMonkey brings the ability to easily create scripts to automate complex or repetitive admin and management tasks from simply adding multiple users, to deploying a complete CloudStack architecture.

More information on CloudMonkey can be found at http://goo.gl/ESp8ha

Access to the API, either directly or by using CloudMonkey is protected by a combination of API & Secret Keys and a Signature Hash.  Users can re-generate new random API & Secret Keys (as well as their UI Password) at any time, providing maximum security and peace of mind.
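As a rough sketch of how that signing works (based on the commonly documented CloudStack signing scheme; the keys below are placeholders, not real credentials), the client sorts the request parameters, lowercases the encoded query string, and computes an HMAC-SHA1 over it with the Secret Key:

```python
import base64
import hashlib
import hmac
import urllib.parse

def sign_request(params, secret_key):
    """Sketch of CloudStack's API request signing: sort the parameters,
    build and lowercase the encoded query string, HMAC-SHA1 it with the
    Secret Key, then base64- and URL-encode the digest."""
    query = "&".join(
        f"{key}={urllib.parse.quote(str(value), safe='*')}"
        for key, value in sorted(params.items()))
    digest = hmac.new(secret_key.encode("utf-8"),
                      query.lower().encode("utf-8"),
                      hashlib.sha1).digest()
    return urllib.parse.quote(base64.b64encode(digest).decode("utf-8"))

# Placeholder keys for illustration only; never embed real credentials.
params = {"command": "listVirtualMachines",
          "apikey": "PLACEHOLDER-API-KEY",
          "response": "json"}
signature = sign_request(params, "PLACEHOLDER-SECRET-KEY")
print(signature)  # a short URL-encoded base64 string, stable for the same inputs
```

The resulting signature is appended to the request as the `signature` parameter, which is how the management server verifies that the caller holds the Secret Key without it ever crossing the wire.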

High Availability

CloudStack Multi-Node Deployment

CloudStack has a number of features to increase the availability of the system. The Management Server itself may be deployed in a multi-node installation where the servers are load balanced across data centres.  MySQL may be configured to use replication to provide for a failover in the event of database loss. For the hosts, CloudStack supports NIC bonding and the use of separate networks for storage as well as iSCSI Multipath.

CloudStack Deployment Architecture

Deployment Architecture

CloudStack has 6 key Building Blocks:

Regions are very similar to an AWS Region, and are the 1st and largest unit of scale for a CloudStack Cloud.  A Region consists of multiple Availability Zones, which are the 2nd largest unit of scale. Typically there is one Zone per Data Centre, and each Zone contains PODs, Clusters, Hosts and Storage.  A Cloud can contain multiple Regions, and even if one region should go offline, VMs in other Regions are still accessible as each Region has dedicated Management Servers, located in one or more of its Zones.

PODs are the 3rd unit of scale, and are often a single rack housing Networking, Compute and Storage.  PODs also have logical as well as physical properties, with components such as IP Addressing and VM allocations being influenced by the PODs within a Zone.

Clusters are the 4th unit of scale, and are simply groups of homogeneous Compute hardware combined with Primary Storage.  Each Cluster will run a common Hypervisor, but a Zone can consist of combinations of all of the supported Hypervisors.

Hosts are the 5th unit of scale and provide the actual compute layer on which Virtual Machines will run.

Storage is the final building block and there are two key types within CloudStack, Primary and Secondary.  Primary Storage is where Virtual Machines reside, and can be Local Storage within a Compute Host or Shared File/Block Storage using NFS, iSCSI, Fibre Channel etc.

Secondary Storage is where Virtual Machine Templates, ISO images, and Snapshots reside and is currently always presented over NFS.  Swift can also be used to replicate Secondary Storage between Zones, ensuring users always have access to their Snapshots even if a Zone is offline. There is a lot of development work currently underway with regards to Storage and some great new features coming in the next release of CloudStack thanks to a new Storage Subsystem.

Networking

The ‘Glue’ that brings all of the building blocks together is the Network layer.  CloudStack has two principal models for Networking, referred to as Basic and Advanced. Basic Networking is very similar to the model used by AWS, and can be deployed in 3 slightly different ways, with each adding to the features of the previous:

1. A true ‘Flat’ network where all VMs share a common Network Range with no form of Isolation.

2. Using Security Groups which utilise Layer-3 IP Address Filtering to isolate VMs from one another.

3. Elastic IP and Elastic Load Balancing – a Citrix NetScaler provides Public IP and Load Balancing functionality, and is completely orchestrated by CloudStack.

All three of these Basic Network models allow massive scale as the IP range used by VMs is contained within a POD.  The Zone can be scaled horizontally by simply adding more PODs, consisting of Clusters of Hosts and their associated Top of Rack Switching and Primary Storage. The Advanced Networking model brings a raft of features which place a massive amount of power right into the hands of the end users.  VLANs are the standard method of isolation but Software Defined Networking (SDN) offerings from Nicira, BigSwitch and soon Midokura bring the possibility of massive scale by overcoming any VLAN limitations.

CloudStack makes excellent use of System Virtual Machines to provide control and automation of Storage and Networking.  One such System VM is the CloudStack Virtual Router.  The key difference over a Basic Network, is that in the Advanced mode, users can create CloudStack Guest Networks, with each Network having a dedicated Virtual Router.

This innocuous sounding VM provides the following features:  DNS & DHCP, Firewall, Client IPSEC VPN, Load Balancing, Source / Static NAT, Port Forwarding, and all of them are configurable by end users from either the GUI or the CloudStack API.

Virtual Router Configuration Options Screen Shot

Virtual Router Static NAT Screen Shot

When a user creates a new Guest Network and then deploys Guest VMs onto that Network, the VMs are attached to a dedicated L2 Broadcast Domain, isolated by VLANs and fronted by a Virtual Router.  Users have full control of all traffic entering and leaving the network, with a direct connection to the Public Internet.

Firewall and Port Forwarding rules enable the mapping of Live IPs to any number of Internal VMs.  Load Balancing functionality with Round-Robin, Least Connections and Source Based Algorithms along with Source Based, App Cookie or LB Cookie Stickiness Policies is available straight out of the box.

Another powerful feature of the Advanced Network model is the Virtual Private Cloud (VPC).  A VPC enables the user to create a multi-tiered network configuration, placing VMs within their own VLAN.  ACLs enable the users to control the flow of traffic between each Network Tier and also the Internet.  A typical VPC may contain 3 Network Tiers, Web, App and DB, with only the Web Tier having Internet Access.

VPCs also bring additional features such as Site-2-Site VPN, enabling a persistent connection with infrastructure running in alternate locations such as other Data Centres or even alternate Clouds.  A VPC Private Gateway is a feature the Cloud Admins can leverage to provide a 2nd Gateway out of the VPC Virtual Router.  The connection can be used to connect the VMs running within the VPC to other infrastructure via, for example, an MPLS Network rather than over the Public Internet.

CloudStack optimises the use of the underlying network architecture within a DC by enabling the Cloud Admins to split up the various types of Network Traffic and map them to different sets of Bonded NICs within each Compute Host.

There are four types of Physical Network which can be configured, and they can be setup to all use a single NIC, or multiple bonds, depending on how many NICs are available in the Host Server.  The four networks are:

Management: Used by the CloudStack Management Servers and various other components within the system, sometimes referred to as the Orchestration Network.

Guest: Used by all Guest VMs when communicating with other Guest VMs or Gateway Devices such as Virtual Routers, Juniper SRX Firewalls, F5 Load Balancers etc.  In an Advanced configuration, multiple Guest Networks can be created, allowing certain NICs to be dedicated to a particular user or function.

Public:  In an Advanced Network configuration the Public Network connects the Virtual Routers to the Public Internet.  It only exists in a Basic Network when a Citrix NetScaler is used to provide Elastic IP and Elastic LB services.

Storage:  Used by the special Secondary Storage System VM and Host Servers when connecting to Secondary Storage devices.  It enables the optimisation of traffic used for deploying new VMs from Templates, and in particular for handling Snapshot traffic which can get network intensive, without negatively impacting the Guest & Management Traffic.

The traffic associated with Primary Storage, where the actual VMs reside, can also be split out onto dedicated NICs or HBAs etc, again allowing for optimal performance and High Availability.

Network Service Providers

In addition to the Virtual Router and VPC Virtual Router, CloudStack can also leverage the power of real hardware, bringing even more functionality and greater scale.  Currently supported devices are Citrix NetScaler, F5 Big IP, and Juniper SRX but with many more on the way. Once a device has been integrated by the Cloud Admins, the users have control of the features via the standard GUI or API.  For example, if a Juniper SRX is deployed, when a user configures a Firewall Rule within CloudStack UI, CloudStack uses the Juniper API to apply that configuration on the physical SRX.

When a Citrix NetScaler is deployed, in addition to Load Balancing, NAT & Port Forwarding, it also enables AutoScaling. AutoScaling is a method of monitoring the performance of your existing Guest VMs and then automatically deploying new VMs as the load increases.  After the load has dropped off, the extra VMs can be destroyed, bringing your usage and costs back down to a base level.  This level of flexibility and scalability is a key driving force in the adoption of cloud computing.

Management

CloudStack is actually quite easy to set up and administer thanks to its great Graphical User Interface, API and CLI tools such as CloudMonkey.  A Wizard takes you through the configuration and deployment of your first Zone, Networking, POD, Cluster, Host and Storage, meaning you can be up and running within a matter of hours.

Admin UI Screen Shot

A simple Role Based Access Control (RBAC) system presents the different levels of users with the features they are entitled to, and the standard allocations can be fine-tuned as required.  The authentication can also be passed off to LDAP enabling integration with Enterprise systems including Open LDAP and MS Active Directory.

Admins setup new User Accounts which are grouped together into Domains, allowing a hierarchical structure to be built up.  By grouping users into Domains, Admins can make certain sub-sets of the infrastructure available to a particular group of users.

A set of system parameters called Global Settings allows the Admins to control all of the features and set up controls like limits and thresholds, SMTP alerts and a whole host of other settings, again all from an easy to use GUI.

Service Offerings enable Admins to setup the parameters which control the end user environment such as number of vCPUs, RAM, Network Bandwidth and Features, Preferred Hardware based on VM Operating System, Tiered Storage and much more.

Admins have full control over the infrastructure, and can initiate the live migration of any VM, between Hosts in the same Cluster.  Stopped VMs can be migrated across different Clusters by moving their associated Volumes to different storage.  Storage devices and Hosts can be taken offline for Maintenance and upgrades, and admins can steer VMs to a particular set of Hosts using either the API or Tags.

User Experience

A big selling point of CloudStack is the well thought out Graphical User Interface.  The majority of the features available to end users are accessible via the GUI, with just a few of the newer, more advanced features only available via the API.  Because of this easy to learn GUI, new users can get their first VMs up and running within minutes of their first login.

User UI Screen Shot

The process for creating a new VM is handled by a very intuitive graphical wizard which steps you through the process in 6 easy steps:

1. Choose the Availability Zone

2. Choose a pre-built Template or mount an ISO for a full custom install

3. Choose the Compute Offering which controls the amount of CPU, RAM, Network Bandwidth & Storage Tier

4. Add an additional Data Volume and set its size

5. Add to an existing Network or a VPC, or if none are available create a new Network automatically

6. Allocate a name, which will also be used as the VM’s Hostname, then launch the VM

Once the users have their VMs up and running they can then start to explore the other features available to them. Snapshots provide a simple and effective way for a user to protect their VMs by taking instant Snapshots of any Disk Volume, or setting up an automatic schedule, such as Hourly, Daily, Weekly etc.

Custom private Templates can be created from any Root Volume or its associated Snapshot, enabling quick and easy replication of a particular VM should multiple instances be required. Data volumes can easily be un-mounted from one VM, and mounted to another VM in a matter of seconds.

Volumes, Snapshots and Templates can all be exported from the Cloud, and then used to re-create the user environment within another Cloud, alleviating concerns of getting locked in to a particular provider.

Why Choose CloudStack?

CloudStack has a proven track record in both the Enterprise and Service Provider space, with some of the world’s largest Clouds built on its technology.  I have personally been involved in a number of implementations on 3 different continents, and whilst any large IT Project will hit a few bumps along the road, all the implementations came in on time. This is because of the mature nature of the product, and a set of well-developed design and deployment methodologies.

Unlike other open source Cloud technologies, CloudStack is truly a single Project, with a common set of objectives and goals, being driven by a very active and passionate community.  The list of new features being developed is truly staggering, a few examples are:

  • A new Storage Framework – bringing better control over storage, allowing Primary Storage to stretch across a whole DC and IOPs to be controlled at VM level
  • XenServer XenMotion – enabling live migration of VM Volumes
  • Dedicated Resources – allows a sub-set of the infrastructure to be dedicated to a particular user, removing the anti-cloud arguments referring to shared Compute/Network/Storage
  • Support for Cisco Virtual Network Management Center (VNMC)
  • Multiple IPs per Virtual NIC – ideal for Web Server VMs with multiple SSL Certificates
  • S3-backed Secondary Storage – enables Secondary Storage to stretch across a whole Region
  • Dynamic Scaling of CPU & RAM – enables a user to dynamically increase or decrease the amount of CPU & RAM available to a VM
  • Support for Midokura Software Defined Networking
  • Additional Isolation within a VLAN – using either PVLANs (VMware) or Security Groups (Xen and KVM), VMs on a common VLAN can be isolated, enabling multi-tiered Guest Networks to be built on a single VLAN

Strengths of CloudStack

  • Proven Massive Scalability – Real Clouds with > 50,000 Hosts already in production
  • Production deployment up and running in a matter of days, not months
  • Excellent Documentation
  • Fully supported upgrade path from all previous versions
  • Polished web based Graphical User Interface
  • Console Access for VMs
  • Single coherent project with common vision to build the best IaaS Platform
  • Support for multiple SDNs
  • No need for large teams of DevOps staff to deploy and manage
  • Backed by Apache Software Foundation
  • AWS Compatibility

 

About the Author

Geoff Higginbottom is an Apache CloudStack Committer and CTO of ShapeBlue, the strategic cloud consultancy. Geoff spends most of his time designing private & public cloud infrastructures for telcos, ISPs and enterprises based on Apache CloudStack and Citrix CloudPlatform.

 

CloudStack Logs are known for not being the easiest things to read, and when troubleshooting a difficult problem anything which makes life a little easier is very welcome.

By offloading the Management Log to a Syslog Server, Filters and Tagging can be used to greatly simplify the process of reading the log files.  In addition, depending on your choice of Syslog Server, alerting rules can be configured to inform you of any problems which the built-in alerting engine may ignore.

The steps required to set up a Syslog Server are in the CloudStack Knowledge Base, but they are not very clear and appear to be out of date.  By following these instructions, you should be able to get a Syslog Server up and running in a matter of minutes.

Using your favourite editor, edit the following file:

/etc/cloud/management/log4j-cloud.xml

Locate the section starting with
<appender name="SYSLOG" class="org.apache.log4j.net.SyslogAppender">

The default settings will look something like this:

<appender name="SYSLOG" class="org.apache.log4j.net.SyslogAppender">
   <param name="Threshold" value="WARN"/>
   <param name="SyslogHost" value="localhost"/>
   <param name="Facility" value="LOCAL6"/>
   <layout class="org.apache.log4j.PatternLayout">
      <param name="ConversionPattern" value="%-5p [%c{3}] (%t:%x) %m%n"/>
   </layout>
</appender>

You need to update this section so that it looks like this, inserting the IP Address of your Syslog Server:

<appender name="SYSLOG" class="org.apache.log4j.net.SyslogAppender">
   <param name="SyslogHost" value="192.168.0.254"/>
   <param name="Facility" value="LOCAL0"/>
   <param name="FacilityPrinting" value="true"/>
   <param name="Threshold" value="DEBUG"/>
   <layout class="org.apache.log4j.EnhancedPatternLayout">
      <param name="ConversionPattern" value="%d{ISO8601} %-5p [%c{3}] (%t:%x) %m%n"/>
   </layout>
</appender>

Then find the section labelled “Setup the Root Category” and change <level value="INFO"/> to <level value="DEBUG"/>
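If you prefer to script that edit, a quick sed one-liner can flip the level. This sketch works on a scratch copy so it is safe to try; on a management server, point CFG at the log4j-cloud.xml path given above and keep the .bak backup:

```shell
# Sketch: flip the root category from INFO to DEBUG with sed.
# Demonstrated on a scratch file; on a management server set
# CFG=/etc/cloud/management/log4j-cloud.xml instead.
CFG=$(mktemp)
printf '<root>\n   <level value="INFO"/>\n</root>\n' > "$CFG"
sed -i.bak 's|<level value="INFO"/>|<level value="DEBUG"/>|' "$CFG"
grep -c 'value="DEBUG"' "$CFG"   # prints 1 once the swap has happened
```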

Restart the Cloud-Management Service “service cloud-management restart” and then start monitoring your Syslog Server

If you don’t see any log messages on your syslog server, verify that you have properly configured it to receive packets over UDP. You may also need to set up a rule on your syslog server for log messages matching the “Facility” parameter above. Refer to the documentation of your syslog server for more information.
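To rule out UDP delivery problems independently of CloudStack, a short Python check (my own illustration, not from the original post) can stand up a throwaway UDP listener and push one record through Python's SysLogHandler, which emits datagrams much like log4j's SyslogAppender:

```python
import logging
import logging.handlers
import socketserver
import threading
import time

received = []

class CollectingHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # For a UDPServer, self.request is a (data, socket) pair.
        received.append(self.request[0].decode("utf-8", "replace"))

# Bind port 0 to grab any free port on localhost for the demo listener.
server = socketserver.UDPServer(("127.0.0.1", 0), CollectingHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# On a real deployment, point this at your syslog server's IP and port 514.
handler = logging.handlers.SysLogHandler(
    address=("127.0.0.1", server.server_address[1]),
    facility=logging.handlers.SysLogHandler.LOG_LOCAL0)  # matches LOCAL0 above
log = logging.getLogger("acs-syslog-check")
log.addHandler(handler)
log.warning("cloudstack syslog test")

for _ in range(50):          # wait briefly for the datagram to arrive
    if received:
        break
    time.sleep(0.1)
server.shutdown()
print(bool(received) and "cloudstack syslog test" in received[0])
```

If this round-trip works against your real syslog server's address but CloudStack messages still don't appear, the problem is on the appender side rather than the network.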

 

About the Author

Geoff Higginbottom is CTO of ShapeBlue, the strategic cloud consultancy. Geoff spends most of his time designing private & public cloud infrastructures for telcos, ISPs and enterprises based on CloudStack.