Last year we had a project which required us to build out a KVM environment using shared storage. Most often that would mean NFS, and very occasionally Ceph. This time, however, the client already had a Fibre Channel over Ethernet (FCoE) SAN which had to be used, and the hosts were HP blades using shared converged adapters in the chassis, just to add a bit more fun.
A small crowbar and a large hammer later, the LUNs from the SAN were being presented to the hosts. So far so good. But…
Clustered File Systems
If you need a volume shared between two or more hosts, you can provision the disk to all the machines and everything might appear to work. However, each host maintains its own inode table and is therefore unaware of changes the other hosts are making to the file system; if writes ever happen to the same areas of the disk at the same time, you will end up with data corruption. The key is that you need a way to track locks across multiple nodes. This is done by a Distributed Lock Manager (DLM), and for that you need a Clustered File System.
There are dozens of clustered file systems out there, proprietary and open source.
For this project we needed a file system which:

- Is supported on CentOS 6.7
- Is easy to configure, rather than a complex group of distributed parallel filesystems
- Supports concurrent file access while delivering good performance
- Has no management-node overhead, leaving more drive space for the cluster
So we opted for OCFS2 (Oracle Cluster File System 2).
Once you have the ‘knack’, installation isn’t that arduous, and it goes like this…
These steps should be repeated on each node.
1. Installing the OCFS file system binaries
In order to use OCFS2, we need to install the kernel modules and OCFS2-tools.
First we need to download and install the OCFS2 kernel modules for CentOS 6. Oracle now bundles the OCFS2 kernel modules in its Unbreakable Kernel, but they were also shipped with CloudStack 3.x, so we used those.
Next we update the running kernel with the newly installed modules.
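Assuming the module RPM has been copied onto the host (the package name below is a placeholder for the CloudStack 3.x-shipped RPM, not a literal filename), the install and reload might look like this:

```shell
# Install the OCFS2 kernel module package (placeholder name)
rpm -ivh ocfs2-kmod-<kernel-version>.el6.x86_64.rpm

# Rebuild the module dependency map and load the OCFS2 module
depmod -a
modprobe ocfs2
```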
Add the Oracle yum repo for el6 (CentOS 6.7) for the OCFS2-tools
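At the time, the repo file and its signing key could be pulled straight from Oracle's public yum server; verify the URLs against Oracle's current documentation before use:

```shell
# Fetch the Oracle Linux 6 public yum repo definition
cd /etc/yum.repos.d
wget http://public-yum.oracle.com/public-yum-ol6.repo

# Fetch the GPG key the packages are signed with, ready to be imported
wget -O /etc/pki/rpm-gpg/RPM-GPG-KEY-oracle-ol6 \
    http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
```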
And add the PKI keys for the Oracle el6 YUM repo
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-oracle-ol6
Now we can install the OCFS2 tools to be used to administrate the OCFS2 Cluster.
yum install -y ocfs2-tools
Finally we add the OCFS2 modules into the init script to load OCFS2 at boot.
sed -i "/online \"\$1\"/a\/sbin\/modprobe \-f ocfs2\nmount \-a" /etc/init.d/o2cb
2. Configure the OCFS2 Cluster.
OCFS2 cluster nodes are configured through a file (/etc/ocfs2/cluster.conf). This file has all the settings for the OCFS2 cluster. An example configuration file might look like this:
node:
        ip_port = 7777
        ip_address = 192.168.100.1
        number = 0
        name = host1.domain.com
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.100.2
        number = 1
        name = host2.domain.com
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.100.3
        number = 2
        name = host3.domain.com
        cluster = ocfs2

cluster:
        node_count = 3
        name = ocfs2
To configure the OCFS2 cluster stack we run the interactive configuration of the o2cb init script: /etc/init.d/o2cb configure. The defaults are sensible, so we accept them at each prompt:
Load O2CB driver on boot (y/n) [y]: y
Cluster stack backing O2CB [o2cb]: ENTER
Cluster to start on boot (Enter "none" to clear) [ocfs2]: ENTER
Specify heartbeat dead threshold (>=7) : ENTER
Specify network idle timeout in ms (>=5000) : ENTER
Specify network keepalive delay in ms (>=1000) : ENTER
Specify network reconnect delay in ms (>=2000) : ENTER
Update the iptables rules on all of the nodes to allow the OCFS2 cluster traffic on port 7777:
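For example (adjust the rule position to suit your existing ruleset):

```shell
# Allow OCFS2 cluster heartbeat and communication on TCP port 7777
iptables -I INPUT -p tcp -m tcp --dport 7777 -j ACCEPT

# Persist the rule across reboots (CentOS 6)
service iptables save
```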
3. Format the volume with OCFS2.

Next we format the shared LUN using mkfs.ocfs2. The options work like this:

-L is the label of the OCFS2 volume
-T is the type of data the volume will hold, which mkfs.ocfs2 uses to pick suitable block and cluster sizes
--fs-feature-level sets the feature level, which can be used to keep the volume compatible with older OCFS2 versions
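Putting those options together, the formatting command might look like the sketch below; run it from one node only, and treat the label, type and device as examples to adapt (check mkfs.ocfs2(8) for your version):

```shell
# Format the shared LUN as OCFS2 -- run from ONE node only.
# -T vmstore tunes block/cluster sizes for large VM image files;
# --fs-feature-level=max-compat keeps the volume usable by older OCFS2 versions.
mkfs.ocfs2 -L ocfs2 -T vmstore --fs-feature-level=max-compat /dev/sdd
```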
4. Update the Linux FSTAB with the OCFS2 drive settings.
Next we add the following line to /etc/fstab to mount the volume at every boot.
/dev/sdd /san/primary ocfs2 _netdev,nointr 0 0
5. Mount the OCFS2 cluster.
Once the fstab has been updated, we'll need to create the mount point and mount the volume.
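On each node, something like:

```shell
# Create the mount point, then mount the volume via its fstab entry
mkdir -p /san/primary
mount /san/primary
```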
This will give us a mount point on each node in this cluster of /san/primary. This mount point is backed by the same LUN in the SAN, but most importantly the filesystem is aware that there are multiple hosts connected to it and will lock files accordingly.
Each cluster of hosts would have a specific LUN (or LUNs) to which it would connect. It makes life a lot simpler if you are able to mask the LUNs on the SAN so that only the hosts which should connect to a specific LUN can see it, as this helps to avoid any mix-ups.
Adding this storage into CloudStack
In order for the KVM hosts to utilise this storage in a CloudStack context, we must add the shared LUNs as primary storage in CloudStack. This is done by setting the storage type to ‘presetup – SharedMountPoint’ when adding the primary storage pools for these clusters. The mount point path should be specified as it will be seen locally by the KVM hosts; in this case, /san/primary.
In this article we looked at the requirement for a Clustered File System when connecting KVM hosts to a SAN, and how to configure OCFS2 on CentOS 6.7.
About The Authors
Glenn Wagner is a Senior Consultant / Cloud Architect at ShapeBlue, The Cloud Specialists. Glenn spends most of his time designing and implementing IaaS solutions based on Apache CloudStack.
Paul Angus is VP Technology & Cloud Architect at ShapeBlue. He has designed and implemented numerous CloudStack environments for customers across 4 continents, based on Apache CloudStack.
Some say that when not building clouds, Paul likes to create Ansible playbooks that build clouds. And that he’s actually read A Brief History of Time.