Ceph and CloudStack – Part 3

Introduction

In the previous two parts of this article series, we covered the complete Ceph installation process and implemented Ceph as an additional Primary Storage in CloudStack. In this final part, I will show you some examples of working with RBD images, and will cover some Ceph specifics, both in general and related to CloudStack.

RBD image manipulations

In case you need to do some low-level client support, you can even try to mount an RBD image as a local disk on any KVM (or Ceph) node. For this purpose, we use the conveniently named “rbd” tool, which is used to operate on RBD images in general (i.e. to create new images, snapshots and clones, delete images, etc.).

From any KVM node, let’s attempt to use rbd kernel client to mount an image:

[root@kvm1 ~]# rbd map cloudstack/ad9a6725-4a65-4c8b-b60a-843ed88618be
rbd: sysfs write failed
RBD image feature set mismatch. Try disabling features unsupported by the kernel with "rbd feature disable".
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (6) No such device or address

As you can see, the above map command fails. This happens because we are running the default 3.x kernel on the latest CentOS 7.6, and some of the newer Ceph image features are not supported by the kernel client. Actually, even upgrading the kernel to 5.0 will not bring the Ceph kernel client to a usable state for our Mimic (or even Luminous) cluster – so one could wonder why the kernel client exists at all…
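
As the error message itself suggests, one possible workaround (which I am not using here) is to disable the unsupported features on the image so that the kernel client can map it – a sketch along these lines, though think twice before changing features on a volume that CloudStack is actively using:

rbd feature disable cloudstack/ad9a6725-4a65-4c8b-b60a-843ed88618be object-map fast-diff deep-flatten exclusive-lock
rbd map cloudstack/ad9a6725-4a65-4c8b-b60a-843ed88618be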

So what we should do instead is use the rbd-nbd tool to map an image. Rbd-nbd is a client for RADOS block device (RBD) images, similar to the rbd kernel module, but unlike the kernel module (which communicates with the Ceph cluster directly), rbd-nbd uses NBD (a generic block driver in the kernel) to convert read/write requests into proper commands that are sent over the network using librbd (the user-space client).

So, as stated, we are using librbd, which is always on par with the cluster capabilities/features. But here the fun begins – the NBD kernel driver is not available by default on CentOS 7 (Red Hat decided not to include it with their kernel), so you have a couple of options: you can either rebuild the specific kernel version from the official kernel sources and extract the NBD kernel module, or you can upgrade to a kernel provided by the well-known ELRepo repository, which I find simpler and easier to manage in the long run. For those of you not familiar with ELRepo, it is a community repository providing many different kinds of additional packages for CentOS/RHEL, including fresh kernel versions – ELRepo kernel packages are built from the official sources, while the kernel configuration is based on the default RHEL configuration with added functionality enabled as appropriate.

A simple way to move to the ELRepo kernel would be as follows:

rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
yum install https://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
yum --enablerepo=elrepo-kernel install kernel-lt

If the above URLs become invalid, please find the new one at https://elrepo.org/tiki/tiki-index.php.

Now that we’ve got the new kernel installed, let’s check the order of kernels offered for boot:

[root@kvm1 ~]# awk -F\' /^menuentry/{print\$2} /etc/grub2.cfg
CentOS Linux (4.4.178-1.el7.elrepo.x86_64) 7 (Core)
CentOS Linux (3.10.0-957.10.1.el7.x86_64) 7 (Core)
CentOS Linux (3.10.0-693.11.1.el7.x86_64) 7 (Core)
CentOS Linux (3.10.0-327.el7.x86_64) 7 (Core)
CentOS Linux (0-rescue-76b17df4966743ce9a20fe9a7098e2b6) 7 (Core)

Above we see our new kernel (version 4.4) at position 0, so let’s set it as the default and reboot:

grub2-set-default 0
reboot
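
Once the node is back up, an optional sanity check is to confirm that we have booted the new kernel and that the NBD module is available and loads fine:

uname -r
modprobe nbd
lsmod | grep nbd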

Finally, with the NBD driver in place (as part of new kernel), let’s install rbd-nbd and mount our image:

[root@kvm1 ~]#  yum install rbd-nbd -y
[root@kvm1 ~]#  rbd-nbd map cloudstack/ad9a6725-4a65-4c8b-b60a-843ed88618be
/dev/nbd0

The above map command completed successfully (make sure that the image/volume is not mounted elsewhere, to avoid file system corruption) and we can now work with /dev/nbd0 like any other locally attached drive, i.e.:

[root@kvm1 Ceph]# fdisk -l /dev/nbd0

Disk /dev/nbd0: 5368 MB, 5368709120 bytes, 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 4194304 bytes / 4194304 bytes
Disk label type: dos
Disk identifier: 0x264c895d

Device Boot      Start         End      Blocks   Id  System
/dev/nbd0p1            8192    10485759     5238784   83  Linux
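
If the partition holds a file system the host can read, nothing stops us from mounting it and inspecting the guest's files – a minimal sketch below (the mount point name is arbitrary, and mounting read-only is a sensible precaution while poking around a VM volume):

mkdir -p /mnt/rbdtest
mount -o ro /dev/nbd0p1 /mnt/rbdtest
ls /mnt/rbdtest
umount /mnt/rbdtest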

You can also list any other mapped images (if any):

[root@kvm1 Ceph]# rbd-nbd list-mapped
id    pool       image                                snap device
11371 cloudstack ad9a6725-4a65-4c8b-b60a-843ed88618be -    /dev/nbd0

When done with this, unmap the RBD image with:

[root@kvm1 Ceph]# rbd-nbd unmap /dev/nbd0

Alternatively, just to make the above exercise more complete, we can also attach the RBD image to our host with qemu-nbd (as you can guess, this also requires the NBD kernel module, i.e. the newer kernel):

qemu-nbd --connect=/dev/nbd0 rbd:cloudstack/ad9a6725-4a65-4c8b-b60a-843ed88618be
qemu-nbd --disconnect /dev/nbd0

Again, the above tool talks to librbd, which relies on ceph.conf and the admin keyring being present in the /etc/ceph/ folder.
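
If the configuration or keyring lives elsewhere, or you use a non-admin Ceph user, the rbd: URI accepts extra options – a hypothetical example, assuming a Ceph user named “cloudstack” whose keyring is referenced from the given configuration file:

qemu-nbd --connect=/dev/nbd0 rbd:cloudstack/ad9a6725-4a65-4c8b-b60a-843ed88618be:id=cloudstack:conf=/etc/ceph/ceph.conf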

RBD image manipulations, for real this time

Away from the NBD magic and back to the “rbd” tool – let’s briefly go through some usage examples for manipulating RBD images.

At this point, I would expect you to be able to create a Compute Offering of your own, targeting Ceph as the Primary Storage pool (similarly to how we created a Disk Offering with the “RBD” tag), and to create a VM from a template. Assuming you have done so, let’s examine this VM’s ROOT image on Ceph.
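
For reference, such a Compute Offering can also be created through the API (e.g. via CloudMonkey) instead of the UI – a rough sketch, assuming the createServiceOffering parameter names below and the “RBD” storage tag we configured earlier; the offering name and sizing values are just placeholders:

create serviceoffering name=Medium-RBD displaytext="2 CPU / 2 GB RAM on Ceph" cpunumber=2 cpuspeed=1000 memory=2048 storagetype=shared tags=RBD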

Let’s list all volumes in our Ceph cluster:

[root@ceph1 ~]# rbd -p cloudstack ls
d9a1586d-a30b-4c52-99cc-c5ee6433fe18
fb3ee723-5e4e-48b3-ad7d-936162656cb4

In my example above (an empty cluster), I have only 2 images present, so let’s examine them:

[root@ceph1 ~]# rbd info cloudstack/d9a1586d-a30b-4c52-99cc-c5ee6433fe18
rbd image 'd9a1586d-a30b-4c52-99cc-c5ee6433fe18':
        size 50 MiB in 13 objects
        order 22 (4 MiB objects)
        id: 1b1d26b8b4567
        block_name_prefix: rbd_data.1b1d26b8b4567
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        op_features:
        flags:
        create_timestamp: Mon Apr  8 21:40:21 2019

[root@ceph1 ~]# rbd info cloudstack/fb3ee723-5e4e-48b3-ad7d-936162656cb4
rbd image 'fb3ee723-5e4e-48b3-ad7d-936162656cb4':
        size 50 MiB in 13 objects
        order 22 (4 MiB objects)
        id: 1b250327b23c6
        block_name_prefix: rbd_data.1b250327b23c6
        format: 2
        features: layering
        op_features:
        flags:
        create_timestamp: Mon Apr  8 21:50:10 2019
        parent: cloudstack/d9a1586d-a30b-4c52-99cc-c5ee6433fe18@cloudstack-base-snap

What we see above is the first image, 50 MiB in size (here I’m using a very small template from our community friends at http://www.openvm.eu/). For the second image, we see that its “parent” is set to the first image. What is happening here is that CloudStack copies the template over from Secondary Storage (creating image d9a1586d-a30b-4c52-99cc-c5ee6433fe18), then creates a snapshot of this image (“cloudstack-base-snap”, as shown above) and protects it (required by Ceph), and finally creates an image clone (image fb3ee723-5e4e-48b3-ad7d-936162656cb4) whose parent is the previously created/protected snapshot. This is effectively a linked-clone setup, in Ceph’s world.
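
You can also verify this relationship from the template’s side – the “rbd children” subcommand lists all clones created from a given (protected) snapshot:

rbd children cloudstack/d9a1586d-a30b-4c52-99cc-c5ee6433fe18@cloudstack-base-snap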

Let’s quickly emulate the above behavior manually – we will just create an empty image instead of copying a real template from the Secondary Storage pool:

rbd create -p cloudstack mytemplate --size 100GB
rbd snap create cloudstack/mytemplate@cloudstack-base-snap
rbd snap protect cloudstack/mytemplate@cloudstack-base-snap
rbd clone cloudstack/mytemplate@cloudstack-base-snap cloudstack/myVMvolume

Finally, let’s check our “myVMvolume” image:

[root@ceph1 ~]# rbd info cloudstack/myVMvolume
rbd image 'myVMvolume':
        size 100 GiB in 25600 objects
        order 22 (4 MiB objects)
        id: fcba6b8b4567
        block_name_prefix: rbd_data.fcba6b8b4567
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        op_features:
        flags:
        create_timestamp: Mon Apr  8 22:08:00 2019
        parent: cloudstack/mytemplate@cloudstack-base-snap
        overlap: 100 GiB

The reason we protected the snapshot of the main template above is that it makes it impossible to delete the template image and thus cause major damage (effectively destroying all VMs cloned from this template). So, in order to revert the above exercise, we first need to remove the clone (the VM image), then unprotect and delete the snapshot, and finally delete the “template” image:

[root@ceph1 ~]# rbd rm cloudstack/myVMvolume
Removing image: 100% complete...done.
[root@ceph1 ~]# rbd snap unprotect cloudstack/mytemplate@cloudstack-base-snap
[root@ceph1 ~]# rbd snap rm cloudstack/mytemplate@cloudstack-base-snap
Removing snap: 100% complete...done.
[root@ceph1 ~]# rbd rm cloudstack/mytemplate
Removing image: 100% complete...done.

As you can see from the above, the “rbd” tool is the go-to tool for manipulating RBD images – for many more examples, please see http://docs.ceph.com/docs/master/rbd/rados-rbd-cmds/
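
Just as a taster, here are a few more subcommands that tend to be useful day to day (the image names below are the ones from the previous examples, which we have already deleted, so treat these as purely illustrative):

rbd du -p cloudstack                                   # per-image provisioned vs. actually used space
rbd snap ls cloudstack/mytemplate                      # list snapshots of an image
rbd resize cloudstack/myVMvolume --size 200G           # grow an image (shrinking additionally needs --allow-shrink)
rbd export cloudstack/myVMvolume /tmp/myVMvolume.raw   # export an image to a local raw file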

In the next sections, I will try to cover a bit of what to expect and the dos and don’ts of running a Ceph cluster – again, this is not meant to be a comprehensive guide, since the CloudStack storage feature set related to Ceph, as well as Ceph itself, is constantly changing and improving.

CloudStack with Ceph

CloudStack has supported Ceph for many years now, and though in the early days many of CloudStack’s storage features were not on par with NFS, with the wider adoption of Ceph as a Primary Storage solution for CloudStack most of the missing features have since been implemented. That being said, there are still some minor specifics with Ceph:

  • In contrast to NFS, which provides a file system, Ceph provides raw block devices for VMs, and thus it’s not possible to take a full VM snapshot the way it can be done with NFS – i.e. there is no file system to which to write the contents of the RAM or other required metadata.
  • Continuing from the above, since there is no file system, there is (currently) no way to write a heartbeat file the way it’s done on NFS (though some work is currently being planned around this).
  • As for simple volume snapshots, it’s currently not possible to restore a volume from a snapshot through CloudStack (which became possible in 4.11 with NFS or SolidFire Managed Storage), though this should be a rather simple thing to implement – a manual, Ceph-level rollback is sketched after this list.
  • When using Ceph with KVM and CloudStack, two external libraries are used – librbd (a user-space client, used by QEMU to speak to the Ceph cluster) and rados-java (a Java wrapper for librados). Historically, there have been some issues/bugs with rados-java (though they were resolved a long time ago), so you should keep an eye on these two and make sure they are up to date.
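
Regarding the volume restore mentioned above: at the Ceph level a rollback can be done manually, assuming the RBD-level snapshot still exists in the pool, but CloudStack knows nothing about it – so treat it strictly as a last-resort procedure, done only while the VM is stopped and the volume is not in use. A hypothetical example (the volume UUID and snapshot name are placeholders):

rbd snap ls cloudstack/<volume-uuid>
rbd snap rollback cloudstack/<volume-uuid>@<snapshot-name>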

Learning curve

Ceph can be considered a rather complex storage system to comprehend, and it definitely has a steep learning curve. Even when using Ceph on its own (i.e. not with CloudStack), make sure you know the storage system well before relying on it in production, and make sure you are able to troubleshoot problematic situations when they arise. Anyone prepared to put in a bit of effort and planning can deploy a nice Ceph cluster, but it takes skill and a deeper understanding of how things work under the hood when it comes to troubleshooting unusual situations, or understanding how (for example) replacing failed disks (or nodes) can influence the performance of the clients (in our case, CloudStack VMs). Based on my previous experience managing a production Ceph cluster, I would definitely and strongly suggest taking the above recommendations very seriously. On the other hand, experimenting with such an advanced storage solution can be really interesting and rewarding.

Performance considerations

Ceph is a great distributed storage solution that performs very well under sequential IO workloads, can scale indefinitely and has a very interesting architecture. However, as with any distributed storage solution, when you write data to the cluster it takes time to write that data to the first node, have that write replicated to the other 2 nodes (with a replica size of 3) and send the ACK back to the client confirming the write was successful. In case you are using NVMe devices, like some users in the community, you can expect very low latencies in the ballpark of 0.5 ms, but in case you opt for an HDD-based solution (with journals on SSDs), as many users do when starting their Ceph journey, you can expect 10-30 ms of latency, depending on many factors (cluster size, network latency, SSD/journal latency, and so on). In other words, don’t expect miracles with commodity hardware – if something “works on commodity hardware”, that doesn’t mean it also performs well. Actually, until 2-3 years ago, Ceph was (unofficially) considered unsuitable for more serious random IO workloads, which is backed up by the reference guides jointly published by major hardware vendors and Red Hat, where they clearly stated that Ceph is not the “best choice” when it comes to random IO. In contrast to sequential IO benchmarks (where Ceph actually shines), these reference guides were completely missing any benchmark data for random IO workloads/patterns. That being said, with the introduction of BlueStore as the new storage backend in recent Ceph releases, and with some further architectural changes, Ceph has become much more suitable for pure SSD (and NVMe) clusters and performance has improved drastically.
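
If you want to get a feel for these numbers on your own cluster, fio can talk to librbd directly (when built with RBD support – not every distribution’s fio package enables it). A minimal sketch, run against a scratch image so that no real VM volume gets overwritten:

rbd create -p cloudstack fio-scratch --size 10G
fio --name=rbd-latency --ioengine=rbd --clientname=admin --pool=cloudstack --rbdname=fio-scratch --rw=randwrite --bs=4k --iodepth=1 --numjobs=1 --runtime=60 --time_based
rbd rm cloudstack/fio-scratch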

I hope this Ceph article series has been useful and interesting, and has helped you get up and running. All in all, Ceph is an interesting development in the storage space and can provide a true cloud storage solution if implemented correctly.

About the author

Andrija Panic is a Cloud Architect at ShapeBlue, the Cloud Specialists, and is a committer of Apache CloudStack. Andrija spends most of his time designing and implementing IaaS solutions based on Apache CloudStack.
