Hear ye, hear ye. OpenStack Yoga is coming to town! And with it you will find, for the first time ever, support for your favorite complete data platform, Lightbits. Yes, yes, I know, it’s hard to believe. But let me tell you, we have been supporting OpenStack since the early days of the project, all the way back to the Queens release.
And although we have supported it, that support was external. Ever since the Queens release, we maintained the Lightbits drivers for Cinder, os-brick, and Nova out of tree, adding support for each new OpenStack and Lightbits release, forever waiting outside in the cold and looking in. Finally, however, the time has arrived for Lightbits to come into the OpenStack fold.
Lightbits OpenStack Support
So now that I’ve gotten the drama off my chest and out of the way, let me tell you what this means for you, and also tell you a little bit about how we implemented Lightbits support in OpenStack.
First, let’s talk about you. You have multiple data centers and they run OpenStack. OpenStack is mature, dependable, and it works, for the most part. But sometimes there’s a fly in your storage ointment. You might use direct-attached SSDs, or Ceph, or maybe some other external storage. But your databases and analytics workloads are not happy. The latency is too high and the throughput is too low. They are struggling to meet your stringent service demands. And the costs are high. The infrastructure is not agile, and you can’t grow compute and storage independently. It’s a pain that simply won’t go away.
And that’s where Lightbits comes in. With Lightbits in OpenStack Yoga, you get fast, agile, efficient, no-compromise access to your data. Compute hosts access data over NVMe/TCP, so it works in any data center, and on any network. Lightbits software manages your storage so that you get Intelligent Flash Management™ and unparalleled scalability, performance, and flexibility — with any server and any SSD. With Lightbits in OpenStack Yoga, the future has arrived.
The Lightbits Cinder Driver
So how did we implement it? It all begins with OpenStack Cinder. At its core, Cinder is a simple beast. Its mission statement is “to implement services and libraries to provide on demand, self-service access to Block Storage resources. Provide Software Defined Block Storage via abstraction and automation on top of various traditional backend block storage devices.” Now Lightbits is anything but traditional, but Cinder sure does provide access to it. This happens via the Lightbits Cinder driver, which translates Cinder API calls into Lightbits API calls.
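To make that concrete, enabling any backend in Cinder follows the same pattern: declare it in cinder.conf and point it at the driver class and the storage system’s management endpoints. The stanza below is only a sketch of that pattern; the driver path and the lightos_* option names are my assumptions, so check the Yoga driver documentation for the authoritative ones.

```ini
# cinder.conf -- illustrative sketch only; the driver path and the
# lightos_* option names are assumed, not authoritative.
[DEFAULT]
enabled_backends = lightbits

[lightbits]
volume_driver = cinder.volume.drivers.lightos.LightOSVolumeDriver
volume_backend_name = lightbits
# Management API endpoints of the Lightbits cluster (assumed names):
lightos_api_address = 10.10.0.1,10.10.0.2,10.10.0.3
lightos_api_port = 443
lightos_default_num_replicas = 3
```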
Conceptually, the Lightbits driver for Cinder is simple. It is called by the core of the Cinder volume service and orchestrates volume operations using the Lightbits API. It weighs in at 1,285 lines of code, not including a varied boatload of tests. It supports the following operations and capabilities (a skeletal sketch of what such a driver looks like follows the list):
- Create Volume
- Delete Volume
- Attach Volume
- Detach Volume
- Extend Volume
- Create Snapshot
- Delete Snapshot
- Create Volume from Snapshot
- Create Volume from Volume (clone)
- Create Image from Volume
- Volume Migration (host assisted)
- Extend Attached Volume
- Thin Provisioning
- Multi-Attach
- Supported vendor driver
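To give you a feel for what “simple” means here, the skeleton below shows the shape of a Cinder volume driver. It is not the actual Lightbits driver: the LightbitsClient class is a hypothetical stand-in for the real Lightbits REST API client, but the overridden methods are genuine hooks that the core Cinder volume service calls.

```python
# Skeleton of a Cinder volume driver, trimmed to a few hooks.
# LightbitsClient is a hypothetical stand-in for the real Lightbits
# REST API client; do_setup/create_volume/delete_volume/extend_volume
# are the genuine Cinder driver interface.
from cinder.volume import driver


class LightbitsClient:
    """Hypothetical placeholder for a Lightbits management API client."""

    def __init__(self, configuration):
        self.configuration = configuration

    def create_volume(self, name, size_gib): ...
    def delete_volume(self, name): ...
    def resize_volume(self, name, size_gib): ...


class SketchVolumeDriver(driver.VolumeDriver):
    """Translates Cinder volume operations into Lightbits API calls."""

    def do_setup(self, context):
        # Called once when the volume service starts: connect to the
        # cluster's management API endpoints.
        self.client = LightbitsClient(self.configuration)

    def create_volume(self, volume):
        # Cinder volume sizes are in GiB; Lightbits provisions thinly.
        self.client.create_volume(name=volume.name, size_gib=volume.size)

    def delete_volume(self, volume):
        self.client.delete_volume(name=volume.name)

    def extend_volume(self, volume, new_size):
        self.client.resize_volume(name=volume.name, size_gib=new_size)
```

Each operation in the list above maps onto a hook like these; the real driver adds snapshots, clones, migration, and error handling on top.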
The Lightbits os-brick Connector
The Cinder driver is all about control plane orchestration: “Create this volume.” “Delete that snapshot.” But what about the data path?
The actual reading and writing to and from your hard-earned volumes — the data path — is handled by another OpenStack component, os-brick. os-brick is “a Python package containing classes that help with volume discovery and removal from a host.” Small words, big operations.
At the core of os-brick are connectors. These are Python classes that know how to connect OpenStack compute hosts to external storage systems such as Lightbits clusters. The os-brick Lightbits connector uses the Lightbits discovery-client, which is a small daemon in charge of discovering, connecting to, and handling changes in Lightbits clusters. When instructed to attach to a Lightbits volume, the connector tells the discovery-client to discover and connect to the right Lightbits cluster – and then the NVMe/TCP machinery in the compute host does the work of actually connecting to the volume.
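Here is a rough sketch of that attach flow. The DiscoveryClient wrapper, the connection-property keys, and the sysfs polling are all illustrative assumptions; only the connect_volume/disconnect_volume entry points and the returned device-info shape follow the os-brick connector convention.

```python
# Illustrative sketch of the attach flow, not the real connector code.
# DiscoveryClient and the connection-property keys are assumptions;
# connect_volume/disconnect_volume and the {"type": "block", ...}
# device-info shape follow the os-brick connector convention.
import glob
import time


class DiscoveryClient:
    """Hypothetical wrapper around the Lightbits discovery-client daemon."""

    def register(self, cluster, volume_uuid):
        # Ask the daemon to discover the cluster and have the host's
        # NVMe/TCP initiator connect to the volume's subsystem.
        ...

    def unregister(self, cluster, volume_uuid):
        ...


class SketchLightbitsConnector:
    def __init__(self):
        self.discovery = DiscoveryClient()

    def connect_volume(self, connection_properties):
        # The Cinder driver placed the cluster endpoints and the volume
        # UUID into the connection properties at attach time.
        cluster = connection_properties["cluster"]
        uuid = connection_properties["uuid"]
        self.discovery.register(cluster, uuid)
        # The kernel's NVMe/TCP machinery surfaces the volume as a
        # local block device; poll sysfs until its namespace appears.
        for _ in range(30):
            for path in glob.glob("/sys/class/nvme/nvme*/nvme*n*/uuid"):
                with open(path) as f:
                    if f.read().strip() == uuid:
                        dev = "/dev/" + path.split("/")[-2]
                        return {"type": "block", "path": dev}
            time.sleep(1)
        raise RuntimeError("volume %s never appeared" % uuid)

    def disconnect_volume(self, connection_properties, device_info):
        # Detach is the reverse: drop the subsystem connection.
        self.discovery.unregister(connection_properties["cluster"],
                                  connection_properties["uuid"])
```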
When told to detach, the same happens in reverse. When a virtual machine migrates to another compute host, the connectors on the source and destination hosts detach its volumes from the source and attach them to the destination. And when the Lightbits cluster gains or loses a storage server, the discovery-client takes care of it transparently and seamlessly, with neither the virtual machine nor the compute host ever noticing.
A Few Nova Bits
The last but definitely not least piece of the puzzle is the Nova libvirt volume driver. Nova is an OpenStack project that provides a way to provision compute instances. Nova is also in charge of connecting those compute instances to their storage, which means that there’s a bit of Lightbits-related code there to connect compute instances to volumes, disconnect from volumes, and extend those volumes. That’s it. Simple, right? There’s awesomeness in simplicity that works.
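If you are curious what that bit of code looks like, the sketch below follows the pattern Nova libvirt volume drivers use: delegate attach, detach, and extend to the matching os-brick connector and hand the resulting block device to libvirt. The class itself and the "LIGHTOS" protocol string are my assumptions; the hook names come from Nova’s libvirt volume driver interface.

```python
# Sketch of the Nova libvirt volume driver pattern: delegate the heavy
# lifting to the matching os-brick connector. The class and "LIGHTOS"
# protocol string are assumptions; the hook names are Nova's libvirt
# volume driver interface.
from os_brick.initiator import connector
from nova.virt.libvirt.volume import volume as libvirt_volume


class SketchLightbitsVolumeDriver(libvirt_volume.LibvirtVolumeDriver):
    def __init__(self, host):
        super().__init__(host)
        # Real drivers pass Nova's configured rootwrap helper here.
        self.connector = connector.InitiatorConnector.factory(
            "LIGHTOS", root_helper="sudo")

    def connect_volume(self, connection_info, instance):
        # os-brick attaches the NVMe/TCP volume and returns the local
        # device path, which libvirt then exposes to the guest.
        device_info = self.connector.connect_volume(
            connection_info["data"])
        connection_info["data"]["device_path"] = device_info["path"]

    def disconnect_volume(self, connection_info, instance):
        self.connector.disconnect_volume(connection_info["data"], None)

    def extend_volume(self, connection_info, instance, requested_size):
        # Propagate an online resize down to the attached device.
        return self.connector.extend_volume(connection_info["data"])
```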
Joining the OpenStack Community
Working with the OpenStack community to include Lightbits support in OpenStack Yoga was a pleasure. We are very excited to have finally come in from the cold and get this code in your hands and into your data centers.
If you have any questions, comments, success stories, or grim tales of despair, don’t hesitate to reach out.
And as always, patches are happily accepted!