Author: Scott Carter
The public cloud has provided huge benefits to organizations. It’s quick to set up and cheap to fail, making innovation faster and easier. Applications can be spun up quickly, allowing testing and deployment to be completed in days instead of months. The flexibility of the cloud is also a huge selling point. Need more compute? You can easily scale up or scale out on demand.
Unfortunately, finding the right cloud storage solution for your database workloads can be a bit more challenging. While public cloud providers offer a number of options for block storage, right-sizing that storage is difficult. For each workload, customers need a good understanding of their requirements, including volume size, growth estimates, IOPS, and latency sensitivity, to name a few.
This uncertainty introduces risk to your operations. If you choose a lower-cost storage option, your end-user experience will suffer if the application hits its performance ceiling. If you choose a high-performance storage option to minimize the impact of unpredictable demand, your bill will suffer. For the most IO-intensive database workloads, cloud-native storage options may not even be able to provide the level of IOPS required, leaving certain workloads stranded on-premises. With changing needs and the ever-evolving IT landscape, workloads need to be regularly evaluated to ensure that they are properly configured.
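To make the right-sizing trade-off concrete, the evaluation can be sketched as a simple calculation. The tier names, IOPS ceilings, and prices below are hypothetical assumptions for illustration, not quotes from any provider:

```python
# Illustrative right-sizing check for a database volume.
# Tier specs and prices are hypothetical assumptions, not provider quotes.
TIERS = {
    "general-purpose": {"max_iops": 16_000, "price_per_gb_month": 0.08},
    "high-performance": {"max_iops": 64_000, "price_per_gb_month": 0.125},
}

def pick_tier(required_iops, size_gb, growth_factor=1.5):
    """Pick the cheapest tier that covers projected peak IOPS.

    growth_factor pads the estimate, since future demand is hard to predict.
    Returns None when no tier can serve the workload (the "stranded" case).
    """
    projected = required_iops * growth_factor
    candidates = [
        (spec["price_per_gb_month"] * size_gb, name)
        for name, spec in TIERS.items()
        if spec["max_iops"] >= projected
    ]
    if not candidates:
        return None  # workload exceeds every cloud-native tier
    cost, name = min(candidates)
    return name, round(cost, 2)

print(pick_tier(8_000, 2_000))    # fits the cheaper tier
print(pick_tier(20_000, 2_000))   # must pay for the high-performance tier
print(pick_tier(100_000, 2_000))  # no tier fits; workload stays on-premises
```

The point of the sketch is the fragility it exposes: a small change in `required_iops` or the growth estimate flips the answer between a cheap tier, an expensive tier, and no viable tier at all, which is exactly why workloads need to be re-evaluated regularly.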
Legacy SAN, while complex, offered one large pool of capacity and throughput that could be shared across multiple applications and workloads. Consolidating workloads minimizes sprawl and the need to manage and forecast growth for each application separately. This is why companies invested in SAN technologies in the first place.
What companies need today is a SAN built from the ground up for the public cloud era.
Architects and IT operations teams need a solution that offers the flexibility, configurability, performance, and enterprise features offered by legacy SAN while delivering on the promise of the public cloud: agility, efficiency, and simplicity.
Imagine if you could have the capabilities of a SAN with virtually unlimited IOPS and sub-millisecond latency for every database without breaking the bank.
This is where Lightbits comes in. It is easy to spin up a Lightbits cluster in the public cloud that supports petabyte-scale data and millions of IOPS with consistent sub-millisecond latency, even at the tail. With Lightbits software-defined storage, you can start with a right-sized storage solution that meets your needs today.
Lightbits operates like a SAN in the public cloud. It is disaggregated from compute, so it requires no changes to your front-end application servers. There is no nickel-and-diming for storage-related features like compression, thin provisioning, snapshots, and clones: the data services are included in the license. A Lightbits cluster can be easily shared across hundreds of applications, and quality of service ensures that applications don't impact each other. Best of all, you only pay for the storage consumed, not the storage provisioned. When your storage needs increase, you can add more nodes to the cluster to scale out.
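The billing difference between the two models can be sketched with hypothetical numbers. The prices, capacities, and the assumption that data reduction lowers the billable footprint are all illustrative, not published Lightbits pricing:

```python
# Illustrative comparison: paying for provisioned vs. consumed capacity.
# All figures below are hypothetical assumptions for the sketch.

def provisioned_cost(provisioned_tb, price_per_tb_month):
    # Traditional model: you pay for everything you allocate up front,
    # whether or not the databases ever fill it.
    return provisioned_tb * price_per_tb_month

def consumed_cost(consumed_tb, price_per_tb_month, reduction_ratio=2.0):
    # Consumption model: pay only for data actually stored. Here we
    # assume inline data reduction shrinks the billable footprint;
    # the 2:1 ratio is an assumption, not a guarantee.
    return (consumed_tb / reduction_ratio) * price_per_tb_month

price = 100           # $ per TB-month, hypothetical
provisioned_tb = 100  # capacity sized for a projected future peak
consumed_tb = 40      # what the databases actually use today

print(provisioned_cost(provisioned_tb, price))  # cost of the peak-sized estimate
print(consumed_cost(consumed_tb, price))        # cost of what is actually stored
```

Under these assumed numbers, billing on consumption rather than provisioning turns a sizing mistake from a standing monthly cost into a non-event, which is the practical meaning of "pay for the storage consumed."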
Are you stuck trying to figure out how to migrate your database workloads to the public cloud? If so, get in touch and we will help you quickly determine whether we can eliminate the need for a lengthy sizing exercise while providing lower latency, higher performance, and a better overall TCO.