This blog post is an introduction to software-defined storage, NVMe, and private/edge/public clouds. If you are already familiar with all of the above, feel free to skip it and check out https://www.lightbitslabs.com/blog/introducing-lightbits-on-aws/ to get your hands dirty with Lightbits awesomeness on the public cloud. If, however, you want to learn more about software-defined storage, NVMe, private/edge/public clouds, and how Lightbits fits in, read on.
What is High-Performance NVMe Storage?
What is High-Performance Storage?
Why Is High-Performance Storage Important?
High-Performance Software-Defined Storage
What is Software-Defined Storage?
Why Is Software-Defined Storage Important?
High-Performance NVMe Storage
High-Performance NVMe for Private and Edge Clouds
High-Performance Storage for VMware
High-Performance Storage for Kubernetes
Lightbits High-Performance, NVMe-native Storage
Your data may now be stored in dozens of locations. Making sure your applications have immediate access to data, regardless of the type of cloud the data is stored on, is now essential for every organization and can be achieved with cloud-native, NVMe-optimized storage.
The old saying “less is more” may be true sometimes, but not when it comes to data storage. As the amount of data we produce every day continues growing, there will always be a need for more capacity, more flexibility, more secure data storage, and faster access to data. Cloud-native applications need their data as quickly as possible. That’s where high-performance storage comes into play.
Storage performance is the measure of how quickly an application can store or retrieve its data from a storage device or system. Technically, a storage device or system’s performance refers to the speed at which an application can read data from it and write data to it, measured in Input/Output Operations per Second (IOPS).
You can also measure storage performance in terms of data transfer rates, which are concerned less with the number of distinct I/O operations and more with the total amount of data transferred over a period of time. A network-attached high-performance storage system can transfer petabytes of data at hundreds of gigabytes per second. That means complex or unstructured data, like video files, can move across the network at lightning speed.
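To make the relationship between IOPS and data transfer rate concrete, here is a small back-of-the-envelope calculation in Python. The device numbers are purely illustrative assumptions, not measurements of any particular drive or system.

```python
# Illustrative only: relate IOPS, block size, throughput, and latency
# for a hypothetical device. All numbers are assumptions, not benchmarks.

def throughput_mb_s(iops: float, block_size_kb: float) -> float:
    """Throughput is simply IOPS multiplied by the size of each I/O."""
    return iops * block_size_kb / 1024  # KB/s -> MB/s

def avg_latency_ms(iops: float, queue_depth: int) -> float:
    """Little's law: average latency = outstanding I/Os / completion rate."""
    return queue_depth / iops * 1000

# A hypothetical NVMe SSD doing small 4 KB random reads:
print(throughput_mb_s(iops=500_000, block_size_kb=4))    # ~1953 MB/s
print(avg_latency_ms(iops=500_000, queue_depth=32))      # ~0.064 ms (64 µs)

# The same device streaming large sequential 128 KB I/Os:
print(throughput_mb_s(iops=25_000, block_size_kb=128))   # ~3125 MB/s
```

The same hardware can look very different depending on which metric you care about: small random I/O stresses IOPS and latency, while large sequential transfers are governed by bandwidth.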
High-performance storage is important because the performance of the larger systems of which storage is a part is often dictated by the performance of the underlying storage system. You can have a blazingly fast CPU and network, but if the storage is not performing, the entire system will be sluggish. Mission-critical, I/O-intensive applications always need high-performance storage.
High-performance storage is ideal for primary storage workloads such as big data analytics and highly transactional workloads. High-performance storage may cost more than conventional storage, so it’s not ideal for backup use cases. In addition, high-performance storage would not be the most economical solution for storing cold data—data sets that need to be stored long-term for compliance or other regulatory reasons or archived on a permanent basis—so it’s not ideal for archive use cases.
Software-Defined Storage (SDS), which abstracts storage software from the underlying hardware and delivers all expected storage capabilities while running on any hardware, is available in high-performance variants. I/O-intensive workloads like those described above are often better served by SDS with high-performance capabilities.
Traditional data storage solutions were delivered using proprietary hardware and software, with the software highly customized and only able to run on the exact hardware platform it was delivered on. Software-defined storage provides the same functionality and capabilities but runs on any hardware platform. This removes the dependency on proprietary hardware and its various limitations. SDS can use standard servers and commodity off-the-shelf SSDs to provide full-featured storage solutions.
SDS is important because data today is no longer confined to on-premises data centers. Workloads can be scattered across the enterprise, in a variety of media, in traditional infrastructure, virtual machines, or in the cloud. Handling and managing today’s highly dispersed and dynamic data requires a new level of agility and flexibility. SDS makes this happen.
Furthermore, by untethering storage from hardware, SDS lets your organization use more of its enterprise servers and existing hardware for storage, increasing storage capacity. Hardware utilization improves, so you won’t need to buy additional hardware as soon when you need more storage.
The benefits of SDS include:
- Hardware independence: full-featured storage runs on standard servers and commodity off-the-shelf SSDs instead of proprietary appliances.
- Better hardware utilization and capacity: existing enterprise servers can be put to work as storage, delaying the need to buy new hardware as capacity grows.
- Agility and flexibility: data spread across traditional infrastructure, virtual machines, and clouds can be handled under a single storage layer.
Nonvolatile memory express (NVMe®) is a storage access and transport protocol that can deliver the fastest response times and highest throughput for next-generation workloads. NVMe is superior to older protocols like SAS, which were originally designed for hard disk drives (HDDs). Yes, SAS protocols do work with solid-state drives (SSDs), but not efficiently or economically. In contrast, NVMe is designed to make the most of flash and next-generation SSDs as well as today’s multi-core processors.
Systems using NVMe drives enjoy high-bandwidth, low-latency access to storage by reaching flash devices over a PCI Express (PCIe) bus. Unlike older protocols, which were serial in nature with a single command queue, NVMe provides thousands of parallel command queues. It delivers higher IOPS and lower latency by spreading I/O requests across multiple cores, providing quick access to critical data. NVMe devices also consume less power, which reduces the total cost of ownership, and they support standard security protocols and facilitate scalability for next-generation demands.
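As a rough illustration of how NVMe devices appear to a Linux host, the minimal sketch below walks the standard sysfs block-device tree and prints a few queue attributes. The sysfs paths are standard on Linux, but which devices show up (if any) depends entirely on the machine it runs on.

```python
# Minimal sketch: list block devices on a Linux host, keep only NVMe ones,
# and read a few standard sysfs queue attributes. Output depends entirely
# on the machine it runs on.
from pathlib import Path

def read_attr(dev: Path, attr: str) -> str:
    path = dev / "queue" / attr
    return path.read_text().strip() if path.exists() else "n/a"

for dev in sorted(Path("/sys/block").iterdir()):
    name = dev.name
    if not name.startswith("nvme"):
        continue  # skip SATA/SAS/virtual devices
    print(
        f"{name}: "
        f"logical_block_size={read_attr(dev, 'logical_block_size')} "
        f"nr_requests={read_attr(dev, 'nr_requests')} "   # per-queue request depth
        f"rotational={read_attr(dev, 'rotational')}"      # 0 for flash devices
    )
```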
Any discussion of the private cloud must begin with the understanding that cloud computing is a loosely defined term. It can refer both to software architecture and to a business model. A cloud computing architecture abstracts software from physical infrastructure using sophisticated management tools and virtualization. With a cloud architecture, an admin can deploy a virtual machine (VM) or virtual storage without any need to know where specifically it is hosted, or on what kind of hardware. It is running “in a cloud,” so to speak.
With this in mind, a hyperscaler, such as Amazon Web Services (AWS), enables users to deploy software and storage onto its platform. AWS takes care of setting up the server and storage hardware, along with the network and so forth. You don’t need to know anything about its infrastructure except that it’s there and you can spin your VMs up or down as needed. It’s a multi-tenant environment where different tenants belong to different organizations. For example, both Coca-Cola and Pepsi may run their workloads on the same AWS cloud.
A private cloud, in contrast, is a single organizational environment. While it may have multiple tenants (for example, engineering and accounting can be thought of as different tenants), they all belong to the same organization. A private cloud is typically run on private, on-premises infrastructure or in colocation facilities. It offers the flexibility of the cloud architecture, but with the security and control that are only possible when you own the entire system and don’t share it with anyone.
Companies may choose a private cloud because their workloads deal with sensitive data including medical records, intellectual property, financial data, or other confidential documents. A private cloud (sometimes called a corporate cloud) combines important cloud features (like scalability and easy delivery of services) with features of on-premises infrastructure (including access control, resource customization, and security).
What is an Edge Cloud?
Today’s corporate data centers and hyperscale cloud facilities are sometimes referred to as the “core” of the Internet and corporate networks. That’s where the bulk of the data and computing power resides. There is nothing wrong with having most of the data hosted at the core. The difficulty is that the end users or devices requiring this data are often far away, with the resulting latency causing poor user experiences.
Edge computing seeks to solve this problem by placing computing and storage capacity closer to end users, often in small-scale “micro” edge data centers that may be located next to 5G towers.
An Edge Cloud applies the cloud software architecture to edge computing. System admins can now deploy cloud-based VMs to edge data centers, rather than to public cloud platforms like AWS or private clouds running in core corporate data centers.
Whether it’s an edge cloud or a private cloud, both models share the same goal: enabling people and organizations to work smarter and faster. Much faster. In today’s competitive, budget-tightening environment, milliseconds count, because seconds add up to hours and hours add up to big costs. Bringing applications and data closer together is important, but the network and storage systems must also perform at the highest bandwidth, with the lowest latency and the tightest security. High-performance NVMe does all this and more.
VM stands for virtual machine, a software-defined computer that resides within a physical machine. Special software known as a hypervisor splits up the computing capacity of the physical machine and enables it to host one or more VMs on its hardware. A VM can run a complete software stack, with an operating system, application software, database, and more. A VM borrows specific amounts of CPU, memory, and storage from its physical computer or server. But the virtual machine is walled off from the host system. As a result, the VM’s software can’t impede the physical computer’s operations. Each VM does not “know” that other VMs are running on the same underlying hardware. You can have a Windows Server VM and a Linux Server VM running on the same physical server.
VMware environments do well with high-performance storage because VMs may add performance overhead to a system, and storage should not add to this problem. The virtual machine should not be seen as a lesser device, but as a system performing essential tasks that require high-performance storage to execute with the fastest possible response times.
Kubernetes (K8s) is an open-source container orchestration system. Its primary function is to automate the deployment, scaling, and management of container-based applications. Containers free application services from physical hardware by making them portable. And when services are separated into containers, they can be independently scaled.
Kubernetes and containers have grown in popularity because they offer a simple way to efficiently scale and manage cloud-native applications. They do this by dividing applications into a set of loosely coupled micro-services.
For maximum value, Kubernetes requires a high-performance persistent storage solution. It must be as portable as containers, yet deliver the performance of local flash. The storage solution must also be standards-based and run on standard servers. Optimizing storage for K8s and containers demands hyperscale efficiency and flexibility, which can be found in a high-performance, scale-out, redundant storage system that performs like local NVMe SSDs.
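As an illustration of how an application team might request such storage, the sketch below uses the official Kubernetes Python client to create a PersistentVolumeClaim against a high-performance StorageClass. The StorageClass name, claim name, and size are placeholders, and the cluster is assumed to already have a suitable CSI driver behind that class.

```python
# Minimal sketch (assumptions): request a PersistentVolumeClaim bound to a
# hypothetical high-performance StorageClass. All names and sizes below are
# placeholders; a CSI driver must already back the StorageClass.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
core = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="db-data"),         # placeholder claim name
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="high-performance-nvme",       # placeholder class name
        resources=client.V1ResourceRequirements(
            requests={"storage": "100Gi"}                  # placeholder size
        ),
    ),
)

core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
print("Requested 100Gi of high-performance persistent storage for 'db-data'")
```

Because the claim only names a StorageClass, the application stays portable: the same manifest can be satisfied by local flash in one cluster and by scale-out NVMe storage in another.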
Lightbits, the inventor of NVMe/TCP storage, offers an efficient, agile NVMe SDS for private clouds, edge clouds, and the public cloud. It offers high performance, high IOPS and throughput (4.7M IOPS and 22GB/s from a single storage server), and consistent low latency (160µs) for database and analytics workloads in OpenStack, VMware, and Kubernetes environments. It’s simple to consume because it works on standard Ethernet TCP/IP networks and NICs, requiring no RDMA configuration on the network switches. If you are building a cloud and want to offer fast, secure, and resilient services, consider the Lightbits high-performance NVMe SDS.
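Because NVMe/TCP runs over ordinary Ethernet, attaching a remote volume from a Linux client looks much like any other nvme-cli workflow. The sketch below simply shells out to nvme-cli; the target address, port, and subsystem NQN are placeholders that would come from your own storage target, and it assumes nvme-cli is installed and the nvme-tcp kernel module is loaded.

```python
# Minimal sketch: attach an NVMe/TCP namespace from a Linux client via nvme-cli.
# Address, port, and NQN are placeholders; run with root privileges.
import subprocess

TARGET_ADDR = "192.0.2.10"                             # storage server IP (placeholder)
TARGET_PORT = "4420"                                   # default NVMe/TCP service port
SUBSYSTEM_NQN = "nqn.2016-01.com.example:subsystem1"   # placeholder NQN

# Discover subsystems exported by the target over plain TCP/IP.
subprocess.run(
    ["nvme", "discover", "-t", "tcp", "-a", TARGET_ADDR, "-s", TARGET_PORT],
    check=True,
)

# Connect; the remote namespace then appears as a regular /dev/nvmeXnY block device.
subprocess.run(
    ["nvme", "connect", "-t", "tcp", "-a", TARGET_ADDR, "-s", TARGET_PORT,
     "-n", SUBSYSTEM_NQN],
    check=True,
)
```

Once connected, the namespace behaves like a local NVMe drive, so databases, hypervisors, and Kubernetes nodes can consume it without application changes.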