Adieu, iSCSI Bottlenecks and Hello NVMe/TCP

iSCSI has served its purpose for the storage industry, and it’s time to bid it adieu. 

To understand why, we first need to look at its predecessor: SCSI, the Small Computer System Interface. This protocol has been the de facto standard for computer and server access to storage for decades. The SCSI set of standards carries blocks of data over short distances using a variety of serial and parallel buses internal to a computer. The widespread adoption of the Transmission Control Protocol/Internet Protocol (TCP/IP), the protocol suite that connects computer systems to networks such as the internet, eventually brought the two together: iSCSI carries SCSI over TCP/IP.

Here comes Flash!

SCSI’s heyday was when the industry was focused on rotational magnetic disk drives: it was a means for giving applications access to the slow disk drives inside computer systems. Then along came flash storage. With flash, solid-state drives replace mechanically spinning hard disk drives, offering high-performance, low-latency, parallel data access (this became the primary impetus behind the Non-Volatile Memory Express [NVMe] specification). Flash simply makes better sense for a number of reasons: much lower access latency, much higher bandwidth, the ability to accelerate applications, and better overall value than mechanical disk drives, since it takes up less space and uses far less energy.

SCSI, iSCSI Bottlenecks

Even with flash storage’s rise and the coming retirement of spinning disk drives, the SCSI and iSCSI interfaces retained their hold on the industry. It quickly became apparent, though, that iSCSI was the new bottleneck. In the SCSI protocol, shuffling data back and forth between the client (the initiator) and a drive (the target) goes through a single queue of commands that the initiator asks the drive to perform. This made perfect sense when the average CPU core count in a computer was one.

iSCSI is simply a SCSI transport over TCP/IP, so access to the drives is still mediated by a single command queue per initiator-logical unit connection. With today’s computers (both clients and servers) typically having dozens of CPU cores, the single-queue model simply is not adequate: every core must contend for the same queue, as the sketch below illustrates.
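To make the bottleneck concrete, here is a minimal Python sketch of the two queuing models. It is illustrative only, not a real iSCSI or NVMe implementation, and the queue depths and core count are hypothetical:

    from dataclasses import dataclass, field
    from collections import deque

    @dataclass
    class CommandQueue:
        depth: int                              # max outstanding commands
        pending: deque = field(default_factory=deque)

        def submit(self, cmd):
            if len(self.pending) >= self.depth:
                raise RuntimeError("queue full: submitter must wait")
            self.pending.append(cmd)

    # iSCSI-style: every core funnels I/O through ONE queue per
    # initiator-logical unit connection (depth is a hypothetical value).
    iscsi_session_queue = CommandQueue(depth=128)

    # NVMe-style: one queue per core; each core submits independently,
    # with no cross-core coordination.
    NUM_CORES = 32
    nvme_queues = [CommandQueue(depth=1024) for _ in range(NUM_CORES)]

    def submit_io(core_id, cmd, model):
        # With the shared queue, all cores serialize on one structure (and,
        # in a real driver, one lock); with per-core queues they do not.
        q = iscsi_session_queue if model == "iscsi" else nvme_queues[core_id]
        q.submit(cmd)

The point is structural: in the iSCSI model, adding cores adds contention; in the NVMe model, adding cores adds queues.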

NVMe Emerges

Companies such as Lightbits Labs knew NVMe offered a better path to solving this storage dilemma because, among other benefits, it could eliminate the industry’s reliance on iSCSI.

Where iSCSI queues commands one behind another, NVMe offers the flexible, scalable queuing that today’s storage demands: the specification allows up to 65,535 I/O queues, each up to 65,536 commands deep, so every CPU core can submit I/O in parallel. Consider the rise of online shopping services in just the last decade, not to mention the sheer volume of data stored on social networks, and you can understand why fast access to stored data is critical. Having just the right bits of data at your fingertips exactly when you need them requires a storage protocol that can push lots of data without waiting. NVMe makes that possible.

NVMe over Fabrics Protocol

NVMe is widely recognized as the state-of-the-art protocol for accessing high-performance solid-state drives (SSDs). NVMe was originally designed to reach storage devices over PCI Express (PCIe); the industry group behind the specification later developed NVMe over Fabrics (NVMe-oF), which extends NVMe to network fabrics beyond PCIe. The transports defined in the initial NVMe-oF specification were Fibre Channel and Ethernet using RDMA. Both suited only small-scale deployments and required specialized data center equipment; consequently, they saw very limited adoption.

Then along came NVMe/TCP: just as iSCSI enabled SCSI over standard Ethernet and TCP/IP, NVMe/TCP does the same for NVMe. Today, it’s understood that NVMe/TCP will replace iSCSI as the communication standard between compute servers and storage servers, and that it will become the default protocol for disaggregating storage and compute servers.
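To see how thin that encapsulation is, consider the wire format: every NVMe/TCP exchange travels in protocol data units (PDUs) that begin with an 8-byte common header. A rough Python sketch of packing that header follows; it is a simplified reading of the NVMe/TCP transport layout (no digests or padding), and the values are illustrative:

    import struct

    # NVMe/TCP PDU common header (8 bytes, little-endian fields):
    #   type  (1 byte)  PDU type, e.g. a host-to-controller command capsule
    #   flags (1 byte)  e.g. header/data digest enable bits
    #   hlen  (1 byte)  length of the PDU header
    #   pdo   (1 byte)  offset of in-capsule data within the PDU
    #   plen  (4 bytes) total PDU length, header plus data

    def pack_common_header(pdu_type, flags, hlen, pdo, plen):
        return struct.pack("<BBBBI", pdu_type, flags, hlen, pdo, plen)

    # A command capsule carrying a 64-byte NVMe submission queue entry:
    CAPSULE_CMD = 0x04          # command capsule PDU type
    sqe = bytes(64)             # placeholder NVMe command (all zeros)
    hlen = 8 + len(sqe)         # common header + command
    header = pack_common_header(CAPSULE_CMD, 0, hlen, 0, hlen)
    pdu = header + sqe          # these bytes go straight onto the TCP stream

Everything else (connection setup, flow control, retransmission) is plain TCP, which is exactly why NVMe/TCP runs on unmodified Ethernet networks.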

Pioneering NVMe/TCP 

Now, with Lightbits’ pioneering support for NVMe/TCP, NVMe is ready for the next storage challenges. Building large scale cloud storage infrastructure with NVMe/TCP is simple and efficient. That’s because TCP is ubiquitous, scalable, reliable, and ideal for both short-lived connections and container-based applications.

Migrating to disaggregated flash storage with NVMe/TCP requires no changes to the data center network infrastructure: practically every data center network is already designed to carry TCP/IP, so deployment is simple. A typical host-side attach looks like the sketch below.
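As a concrete illustration of that simplicity, attaching a Linux host to an NVMe/TCP target typically takes two nvme-cli commands, here wrapped in a small Python sketch; the target address and NVMe Qualified Name (NQN) are hypothetical placeholders, while 4420 is the IANA-assigned NVMe-oF port:

    import subprocess

    TARGET_IP = "10.0.0.5"                     # hypothetical storage server
    NQN = "nqn.2016-01.com.example:subsys1"    # hypothetical subsystem NQN

    # Ask the target which NVMe subsystems it exports over TCP.
    subprocess.run(
        ["nvme", "discover", "-t", "tcp", "-a", TARGET_IP, "-s", "4420"],
        check=True)

    # Connect; the remote namespaces then appear as local /dev/nvmeXnY
    # block devices, usable like any direct-attached drive.
    subprocess.run(
        ["nvme", "connect", "-t", "tcp", "-n", NQN, "-a", TARGET_IP,
         "-s", "4420"],
        check=True)

No switch configuration, host bus adapters, or network changes are involved; the only new element is the NVMe/TCP target itself.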

So, adieu, iSCSI. NVMe/TCP is tailor-made for today’s data centers. By extending NVMe across the entire data center over a simple, efficient TCP/IP fabric, users can enjoy these benefits:

– Low average and tail latencies
– Use of existing networking infrastructure and application servers; no rip and replace
– Performance and latency comparable to direct-attached SSDs (DAS)
– An efficient block storage software stack optimized for NVMe and existing data centers
– Parallel access to storage, optimized for today’s multi-core application and client servers
– A standards-based solution with widespread industry support
– Disaggregation of compute and storage across a data center’s availability zones and regions

