NVMe Making the Big Leagues

Article By : Gary Hilson

2018 "big year" for NVMe, 2019 looking to be even bigger

TORONTO – You know you’ve made it when you get your own show. The fact that there’s a show dedicated to NVM Express (NVMe) next month solidifies an industry-wide sentiment that the host controller interface and storage protocol hit a tipping point in the last year.

“This year was a big year for NVMe,” said Thomas Coughlin, founder of Coughlin Associates. “Going into next year, we’re going to see the majority of new products coming out with NVMe.”

This includes products using the relatively young NVM Express Over Fabrics (NVMe-oF) specification and even some hard disk enclosures using NVMe, Coughlin said. “It looks to me like a universal architecture for storage,” he added.

One of the primary benefits of NVMe has been that the interface has unlocked the internal performance of flash in SSDs, which previously were hampered by architectures designed for spinning disk. But Coughlin doesn’t see SATA disappearing any time soon. “There’s a lot of infrastructure out there and people are going to continue to support that,” he said.

Both client and enterprise applications will increasingly use NVMe to take advantage of not only the performance of flash, but also other memory-class storage such as 3D XPoint and other emerging options, Coughlin said, while NVMe-oF will allow older storage technologies to be woven in where they still make sense.

Another offshoot of the standard, noted Coughlin, is the ability to move management away from the SSDs and put it on the host using the NVM Express Management Interface (NVMe-MI). This comes as computational storage, in which processing power is put on the storage devices themselves, is gaining ground. Coughlin sees NVMe playing a role there too, as does the Computational Storage Technical Work Group recently formed by the Storage Networking Industry Association (SNIA).

The NVM Express organization is finishing a busy 2018 with updates to the NVM Express Management Interface (NVMe-MI) and the relatively young NVM Express Over Fabrics (NVMe-oF), which will get a lot of attention in the coming year.

The first NVMe specification was delivered in 2011 and has since been joined by NVMe-MI for managing devices at a glance and NVMe-oF, which will be getting a big push by the NVM Express organization for the foreseeable future, according to Amber Huffman, the organization's president. The specification supports the fabric of choice, whether Ethernet or Omni-Path, among others, to take advantage of NVMe end-to-end using a tunneling protocol. Unlike PCIe, which doesn't work well beyond a dozen attached devices in a box, Huffman said, NVMe-oF enables the connection of thousands of devices in a data center.

Version 1.1 of NVMe-oF, coming out early next year, will add a TCP transport, said Huffman, alongside the existing RDMA and Fibre Channel transports that allowed for the use of InfiniBand, Ethernet, or Omni-Path. By bringing along TCP, the large number of vendors with existing investments in network interface cards without RDMA capability can take advantage of NVMe-oF.
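To give a feel for how NVMe layers on ordinary TCP sockets, the sketch below packs the Initialize Connection Request (ICReq) PDU that a host sends right after opening the connection. The field layout here is the author's reading of the NVMe/TCP draft and should be treated as an assumption, not a reference implementation.

```python
import struct

def build_icreq(hpda: int = 0, enable_hdgst: bool = False,
                enable_ddgst: bool = False, maxr2t: int = 0) -> bytes:
    """Pack an NVMe/TCP ICReq PDU (layout assumed from the spec draft).

    Common header (8 bytes): type, flags, hlen, pdo, plen (u32 LE),
    followed by PDU-specific fields, padded to 128 bytes total.
    """
    pdu_type = 0x00                      # ICReq PDU type
    flags = 0x00
    hlen = 128                           # ICReq header is fixed-size
    pdo = 0                              # no data, so no data offset
    plen = 128                           # total PDU length on the wire
    pfv = 0                              # PDU format version 0
    digest = int(enable_hdgst) | (int(enable_ddgst) << 1)
    fixed = struct.pack("<BBBBIHBBI", pdu_type, flags, hlen, pdo, plen,
                        pfv, hpda, digest, maxr2t)
    return fixed + bytes(128 - len(fixed))   # reserved padding

pdu = build_icreq()
print(len(pdu), pdu[0])   # 128-byte PDU, type byte 0x00
```

After this exchange (the controller answers with an ICResp), NVMe command capsules simply flow as framed PDUs over the same socket, which is why plain, non-RDMA NICs suffice.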

The first NVMe specification has seen incremental revisions over time to add capabilities: live firmware updates in 1.2 and sanitize features in 1.3 that were already common in SCSI and SATA. With NVMe 1.4 coming in the middle of next year, IO determinism will be added to guarantee tight and consistent latency across networking and storage. Meanwhile, the addition of the management specification provides the capability of managing device enclosures.

The goal since inception, said Huffman, was to keep NVMe lean and mean and open to accommodating new technologies, such as emerging storage-class memories like 3D XPoint and Optane, so the interface could be optimized for whatever comes next. The SD Card Association is adopting NVMe to scale performance as it moves forward with SD 7.0.

Throughout all the updates and expansion to the NVMe specification family, interoperability has been key. As NVMe has evolved, so has the NVMe Plugfest, which takes place twice a year. The tenth interoperability gathering recently wrapped up, combining its fairly mature traditional NVMe SSD test tracks with newer test tracks for the NVMe management interface and NVMe-oF.

David Woolf, senior engineer, datacenter technologies at the University of New Hampshire InterOperability Laboratory, which hosts the Plugfest, said this year there was quite a lot of interest in doing proofs of concept for NVMe over TCP, including interoperability tests between different vendors. "We tried to ensure that what we're testing at the Plugfest is following what's going on in the spec and obviously there's a little bit of lag there," Woolf said.

Although the NVMe protocol itself was initially designed with flash in mind, it’s agnostic to the type of memory sitting behind the controller, said Woolf. That means from an NVMe conformance perspective the same protocol tests apply, although the performance and latency from the product might be different. But while the tests may be the same, he said, testing is getting more complex as the NVMe specification becomes more complex and more features are added. More tests have been added to accommodate the changes in NVMe 1.3, for example.

Ultimately, the NVMe roadmap guides the Plugfest activities, said Woolf. "There's some discussion on open channel type drives, and computational storage. That's fairly far down the road. When these things start to get picked up and ratified in the spec, we want to be alerted to add them to our tests," Woolf said.

With NVM Express focusing its efforts on NVMe-oF, testing efforts are following suit so that it can be effectively deployed in real-world scenarios.

NVMe 1.4, slated for release next year, will support IO determinism, enabling hosts to treat an SSD as many small sub-SSDs and process IO in parallel in each one.

Micron Technology has been an early champion of NVMe-oF, opting to move ahead before the standard was released early last year. Micron's SolidScale architecture was created for low-latency, high-performance access to compute and storage resources, and specifically to address CPU underutilization in the data center: NVMe SSDs deployed in application servers at the time were on average using less than 50% of their IOPS and capacity.

Today, the company still sees the enterprise and cloud customer bases as the key adopters, said Cliff Smith, Micron's product line manager for NVMe. "This year we've seen the cloud guys, who are very big, consume quite a bit of flash in SSD form and in some cases component form," Smith said.

The uptake has been driven by their transition from SATA SSDs to NVMe SSDs. The enterprise customer base, meanwhile, is being guided by what vendors such as Dell, HP, and Lenovo are putting into their servers, whereas cloud companies such as Amazon or Microsoft own the whole stack, so they can adopt NVMe more quickly.

Smith said NVMe is past the hype cycle and is now being effectively integrated, in part because the large incumbent storage vendors have digested many of the innovative storage startups that began with all-flash arrays and a software-driven approach that took advantage of NVMe drives. At the same time, the big hyperscale players and larger enterprise customers have implemented these technologies.

Despite the commercialization of 3D XPoint and Intel's recent Optane push, Micron is going to stick with its NAND-based NVMe product lines for storage solutions, said Smith, while 3D XPoint will be treated more like a memory rather than another candidate for the NVMe interface.

“The idea is that storage class memory is another layer where you’re going to have two to four racks besides the DRAM. You can get the data closer to the processor and it would make a lot of sense for deep learning and machine learning type algorithms, where you have a particular data set that wants to stay in a cache,” Smith said. “We’re not really looking to do storage class memory on an NVMe bus.”

—Gary Hilson is a general contributing editor with a focus on memory and flash technologies for EE Times.
