
Optane essentials, Part 2: Memory modes and optimal workloads

This article is part of the Technology Insight series, made possible by funding from Intel.

In Part 1, we covered the media foundations of Intel Optane technology and the different implementations of that media into three products: Optane Memory, Optane SSDs, and Optane DC persistent memory modules (DCPMMs). We also touched on how Intel software is a key ingredient in turning 3D XPoint into Intel Optane. We’ll round out our 101 with a brief look at configurations and modes.

Why? While it’s true that Optane DCPMMs require 2nd Generation Xeon Scalable processors or later, much of the Optane magic hinges on how software – especially in settings such as virtualized cloud, AI/analytics, and HPC – can make optimal use of Optane once the media is configured into a given mode.

Optane Modes

DCPMMs can be configured into three possible modes: Memory Mode, App Direct Mode, and Dual Mode. Note that DCPMMs do not replace DRAM; you still need some DRAM in the system. But how much DCPMM capacity you complement the DRAM with will depend on your mode and specific application needs.

Memory Mode turns DRAM into an L4 cache. The cache is not directly addressable and doesn’t show up in system memory counts, so the user-addressable capacity is the sum of the DCPMM capacities. No programming is needed for Memory Mode, but data contained in DCPMMs is volatile, just as with DRAM. In short, Memory Mode provides a large memory pool with a relatively small, but effective, investment in DRAM cache.

 

https://venturebeat.com/wp-content/uploads/2019/12/Memory-Mode.png?w=800&resize=800%2C193&strip=all
Above: Memory Mode uses a smaller amount of DRAM, invisible to the OS, as L4 cache while large amounts of Optane DCPMM provide an outsized memory pool.
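Because Memory Mode is transparent to software, there is nothing Optane-specific to call at the application level. As a minimal, Linux-only sketch (assuming a server already provisioned in Memory Mode), the C snippet below simply asks the kernel how much memory it sees; on such a system that figure reflects the DCPMM capacity, while the DRAM serving as cache stays hidden.

#include <stdio.h>
#include <sys/sysinfo.h>

int main(void)
{
    struct sysinfo si;
    if (sysinfo(&si) != 0) {
        perror("sysinfo");
        return 1;
    }

    /* totalram is reported in units of mem_unit bytes. */
    unsigned long long total = (unsigned long long)si.totalram * si.mem_unit;
    printf("User-addressable memory: %llu GiB\n", total >> 30);

    /* In Memory Mode this total comes from DCPMM capacity; the DRAM
     * acting as cache is not counted, exactly as described above. */
    return 0;
}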

App Direct Mode allows the system to handle DRAM and DCPMM resources independently, so operations that require top speed can address DRAM and the rest can rely on larger Optane resources. Under App Direct, data in DCPMM remains persistent, which can be very helpful in minimizing large configuration reload times following a power cycle or reset. However, applications must be optimized to take advantage of App Direct. Some are already, and more are coming. Also, some user programming may be needed.

https://venturebeat.com/wp-content/uploads/2019/12/App-Direct.png?w=800&resize=800%2C197&strip=all
Above: App Direct Mode may require some extra programming, but it lets users choose which data and workloads to place in volatile DRAM and which in non-volatile DCPMM resources.
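For a sense of what that optimization can look like, here is a minimal sketch using the libpmem library from Intel’s Persistent Memory Development Kit (PMDK), one common route to App Direct programming. The path /mnt/pmem/example and the 4 KiB size are hypothetical placeholders, and the sketch assumes a DAX-enabled filesystem mounted on an App Direct namespace.

/* Sketch: persist a small write to a DCPMM-backed file via libpmem.
 * Build with something like: cc appdirect.c -lpmem */
#include <libpmem.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    size_t mapped_len;
    int is_pmem;

    /* Map (creating if needed) a 4 KiB file on the DAX filesystem. */
    char *addr = pmem_map_file("/mnt/pmem/example", 4096,
                               PMEM_FILE_CREATE, 0666,
                               &mapped_len, &is_pmem);
    if (addr == NULL) {
        perror("pmem_map_file");
        return 1;
    }

    const char *msg = "state that survives a power cycle";
    strcpy(addr, msg);

    /* Flush CPU caches so the stores are durable on the Optane media. */
    if (is_pmem)
        pmem_persist(addr, strlen(msg) + 1);
    else
        pmem_msync(addr, strlen(msg) + 1);  /* fallback if not real pmem */

    pmem_unmap(addr, mapped_len);
    return 0;
}

After a reboot, re-mapping the same file returns the data exactly as it was flushed, which is the property that trims the configuration reload times mentioned above.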

Dual Mode is a hybrid configuration that allows some DCPMM capacity to operate in Memory Mode while the remainder operates in App Direct Mode.

DCPMM modes, especially the first two, tend to get the most attention for performance reasons. However, if we turn our attention to Optane SSDs, we can employ Intel Memory Drive Technology across the CPU’s PCI Express bus. In effect, Memory Drive works much like Memory Mode, data volatility included, except that Optane SSDs serve in place of DCPMMs. Naturally, there’s more latency involved in reaching out to SSD resources, but the memory pool that becomes possible is relatively gargantuan. As a way to contain costs on projects that require memory capacity above all else, Memory Drive can be a life-saver.

https://venturebeat.com/wp-content/uploads/2019/12/Intel-MDT.png?w=800&resize=800%2C382&strip=all
Above: Intel Memory Drive Technology pre-fetches data into DRAM while using Optane SSDs to supply large amounts of low-latency system memory.

Which workloads are best for Optane?

Speaking generally, Optane media excels when working with large volumes of real-time data that require low-latency access. Taking that one step further, we might add the following characteristics:

https://venturebeat.com/wp-content/uploads/2019/12/Optane-work-types.png?w=800&resize=800%2C154&strip=all
Above: Data center application groups with suitability for Optane technology’s benefits.

As mentioned previously, DCPMMs can be configured for either persistent or volatile operation, and they feature latencies that approach those of DRAM. This makes DCPMMs well suited to deployment with workloads such as:

Returning to Optane SSDs, especially when running in a Memory Drive configuration, the following workload types perform particularly well:

For non-volatile workloads, you might consider Optane DC SSDs in applications such as:

Essentially, any storage application that suffers from I/O bottlenecking is a solid Optane SSD candidate. Also, if server monitoring tools reveal that CPU utilization is low, this may be a sign that applications and/or VMs are memory-bound and could benefit from DCPMMs.
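As a rough, Linux-only illustration (a sketch, not a monitoring tool; the thresholds worth acting on are workload-specific), the snippet below reads the cumulative CPU counters from /proc/stat and reports how much time has gone to I/O wait versus real work. Low busy time with high I/O wait is the kind of signal that suggests an I/O or memory bottleneck rather than a CPU one.

#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/stat", "r");
    if (!f) {
        perror("/proc/stat");
        return 1;
    }

    /* First line: cumulative jiffies per CPU state since boot. */
    unsigned long long user, nice, sys, idle, iowait, irq, softirq;
    if (fscanf(f, "cpu %llu %llu %llu %llu %llu %llu %llu",
               &user, &nice, &sys, &idle, &iowait, &irq, &softirq) != 7) {
        fclose(f);
        return 1;
    }
    fclose(f);

    unsigned long long total = user + nice + sys + idle + iowait + irq + softirq;
    printf("CPU busy: %5.1f%% of time since boot\n",
           100.0 * (user + nice + sys) / total);
    printf("I/O wait: %5.1f%% of time since boot\n",
           100.0 * iowait / total);

    /* A low busy share paired with a high I/O-wait share points at a
     * storage or memory bottleneck: a candidate for Optane SSDs or DCPMMs. */
    return 0;
}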

Wrapping up

And that’s about it for the Optane basics. We expect the technology to slowly make more inroads in the client space, but today it remains a clear data center play. Exactly how much cost advantage and performance gain Optane implementations can provide will depend on an organization’s specific needs and workloads.

But the ability to bring storage so much closer to the CPU, to the point that it becomes indistinguishable from memory, is compelling and certain to open new opportunities for application developers and enterprises alike.