Intel SSDs + vSMP Foundation FLash eXpansion = a winning combo!



At ISMC in early 2016, we unveiled vSMP Foundation FLash eXpansion (FLX), which enables Xeon-based systems to use Intel's current and upcoming generations of SSDs as system memory. While Intel's SSD devices provide state-of-the-art functionality and blazing IO performance in terms of both bandwidth and IOPS, and are financially much more attractive than DRAM, they cannot natively be used like byte-addressable RAM. ScaleMP's vSMP Foundation FLX allows for the aggregation of DRAM and Non-Volatile Memory (NVM), such as Intel SSDs, into a single system memory space – in a manner that is completely transparent to the operating system and applications.

vSMP Foundation FLX, coupled with Intel’s NVM, allows for two key benefits:

  • Increasing the total memory available for a system of a specific scale.
  • Reducing the TCO for systems with a specific amount of memory.

Below are examples of cases where these benefits come into play. As you read through the examples, it is important to note that using NVM instead of DRAM lowers overall TCO by:

  • Reducing the acquisition cost, as NVM costs up to 20x less per GB; and
  • Reducing power and cooling requirements per GB, as NVM devices operate at as low as 25W for 2TB (~13W/TB), whereas DRAM consumes about 6W for no more than 64GB (96W/TB) – roughly a 7.5x difference (a short calculation follows below).
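
As a quick sanity check, the per-TB power figures quoted above can be reproduced with a few lines of arithmetic. The sketch below (Python, purely illustrative) uses only the device numbers stated in the bullet, i.e. 25W per 2TB NVMe drive and 6W per 64GB DIMM:

```python
# Reproduce the W/TB comparison from the bullet above.
# Device figures (25W per 2TB SSD, 6W per 64GB DIMM) are taken from the text;
# the rest is plain arithmetic.

nvm_watts_per_tb = 25 / 2.0             # 25W for a 2TB SSD   -> 12.5 W/TB (~13 W/TB)
dram_watts_per_tb = 6 / (64 / 1024.0)   # 6W for a 64GB DIMM  -> 96 W/TB

print(f"NVM:   {nvm_watts_per_tb:.1f} W/TB")
print(f"DRAM:  {dram_watts_per_tb:.1f} W/TB")
print(f"Ratio: ~{dram_watts_per_tb / nvm_watts_per_tb:.1f}x")  # ~7.7x, i.e. roughly the 7.5x quoted
```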

Total Memory Increase per Class of System

The table below shows the maximum available system memory per class of system (as of early 2016), and contrasts it with the maximum system memory that can be achieved by using Hybrid DRAM+NVM with vSMP Foundation FLX:

Class of System           Max Memory – DRAM Only                    Max Memory – Hybrid:
                          High density:        Highest density:     DRAM+NVM (Intel SSDs)
                          32GB DIMMs           64GB DIMMs
Dual Xeon (24 DIMMs)      0.75 TB              1.5 TB               12.0 TB
Quad Xeon (96 DIMMs)      3.0 TB               6.0 TB               32.0 TB
Octa Xeon (192 DIMMs)     6.0 TB               12.0 TB              64.0 TB
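
For reference, the DRAM-only columns in the table follow directly from DIMM count multiplied by DIMM size; the hybrid column additionally depends on the attached SSD configuration and is not derived here. A short illustrative sketch (Python, using the DIMM counts from the table):

```python
# DRAM-only capacity = number of DIMM slots x DIMM size.
# The hybrid DRAM+NVM column depends on the attached SSDs and is not computed here.

dimm_counts = {"Dual Xeon": 24, "Quad Xeon": 96, "Octa Xeon": 192}

for system, dimms in dimm_counts.items():
    for dimm_gb in (32, 64):
        tb = dimms * dimm_gb / 1024.0
        print(f"{system}: {dimms} x {dimm_gb}GB DIMMs = {tb:.2f} TB")
```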

The business implications are obvious: one can execute applications that require massive amounts of memory, such as very large in-memory databases or increased cloud multi-tenancy, at a significantly reduced TCO.

TCO Reduction per Specific Memory Configuration

Assume a case where a large number of nodes is deployed for an application needing 512GB of memory per node, such as a financial-risk computation grid, a cloud hosting SaaS or IaaS, seismic modeling, or genome processing. One could reduce the overall cost per node by installing only 128GB of DRAM and augmenting it with 640GB of NVM, for example two 320GB NVMe drives. The cost reduction (as of early 2016) would be more than $2,000 per node, and power consumption would be reduced by 22W per node, or up to 1.8kW per rack (with similar savings on cooling).
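
To make the per-node arithmetic explicit, here is an illustrative sketch. The 512GB requirement and the 128GB DRAM + 640GB NVM split come from the example above; the per-GB prices are hypothetical placeholders (the post does not quote component prices), so substitute your own quotes:

```python
# Illustrative per-node cost comparison for the 512GB example above.
# Prices below are hypothetical placeholders, NOT actual 2016 pricing.

dram_price_per_gb = 8.0    # hypothetical $/GB for server DRAM
nvm_price_per_gb = 1.0     # hypothetical $/GB for NVMe SSD capacity

required_gb = 512
dram_only_cost = required_gb * dram_price_per_gb

hybrid_dram_gb = 128
hybrid_nvm_gb = 640        # e.g. two 320GB NVMe drives
hybrid_cost = hybrid_dram_gb * dram_price_per_gb + hybrid_nvm_gb * nvm_price_per_gb

print(f"DRAM-only node: ${dram_only_cost:,.0f}")
print(f"Hybrid node:    ${hybrid_cost:,.0f}")
print(f"Savings:        ${dram_only_cost - hybrid_cost:,.0f} per node")
```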

Performance

Naturally, one cares not only about cost but also about the quality of the solution – and in computing, in many cases, that means overall application performance. ScaleMP has shown that even very demanding workloads, such as in-memory TPC-C or memcached, can be hosted on Hybrid DRAM+NVM with only a small performance difference versus DRAM. The financial tradeoff is therefore viable with current Intel SSDs, and Intel Optane SSDs are expected to further close the gap, approaching DRAM speed.

Looking to try vSMP Foundation FLX?

Apply with your contact information below:

Note: By submitting this form you agree to receive more information about ScaleMP offerings and to be contacted by a ScaleMP representative.