Software Defined Memory from ScaleMP – the Solution for Memory Scale Up


FLX

At the In-Memory Computing Summit 2016, ScaleMP is publicly presenting vSMP Foundation for FLash eXpansion (FLX), a new member of the vSMP Foundation product suite that enables COTS systems to use high-performing Non-Volatile Memory (NVM) as system memory, augmenting or replacing DRAM. While the new NVM devices provide state-of-the-art functionality and blazing I/O performance in terms of both bandwidth and IOPS, and are financially far more attractive than DRAM, they cannot be used as byte-addressable DRAM. ScaleMP’s vSMP Foundation FLX aggregates DRAM and NVM into a single system memory space, in a manner that is completely transparent to the operating system and applications.

vSMP Foundation FLX, coupled with high-performing NVM, provides two key benefits:

  • Increasing the total memory available for a system of a given scale; and
  • Reducing the TCO for a system with a given amount of memory.

Below are examples of cases where these benefits come into play. As you read through the examples, note that using NVM instead of DRAM reduces overall TCO by:

  • Reducing system acquisition cost, as NVM costs up to 20x less per GB; and
  • Reducing power and cooling requirements per GB: NVM devices operate at as little as 25W for 2TB (~13W/TB), whereas DRAM consumes 6W per 64GB DIMM (96W/TB), roughly a 7.5x difference (see the calculation sketch below).
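For readers who want to check that arithmetic, here is a minimal Python sketch using only the per-device figures quoted above; actual wattage varies by vendor and device, so the numbers are illustrative:

    # Rough power-density check, based on the per-device figures quoted above.
    NVM_WATTS, NVM_GB = 25.0, 2048       # ~25 W for a 2 TB NVMe device
    DRAM_WATTS, DRAM_GB = 6.0, 64        # ~6 W per 64 GB DIMM

    nvm_w_per_tb = NVM_WATTS / (NVM_GB / 1024)      # ~12.5 W/TB (quoted above as ~13 W/TB)
    dram_w_per_tb = DRAM_WATTS / (DRAM_GB / 1024)   # 96 W/TB

    print(f"NVM:   {nvm_w_per_tb:.1f} W/TB")
    print(f"DRAM:  {dram_w_per_tb:.1f} W/TB")
    print(f"Ratio: ~{dram_w_per_tb / nvm_w_per_tb:.1f}x")  # ~7.7x, i.e. the ~7.5x cited above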

Total Memory Increase per Class of System

The table below shows the maximum system memory available per class of system (as of early 2016), and contrasts it with the maximum system memory that can be achieved using Hybrid DRAM+NVM with vSMP Foundation FLX (the DRAM-only figures are simply slot count times DIMM capacity, as the sketch after the table shows):

Class of System            Max Memory – DRAM Only                   Max Memory – Hybrid
                           High density        Highest density      DRAM+NVM
                           (32GB DIMMs)        (64GB DIMMs)
Dual Xeon (24 DIMMs)       0.75 TB             1.5 TB               12.0 TB
Quad Xeon (96 DIMMs)       3.0 TB              6.0 TB               32.0 TB
Octa Xeon (192 DIMMs)      6.0 TB              12.0 TB              64.0 TB
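The DRAM-only columns follow directly from slot count times DIMM capacity; the quick Python sketch below reproduces them (the Hybrid DRAM+NVM figures depend on platform and NVM device limits and are taken from the table as given):

    # Reproduce the "DRAM Only" columns: DIMM slots x DIMM capacity.
    dimm_slots = {"Dual Xeon": 24, "Quad Xeon": 96, "Octa Xeon": 192}
    for system, slots in dimm_slots.items():
        for dimm_gb in (32, 64):
            print(f"{system}: {slots} x {dimm_gb} GB DIMMs = {slots * dimm_gb / 1024:.2f} TB")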

The business implications are obvious: one can run programs that require massive amounts of memory, such as very large in-memory databases, or support increased cloud multi-tenancy, at a significantly reduced TCO.

TCO Reduction per Specific Memory Configuration

Assume a case where a large number of nodes is deployed for an application needing 512GB of memory, such as a financial risk computation grid (Value at Risk), a cloud hosting SaaS or IaaS, seismic modeling, genome processing, and so on. One could reduce the overall cost per node by installing only 128GB of DRAM and augmenting it with 640GB of NVM, for example two NVMe drives of 320GB each. The cost reduction (as of early 2016) would be more than $2,000 per node, and power consumption would drop by 22W per node, or up to 1.8kW per rack, with a similar saving on cooling.
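As a back-of-the-envelope illustration of how such per-node savings are computed, here is a sketch in Python. The unit prices and power densities below are assumptions for illustration only, not ScaleMP figures; substitute your own quotes to reproduce the ~$2,000 and ~22W numbers above (exact per-device wattage depends on drive size and DIMM type):

    # Back-of-the-envelope per-node comparison for the 512 GB example above.
    # Prices and wattages are ILLUSTRATIVE ASSUMPTIONS, not figures from the article.
    DRAM_USD_PER_GB = 8.0      # assumed DRAM street price
    NVM_USD_PER_GB = 1.0       # assumed NVMe street price
    DRAM_W_PER_TB = 96.0       # power densities quoted earlier
    NVM_W_PER_TB = 13.0

    def node_cost(dram_gb, nvm_gb):
        return dram_gb * DRAM_USD_PER_GB + nvm_gb * NVM_USD_PER_GB

    def node_memory_watts(dram_gb, nvm_gb):
        return dram_gb / 1024 * DRAM_W_PER_TB + nvm_gb / 1024 * NVM_W_PER_TB

    baseline = (512, 0)      # 512 GB DRAM only
    hybrid = (128, 640)      # 128 GB DRAM + 640 GB NVM (two 320 GB NVMe drives)

    print(f"Cost delta:  ${node_cost(*baseline) - node_cost(*hybrid):,.0f} per node")
    print(f"Power delta: {node_memory_watts(*baseline) - node_memory_watts(*hybrid):.0f} W per node")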

Performance

Naturally, one cares not only about cost but also about the quality of the solution, which in computing often means overall application performance. ScaleMP has shown that even very demanding workloads, such as in-memory TPC-C or Memcached, can be hosted on Hybrid DRAM+NVM with only a small performance difference versus pure DRAM, and upcoming NVM media innovations are expected to bring near-DRAM speed. The financial tradeoff is therefore viable with current NVMe SSDs, and new SSDs and NVDIMMs are expected to further narrow the performance gap.

Interested in vSMP Foundation FLX?

Contact ScaleMP to have an expert get in touch with you.