Written by Denis Gaudreault, country manager, Intel Canada
175 zettabytes. That is IDC’s prediction for how much data will exist in the world by 2025. Millions of devices generate this data – everything from the cell phones in our pockets and the PCs in our homes and offices, to the computer systems and sensors integrated into our cars, to the factory floors leveraging IoT and automation. While many enterprises are unlocking the value of their data through advanced analytics, others struggle to create that value cost-effectively.
Case in point: Imagine the difference between going to your desk to get a piece of information, versus going to the library, versus driving from Toronto to Intel’s campus in Oregon, or even travelling all the way to Mars. These distances illustrate the huge chasm in latency between memory and data storage in many of today’s software and hardware architectures. As the datasets used for analytics grow larger, the limits of DRAM capacity become more apparent.
Keeping hot data close to the CPU has become increasingly difficult in these capacity-limited situations. For the past 15 years, software and hardware architects have had to make a painful tradeoff between putting all their data in storage (SSDs), which is slow relative to memory, and paying high prices for memory (DRAM). So how can companies bridge the gap between SSDs and DRAM, shrinking the distance between where data is stored and where it is processed, so it is readily available for data analytics?
Persistent memory solves this problem by providing a new tier for hot data between DRAM and SSDs. This new tier allows an enterprise to deploy either two-tier memory applications or two-tier storage applications. Tiering itself is not a new concept, but this persistent memory tier, with its combined memory and storage capability, allows architects to match the right tool to the workload. The result is reduced wait times and more efficient use of compute resources, helping companies drive cost savings and significant performance gains while keeping more tools in the toolboxes that support their digital transformations. Enterprises will also benefit from innovations and discoveries from the software ecosystem as it evolves to support persistent memory.
Applications for advanced analytics and persistent memory
Persistent memory is particularly useful for enterprises looking to do more with their data and affordably extract actionable insights to make quick decisions. The benefits of persistent memory are especially valuable for industries that are experiencing digital transformation, like financial services and retail, where real-time analytics provide tremendous value.
For financial services organizations, real-time analytics workloads could include real-time credit card fraud detection or low-latency financial trading. For online retail, real-time data analytics can speed decisions to adjust supply chain strategies when there is a run on certain products, while at the same time immediately generating new recommendations to customers to shape and guide their shopping experience.
Persistent memory can also expedite recommendations for the next videos to watch on TikTok or YouTube, keeping consumers engaged for longer periods. In these scenarios, real-time analytics allows organizations to interact with their end-users near-instantaneously, improving customer experiences and enabling the business to achieve a better return on investment. While these real-time analytics applications would be possible without persistent memory, maintaining the same level of performance and latency would be costly.
Architecting for persistent memory
For those looking for off-the-shelf solutions without application changes, the easiest way to adopt persistent memory is to use it in Memory Mode, which delivers large memory capacity more affordably, with performance close to that of DRAM depending on the workload. In Memory Mode, the CPU memory controller presents all of the persistent memory capacity as volatile system memory (without persistence), while using the DRAM as a cache.
Many database and analytics software and appliance products, such as SAP HANA, Oracle Exadata, Aerospike, Kx, Redis, and Apache Spark, have released new versions that exploit the full capabilities persistent memory offers: application-aware placement of data and persistence in memory. Persistent-memory-aware operating systems and hypervisors are also available in the ecosystem and can be deployed through a customer’s preferred server vendor.
A new class of software products is also emerging in the market that removes the need to modify individual applications to access the full capabilities of persistent memory. Software such as Formulus Black FORSA, MemVerge Memory Machine and NetApp MAX Data takes a groundbreaking approach to the new tiered data paradigm, bringing the value of persistent memory while minimizing application-specific enablement.
For those who want full customization, software developers also have the option of using the industry-standard non-volatile memory programming model, with the help of open-source libraries such as the Persistent Memory Development Kit (PMDK).
Persistent memory: the art of the possible
We are living in a time unlike any other. Never has it been more important to analyze data in real-time. With the help of persistent memory, businesses can now make more strategic decisions, better support remote workforces and improve end-user experiences.
In just the last six months alone, I’ve been impressed by the use cases and innovation I’ve seen our customers implement with persistent memory. Looking to the future, I am excited to watch persistent memory, and the ecosystem that has evolved to support it, continue to bring ideas, dreams and concepts to life – making the impossible, possible.