High bandwidth memory (HBM) has been attracting users across a growing number of data-intensive computing markets ever since the second-generation HBM2 version of the technology was approved as an industry standard in January 2016. Samsung started manufacturing HBM2 dynamic random-access memory (DRAM) that same month, and we are excited by the broad support we’re seeing for this groundbreaking memory technology.
HBM2 comes in memory cubes containing up to eight vertically stacked 8-gigabit (Gb) DRAM chips connected internally by as many as 40,000 tiny “through silicon via” (TSV) data paths. A wide 1024-bit data interface provides unprecedented memory bandwidth, with each DRAM stack capable of transferring up to 256 gigabytes (GB) of data per second. The compact, energy-efficient architecture also takes up far less circuit board space than traditional memory modules, making it attractive for space-constrained designs.
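As a quick sanity check, the quoted 256 GB/s per stack follows from the 1024-bit interface multiplied by HBM2's per-pin transfer rate; the 2 Gbps-per-pin figure below is an assumption drawn from the HBM2 standard rather than from the text above:

```python
# Back-of-the-envelope check of the per-stack HBM2 bandwidth figure.
# Assumed inputs: 1024-bit data interface (stated above) and a
# 2 Gbps-per-pin transfer rate (the HBM2 spec maximum, not stated above).
interface_width_bits = 1024   # data pins per DRAM stack
pin_rate_gbps = 2             # transfers per pin, in gigabits per second

total_gbits_per_s = interface_width_bits * pin_rate_gbps  # 2048 Gb/s
total_gbytes_per_s = total_gbits_per_s / 8                # convert bits -> bytes

print(total_gbytes_per_s)  # 256.0 GB/s, matching the quoted figure
```

The same arithmetic shows why the wide-but-slow approach works: a conventional 64-bit DIMM interface would need a per-pin rate sixteen times higher to match this bandwidth.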
High-performance computing applications, always hungry for more data, were among the first to demand compute acceleration. Nvidia quickly chose HBM2 for its Tesla P100 accelerators to power data centers in need of supercharged performance, while AMD adopted HBM2 in its Radeon Instinct accelerators for the data center as well as in its high-end graphics cards.
We saw the announcement of a client-based solution using HBM in November. Intel has embraced the technology, leveraging HBM2 to introduce high-performance, power-efficient graphics solutions for mobile PCs. The new Intel chipset will make it easier to build thinner, lighter notebooks.
HBM2 technology is also finding its way into networking applications. Rambus and Northwest Logic, for instance, recently teamed up to introduce HBM2-compatible memory controller and physical layer (PHY) technology for use in high-performance networking chips. Other companies developing products that combine HBM2 memory with various networking capabilities include Mobiveil, eSilicon, Open-Silicon and Wave Computing.
AI’s growing HBM appetite
Finally, artificial intelligence (AI) is shaping up to be one of the most promising new markets for HBM2. It turns out that GPUs, originally developed for graphics processing, are remarkably effective at helping AI software learn to identify complex patterns in large volumes of data. IDC forecasts that global AI revenue will grow from about $8 billion in 2016 to more than $47 billion in 2020.
And as AI technology makes increasing inroads into areas such as health care, home automation, and voice, image and text recognition, demand for high bandwidth memory that helps users squeeze more performance out of their AI applications appears to have nowhere to go but up.
Published by TIEN SHIAH
Tien Shiah is Product Marketing Manager for High Bandwidth Memory at Samsung Semiconductor, Inc. In this capacity, he serves as the company’s product consultant, market expert and evangelist for HBM in the Americas, focused on providing a clear understanding of the tremendous benefits offered by HBM in the enterprise and client marketplaces.