JEDEC releases HBM3 standard: double the data rate

2022-01-28

JEDEC today announced the publication of the next version of its High Bandwidth Memory (HBM) DRAM standard: JESD238 HBM3. HBM3 is an innovative approach to raising data processing rates for applications in which higher bandwidth, lower power, and capacity per unit area are essential to market success, including graphics processing, high-performance computing, and servers.

According to the announcement, the key attributes of the new HBM3 include:

Extends the proven HBM2 architecture to higher bandwidth, doubling the per-pin data rate of the HBM2 generation and defining data rates of up to 6.4 Gb/s, equivalent to 819 GB/s per device (the arithmetic sketch after this list checks these figures);

Doubles the number of independent channels from 8 (HBM2) to 16; with two pseudo-channels per channel, HBM3 effectively supports 32 channels;

Supports 4-high, 8-high, and 12-high TSV stacks, with provisions for future 16-high TSV stacks;

Supports a range of densities from 8Gb to 32Gb per memory die, for device densities from 4GB (8Gb, 4-high) to 64GB (32Gb, 16-high); first-generation HBM3 devices are expected to be based on a 16Gb die;

Introduces strong, symbol-based on-die ECC along with real-time error reporting and transparency, to meet the market's demand for platform-level RAS (reliability, availability, serviceability);

Improves energy efficiency through low-swing (0.4V) signaling and a lower (1.1V) operating voltage on the host interface.
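For readers who want to verify the headline numbers, here is a minimal arithmetic sketch in Python. It assumes the standard 1024-bit HBM data interface, which the announcement does not restate; every other figure comes from the attribute list above.

```python
# Back-of-the-envelope check of the HBM3 figures listed above.
# Assumption: the standard 1024-bit HBM data interface width.

INTERFACE_WIDTH_BITS = 1024   # assumed HBM data bus width per stack
PIN_RATE_GBPS = 6.4           # per-pin data rate defined by HBM3

# Per-device bandwidth: 1024 pins * 6.4 Gb/s = 6553.6 Gb/s = 819.2 GB/s
bandwidth_gb_s = INTERFACE_WIDTH_BITS * PIN_RATE_GBPS / 8
print(f"Per-device bandwidth: {bandwidth_gb_s:.1f} GB/s")  # 819.2 GB/s

# Device density: per-die density (gigabits) times stack height.
for die_gbit, stack_height in [(8, 4), (16, 8), (32, 16)]:
    device_gb = die_gbit * stack_height / 8
    print(f"{die_gbit}Gb die, {stack_height}-high: {device_gb:.0f} GB")
# 8Gb die, 4-high: 4 GB    (minimum configuration)
# 32Gb die, 16-high: 64 GB (maximum configuration)
```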

"With its enhanced performance and reliability properties, HBM3 will support new applications that require huge memory bandwidth and capacity," NVIDIA Technical Marketing Director and Jedec HBM Chair Barry Wagner said.

"HBM3 will enable the industry to achieve higher performance thresholds, improve reliability and reduce energy consumption," Mark Montierth, Vice President and General Manager of Magic High Performance Memory and Network. "When working with JEDEC members, we use Miguang to optimize market-leading computing platforms in providing a long history of advanced memory stacking and packaging solutions."

"With the continuous advancement of HPC and AI applications, the demand for higher performance and higher energy is more faster than ever, with the current HBM3 JEDEC standard release, SK HYNIX is very happy to provide our customers. With the current high-top bandwidth and optimal memory, it has increased robustness by using enhanced ECC scheme. Sk Hercuo is proud to become a member of Jedec, so I am very happy to continue to build with our industry partners. Powerful HBM ecosystem, and provides ESG and TCO value for our customers, "said Uksong Kang, Vice President of SK Hercules DRAM Product Plan.

"For more than a decade, Synopsys has always been a positive contributor of JEDEC to help promote the development and adoption of the most advanced memory interfaces such as HBM3, DDR5 and LPDDDR5 in a series of emerging applications. Koeter said. "Synopsys HBM3 IP and Verification Solutions have been adopted by leadership, accelerating this new interface into high-performance SOC and supports development with maximum memory bandwidth and energy efficiency." He went back.

Artificial intelligence pushes HBM growth

High bandwidth memory (HBM) is becoming increasingly mainstream. With the latest iteration of the specification, suppliers across the ecosystem are preparing implementations so that customers can begin designing, testing, and deploying systems.

The massive growth and diversity of artificial intelligence (AI) means that HBM is no longer confined to a niche market. It is even becoming cheaper, although it remains a premium memory that requires expertise to implement. As the memory interface for 3D-stacked DRAM, HBM achieves higher bandwidth in a smaller footprint than DDR4 or GDDR5 by stacking up to eight DRAM dies on an optional base die (which contains buffer circuitry and test logic), while consuming less power.
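The rough comparison below illustrates why the wide, stacked interface wins on bandwidth. The HBM figures come from this article; the DDR4 and GDDR5 interface widths and per-pin rates are typical values assumed for illustration.

```python
# Peak bandwidth of a wide-but-slow HBM interface versus the
# narrow-but-fast interfaces of commodity DRAM. DDR4/GDDR5 figures
# are typical values assumed for illustration, not from the article.

def peak_bandwidth_gb_s(width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s for a given bus width and per-pin rate."""
    return width_bits * pin_rate_gbps / 8

configs = [
    ("DDR4-3200, 64-bit channel", 64, 3.2),    #  25.6 GB/s
    ("GDDR5, 32-bit chip @ 8Gb/s", 32, 8.0),   #  32.0 GB/s
    ("HBM2, 1024-bit @ 3.2Gb/s", 1024, 3.2),   # 409.6 GB/s
    ("HBM3, 1024-bit @ 6.4Gb/s", 1024, 6.4),   # 819.2 GB/s
]
for name, width, rate in configs:
    print(f"{name}: {peak_bandwidth_gb_s(width, rate):6.1f} GB/s")
```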

Like all memory, HBM continues to make generational gains in performance and power consumption. Jinyun Kim, chief engineer on Samsung Electronics' memory product planning team, said that in moving from HBM2 to HBM3, a key change is the increase in data transfer rate from 3.2/3.6 Gb/s to 6.4 Gb/s, up to a 100% increase per pin.

The second fundamental change is that maximum capacity increases by 50%, from 16GB (8-high) to 24GB (12-high). Finally, on-die ECC is now part of the industry standard, which improves system reliability, Kim said. "This is critical to next-generation artificial intelligence and machine learning systems." The nature of HBM is that you cannot simply pull out HBM2 and drop in the latest, greatest replacement; rather, each generation of HBM contains many improvements and is matched to the latest, greatest GPUs and ASICs, Kim said. Over the years, Samsung has continuously updated its HBM product portfolio, aligning the life cycles of current- and next-generation HBM products with the needs of major partners before each engagement. "This helps fully meet the needs of backward compatibility," he said.

Designers will have to adapt to using HBM3, which Kim said is an ideal solution for system architectures that must handle the ever-growing size of AI/ML data sets. He pointed out that the maximum bandwidth per stack in HBM2 is 409 GB/s. "With HBM3, the bandwidth jumps to 819 GB/s, and the maximum density of each HBM stack increases to 24GB to manage larger data sets."
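To put those per-stack figures into system terms, here is a minimal scaling sketch. Only the 819 GB/s and 24GB per-stack figures come from the article; the stack counts per package are illustrative assumptions.

```python
# System-level scaling of the HBM3 per-stack figures quoted above.
# Stack counts per package are illustrative assumptions.

PER_STACK_BW_GB_S = 819.2   # HBM3 maximum bandwidth per stack
PER_STACK_CAP_GB = 24       # HBM3 maximum density per stack (12-high)

for stacks in (4, 6, 8):
    total_bw_tb_s = stacks * PER_STACK_BW_GB_S / 1000
    total_cap_gb = stacks * PER_STACK_CAP_GB
    print(f"{stacks} stacks: {total_bw_tb_s:.2f} TB/s, {total_cap_gb} GB")
# 4 stacks: 3.28 TB/s,  96 GB
# 6 stacks: 4.92 TB/s, 144 GB
# 8 stacks: 6.55 TB/s, 192 GB
```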

These changes enable system designers to extend into a variety of applications that were previously limited by density constraints. "We will see HBM3 adoption expand from general-purpose GPUs and FPGAs to DRAM-based caches and even CPU main memory in HPC systems," he said.

Kim said the mainstreaming of AI/ML inference is, arguably, the mainstreaming of HBM. "There is no shortage of new use cases, applications, and workloads driving next-generation HPC and data-center demand for AI/ML computing power."

Jim Handy, chief analyst at Objective Analysis, said artificial intelligence has long been the main driver of HBM in GPUs. "GPUs and AI accelerators have an incredible appetite for bandwidth, and HBM takes them where they want to go." Sometimes HBM is the most affordable option, even though it has cost as much as six times standard DRAM, a premium that has since fallen to approximately three times.

"Use HBM applications require so much computing power, so HBM is indeed a unique method," Handy said. If you try to do this with DDR, you will eventually have multiple processors instead of only one processor to complete the same work, and the processor cost will eventually exceed your cost savings in DRAM. "

One characteristic of artificial intelligence is its heavy use of matrix operations, Handy said. When GPUs proved significantly better at these than standard x86 architectures, artificial intelligence became cheaper; the next step in the evolution was moving through FPGAs toward dedicated processors. Because AI needs so much bandwidth, GPUs and FPGAs alike have been integrated with HBM.

A key characteristic of HBM is that it is impossible to pull out HBM2 and replace it with HBM3, because it is soldered down rather than socketed, but the transition should be easy for system designers, Handy said. "This is an almost linear evolution from HBM2E and earlier versions."