Samsung’s 8-Layer HBM3E Passes Quality Checks For NVIDIA’s AI Chips
8/07/2024
NVIDIA’s AI chips are about to get a significant boost in performance and efficiency. Samsung’s fifth-generation HBM chips, known as HBM3E, have successfully passed NVIDIA’s rigorous tests for use in AI accelerators. Let’s dive into the details.
The Journey to Approval
Earlier reports suggested that Samsung’s HBM3E chips had failed NVIDIA’s tests due to heat and high power consumption. However, Samsung has now confirmed that its 8-layer HBM3E design meets the necessary criteria. This achievement opens the door for Samsung to supply high-bandwidth memory chips for NVIDIA’s GPUs and AI accelerators.
Key Improvements
The HBM3E chips offer several enhancements over their predecessors (HBM3):
- Memory Bandwidth: While HBM3 provides a 1024-bit data path at a memory speed of 6.4Gb/s per pin, HBM3E increases the speed to 9.6Gb/s. This translates to a memory bandwidth of over 1200GB/s per stack, compared to HBM3’s 819GB/s.
- Power Efficiency: Samsung reworked the HBM3E design to ensure better power efficiency and improved thermals, both crucial for AI processors.
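The bandwidth figures above follow directly from the bus width and per-pin speed. A quick back-of-the-envelope sketch (using only the numbers quoted in this article, not vendor-published methodology):

```python
def hbm_bandwidth_gb_per_s(bus_width_bits: int, pin_speed_gbps: float) -> float:
    """Peak per-stack bandwidth in GB/s:
    bus width (bits) x per-pin speed (Gb/s), divided by 8 bits per byte."""
    return bus_width_bits * pin_speed_gbps / 8

# Figures from the article: both generations use a 1024-bit data path.
hbm3 = hbm_bandwidth_gb_per_s(1024, 6.4)   # HBM3 at 6.4 Gb/s per pin
hbm3e = hbm_bandwidth_gb_per_s(1024, 9.6)  # HBM3E at 9.6 Gb/s per pin
print(f"HBM3:  {hbm3:.1f} GB/s")   # ~819 GB/s
print(f"HBM3E: {hbm3e:.1f} GB/s")  # ~1229 GB/s
```

Plugging in the article’s numbers reproduces both quoted figures: roughly 819GB/s for HBM3 and just under 1229GB/s (“over 1200GB/s”) for HBM3E.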
The Road Ahead
Although an official supply deal between Samsung and NVIDIA has yet to be signed, Samsung is expected to start supplying HBM3E chips in Q4 2024. Meanwhile, SK Hynix, another major HBM supplier, is already shipping 12-layer HBM3E chips for NVIDIA’s current and next-gen GPUs.
Micron, the third main HBM supplier, is also reportedly providing HBM3E chips to NVIDIA. As demand for HBM chips continues to rise, HBM3E is poised to play a significant role in AI, machine learning, and data analytics workloads.
In summary, Samsung’s 8-layer HBM3E chips passing NVIDIA’s quality checks marks a crucial milestone, benefiting both companies and advancing AI technology.