High Bandwidth Memory (HBM) is a computer memory interface for 3D-stacked synchronous dynamic random-access memory (SDRAM) initially from Samsung, AMD and SK Hynix. It is used as RAM in some CPUs, FPGAs and supercomputers (such as the NEC SX-Aurora TSUBASA and Fujitsu A64FX). HBM achieves higher bandwidth than DDR4 or GDDR5 while using less power, and in a substantially smaller form factor. This is achieved by stacking up to eight DRAM dies and an optional base die, which may include buffer circuitry and test logic. The stack is often connected to the memory controller on a GPU or CPU through a substrate, such as a silicon interposer. Alternatively, the memory die may be stacked directly on the CPU or GPU chip. Within the stack, the dies are vertically interconnected by through-silicon vias (TSVs) and microbumps. HBM technology is similar in principle to, but incompatible with, the Hybrid Memory Cube (HMC) interface developed by Micron Technology. The HBM memory bus is very wide compared to other DRAM memories such as DDR4 or GDDR5.
An HBM stack of four DRAM dies (4-Hi) has two 128-bit channels per die, for a total of eight channels and a width of 1024 bits in total. A graphics card/GPU with four 4-Hi HBM stacks would therefore have a memory bus with a width of 4096 bits. In comparison, the bus width of GDDR memories is 32 bits per channel, with 16 channels for a graphics card with a 512-bit memory interface. HBM supports up to 4 GB per package. The larger number of connections to the memory, relative to DDR4 or GDDR5, required a new method of connecting the HBM memory to the GPU (or other processor). AMD and Nvidia have both used purpose-built silicon chips, called interposers, to connect the memory and GPU. This interposer has the added advantage of requiring the memory and processor to be physically close, reducing memory paths. However, as semiconductor device fabrication is significantly more expensive than printed circuit board manufacture, this adds cost to the final product.
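The bus-width arithmetic above can be summarized in a minimal Python sketch; the figures come from the text, and the constant and function names are illustrative only, not any real vendor API:

```python
# Bus-width arithmetic for HBM vs. GDDR, using the figures quoted above.
CHANNEL_WIDTH_BITS = 128   # each HBM channel has a 128-bit data bus
CHANNELS_PER_DIE = 2       # two channels per DRAM die

def hbm_stack_width(dies: int) -> int:
    """Total bus width of one HBM stack, in bits (e.g. 4-Hi -> 1024)."""
    return dies * CHANNELS_PER_DIE * CHANNEL_WIDTH_BITS

# One 4-Hi stack: 4 dies x 2 channels x 128 bits = 1024 bits
assert hbm_stack_width(4) == 1024

# A GPU with four 4-Hi stacks: 4 x 1024 = 4096-bit memory bus
assert 4 * hbm_stack_width(4) == 4096

# For comparison, GDDR: 16 channels of 32 bits = 512-bit interface
assert 16 * 32 == 512
```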
The HBM DRAM is tightly coupled to the host compute die with a distributed interface. The interface is divided into independent channels. The channels are fully independent of one another and are not necessarily synchronous to each other. The HBM DRAM uses a wide-interface architecture to achieve high-speed, low-power operation. Each channel interface maintains a 128-bit data bus operating at double data rate (DDR). HBM supports transfer rates of 1 GT/s per pin (transferring 1 bit), yielding an overall package bandwidth of 128 GB/s. The second generation of High Bandwidth Memory, HBM2, also specifies up to eight dies per stack and doubles pin transfer rates up to 2 GT/s. Retaining 1024-bit-wide access, HBM2 is able to reach 256 GB/s memory bandwidth per package. The HBM2 spec allows up to 8 GB per package. HBM2 is predicted to be particularly useful for performance-sensitive consumer applications such as virtual reality. On January 19, 2016, Samsung announced early mass production of HBM2, at up to 8 GB per stack.
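The per-package bandwidth figures follow directly from the bus width and the per-pin transfer rate. A minimal sketch of that arithmetic (the function name is illustrative, not part of any specification):

```python
def package_bandwidth_gbs(bus_width_bits: int, transfer_rate_gts: float) -> float:
    """Peak bandwidth in GB/s: width (bits) x rate (GT/s) / 8 bits per byte."""
    return bus_width_bits * transfer_rate_gts / 8

# First-generation HBM: 1024-bit access at 1 GT/s per pin -> 128 GB/s
assert package_bandwidth_gbs(1024, 1.0) == 128.0

# HBM2 doubles the per-pin rate to 2 GT/s -> 256 GB/s per package
assert package_bandwidth_gbs(1024, 2.0) == 256.0
```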
In late 2018, JEDEC announced an update to the HBM2 specification, providing for increased bandwidth and capacities. Up to 307 GB/s per stack (2.5 Tbit/s effective data rate) is now supported in the official specification, though products operating at this speed had already been available. Additionally, the update added support for 12-Hi stacks (12 dies), making capacities of up to 24 GB per stack possible. On March 20, 2019, Samsung announced their Flashbolt HBM2E, featuring eight dies per stack and a transfer rate of 3.2 GT/s, providing a total of 16 GB and 410 GB/s per stack. On August 12, 2019, SK Hynix announced their HBM2E, featuring eight dies per stack and a transfer rate of 3.6 GT/s, providing a total of 16 GB and 460 GB/s per stack. On July 2, 2020, SK Hynix announced that mass production had begun. In October 2019, Samsung announced their 12-layered HBM2E. In late 2020, Micron unveiled that the HBM2E standard would be updated, and alongside that they unveiled the next standard, known as HBMnext (later renamed to HBM3).
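The same width-times-rate arithmetic reproduces the HBM2E figures quoted above. The helper below repeats the sketch from the previous paragraph under the assumption of a 1024-bit stack interface; the names are illustrative:

```python
def package_bandwidth_gbs(bus_width_bits: int, transfer_rate_gts: float) -> float:
    """Peak bandwidth in GB/s: width (bits) x rate (GT/s) / 8 bits per byte."""
    return bus_width_bits * transfer_rate_gts / 8

# Samsung Flashbolt HBM2E: 1024 bits at 3.2 GT/s -> 409.6, quoted as 410 GB/s
print(package_bandwidth_gbs(1024, 3.2))   # 409.6

# SK Hynix HBM2E: 1024 bits at 3.6 GT/s -> 460.8, quoted as 460 GB/s
print(package_bandwidth_gbs(1024, 3.6))   # 460.8
```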