Multi-die vs. multi-PCB to increase memory density

Netlist Planar-X – cheaper 32GB RDIMMs and VLP

UPDATE: 07/06/2012 – VMware certifies Netlist as sole memory vendor

With increasing memory capacity, there is a need to fit a greater number of memory packages (which hold the DRAM die) onto the limited space on a memory module.

One way is to minimize the number of memory packages – by fitting more DRAM dies in a memory package (multi-die).

The other is to increase the real-estate by using a sandwich of multiple PCBs to build the memory module (multi-PCB).
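
To make the trade-off concrete, here is a minimal sketch of the package-count arithmetic (assuming a standard ECC module, which stores 9 bits for every 8 bits of data; the helper name packages_needed is just for illustration):

def packages_needed(module_gb, die_gbit, dies_per_package, ecc=True):
    """Rough number of memory packages needed for a DIMM.

    module_gb        - usable (data) capacity of the module in GB
    die_gbit         - density of one DRAM die in Gbit
    dies_per_package - 1 for SDP (monolithic), 2 for DDP, 4 for QDP
    ecc              - ECC modules store 9 bits per 8 bits of data
    """
    total_gbit = module_gb * 8 * (9 / 8 if ecc else 1)  # data bits + ECC bits
    dies = total_gbit / die_gbit                         # DRAM dies required
    return dies / dies_per_package                       # packages required

# A 32GB ECC module needs 288 Gbit of DRAM, i.e. 72 x 4Gbit dies:
print(packages_needed(32, 4, 1))   # 72.0 packages with SDP (monolithic)
print(packages_needed(32, 4, 2))   # 36.0 packages with DDP

Multi-die halves the package count at the same die density; multi-PCB instead provides the extra board space to carry the larger package count.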

Pricing

Multi-PCB allows use of monolithic memory where DDP would otherwise be used. It can thus lead to a cheaper version of the product.

Currently multi-die is generally more expensive than the monolithic version – e.g. one 4Gbit x 2 DDP package will be more expensive than two 4Gbit monolithic memory packages.

However multi-die may become cheaper in the future with improvements in manufacturing processes.
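
As a rough illustration of that pricing trade-off (the prices and the 30% DDP premium below are purely hypothetical placeholders, not market data), the multi-PCB route stays cheaper as long as the DDP packaging premium, summed over all packages on the module, exceeds the cost of the extra PCB:

# Hypothetical, illustrative figures only - not actual market prices.
price_sdp_4gbit = 10.0                       # one 4Gbit monolithic (SDP) package
ddp_premium     = 1.3                        # DDP currently costs more than 2 x SDP
price_ddp_8gbit = 2 * price_sdp_4gbit * ddp_premium
extra_pcb_cost  = 20.0                       # added cost of the second PCB and assembly

cost_multi_die = 36 * price_ddp_8gbit                    # 36 x 8Gbit DDP packages
cost_multi_pcb = 72 * price_sdp_4gbit + extra_pcb_cost   # 72 x 4Gbit SDP packages, 2 PCBs

print(cost_multi_die)   # 936.0
print(cost_multi_pcb)   # 740.0 - cheaper while the DDP premium holds

If the DDP premium shrinks toward parity (ddp_premium close to 1.0), the multi-die build becomes the cheaper of the two, which is the future scenario described above.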

Multi-die – DDP, QDP and TSV/3DS/Memory Cube

Multi-die refers to the number of DRAM dies you place in a memory package. Dual-die means you have two DRAM dies in one memory package.

You can currently get memory modules built using dual-die packaging (DDP), or quad-die packaging (QDP) memory packages. This reduces the number of memory packages needed to build a memory module.

If you only have one DRAM die per package, that is called single-die packaging (SDP) or monolithic packaging.

In order to build a multi-die memory package, you have to tackle a number of issues:

– thermal issues – cooling multiple dies in one package
– uneven heating, which causes non-uniform performance across the dies in the package
– asymmetrical signal lines to the different DRAM dies within the package, which lead to signal skew and can increase overall latency

A number of techniques are being researched to reduce these problems, variously described as:

– TSV – through-silicon via – vertical interconnects through the silicon that shorten the signal lines within the package
– 3DS – 3D stacking of DRAM die, typically built on TSV interconnects
– Memory Cube – e.g. the Hybrid Memory Cube, another TSV-based stack of DRAM die

There is still a lot of work to be done, but some of these techniques are already being used to build DDP and QDP memory packages.

Because of the increased manufacturing complexity, these packages are generally more expensive than the monolithic variety – for example, DDP is more expensive than 2 x SDP.

However, future DDP memory packages could become cheaper to use, as described here:

https://ddr3memory.wordpress.com/2012/06/25/ddp-vs-monolithic-memory-packages/
DDP vs. monolithic memory packages and their impact
June 25, 2012

Example of multi-die and multi-PCB

Currently 32GB RDIMMs and 32GB LRDIMMs use 4Gbit x 2 DDP memory packages to build 32GB memory modules.

– 32GB RDIMMs use 4Gbit x 2 DDP memory packages
– 32GB LRDIMMs use 4Gbit x 2 DDP memory packages

Netlist’s 32GB HyperCloud is built using Planar-X IP – which creates a sandwich of PCBs to increase the real-estate available on a memory module. This allows the use of lower-density (and cheaper) 4Gbit monolithic (SDP) memory packages – although many more packages are needed, the sandwich of PCBs provides the space to fit them all.

– 32GB HyperCloud uses 4Gbit SDP memory packages with Planar-X IP

This should make it cheaper to produce than 32GB RDIMMs and 32GB LRDIMMs.
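
A quick sanity check of the package counts involved (same arithmetic as in the sketch above, treating packages-per-PCB as a rough proxy for the board real-estate needed):

# 32GB ECC module = 32 * 8 * 9/8 = 288 Gbit of DRAM
rdimm_ddp_packages      = 288 // 8    # 36 x 8Gbit (4Gbit x 2 DDP) packages on one PCB
hypercloud_sdp_packages = 288 // 4    # 72 x 4Gbit monolithic (SDP) packages
hypercloud_per_pcb      = hypercloud_sdp_packages // 2   # Planar-X sandwich of 2 PCBs

print(rdimm_ddp_packages)   # 36 packages on the single RDIMM/LRDIMM PCB
print(hypercloud_per_pcb)   # 36 packages per PCB across the 2-PCB sandwich

The per-PCB loading works out comparable, which is what lets the cheaper monolithic packages be used.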

The Netlist 32GB HyperCloud is expected to be available mid-2012 as the IBM HCDIMM and HP HDIMM, according to Netlist's most recent conference call.

Netlist's patent documents for Planar-X also suggest the ability to create a sandwich of 4 PCBs. Netlist has been using Planar-X for a while, but only with 2 PCBs; if they were to move to 4 PCBs, that would allow production of 64GB HyperCloud or 64GB RDIMMs using 4Gbit monolithic (SDP) memory packages, as sketched below.

– 64GB HyperCloud using 4Gbit SDP memory packages in the future
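
Extending the same arithmetic to a 4-PCB sandwich (a sketch of the scaling implied by the patent documents, not a product specification):

# 64GB ECC module = 64 * 8 * 9/8 = 576 Gbit of DRAM
sdp_packages_64gb = 576 // 4                  # 144 x 4Gbit monolithic (SDP) packages
per_pcb_4_stack   = sdp_packages_64gb // 4    # spread across a 4-PCB sandwich

print(sdp_packages_64gb)   # 144
print(per_pcb_4_stack)     # 36 per PCB - the same loading as the 2-PCB 32GB build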

Combining multi-die with multi-PCB

VLP stands for very low profile – these memory modules have a lower height than standard RDIMMs.

A 16GB VLP RDIMM thus has less real-estate available on the memory module than a standard RDIMM.

Using just a multi-die approach, a 16GB VLP RDIMM would be built as either:

– 16GB VLP RDIMM using 4Gbit x 2 DDP (8Gbit DDP) memory packages
– 16GB VLP RDIMM using 2Gbit x 4 QDP (8Gbit QDP) memory packages

Netlist can leverage both multi-die and multi-PCB (2-PCB Planar-X) techniques to create a 16GB VLP RDIMM – the 2 PCBs allow them to use DRAM one density generation lower.

– 16GB VLP RDIMM using 2Gbit x 2 DDP (4Gbit DDP) and Planar-X IP (2 PCB)

2Gbit x 2 DDP would be simpler and cheaper than the 8Gbit DDP and QDP packages:

– a 2Gbit DRAM die is less expensive than a 4Gbit DRAM die – not all DRAM makers have the capability to produce 4Gbit DRAM die (Samsung does, many others don’t)
– a QDP package is more complex (thermal and connection issues) and more expensive than a DDP memory package

This allows Netlist to create a 16GB VLP RDIMM more cheaply than the competition.
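
To see where that advantage comes from, compare the three 16GB VLP builds by die density, die count and packages per PCB (a sketch using the same ECC arithmetic as above):

# 16GB ECC module = 16 * 8 * 9/8 = 144 Gbit of DRAM
builds = {
    # name                       (die Gbit, dies per package, PCBs)
    "4Gbit x 2 DDP (8Gbit pkg)":  (4, 2, 1),
    "2Gbit x 4 QDP (8Gbit pkg)":  (2, 4, 1),
    "2Gbit x 2 DDP + Planar-X":   (2, 2, 2),
}

for name, (die_gbit, dies_per_pkg, pcbs) in builds.items():
    dies = 144 // die_gbit
    packages = dies // dies_per_pkg
    print(f"{name}: {dies} dies, {packages} packages, {packages // pcbs} per PCB")

# 4Gbit x 2 DDP (8Gbit pkg): 36 dies, 18 packages, 18 per PCB
# 2Gbit x 4 QDP (8Gbit pkg): 72 dies, 18 packages, 18 per PCB
# 2Gbit x 2 DDP + Planar-X: 72 dies, 36 packages, 18 per PCB

All three land at the same 18 packages per PCB, but the Planar-X build gets there with the cheaper 2Gbit die and the simpler DDP package.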

For a comparison of the 2Gbit x 2 DDP (multi-die plus Planar-X) approach vs. the 4Gbit x 2 DDP and 2Gbit x 4 QDP approaches to building a 16GB VLP RDIMM, see:

https://ddr3memory.wordpress.com/2012/07/06/vlp-rdimms-for-virtualization-on-blade-servers/
VLP RDIMMs for virtualization on Blade Servers
July 6, 2012
