DDP vs. monolithic memory packages and their impact

32GB RDIMMs/LRDIMMs vs. 32GB HyperCloud

Both the 32GB RDIMMs and the 32GB LRDIMMs being offered today use 4Gbit x 2 DDP memory packages.

The 32GB HyperCloud uses 4Gbit monolithic memory packages.

DDP packages place two DRAM dies in a single memory package – here, 4Gbit DRAM die x 2. They are used when you need to increase memory density but don't have enough memory module real estate to accommodate the corresponding number of monolithic memory packages.
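
As a rough illustration of the real-estate argument, here is the package-count arithmetic for a 32GB module built from 4Gbit dies. The x4 ECC organization assumed below – 18 dies per rank, 16 of them carrying data – is my assumption, not something stated in the vendor material:

```python
# Package-count arithmetic for a 32GB ECC DIMM built from 4Gbit DRAM dies.
# ASSUMPTION: standard x4 ECC organization - 18 x4 dies per rank (16 data + 2 ECC).

GBIT_PER_DIE = 4
DIES_PER_RANK = 18                 # x4 organization: 72 bits wide / 4 bits per die
DATA_DIES_PER_RANK = 16            # the other 2 dies per rank hold ECC
TARGET_GB = 32                     # usable (non-ECC) capacity

gb_per_rank = DATA_DIES_PER_RANK * GBIT_PER_DIE / 8      # 8 GB of data per rank
ranks_needed = int(TARGET_GB / gb_per_rank)               # 4 ranks
total_dies = ranks_needed * DIES_PER_RANK                 # 72 dies

monolithic_packages = total_dies        # 1 die per package -> 72 packages
ddp_packages = total_dies // 2          # 2 dies per package -> 36 packages

print(f"{total_dies} dies -> {monolithic_packages} monolithic packages "
      f"or {ddp_packages} DDP packages")
```

A standard DIMM raw card has on the order of 36 DRAM package sites, so 72 monolithic packages do not fit without either stacking dies (DDP) or opening up board real estate some other way – which is what Netlist does with Planar-X, as discussed below.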

DDP costs more than monolithic:

http://www.invensas.com/Company/Documents/Invensas_IWLPC2011_NearTermSolutions3DMemoryStackingDRAM.pdf
Near Term Solutions for 3D Memory Stacking (DRAM)
Simon McElrea, Invensas Corporation

pg. 19 shows the packaging costs for monolithic (SDP, single-die package) vs. DDP vs. other alternatives (like TSV, i.e. Through-Silicon Via).

That is, a DDP memory package costs more than two monolithic packages, because of the additional complexity of fitting two DRAM dies (and interfacing them) in one package.

In addition, there are asymmetries within the DDP package which lead to:

– asymmetrical line lengths
– uneven thermal effects (which affect the two dies differently, so they perform differently)
– the placement of two DRAM dies within the same package possibly making load-reduction efforts more difficult

In contrast, Netlist is able to make a same-sized 32GB memory module using 4Gbit monolithic memory packages. Netlist has previously pointed out some of the problems with the use of DDP memory packages:

http://www.theregister.co.uk/2009/11/11/netlist_hypercloud_memory/
Netlist goes virtual and dense with server memory
So much for that Cisco UCS memory advantage
By Timothy Prickett Morgan
Posted in Servers, 11th November 2009 18:01 GMT

The company also developed a memory packaging technology called Planar-X, which allows for two PCBs loaded with memory chips to be packaged together relatively inexpensively to share a single memory slot. This technique is cheaper and more reliable, according to Duran, than some of the dual-die packaging techniques memory module makers use to make dense memory cards out of low density and cheaper memory chips.

Using the Planar-X double-board designs, Netlist can take 1Gb memory chips and make an 8GB memory module that costs only 20 to 30 per cent more than a standard 4GB module using 1Gb chips; using 2Gb chips, it can make a 16GB module, something no one else can do yet.

This is similar to the thermal issues with stacked designs (like those MetaRAM used), which Netlist had earlier pointed out:

http://www.netlist.com/technology/technology.html

While some packaging companies stack devices to double capacity, Netlist achieves the same result without stacking, resulting in superior signal integrity and thermal efficiency. Stacking components results in unequal cooling of devices, causing one device to run slower than the other in the stack. This often results in module failures in high-density applications.

The density limitation is solved by proprietary board designs that use embedded passives to free up board real estate, permitting the assembly of more memory components on the substrate. The performance of the memory module is enhanced by fine-tuning the board design to minimize signal reflections, noise, and clock skews.

32GB RDIMMs use 4Gbit x 2 (DDP) memory packages

The description suggests the 16GB RDIMM uses 2Gbit x 2 DDP, while the 32GB RDIMM uses 4Gbit x 2 DDP (that is, two 4Gbit DRAM dies in one memory package):

http://www.samsung.com/global/business/semiconductor/support/brochures/downloads/memory/samsung_LRDIMM.pdf

Lineup: 32GB (4Gb DDP), 16GB (2Gb DDP)

32GB LRDIMMs use 4Gbit x 2 (DDP) memory packages

The description suggests the 32GB LRDIMM uses 4Gbit x 2 DDP (that is, two 4Gbit DRAM dies in one memory package):

http://www.inphi.com/lrdimm/images/pdfs/LRDIMM-whitepaper.pdf

pg. 5:

LRDIMM capacities up to 32GB are possible today with 4Rx4 modules using 4 Gb, DDP (dual-die package) DRAM.
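
As a quick sanity check on these quoted configurations – again assuming the usual x4 ECC organization with 18 dies per rank (16 data + 2 ECC), which the quotes do not spell out – the capacities work out as follows:

```python
# Sanity check of the quoted module capacities, assuming the usual x4 ECC
# organization (18 dies per rank, 16 carrying data). The organization is my
# assumption, not something taken from the Samsung or Inphi documents.

def module_capacity_gb(gbit_per_die: int, ranks: int) -> float:
    """Usable (non-ECC) capacity of an x4 ECC DIMM with 16 data dies per rank."""
    return ranks * 16 * gbit_per_die / 8

print(module_capacity_gb(4, 4))   # 32.0 -> 32GB from 4 ranks of 4Gbit x4 dies (4Rx4)
print(module_capacity_gb(2, 4))   # 16.0 -> 16GB from 4 ranks of 2Gbit x4 dies
```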

32GB HyperCloud (HCDIMM/HDIMM) uses 4Gbit monolithic memory packages

In contrast, the Netlist 32GB HyperCloud uses 4Gbit monolithic memory packages – the same kind used on the 16GB RDIMMs (2-rank).

Netlist uses its "Planar-X" IP to open up real estate for the greater number of memory packages on a single memory module.

The description suggests it is 2-rank (virtual) – just like the 16GB HCDIMM – that it uses 4Gbit monolithic DRAM dies, and that it uses Netlist's Planar-X IP:

http://www.netlist.com/products/hypercloud/

NMD4G7G31G0DHDxx 32GB 1333MHz 2Rx4 4Gb Planar-X LP – NEW
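
The "2Rx4" here is a virtual rank count: with 4Gbit monolithic dies the module physically carries 4 ranks of DRAM but presents only 2 ranks to the memory controller. Here is a minimal, generic sketch of 2:1 rank multiplication – an illustration of the general technique only, not a description of Netlist's actual HyperCloud logic:

```python
# Generic sketch of 2:1 rank multiplication: on-DIMM logic combines the host's
# virtual-rank chip-select with one extra address bit to pick one of the
# physical ranks. Illustrative only - NOT Netlist's actual HyperCloud logic.

def decode_physical_rank(virtual_rank: int, extra_addr_bit: int) -> int:
    """Map (virtual rank 0-1, one extra address bit) to a physical rank 0-3."""
    assert virtual_rank in (0, 1) and extra_addr_bit in (0, 1)
    return (virtual_rank << 1) | extra_addr_bit

# The host sees 2 ranks; the DIMM drives 4 physical chip selects internally.
for vr in (0, 1):
    for bit in (0, 1):
        print(f"virtual rank {vr}, extra bit {bit} -> "
              f"physical rank {decode_physical_rank(vr, bit)}")
```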

Cost difference: 32GB LRDIMM vs. 32GB HyperCloud

Two 4Gbit monolithic memory packages are cheaper than one 4Gbit x 2 DDP memory package.

For this reason, 32GB LRDIMMs will be more expensive than the 32GB HyperCloud (HCDIMM/HDIMM).

http://www.invensas.com/Company/Documents/Invensas_IWLPC2011_NearTermSolutions3DMemoryStackingDRAM.pdf
Near Term Solutions for 3D Memory Stacking (DRAM)
Simon McElrea, Invensas Corporation

pg. 19 shows the packaging costs for monolithic (SDP, single-die package) vs. DDP vs. other alternatives (like TSV, i.e. Through-Silicon Via).
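
As a back-of-the-envelope view of why this matters at the module level, here is a sketch using purely hypothetical per-package costs (the real figures are on pg. 19 of the Invensas deck and are not reproduced here):

```python
# Back-of-the-envelope packaging cost comparison at the module level.
# COST_SDP and COST_DDP are HYPOTHETICAL placeholders (relative units), not
# figures from the Invensas presentation; only the structure of the comparison matters.

COST_SDP = 1.00        # cost to package one monolithic (single-die) part
COST_DDP = 2.60        # cost to package one dual-die part (assumed > 2x SDP)

DIES_NEEDED = 72       # dies on a 32GB x4 ECC module built from 4Gbit dies

monolithic_module_cost = DIES_NEEDED * COST_SDP       # 72 SDP packages
ddp_module_cost = (DIES_NEEDED // 2) * COST_DDP       # 36 DDP packages

print(f"monolithic build: {monolithic_module_cost:.2f}")
print(f"DDP build:        {ddp_module_cost:.2f}")
# As long as one DDP package costs more than two SDP packages, the DDP-based
# module carries the higher packaging cost.
```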

32GB RDIMMs will be non-viable (4-rank modules experience a speed slowdown at 2 DPC and 3 DPC):

https://ddr3memory.wordpress.com/2012/06/20/non-viability-of-32gb-rdimms/
Non-viability of 32GB RDIMMs
June 20, 2012

Thus the competition will be between the 32GB LRDIMM and the 32GB HyperCloud (IBM HCDIMM/HP HDIMM) – both of which are expected to be available mid-2012 (according to Netlist's prior conference call).

Once they become available at IBM and HP, it is possible that the 32GB HCDIMM may become cheaper than both the 32GB LRDIMM and the 32GB RDIMM.

At that point the 32GB HCDIMM will hold the advantage over the 32GB LRDIMM:

– performance advantage over 32GB LRDIMM
– IP advantage over LRDIMM
– cost advantage over both 32GB LRDIMM and 32GB RDIMM

At the 16GB level, RDIMMs are slightly cheaper than LRDIMMs and HyperCloud (IBM HCDIMMs/HP HDIMMs) – both LRDIMMs and HyperCloud are priced similarly at IBM/HP.

At the 32GB level, there might be a price inversion, with the HyperCloud in the peculiar position of being cheaper than both the 32GB LRDIMM and the 32GB RDIMM.

Improvements in DDP designs

When DDP designs resolve some of these asymmetries within the DDP memory package, and when the cost of producing DDPs comes down (potentially to the point where the per-die cost of next-generation DDP designs is lower than that of monolithic packaging), these benefits will help both LRDIMMs and HyperCloud.

Here is a paper that describes a future DDP (dual-die package) design; the authors suggest it would benefit both LRDIMMs and HyperCloud.

One of the authors – Bill Gervasi – is a former Netlist employee and a former JEDEC committee chair. Here is the section where they compare the design weaknesses of the LRDIMM layout with the relative strengths of the HyperCloud design:

http://www.invensas.com/Company/Documents/Invensas_ISQED2012_CostMinimizedDoubleDieDRAMUltraHighPerformanceDDR3DDR4MultiRankServerDIMMs.pdf
Cost-minimized Double Die DRAM Packaging for Ultra-High Performance DDR3 and DDR4 Multi-Rank Server DIMMs
Richard Crisp [1], Bill Gervasi [2], Wael Zohni [1], Bel Haba [1]
[1] Invensas Corp, 2702 Orchard Parkway, San Jose, CA USA
[2] Discobolus Designs, 22 Foliate Way, Ladera Ranch, CA USA

pg. 1:
Total assembly cost is the lowest of any DDP and on a per-die basis is lower than Single Die Packaging.

pg. 3:
5. Applicability to LRDIMM and Hypercloud DIMMs

The LRDIMM differs from the RDIMM in that the DQ and DQ Strobe signals are buffered[1]. The data buffer is placed in the central region of the LRDIMM. This requires all data and data strobes to be routed from each DRAM package to the buffer and then routed back to the edge connector which demands additional routing layers versus an RDIMM. Since the LRDIMM is plugged into an edge connector, the thickness of the DIMM PCB is fixed.

Adding PCB layers necessarily requires a reduction of the thickness of the dielectric layers separating the power planes and routing layers. Unless the width of the traces is made narrower, the characteristic impedance of the etched traces is decreased and can lead to signal reflections arising from impedance discontinuities that diminish voltage and timing margin.

Trace width is limited by the precision of the control of the etching process, with such narrower traces being more costly to manufacture within tolerance. Because the DFD’s C/A bus routes on a single layer and other interconnections lay out cleanly, the layer count is reduced leading to nominal impedances being attainable with normal dimensional control keeping raw card costs from rising.

The Hypercloud architecture is similar to the LRDIMM in that the DQ and DQ Strobe signals are buffered, but unlike the LRDIMM the buffering is provided by a number of data buffer devices placed between the edge connector and the DRAM package array on the DIMM PCB[2]. The 11.5 x 11.5 mm package outline of the DFD supports placement of the buffers without requiring growth of the vertical height of the DIMM. In fact a simple modification of the RDIMM PCB will enable the Hypercloud data buffers to be mounted on the PCB making conversion of an RDIMM design to Hypercloud a straightforward matter.

Here Invensas is saying that with the LRDIMM's centralized buffer chipset, a greater number of lines have to be routed back and forth between the DRAM packages and the central buffer. Within a fixed PCB thickness this forces the use of more routing layers, so each dielectric layer becomes thinner, which lowers trace impedance and leads to signal quality issues.
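
To see why thinner dielectrics push trace impedance down, here is a rough calculation using the common IPC-2141 surface-microstrip approximation. The actual LRDIMM routing layers are striplines in a specific stackup, so the absolute numbers are only directional – the point is the trend:

```python
import math

# Rough surface-microstrip impedance via the IPC-2141 approximation:
#   Z0 ~ 87 / sqrt(er + 1.41) * ln(5.98*h / (0.8*w + t))
# h = dielectric height, w = trace width, t = trace thickness (all in mils).
# Inner DIMM layers are really striplines, so read the trend, not the values.

def microstrip_z0(h_mil: float, w_mil: float, t_mil: float = 1.4, er: float = 4.2) -> float:
    return 87.0 / math.sqrt(er + 1.41) * math.log(5.98 * h_mil / (0.8 * w_mil + t_mil))

w = 5.0                          # trace width held fixed
for h in (4.0, 3.0, 2.5):        # dielectric gets thinner as layer count grows
    print(f"h = {h} mil -> Z0 ~ {microstrip_z0(h, w):.1f} ohm")
# Z0 drops as the dielectric thins; holding Z0 constant would require narrower
# (harder to etch) traces - the tradeoff the Invensas paper describes.
```

Holding the impedance constant as the dielectric gets thinner requires narrower traces, which (as the paper notes) are harder to etch within tolerance – that is the cost and margin penalty the centralized-buffer layout imposes.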
