Second take

UPDATE: added 06/24/2012: Invensas on LRDIMM design inferiority vs. HyperCloud

Basically, LRDIMMs and DDR4 are both going to be using Netlist (NLST) IP.

This article gives a good introduction:

http://www.theregister.co.uk/2011/11/30/netlist_32gb_hypercloud_memory/
Netlist puffs HyperCloud DDR3 memory to 32GB
DDR4 spec copies homework
By Timothy Prickett Morgan
Posted in Servers, 30th November 2011 20:51 GMT

Here is a CMTL labs comparison of HCDIMM and LRDIMM that goes into more detail:

CMTL HCDIMM Outperforms LRDIMM in “Big Data” & “Big Memory” Applications White Paper
http://www.netlist.com/products/hypercloud/whitepapers/hcdimm_vs_lrdimm_whitepaper_march_2012.pdf

UPDATE: added 06/24/2012: Invensas on LRDIMM design inferiority vs. HyperCloud

Here is a paper that describes a future double die packaging (DDP) design. One of the authors, Bill Gervasi, is a former Netlist employee and a former JEDEC committee chair. The paper discusses its applicability to both LRDIMMs and HyperCloud. Here is the section where they contrast the design weaknesses of the LRDIMM with the strengths of the HyperCloud design:

http://www.invensas.com/Company/Documents/Invensas_ISQED2012_CostMinimizedDoubleDieDRAMUltraHighPerformanceDDR3DDR4MultiRankServerDIMMs.pdf
Cost-minimized Double Die DRAM Packaging for Ultra-High Performance DDR3 and DDR4 Multi-Rank Server DIMMs
Richard Crisp (1), Bill Gervasi (2), Wael Zohni (1), Bel Haba (1)
1 Invensas Corp, 2702 Orchard Parkway, San Jose, CA USA
2 Discobolus Designs, 22 Foliate Way, Ladera Ranch, CA USA

pg. 3:
5. Applicability to LRDIMM and Hypercloud DIMMs

The LRDIMM differs from the RDIMM in that the DQ and DQ Strobe signals are buffered[1]. The data buffer is placed in the central region of the LRDIMM. This requires all data and data strobes to be routed from each DRAM package to the buffer and then routed back to the edge connector which demands additional routing layers versus an RDIMM. Since the LRDIMM is plugged into an edge connector, the thickness of the DIMM PCB is fixed.

Adding PCB layers necessarily requires a reduction of the thickness of the dielectric layers separating the power planes and routing layers. Unless the width of the traces is made narrower, the characteristic impedance of the etched traces is decreased and can lead to signal reflections arising from impedance discontinuities that diminish voltage and timing margin.

Trace width is limited by the precision of the control of the etching process, with such narrower traces being more costly to manufacture within tolerance. Because the DFD’s C/A bus routes on a single layer and other interconnections lay out cleanly, the layer count is reduced leading to nominal impedances being attainable with normal dimensional control keeping raw card costs from rising.

The Hypercloud architecture is similar to the LRDIMM in that the DQ and DQ Strobe signals are buffered, but unlike the LRDIMM the buffering is provided by a number of data buffer devices placed between the edge connector and the DRAM package array on the DIMM PCB[2]. The 11.5 x 11.5 mm package outline of the DFD supports placement of the buffers without requiring growth of the vertical height of the DIMM. In fact a simple modification of the RDIMM PCB will enable the Hypercloud data buffers to be mounted on the PCB making conversion of an RDIMM design to Hypercloud a straightforward matter.

Here Invensas is saying that with the LRDIMM's centralized buffer chipset, a greater number of signal lines must be routed back and forth between the DRAM packages and the central buffer. This forces the use of more PCB layers, each of which then has to be thinner, which leads to signal-quality issues.
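To put a rough number on the impedance point (my own back-of-envelope sketch, not from the Invensas paper): the widely used IPC-2141 approximation for a surface microstrip trace shows how thinning the dielectric drops the characteristic impedance unless the trace is also narrowed. The stackup dimensions and the FR-4 dielectric constant below are illustrative assumptions only.

import math

def microstrip_z0(h_mm, w_mm, t_mm=0.035, er=4.2):
    # IPC-2141 approximation for a surface microstrip trace:
    # h = dielectric thickness to the reference plane, w = trace width,
    # t = copper thickness, er = relative permittivity (FR-4 is roughly 4.2).
    return (87.0 / math.sqrt(er + 1.41)) * math.log(5.98 * h_mm / (0.8 * w_mm + t_mm))

# A nominal ~50 ohm trace on a thicker dielectric.
print(round(microstrip_z0(h_mm=0.15, w_mm=0.24), 1))  # ~50 ohm

# More routing layers in the same PCB thickness -> thinner dielectric, same trace width.
print(round(microstrip_z0(h_mm=0.10, w_mm=0.24), 1))  # ~36 ohm: impedance drops

# Recovering ~50 ohm on the thin dielectric needs a narrower, harder-to-etch trace.
print(round(microstrip_z0(h_mm=0.10, w_mm=0.15), 1))  # ~50 ohm again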

LRDIMM buffer chipsets are currently made only by Inphi, and LRDIMMs have high latency and under-perform as well. Inphi has failed in its attempt to copy Netlist IP (Inphi hired the former MetaRAM CEO as a “Technical Advisor”; MetaRAM also conceded to Netlist some years back and went out of business).

DDR4 does a better job, since it copies the Netlist IP more closely (expect JEDEC to license it prior to DDR4 finalization).

Load-reduction and Rank-multiplication

The technology in LRDIMMs and NLST HyperCloud is called “load reduction” and “rank multiplication”: a buffer isolates the DRAM loads from the memory bus (load reduction), and it makes several physical ranks appear to the memory controller as fewer logical ranks (rank multiplication).
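Here is a toy sketch of 2:1 rank multiplication (my own illustration, not the actual Netlist or JEDEC decode logic), where the buffer combines a chip select with a repurposed address bit to pick one of four physical ranks:

def physical_rank(logical_cs, sub_rank_bit):
    # Toy 2:1 rank multiplication: the host drives 2 logical chip selects,
    # and the on-DIMM buffer combines each with one repurposed address bit
    # to select one of 4 physical ranks of DRAM.
    return (logical_cs << 1) | sub_rank_bit

# The controller sees only 2 ranks (and only the buffer's electrical load),
# while the DIMM actually carries 4 physical ranks.
for cs in (0, 1):
    for bit in (0, 1):
        print(f"CS{cs}, sub-rank bit {bit} -> physical rank {physical_rank(cs, bit)}")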

16GB memory modules

Currently you only need this technology at 3 DPC (DIMMs per channel) with Romley, as 2-rank 16GB RDIMMs are cheap and work well at 1 DPC and 2 DPC.

3 DPC is usually used for virtualization/data-center servers (lots of memory per server for all the VMs you want to run).

32GB memory modules

When 32GB RDIMMs arrive, however, the need for this technology will shift to 2 DPC as well (i.e. it becomes even more mainstream), because 32GB RDIMMs will be 4-rank (2-rank versions cannot be made for a couple of years because of the lack of an 8Gbit DRAM die).
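The arithmetic behind that, as a sketch under the usual assumptions of a 64-bit-wide data rank built from x4 DRAM devices with one monolithic die per package (the helper function is mine, not an industry formula):

def ranks_needed(module_gb, die_gbit, device_width=4, data_bits=64):
    # Ranks required for a registered ECC DIMM of a given capacity, assuming
    # x4 DRAM devices and one monolithic die per package (ECC devices are
    # extra and do not count toward usable capacity).
    devices_per_rank = data_bits // device_width   # 16 x4 devices carry the data
    rank_gb = devices_per_rank * die_gbit / 8      # Gbit per device -> GB per rank
    return int(module_gb / rank_gb)

print(ranks_needed(32, die_gbit=4))  # 4 -> a 32GB RDIMM must be 4-rank with 4Gbit dies
print(ranks_needed(32, die_gbit=8))  # 2 -> 2-rank 32GB becomes possible with 8Gbit dies
print(ranks_needed(16, die_gbit=4))  # 2 -> today's cheap 2-rank 16GB RDIMM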

Availability of LRDIMMs vs. NLST HyperCloud

Both LRDIMMs and NLST HyperCloud are available from IBM and HP.

HyperCloud is sold as the IBM HCDIMM and the HP HDIMM (HP Smart Memory HyperCloud), and it delivers 1333MHz at 3 DPC.

HP ships it fully loaded at 24 DIMMs per server on its high-volume virtualization/data-center servers, the HP DL360p and HP DL380p.

Similarly, it is offered on the IBM System x3650 M4 series of servers (24 DIMMs per 2-socket server).
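For a sense of scale (my own arithmetic, assuming the standard Romley layout of 4 memory channels per socket in a 2-socket server):

def server_memory(sockets=2, channels_per_socket=4, dpc=3, dimm_gb=16):
    # DIMM slots and total memory for a Romley-class 2-socket server.
    slots = sockets * channels_per_socket * dpc
    return slots, slots * dimm_gb

print(server_memory(dpc=3, dimm_gb=16))  # (24, 384): 24 slots, 384GB with 16GB HCDIMMs
print(server_memory(dpc=2, dimm_gb=16))  # (16, 256): what you give up by stopping at 2 DPC
print(server_memory(dpc=3, dimm_gb=32))  # (24, 768): fully loaded once 32GB modules arrive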
