16GB LRDIMMs vs. 16GB RDIMMs (2-rank) and 16GB HCDIMMs
16GB LRDIMMs do not outperform 16GB RDIMMs (2-rank) at 1 DPC or 2 DPC.
Nor do they outperform the RDIMMs at 3 DPC. In contrast, 16GB HyperCloud outperforms RDIMMs at 3 DPC.
Therefore, at the 16GB level, LRDIMMs:
– add no extra benefit
– are slightly costlier
– have the high latency issues associated with LRDIMMs
For more on the LRDIMM high latency issues:
LRDIMM latency vs. DDR4
May 31, 2012
HP/Samsung comments on non-viability of 16GB LRDIMM
This is confirmed by HP/Samsung comments at the IDF conference on LRDIMMs (on the Inphi LRDIMM blog main webpage):
Webcast of HP, Samsung, ANSYS, Intel and Inphi presentation at IDF 2011 for HPC applications
In the video from the IDF conference (on the Inphi LRDIMM blog), HP and Samsung said they will not be pushing 16GB LRDIMMs, because they cannot compete with 2-rank 16GB RDIMMs. They would, however, push 32GB LRDIMMs, which do have some utility vs. 32GB RDIMMs (available only in 4-rank for a few years, since 8Gbit DRAM die won’t be available for a while, if ever).
IDTI confirms non-viability of 16GB LRDIMMs
IDTI confirms the analysis made here for the non-viability of 16GB LRDIMMs vs. 16GB RDIMMs (2-rank) – 16GB LRDIMMs cannot outperform 2-rank 16GB RDIMMs.
This is because at 3 DPC the 16GB LRDIMMs cannot do better than the 16GB RDIMMs. Compare this to 16GB HyperCloud, which outperforms the 16GB RDIMMs at 3 DPC by delivering 1333MHz at 3 DPC.
IDTI comments – referring to 16GB LRDIMMs and contrasting them with the situation with 32GB LRDIMMs:
IDT Third Quarter Fiscal Year 2012 Financial Results
Jan 30, 2012 at 1:30 PM PT
DISCLAIMER: please refer to the original conference call or transcript – only use the following as guidance to find the relevant section
at the 42:40 minute mark ..
So if you go through the analysis .. which I am not going to bore you with here .. and you look at the benefits of LRDIMM in Sandy Bridge, the cost-performance tradeoff is not .. uh .. not very favorable.
It turns out – now just give you the answer .. uh .. that you can build a DIMM using .. uh .. uh .. 64 .. I’m sorry 4Gbit DRAM and standard Registered DIMM (RDIMM) that has .. really a lower cost and roughly equal performance to what you would get with LRDIMM – that’s why the attach rates for LRDIMM in Sandy Bridge is relatively small.
The only place where LRDIMM will give you a performance tradeoff in the Sandy Bridge generation is in the 32GB DIMMs, not in the 16GB DIMMs.
At the 16GB level, HyperCloud outperforms RDIMMs at 3 DPC.
For this reason, if you are buying 16GB memory modules, you would buy:
– 16GB RDIMM (2-rank) if you need to populate at 1 DPC or 2 DPC
– 16GB HyperCloud if you need to populate at 3 DPC (virtualization/cloud computing/data centers), or if you think you may need to upgrade to 3 DPC later
For a 2-socket Romley server (2 processors), 4 memory channels per processor and DIMMs populated at 2 DPC:
2 processors x 4 memory channels per processor x 2 DPC = 16 DIMM slots
Using 16GB memory modules, that is:
16 DIMM slots x 16GB = 256GB
So if you need more than 2 DPC – i.e. more than 256GB in a 2-socket server using 16GB modules – you HAVE to use a “load reduction” and “rank multiplication” solution, and there HyperCloud trumps LRDIMM.
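The slot and capacity arithmetic above can be sketched as a quick calculation (a minimal illustration using the server parameters stated above: 2 sockets, 4 memory channels per processor, 16GB modules):

```python
# Capacity arithmetic for a 2-socket Romley server, as described above.
sockets = 2            # processors
channels_per_cpu = 4   # memory channels per processor
dimm_size_gb = 16      # 16GB modules

for dpc in (1, 2, 3):  # DIMMs per channel
    slots = sockets * channels_per_cpu * dpc
    total_gb = slots * dimm_size_gb
    print(f"{dpc} DPC: {slots} DIMM slots, {total_gb}GB total")
```

At 2 DPC this gives 16 slots and 256GB, matching the figure above; going to 3 DPC (24 slots, 384GB) is where load reduction and rank multiplication become necessary.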
IBM/HP server offerings
Looking at the 16GB choices for the HP DL360p and DL380p series of servers:
Memory options for the HP DL360p and DL380p servers – 16GB memory modules
May 24, 2012
For the IBM x3650 M4 series of servers:
Memory options for the IBM System x3650 M4 server – 16GB memory modules
May 25, 2012
For an explanation of HyperCloud vs. LRDIMMs vs. RDIMMs:
What are IBM HCDIMMs and HP HDIMMs?
May 27, 2012