Reverting to 3 memory channels per processor – max at 2 DPC
Thanks to Daniel Bowers for clarifying the use of “socket B” Xeon E5-2400 processors in this server.
IBM has announced the x3300 M4 servers – 2-socket servers containing 12 DIMM slots.
The 12 DIMM slots suggest these run at 3 memory channels per processor (as opposed to the 4-memory-channel Romley processors seen so far on this blog), with maximum memory capacity at 2 DPC.
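The slot-count arithmetic can be sketched as a quick check (a minimal illustration; the 2-socket, 3-channel, 2-DPC figures come from the announcement as described above):

```python
# DIMM slot arithmetic for a 2-socket "socket B" Xeon E5-2400 server
# like the x3300 M4 (configuration as described in the text).
sockets = 2
channels_per_processor = 3  # vs. 4 channels on the full Romley memory subsystem
dimms_per_channel = 2       # max at 2 DPC

total_slots = sockets * channels_per_processor * dimms_per_channel
print(total_slots)  # 12 – matching the announced slot count
```

A 4-channel processor at 3 DPC would instead give 2 x 4 x 3 = 24 slots per server, which is why the 12-slot figure points to the 3-channel, 2-DPC layout.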
Industry needs availability on a wide range of servers
When 32GB HCDIMMs become available, they will be the memory module of choice in the 32GB segment.
This will be applicable to both the regular Romley servers as well as the non-Intel-POR (plan-of-record) servers like the IBM x3750 M4 server.
The reason is that:
– HCDIMMs are RDIMM-compatible
– HCDIMMs run at higher speed than LRDIMMs
– HCDIMMs outperform LRDIMMs in latency and throughput even when HCDIMMs are slowed to the same speed as LRDIMMs
Their qualification on a wide range of servers is essential for the industry to get access to the best memory available.
LRDIMM sales occur – and benchmarks
Inphi reported Q2 2012 results.
Inphi suggests sales of both 16GB LRDIMMs and 32GB LRDIMMs.
Most of these are likely 32GB LRDIMMs, since 16GB LRDIMMs are not viable vs. RDIMMs.
Inphi has been shy about reporting benchmarks for LRDIMMs – saying they will be available in the second half of 2012.
Benchmarks for LRDIMMs should have been available prior to the LRDIMM launch.
LRDIMMs exhibit 45% worse latency and 36.7% worse throughput at 3 DPC
LRDIMMs (which are a new standard and incompatible with DDR3 RDIMMs) exhibit significant performance impairment at 3 DPC compared to RDIMM-compatible HCDIMMs:
– LRDIMMs have 45% worse latency than HCDIMMs (235ns vs. 161.9ns for HCDIMMs)
– LRDIMMs have 36.7% worse throughput than HCDIMMs (40.4GB/s vs. 63.9GB/s for HCDIMMs)
And this is when HCDIMMs are slowed down to 1066MHz at 3 DPC (in order to match the lower maximum achievable speed of the LRDIMMs).
This same-speed comparison highlights the architectural weaknesses of the LRDIMM design irrespective of speed.
When compared at the MAXIMUM achievable speeds (LRDIMMs at 1066MHz at 3 DPC and HCDIMM at 1333MHz at 3 DPC):
– LRDIMMs have approx. 45% worse latency than HCDIMMs (235ns vs. 161.9ns for HCDIMMs)
– LRDIMMs have 40% worse throughput than HCDIMMs (40.4GB/s vs. 68.1GB/s for HCDIMMs)
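The percentage gaps quoted above can be recomputed directly from the raw figures in the text (a minimal check; the numbers are the ones stated above, rounded in the prose):

```python
# Latency/throughput figures at 3 DPC, as quoted in the text.
lrdimm_latency_ns = 235.0
hcdimm_latency_ns = 161.9
lrdimm_tput = 40.4        # GB/s, LRDIMM at 1066MHz
hcdimm_tput_1066 = 63.9   # GB/s, HCDIMM slowed to 1066MHz (same-speed comparison)
hcdimm_tput_1333 = 68.1   # GB/s, HCDIMM at its max 1333MHz

# "Worse by X%" is measured relative to the HCDIMM figure.
latency_gap = (lrdimm_latency_ns - hcdimm_latency_ns) / hcdimm_latency_ns
tput_gap_same_speed = (hcdimm_tput_1066 - lrdimm_tput) / hcdimm_tput_1066
tput_gap_max_speed = (hcdimm_tput_1333 - lrdimm_tput) / hcdimm_tput_1333

print(f"{latency_gap:.1%}")          # 45.1% worse latency
print(f"{tput_gap_same_speed:.1%}")  # 36.8% worse throughput, same speed
print(f"{tput_gap_max_speed:.1%}")   # 40.7% worse throughput, max speeds
```

This confirms the approx. 45% latency gap and the 36.7%/40% throughput gaps cited above.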
A reader commented on a problem that end-users face when presented with HyperCloud as a solution: first, it is not qualified on the server that THEY use; second, the HyperCloud naming/positioning does not immediately suggest an RDIMM-compatible product – which might have made them consider it even though it was not qualified on their server.
Marketability as a better RDIMM
UPDATE: 07/09/2012 – buyout
UPDATE: 07/09/2012 – strategic value of RDIMM-compatibility and misconceptions debunked
UPDATE: 07/27/2012 – confirmed HCDIMM latency similar to RDIMMs
UPDATE: 07/27/2012 – confirmed LRDIMM latency and throughput weakness
Is Netlist a buyout candidate?
Is LRDIMM a dead-end product?
Is it easier to market a better RDIMM that includes all the features of an LRDIMM?
Could IDTI be licensing or second-sourcing RDIMM-compatible HyperCloud?
I would like to thank one of the readers for suggesting that IDT may be entering the LRDIMM/HyperCloud space. While IDT's intention to deliver LRDIMM in late 2012 was known, the comment sparked an examination of what options IDTI would have if it were to actively use HyperCloud for its product, and whether that would justify IDTI's decision not to offer any LRDIMM product for the Romley launch.
Netlist will eventually face the problem of second-source for HyperCloud.
Currently they can supply most of the demand for HyperCloud – at both the 16GB and 32GB levels – with capacity for millions of memory modules at their facility.
However, they will eventually be required to establish a second-source for HyperCloud by the OEMs.