Reverting to 3 memory channels per processor – max at 2 DPC
Thanks to Daniel Bowers for clarifying the use of “socket B” Xeon E5-2400 processors in this server.
IBM has announced the x3300 M4 server – a 2-socket server with 12 DIMM slots.
The 12 DIMM slots across 2 sockets work out to 6 slots per processor – suggesting these run 3 memory channels per processor (as opposed to the 4-memory-channel Romley servers seen so far on this blog), with maximum memory capacity reached at 2 DPC (DIMMs per channel).
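The slot arithmetic above can be sketched as a quick sanity check (the figures are the ones quoted above; the function name is just illustrative):

```python
# Quick check of the memory-channel arithmetic quoted above.
# All figures come from the post; channels_per_processor is an
# illustrative helper, not part of any vendor tool.

def channels_per_processor(dimm_slots: int, sockets: int, max_dpc: int) -> float:
    """Slots per socket divided by max DIMMs-per-channel gives channels per processor."""
    return dimm_slots / sockets / max_dpc

# IBM x3300 M4: 12 slots, 2 sockets, max 2 DPC -> 3 channels per processor
print(channels_per_processor(12, 2, 2))  # -> 3.0

# By contrast, a 4-channel Romley server supporting 3 DPC would need
# 4 channels x 3 DPC x 2 sockets = 24 slots:
print(4 * 3 * 2)  # -> 24
```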
Industry needs availability on a wide range of servers
When 32GB HCDIMMs become available, they will be the memory module of choice in the 32GB segment.
This will be applicable to both the regular Romley servers as well as the non-Intel-POR (plan-of-record) servers like the IBM x3750 M4 server.
The reason is that:
– HCDIMMs are RDIMM-compatible
– HCDIMMs run at higher speed than LRDIMMs
– HCDIMMs outperform LRDIMMs in latency and throughput even when HCDIMMs are slowed down to match the lower maximum speed of LRDIMMs
Their qualification on a wide range of servers is essential for the industry to get access to the best memory available.
Netlist suggests Q1 2013 availability
Netlist’s HyperCloud HCDIMMs are an RDIMM-compatible load-reduction and rank-multiplication product.
LRDIMMs are a new standard that is not compatible with RDIMMs.
HCDIMMs have the edge over LRDIMMs on performance, latency, price, and IP exposure.
Now there are signs that 1600MHz versions of HCDIMMs may become available in Q1 2013.
LRDIMM sales occur – and benchmarks
Inphi reported Q2 2012 results.
Inphi suggests sales of both 16GB LRDIMMs and 32GB LRDIMMs.
Most of these are likely 32GB LRDIMMs, since 16GB LRDIMMs are not viable against 16GB RDIMMs.
Inphi has been shy about reporting benchmarks for LRDIMMs – saying they will be available in the second half of 2012.
Benchmarks for LRDIMMs should have been available prior to LRDIMM launch.
LRDIMMs exhibit 45% worse latency and 36.7% worse throughput at 3 DPC
LRDIMMs (which are a new standard and incompatible with DDR3 RDIMMs) exhibit significant performance impairment at 3 DPC compared to RDIMM-compatible HCDIMMs:
– LRDIMMs have 45% worse latency than HCDIMMs (235ns vs. 161.9ns for HCDIMMs)
– LRDIMMs have 36.7% worse throughput than HCDIMMs (40.4GB/s vs. 63.9GB/s for HCDIMMs)
And this is when HCDIMMs are SLOWED DOWN to 1066MHz at 3 DPC (in order to match the lower maximum achievable speed of the LRDIMMs).
This comparison – at SAME speed – highlights the architectural weaknesses of the LRDIMM design irrespective of the speeds.
When compared at the MAXIMUM achievable speeds (LRDIMMs at 1066MHz at 3 DPC and HCDIMM at 1333MHz at 3 DPC):
– LRDIMMs have approx. 45% worse latency than HCDIMMs (235ns vs. 161.9ns for HCDIMMs)
– LRDIMMs have 40% worse throughput than HCDIMMs (40.4GB/s vs. 68.1GB/s for HCDIMMs)
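The percentage figures above can be reproduced from the raw benchmark numbers quoted in this post; a minimal sketch (the helper names are mine, not from any benchmark tool):

```python
# Recomputing the latency/throughput deltas quoted above from the raw
# numbers in the post. Helper names are illustrative.

def pct_worse_latency(slow_ns: float, fast_ns: float) -> float:
    # Higher latency is worse: how much higher is slow vs. fast?
    return (slow_ns - fast_ns) / fast_ns * 100

def pct_worse_throughput(low_gbs: float, high_gbs: float) -> float:
    # Lower throughput is worse: fraction lost relative to the higher figure.
    return (high_gbs - low_gbs) / high_gbs * 100

# LRDIMM 235ns vs. HCDIMM 161.9ns -> ~45% worse latency
print(round(pct_worse_latency(235, 161.9), 1))     # -> 45.2

# Same speed (1066MHz at 3 DPC): LRDIMM 40.4GB/s vs. HCDIMM 63.9GB/s
# (the post rounds this to 36.7%)
print(round(pct_worse_throughput(40.4, 63.9), 1))  # -> 36.8

# Max speeds: LRDIMM 40.4GB/s vs. HCDIMM 68.1GB/s -> ~40% worse
print(round(pct_worse_throughput(40.4, 68.1), 1))  # -> 40.7
```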
And Netlist after that – implications for Netlist
Inphi will report results on July 25, 2012.
A reader commented on a problem that end-users face when presented with HyperCloud as a solution: first, it is not qualified on the server that THEY use; and second, the HyperCloud naming/positioning does not immediately suggest an RDIMM-compatible product – a suggestion that might have made them consider it even though it was not qualified on their server.