Industry needs availability on a wide range of servers
When 32GB HCDIMMs become available, they will be the memory module of choice in the 32GB segment.
This will apply both to the regular Romley servers and to the non-Intel-POR (plan-of-record) servers like the IBM x3750 M4 server.
The reasons are:
– HCDIMMs are RDIMM-compatible
– HCDIMMs run at higher speeds than LRDIMMs
– HCDIMMs outperform LRDIMMs in latency and throughput even when the HCDIMMs are slowed down to match the LRDIMMs' lower speeds
Their qualification on a wide range of servers is essential for the industry to get access to the best memory available.
LRDIMMs exhibit 45% worse latency and 36.7% worse throughput at 3 DPC
LRDIMMs (which are a new standard and incompatible with DDR3 RDIMMs) exhibit significant performance impairment at 3 DPC compared to RDIMM-compatible HCDIMMs:
– LRDIMMs have 45% worse latency than HCDIMMs (235ns vs. 161.9ns for HCDIMMs)
– LRDIMMs have 36.7% worse throughput than HCDIMMs (40.4GB/s vs. 63.9GB/s for HCDIMMs)
And this is when the HCDIMMs are slowed down to 1066MHz at 3 DPC (to match the lower maximum speed achievable by the LRDIMMs).
This comparison – at SAME speed – highlights the architectural weaknesses of the LRDIMM design irrespective of the speeds.
When compared at the MAXIMUM achievable speeds (LRDIMMs at 1066MHz at 3 DPC and HCDIMM at 1333MHz at 3 DPC):
– LRDIMMs have approx. 45% worse latency than HCDIMMs (235ns vs. 161.9ns for HCDIMMs)
– LRDIMMs have 40% worse throughput than HCDIMMs (40.4GB/s vs. 68.1GB/s for HCDIMMs)
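The percentage deltas above can be checked from the raw figures quoted in this section; a minimal sketch (function name is my own, figures are from the text):

```python
def pct_worse(worse, better):
    """Percentage by which `worse` underperforms `better` (relative to the better figure)."""
    return abs(worse - better) / better * 100

# Latency (lower is better): LRDIMM 235ns vs HCDIMM 161.9ns
latency_delta = pct_worse(235, 161.9)          # ~45.2% worse

# Throughput at matched 1066MHz (higher is better): 40.4 vs 63.9 GB/s
tput_same_speed = (63.9 - 40.4) / 63.9 * 100   # ~36.8% worse

# Throughput at maximum achievable speeds: 40.4 vs 68.1 GB/s
tput_max_speed = (68.1 - 40.4) / 68.1 * 100    # ~40.7% worse
```

Note that all three deltas are expressed relative to the HCDIMM figure, which is why the latency gap and the same-speed throughput gap come out as roughly 45% and 37%.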
VMware certification limited to Netlist HyperCloud and VLP only
Netlist thus becomes the only memory maker certified by VMware on its virtualization products.
I cannot find a VMware testimonial in favor of LRDIMMs.
Qualifications on other servers in the works
HyperCloud is only available on the HP DL360p and DL380p and IBM x3650 M4 servers – these are high volume virtualization/data center servers and should provide sufficient unit volume demand for Netlist.
While this volume may be sufficient for Netlist, the lack of availability on other IBM and HP servers does create problems for end users trying to form a mental map of "how to choose memory" for servers.
Higher memory load (more DIMMs per channel) reduces the achievable bandwidth (memory loading). Conversely, moving to higher frequencies at the same load is also hard if you are already operating at the maximum possible speed. Similarly, moving to lower voltages exacerbates the problem.
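To see why the frequency drop matters, a minimal sketch of peak theoretical DDR3 channel bandwidth (assuming the standard 64-bit, i.e. 8-byte, DDR3 channel; function name is my own):

```python
def peak_channel_bw_gbs(transfer_rate_mts):
    """Peak theoretical bandwidth of one 64-bit DDR3 channel in GB/s.

    A DDR3 channel moves 8 bytes per transfer, so
    peak GB/s = MT/s * 8 bytes / 1000.
    """
    return transfer_rate_mts * 8 / 1000

# Running 3 DPC at 1333 MT/s instead of downclocking to 1066 MT/s
# preserves ~2.1 GB/s of peak bandwidth per channel.
bw_1333 = peak_channel_bw_gbs(1333)  # 10.664 GB/s per channel
bw_1066 = peak_channel_bw_gbs(1066)  # 8.528 GB/s per channel
```

This is the ceiling imposed by the clock alone; the measured throughput gap between the two module types in this section is larger, since it also reflects the extra latency of the LRDIMM buffering architecture.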