Is HyperCloud vs. LRDIMMs similar to Betamax vs VHS ?

Betamax vs VHS all over again ?

The choice of load reduction and rank multiplication memory – whether HyperCloud or LRDIMMs – may seem superficially like another “standards battle” – Betamax vs. VHS (where the inferior technology won out because of market conditions).

This may superficially seem to be the situation with memory choices – but it is not.

In Betamax vs. VHS there was a scarce resource – market share.

Since both Betamax and VHS were separate standards – i.e. non-interoperable – the gaining of market share by one would automatically spell extinction for the other – because with inferior market share you would have a harder time convincing people to buy your product (who wants to buy a product which will not play on the “mainstream” player). It would be a slippery slope down.

Those who did not gain market share fast enough would find it harder and harder to do so later – because the scarce resource, market share, was essential for the survival of each party. Whoever lost it would find it increasingly difficult to regain.

Such a situation does not exist with HyperCloud vs. LRDIMMs.

HyperCloud is not a new standard – LRDIMM is

This is because HyperCloud is not a new standard – but is a “better RDIMM”.

Even though it could be called a “proprietary” memory, HyperCloud has a very important strategic advantage: it is indistinguishable from a regular RDIMM.

HyperCloud:

– is interoperable with standard RDIMMs
– does not require a BIOS update – or “cooperation” from motherboard manufacturers

LRDIMMs are a new JEDEC standard – because they require a BIOS update in order to work, that process needed to be standardized.

Compare that to HyperCloud, which does not require a JEDEC standard, a BIOS update, or cooperation with motherboard manufacturers to ensure it works.

The reason is that HyperCloud is interoperable with RDIMMs – and behaves like a “better RDIMM”. It leverages the RDIMM standard and behaves like an RDIMM that simply appears to present a lower load and fewer ranks (load reduction and rank multiplication).

This capability is not present in the LRDIMMs – which require a BIOS update and thus cooperation with motherboard makers and a JEDEC standard to ensure there are no problems of inconsistency across motherboard makers. That is an added level of complexity for the LRDIMM approach.

HyperCloud does not require cooperation of motherboard makers

Interoperability with standard RDIMMs means that HyperCloud is a standard RDIMM, except better (it places a lighter load on the memory bus than the equivalent RDIMM would).

Obviously you would not mix too many RDIMMs with HyperCloud, because that would undermine the load reduction you are presumably trying to achieve with HyperCloud – but you could use HyperCloud as a regular RDIMM.

Even though IBM/HP recommend all-RDIMM, all-HyperCloud or all-LRDIMM configurations on the current generation of Romley servers, the reason they do so for HyperCloud may be that you defeat the load reduction effort if you start adding regular RDIMMs to a server that has HyperCloud installed specifically for its load reduction value.

Not requiring a BIOS update is a serious strategic advantage for HyperCloud – it means HyperCloud does NOT require cooperation from motherboard manufacturers.

It also means there are no constraints placed on Netlist about HOW it chooses to sell HyperCloud – at what scale, slowly at first and more later, or whatever.

Netlist is therefore under no compulsion to flood the market to gain market share, or to sell to compete with LRDIMMs.

They can sell as many or as few HyperCloud memory modules as they want – since it is addressing the (far bigger and mainstream) market for RDIMM standard memory modules.

For all intents and purposes they are just another RDIMM producer – except an RDIMM with very special powers.

LRDIMMs require cooperation of motherboard makers

While LRDIMMs are pin-compatible with DIMM slots, they require a BIOS upgrade that specifically enables LRDIMM support on the motherboard – LRDIMMs REQUIRE a BIOS modification in order for their load reduction to work at all.

This compels LRDIMMs to seek cooperation from motherboard makers.

Because of this constraint – LRDIMMs requiring a BIOS upgrade on pre-Romley servers (Westmere) – it was very difficult to sell LRDIMMs to already installed servers. Data centers did not want to be fiddling with the BIOS – the results of which would be hard to predict.

See this set of SuperMicro customer issues with LRDIMMs on pre-Romley servers – mostly dealing with their disappointment that 3 DPC at 1333MHz was not reachable with LRDIMMs, and issues with needing BIOS updates to make LRDIMMs work (website seems to be very slow):

This one confirms that “LRDIMMs require BIOS update”:

Board is x8dah+-F. I have purchased the 16GB memory modules hynix hmt42gl7bmr4a-h9 LRDIMMS according to the board compatibility chart but the board beeps with a memory error. I put in regular 4GB DDR3 memory and it boots fine. Please let me know what I need to do to get this working as I need a full 288GB of memory.

LRDIMMs will only work if the board has been flashed with the LRDIMM-enabled BIOS. However, RDIMMs and UDIMMs still work with this BIOS. Please contact Tech Support to get LRDIMM – Enable BIOS.

If the user’s board has the standard BIOS loaded, then you need to use UDIMMs or RDIMMs in the board to boot the system to video and then flash to the LRDIMM-enabled BIOS. After that, you can populate the LRDIMMs. If the user does not have access to UDIMMs or RDIMMs, he will have to RMA the board.

People having problems getting 1333MHz with LRDIMMs:

We need to get 1333MHz working frequency, so the normal modules are not fit.
Do you mind LRDIMM will meet the requirement?
My board model is X8DAH+-F

If you want to use LRDIMM please refer following support info. the memory speed will up to 1066MHz only.
• 16 GB LRDIMM can run up to 3 DPC (total board capacity of 288 GB) at speeds up to 1066 MHz. Reducing the number of DPC will not increase the speed. LVDIMMs are not currently supported.
• 32 GB LRDIMM can run up to 2 DPC (total board capacity of 384 GB) at speeds up to 1066 MHz. Reducing the number of DPC will not increase the speed. LVDIMMs are not currently supported.
If you want to support LR DIMM you need flash BIOS(LR-DIMM support) first.

Please verify if the X8DTU-6TF+ motherboard needs to use 16GB Load Reducing (LR) dimms in order to max out the board to 288GB.

We wanted to eliminate confusion, so we’re only supporting LRDIMMs by default on the “-LR” SKUs. For example:

• LRDIMM (Load Reduced DIMM, for X8DTU-6F+-LR and 8XDTU-6TF+-LR Only)
• DDR3 ECC 1066 MHz memory with support of up 288 GB in 18 slots

Warning: For your system memory to work properly, be sure to use the correct BIOS ROM for your system.
For the X8DTU+-6F+, use the X8DTU+-6F+BIOS. For the X8DTU+-6F+- LR, use the X8DTU+-6F+-LR BIOS.
For the X8DTU+-6TF+, use the X8DTU+-6TF+ BIOS. For the X8DTU+- 6TF+-LR, use the X8DTU+-6TF+-LR BIOS.
To flash the BIOS, refer to

We have a server X8DTU-6TF+ with the latest BIOS. Every time we install the 16GB memory LR-DIMMS, the system will not boot. Is there a BIOS to fix this issue?

Yes, please request a LRDIMM BIOS from technical support dated 7/14/11, until it is posted online.

How many MEM-DR332L-CL01-LR10(32GB DDR3-1066) can I install on X8DTU-LN4F+ with one CPU only?

With the BIOS for LR-DIMM, you can install up to six MEM-DR332L-CL01-LR10 with one CPU and the speed will be fixed at 1066MHz.

what is the memory speed if I install 18 pcs MEM-DR316L-HL01-LR13(Hynix HMT42GL7BMR4A-H9) in SYS-1026T-6RF+ to support 288GB memory size in total?

Memory speed will be fixed at 1066Mhz.

Now with Romley servers, LRDIMMs still require a BIOS update. But this BIOS update can be installed at the factory.

Reasons why LRDIMMs was troublesome on pre-Romley servers

This explains one of the reasons why LRDIMMs were rolled out with Romley and not earlier. It is far easier to do BIOS updates at the factory for a new server platform than for servers already in use at data centers etc.

However, even with Romley, a BIOS update needs to be made for every motherboard – and it is possible that some motherboards may still not have the appropriate BIOS update to enable LRDIMM support.

The need for BIOS support from motherboard makers means that there has to be political clout to bring them in line – therefore a product like LRDIMMs can only be pushed by a big player or by consensus between disparate manufacturers of motherboards.

It is for this reason that a “standard” HAD to be codified for LRDIMM support on motherboards – so that all of them implement it correctly.

And this is why there is a JEDEC standard for LRDIMMs. Even though it may be copying Netlist IP in load reduction and rank multiplication, this standard ensures that LRDIMMs can operate on different motherboards.

So one can see that with LRDIMMs there is a HUGE effort involved in just bringing them to market – support at the BIOS level and getting motherboard makers to add the BIOS support ahead of time for Romley rollout.

Each motherboard BIOS needs to be updated, tested with LRDIMMs and then released with Romley rollout.

Reasons why LRDIMMs were introduced with Romley

LRDIMMs require:

– a special BIOS upgrade on the motherboard, without which LRDIMMs cannot function at all
– the cooperation of motherboard makers, so that they add in the support
– a degree of standardization – and thus the JEDEC standard for LRDIMMs

LRDIMMs thus need special treatment: political clout to get motherboard makers to add support on their motherboards, and the efforts of a standardization body to ensure there is some “standard” – so that an LRDIMM will work on many motherboards without problems.

Such an effort requires the support of Intel – and Intel exerted its influence to ensure that LRDIMM support would be built into Romley servers (some of which may still not have fully implemented the BIOS updates).

One can also see why a standardization body like JEDEC would be needed to ratify a LRDIMM standard (otherwise the new memory type would work on some motherboards but not others).

One can see WHY LRDIMMs were therefore NOT introduced pre-Romley.

One of the reasons why LRDIMMs were introduced at Romley was because with a new platform the BIOS updates required could be added at the factory. With Westmere (pre-Romley), any company wanting to upgrade to LRDIMMs would need to upgrade the BIOS on their server motherboards – which is not something many running data centers would want to do.

So while HyperCloud addresses the already existing (and greater) RDIMM market, LRDIMM was a new standard, requiring specialized support.

BIOS configuration and non-interoperability of LRDIMMs

BIOS configuration requirements for LRDIMMs vs. “standard RDIMM” nature of HyperCloud – from this Cirrascale/Netlist PR:
HyperCloud Achieves Server Memory Speed Breakthrough at SC11
Demonstration Highlights HyperCloud’s Advantages over commodity RDIMM, LRDIMM
SAN JOSE, CA—November 15, 2011

The successful demonstration also highlights the fundamental differences between HyperCloud and industry’s commodity offering, LRDIMM (load reduced dual inline memory module). Unlike LRDIMM’s monolithic signal architecture, HyperCloud’s distributed signal architecture improves performance by eliminating data path delays and system-level latency. Also, while LRDIMM requires a special BIOS configuration, HyperCloud provides seamless plug-and-play operation with past, current and future generations of Intel processors.

Interoperability with RDIMMs has been an old feature of HyperCloud:
Netlist Demonstrates New HyperCloud Memory Modules at Supercomputing 09
Showcases interoperability between standard JEDEC server memory solutions and HyperCloud modules

PORTLAND, Ore., Nov. 16 /PRNewswire-FirstCall/ — Visit Netlist at SC09 in Booth # 2398 — At Supercomputing 09, Netlist, Inc. (Nasdaq: NLST), a designer and manufacturer of high-performance memory subsystems, is demonstrating the world’s first 16GB 2 virtual rank (vRank) double-data-rate three, registered dual in-line memory module (DDR3 RDIMM), HyperCloud(TM). Netlist will also showcase the interoperability of HyperCloud memory with standard JEDEC server memory solutions on popular enterprise servers. This demonstration reinforces HyperCloud’s ability to function as a standard RDIMM while increasing memory bandwidth and capacity for datacenter servers.

To showcase its 2-vRank HyperCloud modules, Netlist is using industry standard servers, such as the HP ProLiant DL380, demonstrated in the following configurations:

— 8GB and 16GB 2 vRank DDR3 RDIMM functionality
— Three 2 vRank modules per channel
— 1333 Mega Transfers per second (MT/s)
— Interoperability with standard JEDEC DDR3 modules

— Interoperability with different RDIMM capacities

“This technology maximizes server utilization with a simple plug-and-play memory module,” said Paul Duran, director of business development at Netlist. “HyperCloud enables high-performance cloud computing while reducing datacenter costs and increasing application performance.”

“Customers running memory intensive computing environments, such as virtualization, cloud computing, and HPC applications, are often limited by memory bottlenecks in their servers,” said Mike Gill, vice president, Industry Standard Servers Platform Engineering at HP. “The Netlist technology on HP industry-standard servers increases server memory capacity and bandwidth to enhance application performance in converged infrastructures.”

HyperCloud will debut at the Supercomputing trade-show, taking place in Portland, Oregon during November 17-19, 2009, in booth number 2398. Netlist plans to sample HyperCloud to major OEM customers in December with production slated for Q1 2010. HyperCloud will be available in 4GB, 8GB, and 16GB 2 vRank module options.

Why HyperCloud vs. LRDIMMs is unlike Betamax vs. VHS

For these reasons, with HyperCloud vs. LRDIMMs there is no “scarce resource”.

HyperCloud sells as a regular RDIMM, while LRDIMM sells as something new.

In fact if there was another type of memory – let’s call it LRDIMM2 – and that ALSO required a BIOS update and support from motherboard makers, THEN we could say that:

LRDIMM2 vs. LRDIMM is very much like Betamax vs. VHS

And THEN there would be a rush of effort by LRDIMM backers and LRDIMM2 backers to ensure that their standard got the BIOS support.

However, even then the situation would not be like Betamax vs. VHS – because support for LRDIMM may not preclude support for LRDIMM2.

A situation that would be closer to Betamax vs. VHS would be if each of LRDIMM/LRDIMM2 required a MODIFICATION to the DIMM slots – and each got their own group of motherboard manufacturers backing their standard.

In THAT situation one type of DIMM slot/size would tend to win out and then the other sized one would start to feel the pressure.

So LRDIMMs are not facing this problem as there is no LRDIMM2. There is only one LRDIMM.

Although LRDIMM requires special effort by motherboard makers to include support for LRDIMMs, if they all agree to provide that support, then LRDIMMs will face no problems of exclusion. The problem only starts if motherboard makers are slow in doing the BIOS updates, or slacking in their support of LRDIMM at the BIOS level.

In contrast, HyperCloud has no such constraints or ties to motherboard makers AT ALL.

They can sell as many or as few HyperCloud memory modules as they want – since it is addressing the (far bigger and mainstream) market for RDIMM standard memory modules.

Netlist is therefore under no compulsion to “flood the market” to gain market share, or to sell cheaper than cost (at low or negative margins) to compete with LRDIMMs.

At the 16GB level, LRDIMMs will not compete (since they are non-viable vs. RDIMMs):
Why are 16GB LRDIMMs non-viable ?
June 19, 2012

And at the 32GB level, it will be 32GB RDIMMs which will not compete (at 3 DPC and 2 DPC they experience slowdown):
Non-viability of 32GB RDIMMs
June 20, 2012

At the 32GB level, the 32GB LRDIMMs will compete with the 32GB HyperCloud, however the LRDIMMs max out at 1066MHz at 3 DPC (on the servers that HyperCloud is selling on), and will be made using 4Gbit x 2 DDP while 32GB HyperCloud will be made using 4Gbit monolithic memory packages (which may make 32GB HyperCloud cheaper to produce than 32GB LRDIMMs).




  28 responses to “Is HyperCloud vs. LRDIMMs similar to Betamax vs VHS ?”

  1. The proof is in the pudding, right?

    Romley was released in March. If the above argument is correct then the
    results to be reported by Netlist in July will confirm it by showing the start of the ramp in demand for Hypercloud DIMMs.

    • Yes, if Netlist provides a revenue breakdown which identifies that.

    • NLST revenue for HyperCloud would MORE reflect what the demand is for 3 DPC using 16GB. For the HP DL360p and DL380p and IBM x3650 M4 this segment is only served by NLST.

      So I would suggest that the NLST numbers would more reflect the demand for 3 DPC at 16GB on those servers.

      I don’t know if it reflects anything about HyperCloud vs. LRDIMM and its relation to Betamax vs. VHS i.e. anything about standards or the public’s perception of standards.

      Since IBM/HP are not positioning the HCDIMM/HDIMMs as anything other than some special RDIMMs.

      But the numbers should be interesting to see – because they will show what the early part of the ramp looks like and may better point the direction in which that ramp might go.

      I don’t think IBM/HP or others have couched HyperCloud vs. LRDIMM as some sort of standards war – and the reason probably IS because it is not that (with HyperCloud being just a better RDIMM and LRDIMM being in fact the new “standard” which could face questions of whether people want to adopt that and if it is worth the trouble).

      Another question is whether LRDIMMs will require BIOS updates at a later date for the servers already shipped to data centers. If this happens it will not be looked at very positively by existing users of LRDIMMs.

    • Another reason why the initial ramp curve for 3 DPC at 16GB says almost nothing about HyperCloud vs. LRDIMM is because LRDIMM is not operating in the 16GB space.

      Both IBM/HP did not list the 16GB LRDIMM in their user guides for those servers (though the 16GB LRDIMM did LATER appear on the IBM memory list).

      So I would say that the Netlist ramp would say almost nothing about HyperCloud vs. LRDIMM.

      That question would see resolution when 32GB becomes common – in the second half of 2012.

      That will be interesting.

  2. please check out this product

    Up to 512GB DDR3 1600MHz ECC Registered DIMM; 16x DIMM sockets

    2 DPC? rDIMM remains at 1600?

    • Edit: I mistook the question to be a question about performance beyond the Intel PoR – when in fact the question has been addressed in the blog articles – correct response is given below.

      I don’t know about this particular motherboard – since that webpage will not give correct speed info – for that one needs to look at the speed tables in their detailed user guide.

      However 2 DPC at 1600MHz with 16GB RDIMM (2-rank) is doable.

      It is currently being done by the IBM x3750 M4 with tweaks.

      Generally I have heard from SuperMicro that faster speeds can be done using “Forced SPD” type tweaks in the BIOS.

      But these are outside the Intel PoR (“plan of record”) and are considered “at your own risk” – and not something many would be wanting to do.

      However if on this specific motherboard SuperMicro does indeed offer 1600MHz at 2 DPC using 16GB RDIMM (2-rank) – then that is certainly doable – as the IBM x3750 M4 has done. And while it IS outside the Intel PoR it may not be as bad as a BIOS tweak i.e. if they have done their homework with the motherboard changes – as IBM x3750 M4 seems to have done.

      Keep in mind that my range of interest does not move much beyond Netlist – however I have posted on IBM x3750 M4’s “anomalous” high speed – so this info is posted there – with a response from IBM there also.

      • Thanks, I am learning.

        I was looking at the potential alternatives, as I always do.
        in this case, going to 512GB using 2DPC with cheaper yet faster memory using rDIMMs instead of Hypercloud.

        In technology, there is this annoying thing where just when you think you had the whole thing cornered, something totally unexpected pops up to totally change the picture.

      • Edit: I mistook the question to be a question about performance beyond the Intel PoR – when in fact the question has been addressed in the blog articles – correct response is given below.

        Yes, that is the analysis one would have to do when buying IBM x3650 M4 or the IBM x3750 M4 (more expensive, emphasis on processor power vs. hard disk capability).

        So probably virtualization-centric data centers would buy the IBM x3650 M4 (virtualization needs high disk space vs. processor power), while HPC users would favor IBM x3750 M4.

        Also when 16GB HCDIMM becomes available on a larger set of servers, that will change the picture again for that server model 🙂

        Doesn’t IBM etc. advise customers about what thing is best for what – or are they also behind the curve because of all the changes ?

        I suspect the latter because most people have very specific areas of visibility – and a planner/adviser needs to have a bigger picture in view.

      • Wait a second ! I keep having difficulty keeping the speeds separate also.

        The tone of my previous comments above (referring to Intel PoR) was assuming you were talking about something special (i.e. something like 1333MHz at 3 DPC using RDIMMs – that is possible on IBM x3750 M4 using the special tweaks they have done).

        Ok, you are talking about 1600MHz at 2 DPC using 16GB RDIMM 2-rank – that is quite ordinary and I have extensively covered that in the blog in the HP DL360p and DL380p and the IBM x3650 M4 memory choice articles:
        Memory options for the HP DL360p and DL380p servers – 16GB memory modules
        May 24, 2012
        Memory options for the HP DL360p and DL380p servers – 32GB memory modules
        May 24, 2012
        Memory options for the IBM System x3650 M4 server – 32GB memory modules
        May 25, 2012

        As I have outlined in the articles above – if you only need 1 DPC, 2 DPC use you should buy the RDIMMs.

        If you need 3 DPC at 16GB size – you should buy the HyperCloud.

        If you need 32GB you may consider buying the RDIMM at 1 DPC (possibly). But at 2 DPC and 3 DPC you should buy the HyperCloud.

        Sorry, for my mistake.

        Now coming to your specific case – the SuperMicro server – you mention achieving 512GB on a 16 DIMM 2-socket server.

        That means 512GB/16 DIMM slots = 32GB per DIMM slot.

        This means you are talking about 32GB DIMMs !!

        That completely changes the picture.

        Firstly, these 32GB RDIMMs will be 4-rank – which means they will experience abysmal slowdown.

        And you should use the HyperCloud – if HyperCloud is not available on this server then you should use the 32GB LRDIMMs (which will still perform better than the 32GB RDIMMs (4-rank)).

        If 32GB HyperCloud was available on this server you would use the 32GB HyperCloud (and not the 32GB LRDIMM).

        Hope this helps.

        I apologize for the mixup – in my previous 2-3 comments above.
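        The arithmetic above can be sketched as a quick sanity check – a hypothetical helper, using the rank counts discussed here (16GB RDIMMs being 2-rank, 32GB RDIMMs being 4-rank):

```python
# Typical rank counts discussed above: 16GB RDIMM = 2-rank, 32GB RDIMM = 4-rank
RANKS_PER_RDIMM = {16: 2, 32: 4}

def module_size_needed(target_gb, dimm_slots):
    """GB per module needed to reach the target capacity with all slots populated."""
    return target_gb // dimm_slots

size = module_size_needed(512, 16)
print(size)                   # 32 -> you need 32GB modules
print(RANKS_PER_RDIMM[size])  # 4  -> a 4-rank RDIMM, i.e. the slow case
```

        So 512GB on 16 slots forces 32GB modules, and plain 32GB RDIMMs being 4-rank is exactly why load reduction enters the picture.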

      • You have to be careful reading the summary spec sheets for servers.

        The memory is ROUTINELY listed as 1600MHz for 1/2/3 DPC use. That just means the 1600MHz rated memory can be used at 1/2/3 DPC – it does not mean it will run at 3 DPC at 1600MHz.

        For that you have to look at the detailed user guides – as I have linked in the HP DL360p and DL380p and IBM x3650 M4 memory choice articles.

        Those detailed user guides show EXACTLY what speed is achievable at 1 DPC, 2 DPC, 3 DPC for each type of memory (and for each speed and voltage i.e. standard or low voltage) the OEM is supporting.

        So this is a type of deliberate sloppiness by the OEMs which can mislead users if they just look at the summary spec sheets.

        In addition you have to be careful – as sometimes they have errors also – I have found errors on some HP user guides even – one was showing UDIMMs (unregistered DIMMs) running at 1333MHz at 3 DPC (which obviously cannot be true) !

      • quote:
        Wait a second ! I keep having difficulty keeping the speeds separate also.

        I think I got side-tracked because the tone of your comment was of surprise that 1600MHz at 2 DPC using RDIMMs.

        I assumed you were talking about something new (i.e. that goes beyond the suggestions I have given in the blog) – and presumed you were talking about the extra special beyond Intel PoR stuff (as on the IBM x3750 M4).

        But your problem with the SuperMicro of 512GB on 16 DIMM slots on a 2-socket server – is squarely within the discussion of memory choice for the regular Romley servers on the blog.

      • Also note that if your configuration is requiring use of 32GB LRDIMMs (because 32GB HyperCloud is not available on that SuperMicro server) then you have to pay a LOT of money.

        32GB LRDIMMs are very expensive – more than 2x the price of the 16GB LRDIMMs/16GB HyperCloud.

        For example from the IBM memory prices info in this article:
        What are IBM HCDIMMs and HP HDIMMs ?
        May 27, 2012

        The IBM memory price list:

        shows the price for 32GB LRDIMM is $4399:

        90Y3105 32GB (1x32GB, 4Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP LRDIMM $4,399.00

        compare that to 32GB RDIMMs (4-rank):

        90Y3101 32GB (1x32GB, 4Rx4, 1.35V) PC3L-8500 CL7 ECC DDR3 1066MHz LP RDIMM $2,499.00

        So the LRDIMM is nearly TWICE as expensive as the RDIMM !!

        I don’t understand why it is SO EXPENSIVE – because it is using the same 4Gbit x 2 DDP memory as the 32GB RDIMM (4-rank) – so it should just be slightly more expensive than the 32GB RDIMM.

        I wonder if it is because they want NOBODY to order it – and don’t have 32GB LRDIMMs available ??
        IBM listed 32GB LRDIMMs as being “available later in 2012” in their user guides at Romley launch.
        So obviously 32GB LRDIMM was NOT available at that time – even though 16GB HyperCloud WAS available.

        This is why I conjectured that these OEMs were pressured into listing LRDIMMs in their user guides when they may not have been available at that time. Contrast that with 16GB HyperCloud which WAS available.

        Meanwhile compare to 32GB HCDIMM which will be available mid-2012 according to NLST previous conference call – now the OEMs could have listed that also as “available later in 2012”.

        Which suggests they were under pressure to list SOME LRDIMM.

        Have you tried to order any 32GB LRDIMMs ? Are they even available ?

        Compare that to the 16GB LRDIMM which is priced the same as the 16GB HCDIMM (HyperCloud).

        49Y1567 16GB (1x16GB, 4Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP LRDIMM $549.00

        00D4964 16GB (1x16GB, 1.5V)PC3-10600 CL9 ECC DDR3 1333MHz LP HyperCloud DIMM $549.00

        The prices at the resellers are going to be slightly cheaper – but you get the idea of the relative pricing between these various memory types.
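        A rough price-per-GB comparison, using only the IBM list prices quoted above, makes the anomaly plain (list prices only – reseller prices will be lower):

```python
# (part, GB, IBM list price in USD) - prices as quoted above
modules = [
    ("90Y3105 32GB LRDIMM",         32, 4399.00),
    ("90Y3101 32GB RDIMM (4-rank)", 32, 2499.00),
    ("49Y1567 16GB LRDIMM",         16,  549.00),
    ("00D4964 16GB HyperCloud",     16,  549.00),
]

for part, gb, price in modules:
    print(f"{part}: ${price / gb:.2f}/GB")
# 90Y3105 32GB LRDIMM: $137.47/GB
# 90Y3101 32GB RDIMM (4-rank): $78.09/GB
# 49Y1567 16GB LRDIMM: $34.31/GB
# 00D4964 16GB HyperCloud: $34.31/GB
```

        So per GB, the 32GB LRDIMM costs roughly 4x what the 16GB parts cost – not the slight premium over the 32GB RDIMM one would expect.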

      • Basically for 512GB on 16 DIMM slots you need 32GB RDIMMs.

        But 32GB RDIMMs are non-viable:
        Non-viability of 32GB RDIMMs
        June 20, 2012

        The reason is 32GB RDIMMs will be 4-rank for foreseeable future.

        4-rank experiences abysmal slowdown (see above article).

        So you have to buy a load reduction and rank multiplication solution even for 3 DPC and 2 DPC use (and possibly 1 DPC, if the IBM docs are to be believed).

        This means you buy the 32GB HyperCloud if available – if not available, you buy the 32GB LRDIMMs.

        But 32GB LRDIMMs are VERY EXPENSIVE.

        So you are basically in a bind with this server.

        You have to think about choosing another server that allows you to use 16GB memory modules – which are economical – and run at 3 DPC – but that will only get you up to 384GB.

        As analyzed in the article:
        What are IBM HCDIMMs and HP HDIMMs ?
        May 27, 2012

        If you use less than or equal to 256GB in 2-socket server (24 DIMM slots) you should use the 16GB RDIMMs (2-rank).

        These will allow you to run 256GB at 1600MHz.

        The problem starts to get tricky above 256GB.

        If you stick with 16GB memory – you have to use HyperCloud and then you can get 384GB at 1333MHz.

        If you stick with 32GB memory – you have to use HyperCloud and you can get up to 768GB at 1333MHz.

        Now you CAN run 32GB RDIMMs (rated at 1600MHz) to achieve 512GB (populating 16 DIMM slots) – however they will run at 800MHz (Edit: corrected 1066MHz to 800MHz).

        So THAT is the tradeoff.

        The other problem as outlined in previous comment – the 32GB LRDIMMs are expensive.

        You can wait for 32GB HyperCloud to be available – and they maybe cheaper than the 32GB LRDIMMs listed today at IBM (and 32GB LRDIMMs may also drop in price to compete).

        With both routes – from my analysis it turns out that coincidentally either way anything above 256GB requires a load reduction and rank multiplication solution – whether you use 16GB or 32GB.
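        The decision rules above can be condensed into a small sketch – purely illustrative, assuming a 24-slot 2-socket Romley server and the speed figures from the linked articles:

```python
def memory_choice(target_gb):
    """Buying-guide sketch for a 24-slot 2-socket Romley server (illustrative)."""
    if target_gb <= 256:
        return "16GB RDIMMs (2-rank) - runs at 1600MHz"
    if target_gb <= 384:
        return "16GB HyperCloud at 3 DPC - runs at 1333MHz"
    if target_gb <= 768:
        return "32GB HyperCloud (or 32GB LRDIMM if unavailable) - 1333MHz"
    return "beyond 24 x 32GB = 768GB - not reachable on this platform"

print(memory_choice(256))  # 16GB RDIMMs (2-rank) - runs at 1600MHz
print(memory_choice(512))  # 32GB HyperCloud (or 32GB LRDIMM if unavailable) - 1333MHz
```

        The 256GB threshold falls out of the sketch the same way it does from the analysis: below it plain RDIMMs suffice, above it you need load reduction and rank multiplication either way.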

      • You CAN however STILL use 32GB RDIMMs (4-rank) at 2 DPC – to achieve:

        32GB x 16 DIMM slots = 512GB total memory

        But the achievable speed will be 800MHz (that is what the IBM user guide is saying for 4-rank memory at 1.5V or 1.35V).
        Memory options for the IBM System x3650 M4 server – 32GB memory modules
        May 25, 2012

        Maybe the SuperMicro server has the 32GB RDIMM (4-rank) running slightly faster than that.

        But you get the picture – 4-rank suffers abysmal speed slowdown at 3 DPC (actually at 3 DPC 4-rank doesn’t even work because of the “8 ranks per memory channel” limit) and even at 2 DPC.
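        The “8 ranks per memory channel” limit mentioned above can be checked mechanically – a sketch (load reduction parts sidestep this by presenting fewer logical ranks to the memory controller):

```python
MAX_RANKS_PER_CHANNEL = 8  # the "8 ranks per memory channel" limit

def config_allowed(ranks_per_dimm, dpc):
    """True if the populated ranks fit within the per-channel rank limit."""
    return ranks_per_dimm * dpc <= MAX_RANKS_PER_CHANNEL

print(config_allowed(4, 2))  # True  -> 4-rank at 2 DPC works (though slowly)
print(config_allowed(4, 3))  # False -> 4-rank at 3 DPC exceeds the limit
print(config_allowed(2, 3))  # True  -> 2-rank at 3 DPC is fine
```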

    • quote:
      please check out this product
      Up to 512GB DDR3 1600MHz ECC Registered DIMM; 16x DIMM sockets
      2 DPC? rDIMM remains at 1600?

      Note that a similar situation exists for IBM System X memory choices. 1600MHz RDIMMs (usually available at 1.5V only) are not available above 8GB (so no 16GB, 32GB).

      Check out the infographic post – for info specific to IBM:
      Infographic – memory buying guide for Romley 2-socket servers
      June 29, 2012

  3. OK, I get it.
    so this is the “buyer beware” advice

    they write 512GB of 1600MHz rDIMM which cannot actually be achieved by anyone.

    This explains to me comments from a few people about being happy at 256GB with their server. 16 sockets x 16GB rDIMM per socket, low cost and fast enough.

    • Right.

      This is why the 256GB seems to be a magical number.

      Below that feel free to use RDIMMs.

      Above that you have to use a load reduction and rank multiplication solution – otherwise you get speed slowdown.

      And that happens if you go the 16GB route, or the 32GB route (coincidentally both routes have that 256GB limit outcome from the analysis).

    • quote:
      I just checked the manual for that motherboard.

      1600MT/s possible only for UDIMMs at 1,2,4 or 8GB size in 1DPC config.

      Looking at the memory speed tables on page 33:

      16GB/32GB RDIMMs give a max of 1333MHz at 1 DPC, 2 DPC (nothing exceptional – same as IBM/HP 1333MHz RDIMMs and slower than IBM/HP 1600MHz RDIMMs)
      16GB/32GB LRDIMMs give a max of 1333MHz at 1 DPC, 2 DPC (nothing exceptional – same as IBM/HP)

    • quote:
      Looking at the memory speed tables on page 33:

      16GB/32GB RDIMMs give a max of 1333MHz at 1 DPC, 2 DPC (nothing exceptional – same as IBM/HP 1333MHz RDIMMs and slower than IBM/HP 1600MHz RDIMMs)
      16GB/32GB LRDIMMs give a max of 1333MHz at 1 DPC, 2 DPC (nothing exceptional – same as IBM/HP)

      The only thing exceptional I see there is that they show the 32GB RDIMM (4-rank) on pg. 34 as delivering 1333MHz at 2 DPC. I don’t know if this is a typo or not. Because IBM clearly shows 4-rank running at 800MHz only at 2 DPC. HP does not even LIST a 32GB RDIMM in its user guide ! (at least in the ones I posted on the memory choice for HP blog articles).

    • quote:
      The only thing exceptional I see there is that they show the 32GB RDIMM (4-rank) on pg. 34 as delivering 1333MHz at 2 DPC. I don’t know if this is a typo or not. Because IBM clearly shows 4-rank running at 800MHz only at 2 DPC. HP does not even LIST a 32GB RDIMM in its user guide ! (at least in the ones I posted on the memory choice for HP blog articles).

      I would suggest that may be a typo. Because if their 32GB RDIMM (4-rank) can run at 1333MHz at 2 DPC, WHY are they selling the 32GB LRDIMM – which also does that (plus is expensive and high latency and all that) ?

      So given their enthusiasm for LRDIMM, I would suggest their 32GB RDIMM figures may have a typo – do you see that their specs on pg. 34 (RDIMM) and pg. 35 (LRDIMM) are the SAME. That does not make sense. Why are they selling LRDIMMs in the first place then ?

    • Common sense would suggest to buy product from IBM/HP – at least their docs include ALL memory choices – and they have vetted all the products against each other and their speed guides reflect that.

      They are actually recommending the HyperCloud (16GB at 3 DPC) for instance.

      These other OEMs are behind – and their speed guides are reflecting that – continuing to tout LRDIMMs. They are not going to cover the full gamut of possibilities for end-users. Customers are going to be pissed later when they find out they “were never told” or “never knew”.

      You have already seen – in the discussion above – how the superficial spec sheets can mislead a lot of people.

      These OEMs are going to sell a lot of product to a lot of people – until they realize 6 months later what happened.

    • By the way, have you gotten an indication that 32GB LRDIMMs are even available for immediate delivery ?

      Or are they still being listed as “available later in 2012” ?

      The excessively high price on the 32GB LRDIMM seems anomalous – as if it is designed so no one actually asks to order it – but it is there on the user guide (to satisfy Intel).

  4. Pingback: Examining LRDIMMs | ddr3memory

  5. Pingback: Examining Netlist | ddr3memory
