Memory buying guide – when to use RDIMMs on Romley ?

“above 256GB requires HyperCloud” emerges as the general rule

256GB seems to be a magical number.

Below that feel free to use RDIMMs.

NOTE: the analysis covers 2-socket servers (max of 24 DIMM slots). For 4-socket servers you need to scale the numbers by 2x (i.e. analysis is valid on a per 2-socket basis)

Above that you have to use a load reduction and rank multiplication solution – otherwise you get a speed slowdown.

And that holds whether you go the 16GB route or the 32GB route – coincidentally, both routes arrive at the same 256GB limit in the analysis.

Analysis

From analysis presented in the articles:

https://ddr3memory.wordpress.com/2012/05/24/memory-options-for-the-hp-dl360p-and-dl380p-servers-16gb-memory-modules/
Memory options for the HP DL360p and DL380p servers – 16GB memory modules
May 24, 2012

https://ddr3memory.wordpress.com/2012/05/24/memory-options-for-the-hp-dl360p-and-dl380p-servers-32gb-memory-modules/
Memory options for the HP DL360p and DL380p servers – 32GB memory modules
May 24, 2012

https://ddr3memory.wordpress.com/2012/05/25/memory-options-for-the-ibm-system-x3630-m4-server-16gb-memory-modules-2/
Memory options for the IBM System x3650 M4 server – 16GB memory modules
May 25, 2012

https://ddr3memory.wordpress.com/2012/05/25/memory-options-for-the-ibm-system-x3630-m4-server-32gb-memory-modules/
Memory options for the IBM System x3650 M4 server – 32GB memory modules
May 25, 2012

https://ddr3memory.wordpress.com/2012/05/27/what-are-ibm-hcdimms-and-hp-hdimms/
What are IBM HCDIMMs and HP HDIMMs ?
May 27, 2012

When choosing memory for 2-socket servers (24 DIMM slots, i.e. 12 DIMMs per processor), whether you choose 16GB memory modules or 32GB memory modules, there seems to be a similar outcome:

– if you need less than or equal to 256GB total memory, you should use the 16GB RDIMMs (2-rank) – these are cheap and they will work at 1333MHz or 1600MHz (same as their rating)
.
– if you need more than 256GB you will require a load reduction and rank multiplication solution – else there will be speed slowdown
.
– when choosing a load reduction and rank multiplication solution – choose HyperCloud if you think its latency, cost and IP superiority trumps LRDIMMs – usually they are priced similarly (IBM memory price list)

Below 256GB

Most 16GB RDIMMs (2-rank) currently run at full speed at 1 DPC and 2 DPC. So you only need a load reduction and rank multiplication solution at 3 DPC.

You can use 16GB RDIMMs for 1 DPC and 2 DPC – these will run at full speed (IBM/HP).

On a 2-socket motherboard (2 processors) – there are 4 memory channels per processor – with each memory channel supporting up to 3 DIMMs per channel (3 DPC).

2 processors x 4 memory channels per processor x 3 DIMMs per channel = 24 DIMMs

or

8 x 3 DPC = 24 DIMMs

For use at 2 DPC it is:

8 x 2 DPC = 16 DIMMs

At 1 DPC it is:

8 x 1 DPC = 8 DIMMs

So at 2 DPC you can populate 16 DIMM slots – which is:

16GB x 16 DIMM slots = 256GB total memory
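The slot arithmetic above can be collected into a quick calculation (a toy sketch in Python; the channel and slot counts are the 2-socket Romley figures used above):

```python
# Slot arithmetic for a typical 2-socket Romley board:
# 2 processors x 4 memory channels each, up to 3 DIMMs per channel (DPC).
SOCKETS = 2
CHANNELS_PER_SOCKET = 4

def total_capacity_gb(dimm_size_gb, dpc):
    """Total memory with every channel populated at the given DPC."""
    channels = SOCKETS * CHANNELS_PER_SOCKET  # 8 memory channels
    return channels * dpc * dimm_size_gb

print(total_capacity_gb(16, 2))  # 16 DIMMs -> 256GB (the 256GB boundary)
print(total_capacity_gb(16, 3))  # 24 DIMMs -> 384GB
print(total_capacity_gb(32, 1))  # 8 DIMMs  -> 256GB
```

Note how the 16GB-at-2-DPC and 32GB-at-1-DPC cases both land on the same 256GB figure.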

If you choose to use 32GB RDIMMs – which will be available in 4-rank only for the foreseeable future – these experience slowdown at 3 DPC and 2 DPC (IBM docs even show them running slow at 1 DPC).

However if 32GB RDIMMs (4-rank) are able to run at 1 DPC at full speed on your particular server – then you can populate them so:

8 x 1 DPC = 8 DIMMs

8 DIMMs x 32GB = 256GB

So if you pick the 32GB route you could potentially run them at 1 DPC (256GB) but will need a load reduction and rank multiplication solution above that, i.e. at 512GB (2 DPC) or 768GB (3 DPC).

Above 256GB

Above 256GB, you will be operating squarely in territory that requires a load reduction and rank multiplication solution (to avoid speed slowdown).

– using 16GB at 3 DPC requires a load reduction and rank multiplication solution
.
– using 32GB at 2 DPC and 3 DPC requires a load reduction and rank multiplication solution

Rule of thumb

For this reason a rule of thumb seems to emerge:

– below 256GB on a 2-socket server you should be able to use RDIMMs
.
– above 256GB on a 2-socket server you have to use a load reduction and rank multiplication solution
.
– when choosing a load reduction and rank multiplication solution – choose HyperCloud if you think its latency, cost and IP superiority trumps LRDIMMs – usually they are priced similarly (IBM memory price list)

If you need LESS than or equal to 256GB memory on a 2-socket server, then the decision is simple:

– you should buy 16GB RDIMMs and can run them at 1 DPC or 2 DPC
– these will be cheap, and fast (you can run 1600MHz 16GB RDIMMs at 2 DPC at 1600MHz)

If you need MORE than 256GB memory on a 2-socket server, then the decision is simple:

– you should buy 16GB HyperCloud
– usually they are priced similarly to LRDIMMs (IBM memory price list) – 32GB HyperCloud may be cheaper still, because it uses 4Gbit monolithic memory packages instead of the 4Gbit x 2 DDP memory packages used on the 32GB RDIMM/32GB LRDIMM
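The decision above can be sketched as a small helper (a hypothetical `recommend_dimm` function, not from any vendor tool; the threshold is the 256GB-per-2-socket boundary derived in the analysis):

```python
def recommend_dimm(total_gb, sockets=2):
    """Rule-of-thumb module choice for standard (1.5V) memory on Romley.

    The 256GB boundary is per 2 sockets, so it scales for 4-socket servers.
    """
    limit = 256 * (sockets // 2)
    if total_gb <= limit:
        return "16GB RDIMM (2-rank) at 1 or 2 DPC - runs at full rated speed"
    # Above the boundary a load reduction and rank multiplication
    # solution (HyperCloud, or failing that LRDIMM) is needed.
    return "HyperCloud (load reduction and rank multiplication)"

print(recommend_dimm(192))              # RDIMM territory
print(recommend_dimm(384))              # needs HyperCloud
print(recommend_dimm(512, sockets=4))   # 512GB on 4 sockets: still RDIMMs
```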

You can exclude a few varieties of memory from consideration – because they are known to be non-viable (i.e. something else is better).

16GB LRDIMMs are non-viable as examined here:

https://ddr3memory.wordpress.com/2012/06/19/why-are-16gb-lrdimms-non-viable/
Why are 16GB LRDIMMs non-viable ?
June 19, 2012

32GB RDIMMs will be 4-rank for the foreseeable future and 4-rank experiences abysmal speed slowdown at 3 DPC and 2 DPC (actually 4-rank will not even run at 3 DPC because of the “8 ranks per memory channel” limit).

32GB RDIMMs are non-viable (for 2 DPC and 3 DPC) as examined here:

https://ddr3memory.wordpress.com/2012/06/20/non-viability-of-32gb-rdimms/
Non-viability of 32GB RDIMMs
June 20, 2012

At the 32GB level this leaves LRDIMMs and HyperCloud.

LRDIMMs bring with them a set of problems:

– HyperCloud latency, cost and IP superiority trumps LRDIMMs
– usually they are priced similarly (IBM memory price list)

On the risk factors for LRDIMM:

https://ddr3memory.wordpress.com/2012/06/05/lrdimms-future-and-end-user-risk-factors/
LRDIMMs future and end-user risk factors
June 5, 2012

https://ddr3memory.wordpress.com/2012/06/15/why-are-lrdimms-single-sourced-by-inphi/
Why are LRDIMMs single-sourced by Inphi ?
June 15, 2012

So LRDIMMs are in general weaker than HyperCloud.

A word on the OEM server spec sheets

You have to be careful reading the summary spec sheets for servers.

The memory is ROUTINELY listed as – for example – 1600MHz for 1/2/3 DPC use. That just means the 1600MHz-rated memory can be used at 1/2/3 DPC – it does not mean it will run at 1600MHz at 3 DPC.

For that you have to look at the detailed user guides – as I have linked in the HP DL360p and DL380p and IBM x3650 M4 memory choice articles above.

Those detailed user guides show exactly what speed is achievable at 1 DPC, 2 DPC and 3 DPC for each type of memory the OEM supports (and for each speed and voltage, i.e. standard or low voltage).

So this is a type of sloppiness by the OEMs which can mislead users if they just look at the summary spec sheets.

In addition, you have to be careful because the guides sometimes contain errors as well – I have found errors even in some HP user guides – one showed UDIMMs (unregistered DIMMs) running at 1333MHz at 3 DPC (which obviously cannot be true)!

Sometimes older versions of the user guide or motherboard manual are also floating around on the web, and you can wind up reading an older version which does not list newer memory available for that server.

So for example for HP you can find earlier versions of the user guides for the HP DL360p and DL380p servers which do not mention HyperCloud. HyperCloud was released on the HP servers a bit later than on the IBM servers – i.e. a bit after the initial Romley rollout – so there are HP doc versions that predate the HyperCloud listing. The guides I have linked in the HP DL360p/DL380p and IBM x3650 M4 memory choice articles above are the newer ones which list HyperCloud.

An example of the problems end-users face in deciphering server specification sheets regarding memory is illustrated by the comments section for this article:

https://ddr3memory.wordpress.com/2012/06/21/is-montage-another-metaram/
Is Montage another MetaRAM ?
June 21, 2012

Conclusion

So the very short rule of thumb then becomes – on a 2-socket server you should choose:

– if you need less than or equal to 256GB, you should use 16GB RDIMMs (2-rank) – you will be able to achieve 256GB at max speed (i.e. 1600MHz or 1333MHz, whatever the rating of the RDIMMs is)
.
– if you need more than 256GB, you should use HyperCloud – you will be able to achieve up to 384GB using 16GB HyperCloud, and up to 768GB using 32GB HyperCloud (available mid-2012 according to NLST) – both running at 1333MHz
.
– if you want to use HyperCloud, but it is not available on your server, then for 16GB use, you will have to keep using RDIMMs at 3 DPC as well – for a max of 384GB at 1066MHz (if you used HyperCloud it would give 1333MHz)
.
– if you want to use HyperCloud, but it is not available on your server, then for more than 384GB you would have to use 32GB LRDIMMs at 2 DPC (giving you 512GB running at 1333MHz) or 3 DPC (giving you 768GB at 1066MHz) – (if you used HyperCloud it would give 1333MHz at both 512GB as well as 768GB)
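The options above can be summarized in a small lookup (a sketch; the capacity and speed figures are the ones quoted in this conclusion, per 2-socket server):

```python
# (module, DPC) -> (total GB on a 2-socket server, speed in MHz),
# using the figures quoted in the conclusion above.
OPTIONS = {
    ("16GB RDIMM",      2): (256, 1600),
    ("16GB RDIMM",      3): (384, 1066),  # fallback when no HyperCloud
    ("16GB HyperCloud", 3): (384, 1333),
    ("32GB LRDIMM",     2): (512, 1333),
    ("32GB LRDIMM",     3): (768, 1066),
    ("32GB HyperCloud", 3): (768, 1333),
}

for (module, dpc), (gb, mhz) in sorted(OPTIONS.items(), key=lambda kv: kv[1]):
    print(f"{module:16s} at {dpc} DPC -> {gb}GB @ {mhz}MHz")
```

The table makes the pattern visible: at every capacity point where LRDIMM drops to 1066MHz, HyperCloud holds 1333MHz.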

When you buy LRDIMMs you have to be aware of these issues:

– high latency issues compared to RDIMMs and HyperCloud
.
– max out at 1066MHz at 3 DPC (i.e. slower than HyperCloud which yields 1333MHz at 3 DPC)
.
– 16GB LRDIMMs are priced similarly to the 16GB HyperCloud (but 16GB LRDIMMs are non-viable vs. 16GB RDIMM – so in practice you would never buy the 16GB LRDIMM)
.
– 32GB LRDIMMs are VERY expensive – another thing to consider. Even though both 32GB LRDIMMs and 32GB RDIMMs (4-rank) use 4Gbit x 2 DDP memory packages, the LRDIMM is currently nearly twice as expensive. 32GB HyperCloud is expected to arrive mid-2012 (according to NLST) and is based on 4Gbit monolithic memory packages, which are cheaper than DDP (Netlist leverages its Planar-X IP so monolithic can be used). 32GB LRDIMMs may become cheaper at that time.

Check out the articles above regarding non-viability of 16GB LRDIMM and 32GB RDIMM for more detail.

For more on 32GB RDIMM/32GB LRDIMM use of 4Gbit x 2 DDP memory packages:

https://ddr3memory.wordpress.com/2012/06/25/ddp-vs-monolithic-memory-packages/
DDP vs. monolithic memory packages and their impact
June 25, 2012

Prices

Prices have been discussed in this article (reseller prices are slightly lower than IBM/HP retail):

https://ddr3memory.wordpress.com/2012/05/27/what-are-ibm-hcdimms-and-hp-hdimms/
What are IBM HCDIMMs and HP HDIMMs ?
May 27, 2012

The 16GB LRDIMM is priced the same as the 16GB HCDIMM (HyperCloud):

49Y1567 16GB (1x16GB, 4Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP LRDIMM $549.00

00D4964 16GB (1x16GB, 1.5V)PC3-10600 CL9 ECC DDR3 1333MHz LP HyperCloud DIMM $549.00

32GB LRDIMMs are priced at nearly TWICE the price of the 32GB RDIMM – even though both are based on similar 4Gbit x 2 DDP memory packages:

90Y3105 32GB (1x32GB, 4Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP LRDIMM $4,399.00

90Y3101 32GB (1x32GB, 4Rx4, 1.35V) PC3L-8500 CL7 ECC DDR3 1066MHz LP RDIMM $2,499.00

These prices may change, but you get the idea of the relative pricing of these products.

Explaining the excessive price of the 32GB LRDIMM

I wonder if the 32GB LRDIMM is priced excessively high because they want NOBODY to even think about ordering it – i.e. because 32GB LRDIMMs are not actually available ?

IBM listed 32GB LRDIMMs as being “available later in 2012” in their user guides at Romley launch.

So obviously 32GB LRDIMM was NOT available at that time.

16GB HyperCloud WAS available and was listed as well, but the 32GB HyperCloud which is also due mid-2012 (according to NLST) was not listed as “available later in 2012”.

Which suggests some pressure on the OEMs from Intel – to list SOME LRDIMM in their Romley offerings (even something that was “available later in 2012”).

Probably 16GB LRDIMMs would have been available to list, but IBM/HP may not have felt in good conscience that they should list the 16GB LRDIMMs (since they are non-viable vs. the RDIMMs).

Exceptions

The “above 256GB use HyperCloud” rule of thumb has been developed because that 256GB boundary leaps out of the analysis as it applies to regular Romley servers.

There ARE exceptions to the “above 256GB use HyperCloud” rule of thumb however.

There are some servers which implement BIOS tweaks – such as what SuperMicro describes as the “Forced SPD” setting – which place the server outside the Intel PoR (“plan of record”). Some of these may offer higher speeds on an “at your own risk” basis. Many data center type users may not want to operate in that region, while others may be willing to operate in that regime.

Then we have the IBM x3750 M4 server, which implements some tweaks on the motherboard (placing it outside the Intel PoR) that improve memory bus signal quality. This allows all memory types to perform better – both RDIMMs and LRDIMMs, and presumably HyperCloud too (if available on this server). The IBM x3750 M4 seems to address the high-processing-power market (it has a high processing power/disk space ratio) and is suited to the HPC market. This is described here, and also includes comments from IBM:

https://ddr3memory.wordpress.com/2012/06/02/memory-choices-for-the-ibm-system-x-x3750-m4-servers-2/
Memory choices for the IBM System X x3750 M4 servers
June 2, 2012

With these types of servers, the rule of thumb moves up from “256GB” to “384GB” – beyond 384GB you would need to use 32GB RDIMMs (4-rank), and these experience slowdown that requires a load reduction and rank multiplication solution.

UPDATE: added 06/28/2012: added reference to more complete analysis

This has been an examination of the memory buying process when considering standard (1.5V) memory.

A more complete examination in which both 1.5V and 1.35V memory choices are considered:

https://ddr3memory.wordpress.com/2012/06/28/memory-buying-guide-including-1-35v-memory-for-romley/
Memory buying guide – including 1.35V memory for Romley
June 28, 2012

And a new rule of thumb is derived – that includes both 1.5V and 1.35V considerations:

for 1.5V – “above 256GB requires HyperCloud”
for 1.35V – “above 384GB requires HyperCloud”
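The combined rule can be written down as a tiny helper (a sketch; thresholds are per 2-socket server, taken from the follow-up analysis linked above):

```python
def hypercloud_threshold_gb(voltage, sockets=2):
    """Capacity (GB) above which HyperCloud is needed, per the combined rule."""
    per_two_sockets = {1.5: 256, 1.35: 384}[voltage]  # from the rule of thumb
    return per_two_sockets * (sockets // 2)

print(hypercloud_threshold_gb(1.5))             # 256 (standard voltage)
print(hypercloud_threshold_gb(1.35))            # 384 (low voltage)
print(hypercloud_threshold_gb(1.5, sockets=4))  # 512 on a 4-socket server
```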


Comments

  1. I note that with the 4-socket IBM System x3750 M4 [1], I can go up to 768GB with 16GB RDIMMs at 3 DPC without any slowdown. That’s more than 256GB per 2-sockets.
    [1] http://www.redbooks.ibm.com/abstracts/tips0881.html?Open#memory
    David.

    • That’s because the IBM x3750 M4 uses the motherboard tweaks as described in this article:

      https://ddr3memory.wordpress.com/2012/06/02/memory-choices-for-the-ibm-system-x-x3750-m4-servers-2/
      Memory choices for the IBM System X x3750 M4 servers
      June 2, 2012

      And as clarified by your comments there.

      So it makes perfect sense that x3750 should have that performance.

      I have added an exception paragraph to specify that x3750 etc. do not follow the rule of thumb described in the article.

    • Thanks for the feedback.

    • For extra special servers like the IBM x3750 M4 server, the rule of thumb would be moved upwards from “256GB” for regular Romley servers to a higher notch at “384GB”.

      This analysis is on a per 2-socket basis – so for the 4-socket server you just double the memory sizes.

      Because beyond 384GB (for a 2-socket server) you would need to use 32GB RDIMMs (4-rank) – as 16GB would not suffice – with the problems associated with 4-rank use – so you would need a load reduction and rank multiplication solution above “384GB”.

  2. Pingback: Where are the 1600MHz LRDIMMs/HyperCloud for Romley ? | ddr3memory

  3. I created an info graph for the rDIMM selection process.
    http://www.kloudpedia.com/2012/06/27/rdimm/

    Please see if you would like to copy or create a better one for this blog.

    • Ok, thanks – I was going to give some corrections – but the explanation may be made simpler by focusing on the total memory a user wants – and using that as the starting point.

      Let me get back to you on this – I’ll post back here shortly.

    • Ok, I see where you are going with the “start with the DPC” ..

      Here are some of the comments I had noted – some of them maybe useful ..

      – change the .tiff to .jpg/.gif/.png for more universal viewing ..

      – suggestion for an open source flowcharting program try Dia – http://portableapps.com/apps/office/dia_portable

      – if it is a flow chart – there should be an arrow pointing to the 3 DPC box (i.e. “start here”), or arrows going into the 1 DPC, 2 DPC and 3 DPC boxes – can also consider removing the “No” arrows going from 3 DPC to 2 DPC – and just keep them as separate 1 DPC, 2 DPC and 3 DPC boxes – each with its arrows going to the memory options below

      – you would remove the “Yes/No” option from the 1 DPC box – since there is no choice of server at that point if they don’t want 3 DPC, 2 DPC or 1 DPC

      – correct typo “HCDIM is HyperCloud ..” to “HCDIMM is ..” (second from bottom)

      – maybe change “3DPC” to “3 DPC”
      – change “32 GB” to “32GB”

      – “RDIMMs do not have buffers on the data lines so bus speed drops with more than 1 DPC” is incorrect – that is only so for the 32GB RDIMM (4-rank), i.e. it may work at full speed at 1 DPC, slows down at 2 DPC, and cannot work at 3 DPC due to the “8 ranks per memory channel” limit – the 16GB RDIMMs (2-rank) work well at 1 DPC and 2 DPC (speed slowdown is at 3 DPC)

      – “A 32GB RDIMM 2R does not exit” – and “A Maximum of 8 ranks ..” – may want to give those separate white boxes

      – uneasy with the “lots of money to spend” decision box – since that may be better described as a “spectrum of memory sizes usable” type of thing and not necessarily a binary thing (this would fit with a graphic description that also covers the 1 DPC, 2 DPC use case for a 3 DPC motherboard) – perhaps a “how much memory you want” type of box – also RDIMM vs. HCDIMM prices are not that hugely different (IBM/HP), so it may be a bit extreme to label them cheap/expensive ..

      – the tweaked decision box (referring to the IBM x3750 M4) may be confusing the diagram – since it is an exception (beyond Intel PoR), it may be less confusing (to the more common use case diagram) if it is removed to a separate box below – i.e. handled as an “exception” from the main diagram (?)

      – on the “tweaked” decision box going to 32GB RDIMM 4R – there is no data to suggest that the 32GB RDIMM (4-rank) is an option on the IBM x3750 M4 or what speed it will run at – it has been excluded for some reason, probably the 4-rank speed slowdown issue – the data sheet for that server does not show any 32GB RDIMM (4-rank) (like the HP user guides). As the article states (and IBM docs suggest), the 4-rank is non-viable at 2 DPC and 3 DPC (and is replaceable at 1 DPC – by using 16GB RDIMM (2-rank) at 2 DPC instead) – however since you are listing the choices for the 1 DPC motherboard you can’t use 2 DPC – so you may be right in listing the 32GB RDIMM 4R and the 32GB HCDIMM – but there is no info on how much improvement the 32GB RDIMM (4-rank) sees on the tweaked server – if it is very bad it may not be worth listing as an option on the diagram (esp. if the 32GB HyperCloud will be cheaper, using 4Gbit monolithic instead of the 4Gbit x 2 DDP as on the 32GB RDIMM (4-rank)). The IBM x3750 M4 user guide:
      http://www.redbooks.ibm.com/abstracts/tips0881.html?Open#memory

      – perhaps a separate box also for non-viable memory types – 16GB LRDIMM and 32GB RDIMM (4-rank) which are clearly non-viable (vs. RDIMM and vs. LRDIMM/HyperCloud respectively)

  4. DDR3memory
    It looks like pricing on 32GB LRDIMM has been changed. It is now very close to 32GB RDIMM. Please check on HP website (provided a link on NLST message board also).
    Great thread for people like me – software folks needing more hardware !

    • You are right – there is a link for HP 32GB LRDIMM which places the price closer to 32GB RDIMMs (4-rank) – as it SHOULD BE – since both are made using similar 4Gbit x 2 DDP memory packages – so the LRDIMM should cost slightly more than the RDIMM variety.

      This may be a sign that 32GB LRDIMMs have actually become available in the market – while earlier they were not – or were priced so no one would try to buy them (?).

      Or they are pricing in preparation for arrival of 32GB HyperCloud which is based on 4Gbit monolithic memory packages – and so is at least cheaper to produce than both the 32GB LRDIMM and the 32GB RDIMM (4-rank) which are based on DDP designs.

      However, the IBM link I give in the article is still listing the 32GB LRDIMM at those extraordinarily high prices of nearly 2x the 32GB RDIMM (4-rank) – which is hard to understand.

  5. I thought a long time about making the starting point be the “desired” memory size then convinced myself that most people start the process by selecting motherboard so I went with that.

    I tested all my cases against your entry and it all looked ok (to me).
    no rush, lets see how the memory starting point looks.

    thanks

  6. Don’t forget: The DIMMs that are supported by a given server aren’t simply a matter of processors, number of memory channels and DPC. I would expect that system-level criteria such as EMI, thermals and power come into it, as well as market forces such as supply and product positioning.

    David Watts
    IBM Redbooks

    • Right – there are two points of view here – the end-user point of view of “how to choose memory” – and that seems complex enough (unless one thinks through the permutations and a simple rule comes out then ok).

      And the second viewpoint are the considerations the manufacturer had to juggle to decide what to support on a particular motherboard.

      There are these differences in the currently available IBM/HP options for HyperCloud/LRDIMM:

      – LRDIMM is available in low voltage 1.35V also while HyperCloud is not. There would be users who would be willing to sacrifice speed if they can’t compromise on the 1.35V. HyperCloud may appear in low voltage later – or if they have buyers at 1.5V they may continue to focus on 1.5V (although Netlist has said they are working on expanding the 16GB HyperCloud to more servers – I don’t know if that includes offering a 1.35V although Netlist did announce a 1.35V version some time back).

      – LRDIMMs, while based on the single-sourced Inphi LRDIMM buffer chipset, still have multiple module producers who are each capable of pushing some variety of the LRDIMM (standard, low voltage) – their combined capabilities allow them to produce (or qualify on) a wider array of modules. In that way HyperCloud is a small player compared to all those LRDIMM producers combined. The hope is that, just as Apple is becoming a smaller player compared to the whole Android variety yet is still a big player compared with any individual company in the Android space, HyperCloud, while remaining smaller than the whole LRDIMM space, may still occupy a significant position when compared to any single LRDIMM player (at least that is the hope from a pro-Netlist viewpoint such as I hold)

    • How would you describe the positioning for the IBM x3650 M4 and the IBM x3750 M4 (higher processing power/hard disk space) ?

      Would you say the x3650 M4 is more likely to be used for virtualization/data center tasks where the tasks are not processor bound. Virtualization also needs more hard disk space.

      While the x3750 M4 is squarely for the HPC (high performance computing/supercomputing) arena where processor speed is paramount (not as much emphasis on hard disk space). Since the x3750 M4 has 4 processors while having the same hard disk area (and total volume) as the x3650 M4.

      I understand there is probably some leeway either way – but is that the general sense of how these two servers may be positioned ?

  7. I started to edit the diagram per your comments, then I realized that in order to take into account other considerations such as power, performance, density, price & availability one probably needs a positioning chart instead.

    I will see if I can come up with one.

  8. Pingback: Memory buying guide – including 1.35V memory for Romley | ddr3memory

  9. I changed the complex decision graph into a simple table

    http://www.kloudpedia.com/2012/06/27/rdimm/

    Please check it out if you like. Otherwise, simply ignore.

    This is now, as we discussed, a table that shows the end user what DIMM type will get them which memory density at which speed grade.

    thanks

    note: need to get HCDIMMs listed in this site too !

    http://www.findthebest.com/

    • Yes, easier to read.

      Maybe add a column for 640GB (even though it is not used) – so the x-axis is linear.

      The x-axis being total memory – with 1 DPC, 2 DPC, 3 DPC shifting to the right for the 32GB seems like a good way to illustrate.

      The 32GB RDIMM (4-rank) doesn’t run at 1333MHz at 1 DPC, but 1066MHz – see IBM speeds for that:

      RDIMM – quad-rank (4-rank) – at 1.5V
      – 1 DPC at 1066MHz
      – 2 DPC at 800MHz
      – 3 DPC not supported (because 4 ranks x 3 DPC = 12 ranks, which exceeds the 8 ranks per memory channel limit of current systems)
      .
      RDIMM – quad-rank (4-rank) – at 1.35V
      – 1 DPC at 800MHz
      – 2 DPC at 800MHz
      – 3 DPC not supported (because 4 ranks x 3 DPC = 12 ranks, which exceeds the 8 ranks per memory channel limit of current systems)

      Maybe a similar graph at the bottom with the 1.35V data would complete the picture.

    • Thanks.

      I’ll try to send something out today.

    • I have just e-mailed you the infographic I was going to post.

      Thanks for suggesting the idea of an infographic, and posting your graphic, as it helped me incorporate some of your ideas.

      Please let me know of any mistakes, or if there is some confusion, or info that should go on there.

      • The document you sent is excellent.
        you can tell who is the expert.
        I did not find any mistakes yet.

        The only comment I had was regarding this statement:
        “32GB HyperCloud 1.5V/1.35V due mid-2012”
        you mean by this that they will be qualified for IBM and HP, right?

        would be good to also add one line about the fact that
        Hypercloud is compatible with RDIMMs on any server even if not
        qualified by the vendor. For example, Supermicro motherboards.

        once published, I will link to it.

      • Updated with your suggestions.

        quote:
        —-
        The only comment I had was regarding this statement:
        “32GB HyperCloud 1.5V/1.35V due mid-2012”
        you mean by this that they will be qualified for IBM and HP, right?
        —-

        Changed that to:

        Availability:
        32GB HyperCloud 1.5V/1.35V due mid-2012 as IBM HCDIMM, HP HDIMM

        This is based on comments in the recent Netlist conference call regarding 32GB availability, and 1.35V is suggested by the 32GB white paper (and the logical reasoning presented in “Memory buying guide – including 1.35V memory for Romley”).

        quote:
        —-
        would be good to also add one line about the fact that
        Hypercloud is compatible with RDIMMs on any server even if not
        qualified by the vendor. For example, Supermicro motherboards.
        —-

        Ok, thanks – managed to get that added there also (maybe need Planar-X to increase infographic real-estate – just kidding):

        Interoperability:
        – LRDIMMs incompatible with RDIMMs
        require BIOS update
        (included on most new Romley servers)
        – HyperCloud interoperable with RDIMMs
        usable on Romley and pre-Romley servers
        IBM/HP recommend using all-HyperCloud for max
        load reduction and rank multiplication impact

  10. Pingback: Infographic – memory buying guide for Romley 2-socket servers | ddr3memory
