Examining LRDIMMs

LRDIMMs vs. the RDIMM standard

UPDATE: 07/06/2012 – VMware certifies Netlist as sole memory vendor
UPDATE: 07/27/2012 – confirmed HCDIMM similar latency as RDIMMs
UPDATE: 07/27/2012 – confirmed LRDIMM latency and throughput weakness

Load Reduced DIMMs (LRDIMMs) are a new JEDEC-ratified standard for memory modules.

The key difference between RDIMMs and LRDIMMs is load reduction and rank multiplication capability.

– load reduction – lowering a memory module's electrical load on the memory bus. Electrical load matters because putting too many memory modules on a memory channel forces the channel to run slower, reducing achievable bandwidth.

– rank multiplication – making a memory module present a lower "rank" to the memory controller. There is an 8-rank-per-memory-channel limit on Romley servers, so 4-rank 32GB RDIMMs cannot be used at 3 DIMMs per channel (3 DPC). See the sketch after this list.
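
A minimal sketch of the rank arithmetic above – my own illustration, not vendor code. The 8-rank limit and the 4-rank module come from the discussion above; the function name and the assumption that rank multiplication halves the apparent rank are mine:

```python
RANK_LIMIT_PER_CHANNEL = 8  # the Romley per-channel rank limit cited above

def ranks_seen(module_rank, dpc, rank_multiplication=False):
    """Ranks the memory controller sees on one channel.

    With rank multiplication the on-DIMM buffer presents the module
    as having fewer ranks (halving is assumed here for illustration).
    """
    per_module = module_rank // 2 if rank_multiplication else module_rank
    return per_module * dpc

# 4-rank 32GB RDIMMs at 3 DPC: 4 * 3 = 12 ranks, over the limit of 8
print(ranks_seen(4, 3) <= RANK_LIMIT_PER_CHANNEL)                            # False
# The same modules behind rank multiplication appear 2-rank: 2 * 3 = 6 ranks
print(ranks_seen(4, 3, rank_multiplication=True) <= RANK_LIMIT_PER_CHANNEL)  # True
```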

For more info on why load reduction and rank multiplication are needed:

https://ddr3memory.wordpress.com/2012/05/24/intels-need-for-lrdimms-on-roadmap-to-ddr4/
Intel’s need for LRDIMMs on roadmap to DDR4
May 24, 2012

https://ddr3memory.wordpress.com/2012/05/24/the-need-for-high-memory-loading-and-its-impact-on-bandwidth/
The need for high memory loading and its impact on bandwidth
May 24, 2012

For an explanation of how memory is installed on servers, check out:

https://ddr3memory.wordpress.com/2012/05/24/installing-memory-on-2-socket-servers-memory-mathematics/
Installing memory on 2-socket servers – memory mathematics
May 24, 2012

https://ddr3memory.wordpress.com/2012/06/29/infographic-memory-buying-guide-for-romley-2-socket-servers/
Infographic – memory buying guide for Romley 2-socket servers
June 29, 2012

Why a new JEDEC standard for memory ?

LRDIMMs are pin-compatible with current DIMM slots on motherboards. That is, they look like RDIMMs.

The only difference is that LRDIMMs provide load reduction and rank multiplication capability.

However, this capability by itself does NOT require creation of a new standard.

HyperCloud (RDIMM) does not need a new standard

Netlist's HyperCloud memory, for example, provides the same capability without a new standard.

Netlist is the inventor and holder of key IP in load reduction and rank multiplication. Netlist's HyperCloud does load reduction and rank multiplication and trumps LRDIMMs on performance, latency, price and on IP issues.

Yet HyperCloud did not require creation of a new “standard”.

The reason is that HyperCloud is compatible with the RDIMM standard. In fact, you can mix HyperCloud and RDIMMs on the same motherboard – although realistically you would not want to, because it defeats the load reduction effect you are presumably trying to achieve by using HyperCloud (it is perhaps for this reason that IBM/HP recommend an all-HyperCloud configuration).

The bottom line is that HyperCloud does not require a new standard.

And LRDIMM does.

https://ddr3memory.wordpress.com/2012/07/03/examining-netlist/
Examining Netlist
July 3, 2012

LRDIMMs require a BIOS update and thus a new standard

So why the need to standardize LRDIMMs ? Why was JEDEC needed ?

Firstly, the construction of the LRDIMM buffer chipset (used to build the memory module) requires standardization if many different companies are going to make LRDIMM buffer chipsets (Inphi, IDTI and Texas Instruments are the top 3 buffer chipset makers for RDIMMs).

Secondly, the construction of the memory module itself has to be standardized so there is little variation between products from different memory module makers (for example Samsung, Micron, Smart Modular).

However, the third reason is the most strategically interesting because it moves the standardization BEYOND the memory module and expands the scope of the standardization effort to the motherboard makers.

And this happens because LRDIMMs require a BIOS update in order to work.

This means the motherboard makers need to implement a BIOS update for every motherboard the LRDIMMs are going to work on, and then test whether the update works well with the LRDIMM varieties being sold.

When a memory module requires changes by the motherboard makers, much more coordination is needed – and therefore you need a JEDEC standardization committee to establish how things are to be done, and to set guidelines for the motherboard makers as well.

LRDIMMs will not work without this LRDIMM-specific BIOS update.

LRDIMMs are NOT compatible with RDIMMs.

You cannot put LRDIMMs and RDIMMs together on the same motherboard. This is because LRDIMMs impose a restriction on how the memory module talks to the BIOS, and using both at the same time breaks that assumption.
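
As a toy illustration of this population rule (my own sketch – the function and module-type labels are hypothetical, not any real BIOS logic):

```python
def population_valid(modules):
    """modules: DIMM types installed in one server, e.g. ["RDIMM", "LRDIMM"]."""
    if "LRDIMM" in modules:
        # the LRDIMM-only rule: LRDIMMs may not share a board with anything else
        return all(m == "LRDIMM" for m in modules)
    return True  # RDIMMs and RDIMM-compatible modules (e.g. HyperCloud) mix

print(population_valid(["LRDIMM", "RDIMM"]))      # False - not allowed
print(population_valid(["HyperCloud", "RDIMM"]))  # True - allowed, though mixing
                                                  # defeats the load reduction
```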

Difference between LRDIMMs and HyperCloud

The important distinction, then, is that HyperCloud is compatible with the RDIMM standard, requires no BIOS update, is "plug and play", and is interoperable with regular RDIMMs.

Thus HyperCloud leverages the RDIMM standard – a pre-existing and widely deployed standard – and requires no separate JEDEC standard in order to work.

LRDIMMs, in contrast, require a BIOS update to work, and therefore require standardization to ensure consistent implementation of that BIOS update across the motherboard makers.

The major impact of HyperCloud compatibility with RDIMMs is that:

– HyperCloud is an RDIMM – except with internal features which make its load and rank appear lower (load reduction, rank multiplication)

– HyperCloud requires NO cooperation or corralling of motherboard makers to work – in contrast, LRDIMM requires a BIOS update implemented by every motherboard maker

– while LRDIMMs are a new standard that is incompatible with RDIMM, HyperCloud leverages the RDIMM standard – mainstream, in use and supported by all – and does not require JEDEC endorsement or the cooperation of motherboard makers

There is a reason why HyperCloud does not require a BIOS update – and it is related to IP that Netlist holds in this area. I suspect it has something to do with "Mode C" (which is mentioned in the court documents in Netlist's litigation against Google) – but I cannot say definitively that that is the sole reason for this capability.

Here is an explanation given by Netlist for why HyperCloud requires no BIOS update – while LRDIMM does (note the reference to "they do that mainly in software and can't do the full rank-multiplication like our product does"):

http://78449.choruscall.com/netlist/netlist120228.mp3
Fourth Quarter and Full Year 2011 Conference Call
Tuesday, February 28 5:00pm ET

at the 30:00 minute mark ..

George Santana of Ossetian (?):

Just .. how long do you think NLST has as far as a head start on the 2-rank 32GB ?

Chuck Hong – CEO:

Well the .. the only other way to build a really .. a real 2-rank 32GB is with 8Gbit (DRAM) die from the semiconductor manufacturers.

I don’t think anybody even has that on their roadmap – except maybe Samsung.

It looks like 4Gbit (DRAM die) will be the LAST viable .. uh .. monolithic die out in the industry.

So the industry is looking to go into some stacking methodologies that you have heard of 3DS and there are some other competing technologies (Hybrid Memory Cube etc.), so we think effectively we’ll have the only real 32GB 2-rank in the market for DDR3.

And DDR4 when products start stacking, you need rank-multiplication and HyperCloud is really the only product that does rank multiplication on the DIMM itself, so .. as you dig into how other technologies try to do that, they do that mainly in software and can’t do the full rank-multiplication like our product does.

So I think we have a pretty good .. uh .. advantage there.

Why LRDIMMs could only work with Romley rollout

The BIOS requirement imposed by LRDIMMs is a deal-breaker for LRDIMM use on pre-Romley systems and effectively prevented rollout of LRDIMMs prior to Romley.

This is because no data center would want to fiddle with BIOS settings on existing servers to get LRDIMMs to work – without any guarantee of the outcome of such an upgrade effort.

For this reason LRDIMMs could only have been rolled out with a new platform i.e. Romley.

See the section “LRDIMMs require cooperation of motherboard makers” in the article below for LRDIMM end-user problems with pre-Romley SuperMicro servers:

https://ddr3memory.wordpress.com/2012/06/25/hypercloud-vs-lrdimms/
Is HyperCloud vs. LRDIMMs similar to Betamax vs VHS ?
June 25, 2012

This is the reason why LRDIMMs were rolled out with Romley – or HAD to be rolled out with Romley (regardless of their state of development).

In addition, with Romley the increased processor speed requires greater memory capacity – to keep the processing power/memory ratio the same for virtualization.

OEMs were thus under pressure to list LRDIMM offerings in their guides – even though in many cases they listed LRDIMMs as being “Available later in 2012” (IBM docs at the time of Romley rollout).

https://ddr3memory.wordpress.com/2012/05/27/what-are-ibm-hcdimms-and-hp-hdimms/
What are IBM HCDIMMs and HP HDIMMs ?
May 27, 2012

How LRDIMM deployment benefitted HyperCloud

When Intel supported LRDIMMs for rollout with Romley, that created an opportunity for HyperCloud.

For high memory loading applications, the cost of memory dwarfs the cost of the server.

Additionally, OEMs have little interest in pushing a memory product that REDUCES the need for server boxes.

Intel’s well-orchestrated push for LRDIMMs created a situation where OEMs HAD to offer LRDIMMs, because everyone else was doing so.

Once the OEMs accepted that LRDIMMs would be offered, they had less to lose by offering something better. For this reason IBM/HP – who account for 65% of server sales – chose to qualify HyperCloud on the HP DL360p and DL380p and the IBM x3650 M4 servers – which are high volume virtualization/data center servers. This is of course my own reading, as I don’t have knowledge of what the IBM/HP folks were really thinking.

HyperCloud conforms to the RDIMM standard and is interoperable with RDIMMs – so it required no special cooperation with motherboard makers or with the JEDEC.

HyperCloud has been available on pre-Romley servers and has numerous benchmarking results available comparing it to RDIMMs in various environments – EDA (electronic design automation), race car simulation and other applications. In addition, it has previously been qualified on the pre-Romley Westmere platform by SuperMicro, Cirrascale, Viglen, NEC and Gigabyte. It has also been used on the Cray CX1 by Swift Engineering to demonstrate its use in HPC (high performance computing) applications. It has been qualified by software vendors like MSC Software and Nexenta and is one of two memory products listed on the VMware website as being qualified with VMware products.

Here is some background on Netlist and Nexenta:

http://www.prnewswire.com/news-releases/netlists-hypercloud-memory-certified-with-nexentas-openstorage-software-122219888.html
Netlist’s HyperCloud™ Memory Certified With Nexenta’s OpenStorage Software
16GB HyperCloud Memory enables higher utilization and better price-performance for NexentaStor OpenStorage solutions
IRVINE, Calif., May 19, 2011

UPDATE: 07/06/2012 – VMware certifies Netlist as sole memory vendor

VMware certifies Netlist as the sole memory vendor for its products. The Netlist 16GB and 32GB HyperCloud (supplied by IBM/HP) and the Netlist 16GB VLP RDIMM (supplied by IBM) are the only memory products certified for use with VMware:

https://ddr3memory.wordpress.com/2012/07/05/memory-for-vmware-virtualization-servers/
Memory for VMware virtualization servers
July 5, 2012

LRDIMMs have yet to demonstrate benchmark results or stability data, or information about whether the BIOS updates required to make LRDIMMs work on motherboards will require "bug fixes" later (i.e. a series of BIOS upgrades required at data centers ?).

In contrast, Netlist has been demonstrating HyperCloud memory with VMware since 2010.

So why did HyperCloud not sell in volume prior to Romley ?

The reason is that OEMs are not generally favorable towards solutions which lead to LOWER sales of server boxes. With processor speeds no longer a limitation, and memory capacity the main bottleneck for virtualization, if you can fit 2x the memory in a server, you can avoid having to buy an additional server box.
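
A back-of-the-envelope sketch of that consolidation math (my own illustration – the 1536GB workload and the per-server capacities are hypothetical round numbers):

```python
import math

def servers_needed(total_vm_memory_gb, memory_per_server_gb):
    """Boxes required when memory, not CPU, is the virtualization bottleneck."""
    return math.ceil(total_vm_memory_gb / memory_per_server_gb)

# Doubling the memory one 2-socket box can usefully hold halves the box count:
print(servers_needed(1536, 384))  # 4 servers
print(servers_needed(1536, 768))  # 2 servers - two fewer boxes sold
```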

This is where HyperCloud has benefitted from the LRDIMM deployment.

The LRDIMM deployment has had several benefits for HyperCloud. Firstly, Intel’s aggressive push for LRDIMMs has ensured market understanding of load reduction and rank multiplication. Secondly, OEMs, who are normally averse to pushing a product that leads to LOWER sales of server boxes (since by expanding memory for virtualization you need to buy FEWER servers), have been forced by Intel to support LRDIMM.

This has created a situation where the OEMs see no incremental disadvantage in offering HyperCloud. So we have IBM/HP pushing HyperCloud on their high volume servers for virtualization/data centers – the HP DL360p and DL380p and IBM x3650 M4 server lines (being sold as IBM HCDIMM and HP HDIMM).

The gates for load reduction and rank multiplication have been opened by the LRDIMM introduction at the OEMs.

Why load reduction and rank multiplication become essential for Romley

Beyond pressure from Intel, there is a natural NEED for load reduction and rank multiplication for Romley (see the sketch after this list):

– increasing processor speed requires greater memory capacity – to keep the processing power/memory ratio the same for virtualization

– 32GB RDIMMs will be 4-rank for the foreseeable future and suffer abysmal speed slowdowns (they are non-viable vs. LRDIMM/HyperCloud)
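
Here is the slot arithmetic behind this (a minimal sketch assuming the typical 2-socket Romley layout from the "memory mathematics" article linked above: 4 memory channels per socket, 3 DPC):

```python
SOCKETS = 2
CHANNELS_PER_SOCKET = 4  # typical 2-socket Romley layout
DPC = 3                  # DIMMs per channel

slots = SOCKETS * CHANNELS_PER_SOCKET * DPC  # 24 DIMM slots in total

print(slots * 16)  # 384 GB - the ceiling with 16GB DIMMs
print(slots * 32)  # 768 GB - but 4-rank 32GB RDIMMs cannot run at 3 DPC
                   # (4 ranks x 3 DPC = 12 > the 8-rank channel limit),
                   # so only load-reduced/rank-multiplied 32GB gets you here
```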

Lack of information on LRDIMMs

Information on LRDIMMs only became available in early 2012.

Prior to that, the only source of information on LRDIMMs was the comments made by Netlist in its conference calls.

Netlist – the maker of HyperCloud – is the inventor of load reduction and rank multiplication, with close to 10 years of experience in this area.

Just as Netlist had warned of problems with MetaRAM’s "stacked DRAM" approach – MetaRAM later conceded IP to Netlist and went out of business – Netlist has been warning of problems with LRDIMMs.

After information on LRDIMMs became available in early 2012, these predictions were confirmed.

LRDIMMs have several weaknesses:

– unable to deliver 1333MHz at 3 DPC on regular Romley servers (this effectively makes 16GB LRDIMMs non-viable)

– high latency (which effectively makes them non-viable vs. HyperCloud), inherent in their asymmetrical lines and centralized buffer chipset approach (DDR4 tries to remedy this by copying HyperCloud more closely)

– the need for BIOS updates on motherboards (and possibly secondary updates later to resolve bugs)

– lack of interoperability with RDIMMs

– the LRDIMM camp does not own the IP behind load reduction and rank multiplication – and has suffered a severe blow at the USPTO in challenging Netlist patents in reexamination (the Netlist ‘537 and ‘274 patents survived reexam with all claims intact) – this will prevent Inphi from challenging Netlist IP when Netlist vs. Inphi resumes (it was stayed pending the reexams)

Additionally, 32GB LRDIMMs use 4Gbit x 2 DDP memory packages – while 32GB HyperCloud uses 4Gbit monolithic packages – making 32GB LRDIMMs more expensive to produce than 32GB HyperCloud.

LRDIMMs do not have a long history of use – primarily because they could not be deployed prior to Romley (requiring a BIOS update is undesirable for existing server users).

There is an absence of benchmarking information for LRDIMMs comparing them with RDIMMs and especially comparing them with HyperCloud – the only information on this is found on Netlist’s website.

Inphi – the sole-source maker of LRDIMM buffer chipsets – is famously known for avoiding mention of HyperCloud in its conference calls – even though Netlist is a direct competitor and an immediate threat to the LRDIMM line of business (because of the allegations that Inphi stole Netlist IP).

To make matters worse, JEDEC finalized the LRDIMM standard without securing appropriate licenses from Netlist. Thus LRDIMMs – a JEDEC standard – have no licensing cover. This leaves them open to a potential injunction against the sale of infringing product (Netlist vs. Inphi).

On DDR4 borrowing from LRDIMM use of Netlist IP in “load reduction” and “rank multiplication”:

https://ddr3memory.wordpress.com/2012/06/08/ddr4-borrows-from-lrdimm-use-of-load-reduction/
DDR4 borrows from LRDIMM use of load reduction
June 8, 2012

https://ddr3memory.wordpress.com/2012/06/07/jedec-fiddles-with-ddr4-while-lrdimm-burns/
JEDEC fiddles with DDR4 while LRDIMM burns
June 7, 2012

All discussion by Inphi in its conference calls has focused on how the other LRDIMM buffer chipset makers (IDTI and Texas Instruments) are being left far behind.

IDTI, for its part, was an aggressive promoter of LRDIMMs, claiming it had the first JEDEC-compliant LRDIMM buffer chipset on the market – in competition with Inphi. However, it has been deemphasizing LRDIMMs over the course of the last few conference calls.

Texas Instruments has not been interested in LRDIMMs.

Inphi, IDTI and TI are the top 3 buffer chipset makers for RDIMMs.

The Texas Instruments absence is probably related to the settlement in Netlist vs. Texas Instruments – TI allegedly leaked information obtained from Netlist under NDA to JEDEC.

IDTI has prudently skipped LRDIMM deployment for Romley altogether and has said it may target Ivy Bridge in late 2012. While IDTI has said this is because LRDIMMs are only viable at 32GB (since 16GB LRDIMMs are non-viable) and 32GB may only gain volume later, this is not a legitimate argument for not competing in the early qualification process.

Buffer chipset manufacturers usually try to get in EARLY on new deployments because entry at a later stage becomes very difficult at the OEMs. In addition, the memory module makers typically pick ONE supplier out of the many chipset suppliers, and therefore the buffer chipset makers (like Inphi, IDTI and TI) typically try to establish relationships early with the memory module makers.

https://ddr3memory.wordpress.com/2012/05/24/lrdimm-buffer-chipset-makers/
LRDIMM buffer chipset makers
May 24, 2012

Excerpt from the above article where Inphi points out:

It’s not a market that somebody from outside can come into easily just because of the long qualification cycles and the fact these are getting deployed across a wide range of SKUs (stock keeping units ?) .. the OEMs that (unintelligible) the memory module makers don’t want to qualify multiple suppliers because they have to deploy them across a wide set of SKUs ..

A more likely reason may be IDTI’s prudence in the face of Inphi’s losses at the USPTO in challenging Netlist IP (the ‘537 and ‘274 patent reexams), which spell trouble for Inphi and LRDIMMs in general when Netlist vs. Inphi resumes (it was stayed pending the reexams).

For more on the LRDIMM buffer chipset makers:

https://ddr3memory.wordpress.com/2012/05/24/lrdimm-buffer-chipset-makers/
LRDIMM buffer chipset makers
May 24, 2012

https://ddr3memory.wordpress.com/2012/06/21/is-montage-another-metaram/
Is Montage another MetaRAM ?
June 21, 2012

Risk factors for LRDIMMs

Inphi was the most aggressive in challenging Netlist IP at the USPTO.

Inphi’s loss at the USPTO in the Netlist ‘537 and ‘274 patent reexams spells trouble for Inphi when Netlist vs. Inphi resumes.

LRDIMM was also standardized by JEDEC without securing the appropriate licensing.

Inphi is currently the ONLY provider of LRDIMM buffer chipsets (Montage – a new Chinese entrant financed by Intel Capital – has emerged, but their products seem to underperform the Inphi buffer chipsets) – as IDTI and TI have retreated.

If Inphi is prevented from pushing LRDIMM buffer chipsets, it will create problems for those who have bought LRDIMM memory modules. Not only in buying new LRDIMMs, but in being able to replace faulty LRDIMMs.

Since LRDIMMs ONLY work with other LRDIMMs, this may create a sustainability issue for users who have already populated their servers with LRDIMMs – as they will be forced to leave faulty LRDIMM slots empty.

Inphi thus faces risk factors:

https://ddr3memory.wordpress.com/2012/06/05/lrdimms-future-and-end-user-risk-factors/
LRDIMMs future and end-user risk factors
June 5, 2012

https://ddr3memory.wordpress.com/2012/06/15/why-are-lrdimms-single-sourced-by-inphi/
Why are LRDIMMs single-sourced by Inphi ?
June 15, 2012

Since LRDIMMs require a BIOS update, there is a risk that the BIOS may require bug fix updates at a later date.

Since LRDIMMs have not had a long history of use, and Intel hurriedly pushed them out for Romley rollout – with many OEMs having LRDIMMs listed as being “Available later in 2012” – this possibility cannot be excluded. If so, that would again be a deal-breaker for data center operators who try to avoid such disruptive upgrades.

LRDIMMs – cul-de-sac for an “end-of-life” product

As noted above, details about LRDIMMs emerged only as late as early 2012.

Because of LRDIMMs’ need for a BIOS update, they only became viable when a new platform – Romley – was rolled out.

In contrast, HyperCloud has been available on pre-Romley systems (Westmere) for a while. It has been benchmarked and made available to a number of Netlist customers in the CAD, financial services, and other industries for testing and benchmarking. Netlist – as the inventor of HyperCloud, with almost 10 years of exposure to the problem – has gained from this end-user experience a valuable understanding of the problem and its pitfalls.

Information on LRDIMMs, in contrast, first emerged in early 2012 – and it confirmed all of the things Netlist had suggested were problematic about the LRDIMM design.

As a side-note, DDR4 has made amends and has gone further in copying the symmetrical lines and distributed buffer chipset approach of the HyperCloud.

https://ddr3memory.wordpress.com/2012/05/31/lrdimm-latency-vs-ddr4/
LRDIMM latency vs. DDR4
May 31, 2012

https://ddr3memory.wordpress.com/2012/06/08/ddr4-borrows-from-lrdimm-use-of-load-reduction/
DDR4 borrows from LRDIMM use of load reduction
June 8, 2012

https://ddr3memory.wordpress.com/2012/06/07/jedec-fiddles-with-ddr4-while-lrdimm-burns/
JEDEC fiddles with DDR4 while LRDIMM burns
June 7, 2012

However, since LRDIMMs have already been rolled out (and standardized at JEDEC), there is an inherent inertia around the design and any changes which might be made now.

If LRDIMM buffer chipset makers WERE to adopt some of Netlist’s designs – for example if they were to license Netlist IP – even then the first LRDIMMs featuring those changes would appear mid-2013 or later (because of the nearly 1-year cycle required to test, debug and qualify at the OEMs).

By that time it will be getting close to DDR4 rollout in 2014, and memory module makers would be better served moving to DDR4 directly.

For these reasons, regardless of what happens – i.e. even if the JEDEC were to license Netlist IP for DDR4 (and LRDIMMs in the process), the LRDIMM design is unlikely to change.

Manufacturers will be wary of tinkering with it now – and would much rather focus on DDR4 instead (which is even more dependent on Netlist IP, not only in load reduction and rank multiplication but also in its architectural implementation).

For these reasons, Netlist has called the LRDIMM design an "end-of-life" product.

https://ddr3memory.wordpress.com/2012/05/30/legal-issues-with-lrdimms-repeating-metaram-2/
LRDIMMs similarities with MetaRAM
May 30, 2012

Excerpt from the above article – where Netlist states:

While HyperCloud’s technology will scale to DDR4 and beyond, by contrast the current monolithic architecture of LRDIMM will go end-of-life with DDR3.

All of this points to HyperCloud being a technology which is a generation ahead today, while possessing the horsepower to remain in the lead through DDR4.

Thus LRDIMMs will continue to be sold with their current design until DDR4 arrives sometime in 2014.

See the section “Licensing and third-party manufacture of HyperCloud” in the article:

https://ddr3memory.wordpress.com/2012/07/03/examining-netlist/
Examining Netlist
July 3, 2012

UPDATE: 07/27/2012 – confirmed HCDIMM similar latency as RDIMMs
UPDATE: 07/27/2012 – confirmed LRDIMM latency and throughput weakness

HyperCloud HCDIMM latency has been confirmed to be similar to 16GB RDIMM (2-rank) latency. And the LRDIMM latency and throughput weakness vs. HCDIMM – even with the HCDIMM running at the SAME lowered speeds as the LRDIMM – has been confirmed:

https://ddr3memory.wordpress.com/2012/07/26/latency-and-throughput-figures-for-lrdimms-emerge/
Latency and throughput figures for LRDIMMs emerge
July 26, 2012

20 responses to "Examining LRDIMMs"

  1. Are you saying that IBM and HP were forced to qualify HCDIMMs since they had to offer a load reduction DIMM so might as well choose the best one?

    Would this be the main reason that they do not “qualify” HCDIMM on ALL their products?

    I understand that HCDIMM is compliant with the standard yet I do not see that end users will select that type of memory for an IBM or HP server unless it is blessed by IBM. For example, as you mentioned in the other post, the IBM system x 3850 used for the SAP HANA appliance needs TB of memory yet they do not offer HCDIMMs. So much for the rule of >256GB, use HCDIMM.

    In a sense, despite the fact that HCDIMM is totally compatible and complies with the standard, people will not buy until it is listed as a "qualified" product; yet the server vendors want to sell more boxes, not more memory, so they qualify only on enough boxes to say they support the load reduction effort and that’s it. This leads to a drag on the deployment. Until the demand for more memory per core grows to the point where RDIMMs can no longer do the job, the server vendors will drag their feet. Is that it?

    This means that the revenue is going to be mainly from the licensing for DDR4 in 2014. It is the "qualified" memory DIMM label that I cannot get my head around. I have a standard-compliant DIMM but it is not "qualified", hence not usable???

    • quote:
      —-
      Are you saying that IBM and HP were forced to qualify HCDIMMs since they had to offer a load reduction DIMM so might as well choose the best one?
      —-

      No – I meant that IBM/HP and other OEMs had to qualify LRDIMMs – thanks to Intel’s push.

      Of course, this is just my opinion – I have no evidence about how IBM/HP were thinking – perhaps all OEMs wanted something like this – since processors are increasing in capability so there is pressure to be able to load more memory per server.

      Otherwise, the ability to add more memory to the cheaper servers reduces the need for server boxes.

      I had a quote from Inphi saying much the same – to add to the article – but I can’t find it right now.

      But once they have to offer LRDIMMs, there is little incremental disadvantage (if there is any in offering load reduction/rank multiplication memory) in offering something that is even better – it gives them a leg up on the competition.

      Technically, you could probably use HCDIMM on other servers as well – if you are willing to not wait for qualification.

      HyperCloud IS a RDIMM JEDEC standard DIMM. It is supposed to work like an RDIMM.

      And as pointed out in comment discussions before – this is the main strategic reason there is absolutely no pressure on Netlist as a small player to “dominate” the market. They can operate like an RDIMM memory producer – since they are leveraging the RDIMM standard.

      Not like LRDIMMs – which are leveraging the LRDIMM standard.

      Also note, LRDIMMs have just arrived – they have little history of use (though I think some may have been sold on SuperMicro servers – and there is a litany of complaints people had on that).

      Who is to say that an LRDIMM may not require a BIOS “update” later to resolve some yet unseen bug ?

      Add to that you only have one single-source for LRDIMM buffer chipsets (Inphi – and that too under duress from litigation).

      quote:
      —-
      Would this be the main reason that they do not “qualify” HCDIMM on ALL their products?
      —-

      I think they still need to qualify HCDIMM on all products – also there may have been limitations on how much Netlist can supply (or something ?) which may have led Netlist to choose the high volume virtualization/data center ones.

      However, Netlist has said they are going to have qualifications on more servers – as quoted in the “Examining Netlist” article.

      quote:
      —-
      I understand that HCDIMM is compliant with the standard yet I do not see that end users will select that type of memory for an IBM or HP server unless it is blessed by IBM. For example, as you mentioned in the other post, the IBM system x 3850 used for the SAP HANA appliance needs TB of memory yet they do not offer HCDIMMs. So much for the rule of >256GB, use HCDIMM.
      —-

      “with the standard” – with the RDIMM standard.

      Right – but they can offer it later.

      Also you have to note – I made up that rule – IBM/HP didn’t, so they are under no obligation to follow it 🙂

      Also note that much of the leverage of load reduction products (i.e. truly hockey stick growth) will come with 32GB – at that time 32GB RDIMMs will be non-viable (as posted in the “Non-viability of 32GB RDIMMs” article).

      quote:
      —-
      In a sense, despite the fact that HCDIMM is totally compatible and complies with the standard, people will not buy until it is listed as a "qualified" product; yet the server vendors want to sell more boxes, not more memory, so they qualify only on enough boxes to say they support the load reduction effort and that’s it. This leads to a drag on the deployment. Until the demand for more memory per core grows to the point where RDIMMs can no longer do the job, the server vendors will drag their feet. Is that it?

      This means that the revenue is going to be mainly from the licensing for DDR4 in 2014. It is the "qualified" memory DIMM label that I cannot get my head around. I have a standard-compliant DIMM but it is not "qualified", hence not usable???
      —-

      Well, I suspect this is not related to IBM/HP foot-dragging (at this point – now that the industry is moving on load reduction products with LRDIMMs).

      The adoption of load reduction is a compulsion – because if not now it will become so when 32GB RDIMMs have to be pushed out (32GB RDIMMs non-viable as noted above).

      The slow ramp may just be that LRDIMMs come from buffer chipset makers – varieties can be leveraged far faster when many memory module makers are each operating in their own way. Netlist is ONE memory module maker – and has to choose its qualification efforts wisely – addressing first those servers which will give it immediate payback.

      For these reasons the cheaper virtualization servers make the list.

      The IBM x3850 type servers you mention – those will benefit from LRDIMMs/HyperCloud at the 32GB memory size.

      Since Netlist does not have a 32GB HyperCloud announced yet (but disclosed in conference calls to be mid-2012 or so) – there is no need for them to address the IBM x3850 type servers.

      So it is probably a combination of Netlist having to do “triage” about what it wants to support first, which ones will use 16GB HyperCloud and can benefit from it, and they will probably announce the other stuff with 32GB HyperCloud.

      I suspect 32GB HyperCloud will be a much more mainstream product – as it is applicable to ALL servers at greater than 384GB per 2-socket server ratio – and the reason there is that 32GB RDIMMs are non-viable.

      Please note that all that I have presented has been by going through the constraints etc.

      However I have yet to hear anywhere else that 32GB RDIMMs are non-viable. Everyone who mentions 32GB – Netlist, Inphi – talks of the market being there. But they never mention it in such stark terms.

      Where else have you heard that 32GB RDIMMs will be non-viable ?

      However, it comes out directly from the analysis.

      • quote:
        ——–
        quote:
        —-
        Would this be the main reason that they do not “qualify” HCDIMM on ALL their products?
        —-

        I think they still need to qualify HCDIMM on all products – also there may have been limitations on how much Netlist can supply (or something ?) which may have led Netlist to choose the high volume virtualization/data center ones.

        However, Netlist has said they are going to have qualifications on more servers – as quoted in the “Examining Netlist” article.
        ——–

        Actually let me nuance that a bit – it is pretty clear that Netlist has qualified on EXACTLY the servers they needed to.

        Perhaps the server list could have been expanded more – but for servers like the IBM x3750 M4, which has the memory bus tweak, there is no need for a 16GB HyperCloud.

        https://ddr3memory.wordpress.com/2012/06/02/memory-choices-for-the-ibm-system-x-x3750-m4-servers-2/
        Memory choices for the IBM System X x3750 M4 servers
        June 2, 2012

        But there is for the HP DL360p and DL380p and IBM x3650 M4 servers.

        They are selling only 16GB HyperCloud at this point.

        Also, as I stated, 16GB HyperCloud at 1.35V would not be viable – so voila! 1.35V is not available right now.

        So in a way there is a method to the madness – i.e. Netlist is qualifying products probably after careful analysis with IBM/HP (and keeping in mind Netlist’s own constraints).

        Netlist has spent almost $10M in filling the IBM/HP pipelines (or gearing up production for that) – much of it has come from selling shares in small packets in an "at the market" (ATM) offering (thus, possibly, the pressure on the shares for the last couple of months – generally Netlist has low volume and rises on the slightest volume or buying interest).

        So Netlist has spent $10M (from the pockets of existing shareholders basically) to fill the pipeline (without raising debt) – how much revenue is it going to expect from that ?

        Going forward they should be using debt financing to keep the pipeline going (a short-term bank loan).

        Contrast this with LRDIMMs, which are being pushed willy-nilly – IBM is listing a 16GB LRDIMM even though 16GB LRDIMMs are non-viable (or perhaps people won’t realize it until they read the article on WHY they are non-viable):

        https://ddr3memory.wordpress.com/2012/06/19/why-are-16gb-lrdimms-non-viable/
        Why are 16GB LRDIMMs non-viable ?
        June 19, 2012

        The qualification process for the 32GB is supposed to give results mid-2012.

        The market for 32GB is almost universally “greater than 384GB” on all servers – this is not because of HyperCloud – it is because 32GB RDIMMs are non-viable.

        https://ddr3memory.wordpress.com/2012/06/20/non-viability-of-32gb-rdimms/
        Non-viability of 32GB RDIMMs
        June 20, 2012

        I suspect that when Netlist announces the 32GB HyperCloud, it will (or should) be available in both 1.5V and 1.35V varieties.

        Since 32GB also has a wider applicability – I suspect there may have been more work done getting it qualified on more servers.

        You also have to remember that 32GB HyperCloud uses 4Gbit monolithic – which may make it cheaper than both the 32GB RDIMM and the 32GB LRDIMM, at least to produce – they may still be priced similarly (since HyperCloud has the performance, latency and IP advantages over the LRDIMM).

        So at 32GB the whole market (with the caveat “greater than 384GB”) is open to the 32GB HyperCloud (and the 32GB LRDIMM if HyperCloud is not available on your server):

        https://ddr3memory.wordpress.com/2012/06/29/infographic-memory-buying-guide-for-romley-2-socket-servers/
        Infographic – memory buying guide for Romley 2-socket servers
        June 29, 2012

      • I would point your attention to a very interesting comment by Netlist on why they do not require a BIOS update.

        I suspect this is related to NLST IP that LRDIMMs did not dare to copy – there is some mention of this IP in the Netlist/Google litigation and is referred to as “Mode C”.

        However it has also been variously described as doing something in “software” (i.e. BIOS) vs. doing it in “hardware” (like Netlist does).

        Search the article above for “they do that mainly in software and can’t do the full rank-multiplication like our product does”.

        You have to understand that Netlist has been doing this for a LONG time – and being the inventor doesn’t hurt either.

        The problem is different when you are copying stuff.

        I suspect however that the JEDEC LRDIMM committee may not have touched the NLST IP for avoiding BIOS modification – because it may have been too hot for them to include (purely speculating here) – since it was already the subject of an exhaustive "discovery" effort in Google vs. NLST (where Google was seeking to be left alone from a potential injunction against its servers). The court forced Google to turn over a server using that infringing IP to Netlist lawyers (and this was around the time when Google was super-secretive about its servers). Initially Google said they were not using that technique, and when they were forced to turn over the server they acknowledged that they were. That IP may have been too "hot" to have been included in the JEDEC LRDIMM standard. The other NLST stuff they may have thought they could "wing" – of course that has bounced back from the USPTO reexams and is biting them now as well. All in all, not a good situation at the JEDEC.

        One must also not forget that Inphi is funded by Samsung, which has some players on the JEDEC committee – which may explain why Inphi is single-sourcing LRDIMM buffer chipsets while IDTI and TI have balked.

        In addition there are some MetaRAM folks (smarting from their earlier concession to Netlist – when they had to concede IP as part of the settlement in Netlist vs. MetaRAM and then went out of business). What is surprising is that those SAME people make an appearance on the Inphi team (the CEO of MetaRAM as "Technical Advisor" to Inphi – I mean, come on ???) – so no wonder you get the same type of cavalier behavior mirrored by Inphi.

        And you can understand why IDTI has pulled out (from being gung-ho to skipping Romley altogether – why would IDTI NOT want to get in early with the OEMs ?).

        Recall that MetaRAM shut its doors in a hurry – my guess is (and this was the understanding at the time among NLST shareholders) that some of the MetaRAM VCs may have found that there was in fact some fishy stuff going on and decided to pack things up in a hurry.

        In court documents MetaRAM said they had "destroyed" all infringing product – why the hurry to "destroy" ?? One of the MetaRAM VCs was Khosla Ventures – and a former partner there is now the CEO of Inphi. So a very interlinked business.

      • Also interesting is that while everyone talks of LRDIMMs, no one asks why IDTI and TI are not selling LRDIMM buffer chipsets.

        IDTI is a MAJOR supplier for RDIMMs – and claimed to have the first LRDIMM buffer chipset available.

        Analysts have asked this at NLST conference calls.

        The analysts who call in on the Inphi conference calls don’t dare ask that of Inphi – why ? Though in the last conference call they were half-asking some of these types of questions (there was a bit of discontent – they all seemed to feel there was something heavy in the room – but no one explicitly pointed out the elephant in the room).

      • thanks.

        The speed of sales is then going to be determined by how fast Netlist can qualify HCDIMM on other server models, and the speed by which they can qualify the 32GB HCDIMM that they already list in their product section.

      • Right.

        As long as the market understands what they are dealing with (and from NLST/IBM/HP dealings you know the memory planners there know exactly what they are doing – qualifying where it is applicable).

        Simplistically – right now the market is 16GB HyperCloud – NLST is selling that where it is applicable.

        When 32GB rolls around – it should be used wherever you would use 32GB RDIMM – and that is “greater than 384GB”.

        So as market shifts to 32GB – greater than 384GB will require HyperCloud – for the foreseeable future.

        And when you require 64GB (which NLST can build with a 4-PCB Planar-X using 4Gbit monolithic, or with a 2-PCB Planar-X using 4Gbit x 2 DDP memory packages) – then the same thing applies – you need the load reduction solution.

        Since the cost to qualify on more servers is probably finite – and it is usually time which is the limiting factor – I suspect that eventually HyperCloud will be available on all servers. This is similar to what NLST has suggested, i.e. that it will become the de facto standard for all high memory loading applications.

        The only difference is when you hear NLST say it – it is just a statement – and you either believe them or you don’t.

        But what I have demonstrated on this blog is that there ARE certain constraints – and if you bring them all together – the outcome is the SAME as NLST is saying. Except you also see what the reasoning is behind it.

        In fact, because you have the analysis, you can see even MORE than what NLST has said – because the whole landscape for memory applications becomes clear.

        The reason such a simple thing is not clear from the outset is that most players do NOT say negative things about others – for example they will not say 32GB RDIMMs are non-viable. They will say when 32GB arrives, the load reduction solution will dominate etc. etc.

  2. The conference calls for this quarter for all these companies are going to be very interesting.

