Market opportunity for load reduction

Industry estimates of “attach rates”

UPDATE: added 06/19/2012: IDTI comments
UPDATE: added 06/22/2012: analysts on attach rates
UPDATE: added 07/03/2012: Netlist conservative $500M revenue estimate
UPDATE: added 07/03/2012: Netlist conservative $7.5B revenue estimate for 2014
UPDATE: added 07/04/2012: isuppli.com on LRDIMM market

For DDR3, load reduction is essential at:

– 3 DPC with 16GB memory modules (2-rank 16GB RDIMMs experience a slowdown)

– 3 DPC and 2 DPC with 32GB memory modules (4-rank 32GB RDIMMs experience a slowdown)

The 32GB RDIMMs will only be available as 4-rank modules (because 8Gbit DRAM die will not be available for a few years, if ever). 4-rank memory experiences a slowdown at 2 DPC as well. In addition, at 3 DPC you cannot use 4-rank memory at all because of the "8 ranks per memory channel" limit of current systems. Thus "load reduction" and "rank multiplication" technology becomes essential for 32GB at both 3 DPC and 2 DPC.
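To make the rank arithmetic concrete, here is a minimal sketch (in Python) of the "8 ranks per memory channel" check described above; the helper function and names are mine, purely illustrative:

```python
# Illustrative sketch of the "8 ranks per memory channel" limit described above.
# The rank counts and DPC values follow the text; the helper itself is hypothetical.

MAX_RANKS_PER_CHANNEL = 8  # limit of current systems

def channel_ok(dimms_per_channel, apparent_ranks_per_dimm):
    """True if the configuration fits within the per-channel rank limit."""
    return dimms_per_channel * apparent_ranks_per_dimm <= MAX_RANKS_PER_CHANNEL

# A 32GB RDIMM is 4-rank: at 3 DPC the channel sees 3 x 4 = 12 ranks
print(channel_ok(3, 4))   # False - not usable at 3 DPC

# With rank multiplication, a 4-rank module presents as 2 apparent ranks:
# 3 x 2 = 6 ranks, which fits within the limit
print(channel_ok(3, 2))   # True
```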

Since 16GB LRDIMMs underperform the 2-rank 16GB RDIMM even at 3 DPC (LRDIMMs are unable to deliver 1333MHz at 3 DPC and have higher latency), the 16GB market at 3 DPC belongs to HyperCloud.

Similarly at the 32GB level, the 3 DPC and 2 DPC market belongs to HyperCloud.

See the section “32GB memory module market” in the article:

https://ddr3memory.wordpress.com/2012/05/24/lrdimm-buffer-chipset-makers/
LRDIMM buffer chipset makers
May 24, 2012

In addition, the buffer chipset for LRDIMMs is single-sourced by Inphi, which is being accused of having copied load reduction and rank multiplication IP without the appropriate licensing. To top it off, Inphi's challenge of Netlist's IP in load reduction and rank multiplication has failed to invalidate that IP (it has survived reexams with all claims intact). This will cause problems for Inphi in Netlist vs. Inphi, which was stayed pending that reexamination.

For more on this, check out:

https://ddr3memory.wordpress.com/2012/06/05/market-for-hcdimmhdimmshypercloud/
Market for HCDIMMs HDIMMs
June 5, 2012

What are the estimates of "attach rates" for load reduction?

That is, how much of the memory module market will require load reduction?

Inphi (the maker of the buffer chipsets used by memory module makers to build LRDIMM memory modules) estimates the market to be 20% in 6-8 quarters (from the date of the conference call).

I presume they are talking about 32GB LRDIMMs (since 16GB LRDIMMs are known not to be competitive vs. 2-rank 16GB RDIMMs – as evidenced by comments by HP/Samsung at the IDF conference on LRDIMMs).

http://www.veracast.com/stifel/tech2011/main/player.cfm?eventName=2133_inphic
Stifel Nicolaus
Technology, Communications & Internet Conference 2011
Inphi Corporation
2/10/2011; 4:25 PM

DISCLAIMER: please refer to the original conference call or transcript – only use the following as guidance to find the relevant section

at the 28:45 minute mark ..

Question:
what do we expect attach rate to be for LRDIMM ?

Answer:
so this is sort of the $64,000 question .. uh .. you can talk to some people – depends whether you're talking to an end-user or someone who's on the other side of the .. chip design .. some people believe it's a 5-8% attach rate or a 5-10% attach rate.

I think that we can show the power signature of LRDIMM was the same as an RDIMM, those people tend to just gravitate up in terms of maybe the higher end of that range – in terms of what their expectation of attach rate was.

As we were out on the road show, Young (?) is optimistic – I think he believes that .. uh .. 20% of the server market is high-end and memory intensive and that the attach rate will ultimately be about 20%.

It will take 6-8 quarters to sort of phase into that .. that level of volume consumption.

You know we got a call on the conference call from one of the analysts that they had heard that it could be as high as 30 or 35%.

I talked to somebody that actually spoke to the CIO where that was quoted from – and this CIO is very familiar with LRDIMM and he is a big financial institution CIO and his view was that anyone who was going to implement the next-generation VMware or try to do a virtualization implementation was gonna want LRDIMM in the configuration and for that reason he believed the attach rate would be more like 30 or 35%.

So when you are trying to gauge the demand here I think it is important to talk to .. uh .. data center end-users. That's just one data point – may not be accurate .. uh .. for us anything over say mid single digits is gravy relative to the street forecast today.

Netlist – revenue impact of attach rate

Netlist has a more modest estimate – about 10% attach rate for load reduction products.

However, the impact of attach rate on revenue for Netlist is far greater (than for Inphi – the maker of LRDIMM buffer chipsets).

– Inphi sells a $20 (approx.) buffer chipset per LRDIMM memory module.

– Netlist makes the complete $500-$1200 (16GB/32GB approx.) HyperCloud memory module (IBM HCDIMM, HP HDIMM).

So the revenue contribution per unit for Netlist vs. Inphi is an order of magnitude higher.

For Netlist, even a 2%-3% market for 32GB is significant.
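As a rough back-of-the-envelope sketch of that per-unit difference, using the approximate prices quoted above (the variable names are mine, purely illustrative):

```python
# Rough per-module revenue comparison using the approximate prices quoted above.
inphi_buffer_chipset_price = 20     # ~$20 buffer chipset per LRDIMM
netlist_module_price_low = 500      # ~$500 for a 16GB HyperCloud module
netlist_module_price_high = 1200    # ~$1200 for a 32GB HyperCloud module

print(netlist_module_price_low / inphi_buffer_chipset_price)    # 25.0
print(netlist_module_price_high / inphi_buffer_chipset_price)   # 60.0
# i.e. roughly 25x-60x more revenue per unit shipped for the module maker
# than for the buffer chipset maker.
```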

From the Netlist Q3 2011 CC:

http://viavid.net/dce.aspx?sid=00008EC4
Netlist Third Quarter, Nine-Month Results Conference Call
Thursday, November 10, 2011 at 5:00 PM ET

DISCLAIMER: please refer to the original conference call or transcript – only use the following as guidance to find the relevant section

at the 18:20 minute mark:

Chuck Hong:

Yeah Rich, I think there has been a handful of reports .. um .. that have been written up .. about the .. uh .. LRDIMM market.

That marketplace is the exact .. target market .. uh .. for HyperCloud.

That we’re targeting.

There’s probably anywhere between 70M and 80M registered DIMM or server memory modules being shipped worldwide today.

Those reports indicate that over time .. uh .. the LRDIMM .. uh .. may become 10-15% .. uh .. of that market.

My personal view is that it will probably NOT be that large.

The difference in .. uh .. uh .. the way that chip manufacturers, buffer manufacturers like an Inphi .. uh .. address that business opportunity is different from ours.

They are selling a chipset that .. uh .. you know that is valued at $10-$20, whereas we are selling an entire memory module .. uh .. that is valued at anywhere between $300-$400 up to $1200-$1500 depending on the density. Primarily it will be 16GB and 32GB.

at the 19:50 minute mark:

So .. we believe the market will be certainly in the millions of units .. uh .. come next year.

With the LRDIMM and the HyperCloud .. um .. and .. uh .. at some point down the road as the Romley matures .. uh .. that it may .. the percentages may get into the teens (i.e. above 10%).

For next year, I think it will be a smaller portion, but for us it's still a tremendous opportunity .. uh .. you know ..
...
at the 20:25 minute mark:

Rich Kugele at Needham:

(here he interrupts Chuck Hong)

From selling .. selling the module .. so much more .. than if you were just selling a chip .. right ?
...
Chuck Hong:

Absolutely. Absolutely.

at the 20:35 minute mark:

You know even at .. let's say if there are 70M RDIMMs being shipped today, registered DIMMs, and um .. the .. opportunity for the high performance module is about a million units .. uh uh .. at an ASP (average selling price) of .. uh .. $500 let's say.

That’s a $500M market opportunity.

So Netlist is saying the load reduction market may “over time” get to a more modest 10% of memory modules for Romley.

However, by "next year", i.e. 2012, they are expecting the market to be in the millions of units – I am guessing they are referring to the "run rate" by the end of 2012. Also, since this comment was made in 2011, it may not have anticipated the few months' to a quarter's worth of delay in Romley (by some OEMs).

For a million units (about 1.5% of the 70M RDIMMs sold), the revenue impact would be $500M. This is significant for Netlist, whose current revenues are about $65M (almost entirely non-HyperCloud revenue).
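The arithmetic behind that figure, as a quick sketch using the numbers quoted in the call (names are illustrative):

```python
# Back-of-the-envelope check of the $500M figure, using the numbers quoted above.
rdimm_units_per_year = 70_000_000   # ~70M registered DIMMs shipped worldwide per year
hypercloud_units = 1_000_000        # ~1M high-performance modules
asp = 500                           # ~$500 average selling price

print(hypercloud_units / rdimm_units_per_year)  # ~0.014, i.e. roughly 1.4%-1.5% of RDIMM units
print(hypercloud_units * asp)                   # 500,000,000 -> a $500M opportunity
```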

UPDATE: added 07/03/2012: Netlist conservative $500M revenue estimate

Netlist uses a 1% attach rate as the basis of its guidance for a revenue impact of $500M.

This is significant because Netlist's current revenue is about $65M/year – based almost entirely on NVvault (non-volatile memory) and flash products.

While they estimate the overall load reduction market to be 10%, they take a very conservative 1% for HyperCloud, which yields a conservative market of $500M in revenue per year for HyperCloud:

http://www.netlist.com/investors/investors.html
UBS Global Technology and Services Conference
Thursday, November 17, 2011 9:00:00 AM ET
http://cc.talkpoint.com/ubsx001/111511a_im/?entity=63_EIUMYWQ

at the 02:05 minute mark:

So as we talk about cloud computing – we are focused on the applications that require high .. uh .. amounts of DRAM.

So large capacity DRAM – so you are looking at things like high-performance computing, securities trading, things where you want to take a database off a disk and move it right into working memory.

at the 02:25 minute mark:

The market for cloud server units is expected to grow at about a 20% clip over the next 4 years.

So it’s an exciting market for us.

If we take a very quick snapshot of the size of the market for us.

Industry analysts are estimating about 20% of the newest and latest Intel (INTC) family of servers – Romley family – would use a “load reduced” or a “rank multiplied” memory.

And that’s what we call our “HyperCloud” memory.

at the 02:55 minute mark:

Whether you agree with that or not, let’s just use a ONE percent (1%) of that number – to keep the math really simple.

You get an idea of how big and how fast this market can grow.

So if we use a 1% estimate – there’s about 9M servers sold in the world this year – so let’s take 1%, let’s call it a 100,000 servers for next year.

And each server that uses high density memory typically fully loads that memory – and that can be anywhere from 12 to 24 sockets (DIMM sockets/slots) in each of these servers.

Let’s just use 10 to 12 (sockets/slots) to keep the numbers easy again.

So we take a 100,000 servers – we take 10 DIMMs per .. 10 memory modules in each one – you’ve got a million (1M) units.

Well millions of anything is not a great market .. USUALLY. Because we are talking semiconductors in the conference here today .. uh .. chips are $10 to $20, $30 .. but in our case we are selling subsystems.

And our subsystems average between the 16GB and 32GB around $500 each.

at the 03:50 minute mark:

So even at a very very conservative estimate .. 1% of the servers, only 10 DIMMs per server, we are looking at a $500 ASP (average selling price) or $500M in revenue for next year.

Now that’s a pretty significant growth from where we are today .. so .. hence the excitement about the opportunity in this market.
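The bottom-up calculation from the presentation can be reproduced directly from the figures quoted above; note that 1% of 9M servers is 90,000, which the presentation rounds to 100,000 to reach the $500M figure:

```python
# Bottom-up reproduction of the ~$500M estimate, using the figures quoted above.
servers_per_year = 9_000_000    # ~9M servers sold worldwide this year
attach_rate = 0.01              # very conservative 1%
dimms_per_server = 10           # "10 to 12" sockets, using 10 to keep it simple
asp = 500                       # ~$500 average selling price (16GB/32GB mix)

units = servers_per_year * attach_rate * dimms_per_server
print(units)          # 900,000 units (~1M, as quoted)
print(units * asp)    # ~$450M (~$500M with the 100,000-server rounding used in the call)
```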

UPDATE: added 07/03/2012: Netlist conservative $7.5B revenue estimate for 2014

For DDR4, however, they raise that estimate to "50% of all servers". Taking a conservative estimate of 10% of that as Netlist's share, with higher density 32GB and 64GB memory modules selling at a $500 average unit price, they estimate a market of about $7.5B:

http://www.netlist.com/investors/investors.html
UBS Global Technology and Services Conference
Thursday, November 17, 2011 9:00:00 AM ET
http://cc.talkpoint.com/ubsx001/111511a_im/?entity=63_EIUMYWQ

at the 15:10 minute mark:

So now we .. remember we talked about what the market looked like for next year and we just said .. “well if it was just 10%” .. uh .. or 1% rather .. 1% of the market.

We had a $500M market opportunity.

Now let’s look out for 2014 .. because as we increase to DDR4 speeds, the frequency goes WAY up .. and when the frequency goes up, the effect of the bus .. the memory bus is huge.

at the 15:30 minute mark:

So the industry’s estimating they’ll need 50% of all servers .. and it will be about 13M to 14M units, up from 9M today .. uh .. will require some kind of “load reduction” (technology) .. HyperCloud-type technology (or the LRDIMM which is infringing NLST IP – though LRDIMMs have latency issues).

If we take 10% of that, let’s call it 1.3M servers and let’s use 12 DIMMs per server, as an average .. now the densities move up .. so instead of 16GB and 32GB today, we’ll talk 32GB and 64GB .. 3 years from now .. we’re looking at ABOUT a $7.5B market size.
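Reproducing the 2014 estimate from the figures quoted above (the call does not restate the ASP for the 32GB/64GB mix, so the same ~$500 is assumed here):

```python
# Reproduction of the ~$7.5B 2014 estimate, using the figures quoted above.
servers_2014 = 13_000_000       # ~13M-14M servers in 2014, per the call (up from ~9M today)
netlist_share = 0.10            # the call takes 10% of that, ~1.3M servers
dimms_per_server = 12
asp = 500                       # assumed ~unchanged; not restated for the 32GB/64GB mix

units = servers_2014 * netlist_share * dimms_per_server
print(units)          # 15,600,000 modules
print(units * asp)    # ~$7.8B, in line with the ~$7.5B quoted
```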

UPDATE: added 06/19/2012: IDTI comments

IDTI – 16GB LRDIMMs non-viable – 32GB LRDIMMs attach rate

IDTI confirms the analysis made elsewhere on this blog regarding:

– the non-viability of 16GB LRDIMMs vs. 16GB RDIMMs (2-rank) – 16GB LRDIMMs cannot outperform the 16GB RDIMMs (2-rank), because at 3 DPC the 16GB LRDIMMs cannot do better than the 16GB RDIMMs. Compare this to the 16GB HyperCloud, which outperforms the 16GB RDIMMs by delivering 1333MHz at 3 DPC.
.
– the viability of 32GB LRDIMMs vs. 32GB RDIMMs (4-rank) – the higher 4-rank load creates more of a speed slowdown, and also prevents use beyond 2 DPC because of the "8 ranks per memory channel" limit of current systems. The 32GB RDIMMs will only be available in 4-rank because a 2-rank 32GB RDIMM requires 8Gbit DRAM die, and these won't be available for a few years if ever (because of the high cost associated with going to 8Gbit DRAM die). So at 32GB there is a need for "load reduction" and "rank multiplication" at both 3 DPC and 2 DPC.

IDTI also estimates the 32GB LRDIMM market to be 2%-3% of the Romley market.

In addition, IDTI guides for a 15%-20% attach rate for Ivy Bridge (post-Romley).

http://ir.idt.com/eventdetail.cfm?EventID=107803
IDT Third Quarter Fiscal Year 2012 Financial Results
Jan 30, 2012 at 1:30 PM PT

DISCLAIMER: please refer to the original conference call or transcript – only use the following as guidance to find the relevant section

at the 41:45 minute mark ..

For the second part of your question with respect to LRDIMM.

We have been very consistent in our .. uh .. discussion of the size of the LRDIMM market.

We believe .. that in the Sandy Bridge .. uh .. generation of Romley .. uh .. that the attach rate for LRDIMM will be small.

It will be probably 2 or 3 percent (2%-3%) of all of those Romley .. of all of those servers.

Now, remember Intel's got this tick tock strategy .. uh .. so the tick is the Sandy Bridge and then there is a die-shrink which is the tock .. which is Ivy Bridge (NOTE: the speaker has the terms reversed – in Intel's scheme the die-shrink is the "tick" and the new microarchitecture the "tock").

Now, Ivy Bridge is 1600MHz, whereas Sandy Bridge is only 1333MHz.

Ivy Bridge also allows for 3 DIMMs per channel (3 DPC), whereas Sandy Bridge only allows for 2 DIMMs per channel (2 DPC) (NOTE: probably mean at full speed).

at the 42:40 minute mark ..

So if you go through the analysis .. which I am not going to bore you with here .. and you look at the benefits of LRDIMM in Sandy Bridge, the cost-performance tradeoff is not .. uh .. not very favorable.

It turns out – now just to give you the answer .. uh .. that you can build a DIMM using .. uh .. uh .. 64 .. I'm sorry 4Gbit DRAM and standard Registered DIMM (RDIMM) that has .. really a lower cost and roughly equal performance to what you would get with LRDIMM – that's why the attach rate for LRDIMM in Sandy Bridge is relatively small.

The only place where LRDIMM will give you a performance tradeoff in the Sandy Bridge generation is in the 32GB DIMMs, not in the 16GB DIMMs.

So the 32GB DIMMs are only about 2-3% of the total market.

at the 43:45 minute mark ..

That that .. that’s the explanation for why that attach rate is small.

Now go to Ivy Bridge where you’ve got 1600MHz (and) 3 DIMMs per channel (3 DPC) – go through the same analysis – it is MUCH more favorable for LRDIMM.

And so we anticipate that in the Ivy Bridge generation, the attach rate will be 15%-20%.

at the 44:05 minute mark ..

But that .. that’s a long winded answer .. uh .. but there’s actually some careful analysis that goes behind our .. our market size estimates.

UPDATE: added 06/22/2012: analysts on attach rates

As late as Dec 5, 2011, we have Sundeep Bajikar of Jefferies & Co. estimating the market to be 10% (the HPC market) and then extending that to "much broader" if you include "the cloud", i.e. virtualization/data centers.

It could be argued that the cloud applications are a more appropriate market than HPC, since HPC often emphasizes faster processing speed while virtualization emphasizes greater memory per server and thus higher memory loading per server.

Sundeep Bajikar of Jefferies & Co.:

http://www.twst.com/yagoo/JKBajikar10.html
Next Generation Memory Buffers For High Performance Computing May Double Penetration In The Near Term; Memory Buffers Can Increase Virtualization Efficiency
December 5, 2011

TWST: Which of the companies in your coverage are your top picks right now and why?

Mr. Bajikar: Within the small/medium-cap space, my top idea is Inphi, their ticker is IPHI. They went public last year. They have a couple of different areas of business, and 70% of the revenues come from the server market. They provide memory buffers that allow you to pack more memory inside a standard server than you would otherwise be able to do. That is good for people that are building data centers or even the next-generation cloud infrastructure because it improves efficiency of virtualization just by packing in more memory. And it's relatively low cost because you are paying the cost of the memory buffer rather than upgrading your entire microprocessor or even the server. So that's 70% of Inphi's revenues. The other 30% of Inphi's revenues come from high-speed communications, primarily optical components that are used to plug fiber optic cable into routers and switches for long-distance high-speed transmission, whether it's 10 gigabits per second or 40 gigabits per second, and increasingly 100 gigabits per second.

The initial story at IPO was they would have a large product cycle on Intel’s next-generation servers, which are predicted to come to market in Q1. Intel is shipping their chips in the market this quarter, and then the server OEMs are expected to bring servers to market next quarter in Q1 of 2012. These servers will feature Inphi’s next-generation memory buffer, which effectively increases the content for Inphi and it basically quadruples the amount of memory that you can pack in the server. So it’s very exciting technology. It’s a portion of the server market that’s referred to as HPC, high-performance compute. It’s only about 10% of the market, so it’s sort of a niche market. But as we did more work on this, what we figured out is that the application for this next-generation memory buffer is actually much broader because it can go into the core of the cloud. It can increase your virtualization efficiency and therefore have much broader relevance than just the initial 10%. So we have a view that penetration could be anywhere from 20% or more into the server market, which effectively translates to a large upside for Inphi if that actually happens. That’s a big part of the near-term story for Inphi.

UPDATE: added 07/04/2012: isuppli.com on LRDIMM market

Thanks to Diya Soubra for providing this link:

http://www.isuppli.com/Abstract/P13648_20110610084137.pdf

Load Reduced DIMMs will represent 5% in Q2 2012.
That’s 5% of about 10M modules shipping per quarter.

However, these estimates are dated June 9, 2011 – so they would not have taken into account the couple of months' delay in the Romley rollout, which became known later.
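In unit terms that forecast works out as follows (a simple sketch using the figures above):

```python
# iSuppli forecast expressed in units, using the figures quoted above.
modules_per_quarter = 10_000_000    # ~10M server memory modules shipped per quarter
lrdimm_share_q2_2012 = 0.05         # 5% LRDIMM share forecast for Q2 2012

lrdimms_per_quarter = modules_per_quarter * lrdimm_share_q2_2012
print(lrdimms_per_quarter)        # 500,000 LRDIMMs per quarter
print(lrdimms_per_quarter * 4)    # ~2M per year at that run rate
```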

The article does however have a good description of LRDIMMs which matches information on this blog:

– they allow a speed improvement over RDIMMs
– they have higher latency
– they are not compatible with older systems and are therefore forced to ship with new systems (the reason they had to ship with the Romley rollout)

The article fails to point out a few things.

LRDIMMs are a new JEDEC standard – the reason a standard is needed is that they require a BIOS update in order to work, and this process needs to be standardized.

Compare that to HyperCloud, which does not require a new JEDEC standard, a BIOS update, or cooperation with motherboard manufacturers to ensure it works.

The reason is that HyperCloud is interoperable with RDIMMs – it behaves like a "better RDIMM". It leverages the RDIMM standard and behaves like an RDIMM that simply appears to have a lower load and fewer ranks (load reduction and rank multiplication).

This capability is not present in LRDIMMs – which require a BIOS update, and thus cooperation with motherboard makers and a JEDEC standard to ensure there are no inconsistencies across motherboards. That is an added level of complexity for the LRDIMM approach.

Understanding the "market" for LRDIMMs

When Inphi or IDTI talk about the market for LRDIMMs – they are essentially talking about the market for:

– 32GB LRDIMMs
.
– and not 16GB LRDIMMs – since 16GB LRDIMMs are non-viable vs. 16GB RDIMMs (2-rank)

When Netlist talks about the market for HyperCloud – they are talking about the market for:

– 32GB HyperCloud
.
– 16GB HyperCloud (since at 3 DPC this trumps 16GB RDIMM (2-rank)).

Summary

Inphi estimates the market for a “load reduction” and “rank multiplication” product to be 20% in 6-8 quarters (from the date of the conference call in Feb 2011).

Netlist is saying the load reduction market may “over time” get to a more modest 10% of memory modules for Romley.

However, by "next year", i.e. 2012, they are expecting the market to be in the millions of units – I am guessing they are referring to the "run rate" by the end of 2012. Also, since this comment was made in 2011, it may not have anticipated the few months' to a quarter's worth of delay in Romley (by some OEMs).

For a million units (about 1.5% of the 70M RDIMMs sold), the revenue impact would be $500M. This is significant for Netlist, whose current revenues are about $65M (almost entirely non-HyperCloud revenue).

IDTI also estimates the 32GB LRDIMM market to be 2%-3% of the Romley market.

In addition, IDTI guides for a 15%-20% attach rate for Ivy Bridge (post-Romley).
