Examining Netlist

Revenue trajectory – from $65M in revenue today to a $500M opportunity with DDR3, and $7.5B in 2014 with DDR4

UPDATE: 07/03/2012: third-party manufacture of HyperCloud
UPDATE: 07/03/2012: what to expect in the near future
UPDATE: 07/03/2012: sunk factory costs
UPDATE: 07/06/2012 – non-viability of LRDIMMs
UPDATE: 07/06/2012 – VMware certifies Netlist as sole memory vendor
UPDATE: 07/27/2012 – confirmed HCDIMM similar latency as RDIMMs
UPDATE: 07/27/2012 – confirmed LRDIMM latency and throughput weakness

How did a small player become so well positioned for future memory ?

Netlist's current revenue is about $65M/year – or $16M per quarter. This is based almost entirely on NVvault (non-volatile memory) and flash products.

They have been at EBITDA breakeven for a couple of quarters – and are at the start of a ramp of new products selling for Romley:

– NVvault transition to mainstream DDR3 memory for Romley (collaboration with Intel)
– HyperCloud arriving alongside LRDIMM for Romley – trumps LRDIMM on both the performance and IP fronts
– VLP memory for IBM blade servers

The collaboration with Intel to deliver NVvault for Romley is mentioned in the section “Netlist collaboration with Intel”:

https://ddr3memory.wordpress.com/2012/07/03/would-non-volatile-dram-have-reduced-amazon-outage/
Would non-volatile DRAM have reduced Amazon outage ?
July 3, 2012

Most of these products have been in development for a while. NVvault was used in DDR2 format for Dell PERC RAID cards in high volume, HyperCloud was available for pre-Romley servers, and VLP, while new for Romley, was originally a Netlist invention shipped as DDR1 for IBM blade servers (prior to the company's turnaround from commodity low-margin memory player to high-margin IP-based memory producer).

Netlist historical revenue trajectory (as of Q1 2012 CC)

Netlist has revenues of about $65M/year – or $16M per quarter. This is based entirely on NVvault (DDR2 non-volatile memory sales to Dell PERC) and flash products.

These figures do not include significant revenue from HyperCloud, VLP or NVvault DDR3 for Romley – all of which are expected to ramp in high volume as Romley ramps up.

Figures taken from NLST PRs for Q1 2010 to Q1 2012:

Q2 2009 revenue – $3.2M
Q3 2009 revenue – $6.4M
Q4 2009 revenue – $6.9M
Q1 2010 revenue – $7.9M
Q2 2010 revenue – $9.3M
Q3 2010 revenue – $10.6M
Q4 2010 revenue – $10.1M
Q1 2011 revenue – $12M
Q2 2011 revenue – $16M
Q3 2011 revenue – $16.3M
Q4 2011 revenue – $16.4M
Q1 2012 revenue – $14.0M

Q2 2009 gross profit – $0.2M – 7.7% of revenues
Q3 2009 gross profit – $1.6M – 24.3% of revenues
Q4 2009 gross profit – $1.7M – 25.1% of revenues
Q1 2010 gross profit – $1.8M – 23% of revenues
Q2 2010 gross profit – $1.8M – 19.5% of revenues
Q3 2010 gross profit – $3.0M – 28.6% of revenues
Q4 2010 gross profit – $3.3M – 33% of revenues
Q1 2011 gross profit – $3.8M – 32% of revenues
Q2 2011 gross profit – $4.9M – 31% of revenues
Q3 2011 gross profit – $5.5M – 34% of revenues
Q4 2011 gross profit – $6.0M – 37% of revenues
Q1 2012 gross profit – $5.4M – 39% of revenues

Q2 2009 net loss – $4.0M – $0.20 loss per share
Q3 2009 net loss – $2.9M – $0.11 loss per share
Q4 2009 net loss – $3.0M – $0.15 loss per share
Q1 2010 net loss – $3.0M – $0.14 loss per share
Q2 2010 net loss – $4.0M – $0.16 loss per share
Q3 2010 net loss – $4.9M – $0.20 loss per share
Q4 2010 net loss – $3.2M – $0.13 loss per share
Q1 2011 net loss – $2.8M – $0.11 loss per share
Q2 2011 net loss – $1.5M – $0.06 loss per share
Q3 2011 net loss – $1.0M – $0.04 loss per share
Q4 2011 net loss – $0.227M – $0.01 loss per share
Q1 2012 net loss – $1.1M – $0.04 loss per share

Q1 2010 cash equivalents – $26.4M
Q2 2010 cash equivalents – $23.1M
Q3 2010 cash equivalents – $19.0M
Q4 2010 cash equivalents – $15.9M
Q1 2011 cash equivalents – $12.3M
Q2 2011 cash equivalents – $12.1M
Q3 2011 cash equivalents – $11.0M ($10.6M from conference call)
Q4 2011 cash equivalents – $11.0M ($10.9M from conference call)
Q1 2012 cash equivalents – $14.3M

Starting with Q3 2011, NLST has also released EBITDA numbers (along with the prior year's comparisons):
Q3 2010 EBITDA – $4.0M loss
Q4 2010 EBITDA – $2.300M loss
Q1 2011 EBITDA – $1.9M loss
Q3 2011 EBITDA – $0.032M ("achieved EBITDA breakeven")
Q4 2011 EBITDA – $0.718M
Q1 2012 EBITDA – $0.043M

Q1 2012 – total assets $36.9M, working capital $21.0M, total debt $2.8M, stockholders' equity $23.4M

Performance, price and IP superiority

As we have examined here before, HyperCloud trumps LRDIMMs and is currently being sold by IBM/HP on the top 3 high volume server lines (for virtualization/data center applications) – the HP DL360p and DL380p and the IBM System x3650 M4 server lines. It has no competition at 3 DPC with 16GB memory modules.

And at 32GB it will trump LRDIMMs – and it will trump RDIMMs when memory is loaded above 256GB on a 2-socket server at 1.5V and above 384GB at 1.35V – as examined here:

https://ddr3memory.wordpress.com/2012/06/29/infographic-memory-buying-guide-for-romley-2-socket-servers/
Infographic – memory buying guide for Romley 2-socket servers
June 29, 2012
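To see where the 256GB and 384GB thresholds come from, here is a quick back-of-envelope check (my own sketch – it assumes the usual Romley 2-socket layout of 4 memory channels per socket and up to 3 DIMM slots per channel, as on the server lines named above):

```python
# Capacity available from 16GB modules on a 2-socket Romley server,
# assuming 4 memory channels per socket and up to 3 DIMM slots per channel.
SOCKETS = 2
CHANNELS_PER_SOCKET = 4
MODULE_GB = 16

for dpc in (1, 2, 3):  # DIMMs per channel
    slots = SOCKETS * CHANNELS_PER_SOCKET * dpc
    print(f"{dpc} DPC: {slots} x {MODULE_GB}GB = {slots * MODULE_GB}GB")

# 1 DPC:  8 x 16GB = 128GB
# 2 DPC: 16 x 16GB = 256GB
# 3 DPC: 24 x 16GB = 384GB
# i.e. going beyond 256GB with 16GB modules forces 3 DPC, which is where
# RDIMMs throttle and where load reduction (HyperCloud/LRDIMM) matters.
```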

The 32GB HyperCloud will be made using 4Gbit monolithic memory packages (leveraging the Netlist Planar-X IP), whereas 32GB RDIMMs and 32GB LRDIMMs are produced using the more expensive 4Gbit x 2 DDP memory packages. So 32GB HyperCloud will have price superiority over both RDIMM and LRDIMM as well.

For more on 32GB RDIMM/32GB LRDIMM use of 4Gbit x 2 DDP memory packages:

https://ddr3memory.wordpress.com/2012/06/25/ddp-vs-monolithic-memory-packages/
DDP vs. monolithic memory packages and their impact
June 25, 2012

https://ddr3memory.wordpress.com/2012/07/01/multi-die-vs-multi-pcb-to-increase-memory-density/
Multi-die vs. multi-PCB to increase memory density
July 1, 2012

In addition, Netlist has IP superiority over Inphi (the sole source of LRDIMM buffer chipsets – IDTI has exited the LRDIMM space for Romley, and Texas Instruments exited the space for good following a settlement with Netlist some years ago – allegedly for having leaked Netlist NDA information to JEDEC).

On the risk factors for LRDIMM:

https://ddr3memory.wordpress.com/2012/06/05/lrdimms-future-and-end-user-risk-factors/
LRDIMMs future and end-user risk factors
June 5, 2012

https://ddr3memory.wordpress.com/2012/06/15/why-are-lrdimms-single-sourced-by-inphi/
Why are LRDIMMs single-sourced by Inphi ?
June 15, 2012

UPDATE: 07/06/2012: non-viability of LRDIMMs

On the non-viability of LRDIMMs in general:

https://ddr3memory.wordpress.com/2012/07/05/examining-lrdimms/
Examining LRDIMMs
July 5, 2012

See the section entitled “Difference between LRDIMMs and HyperCloud” which I am reproducing below:

Difference between LRDIMMs and HyperCloud

The important distinction, then, is that HyperCloud is compatible with the RDIMM standard and requires no BIOS update – it is "plug and play" and interoperable with regular RDIMMs.

Thus HyperCloud leverages the RDIMM standard – a pre-existing and widely deployed standard – and requires no separate JEDEC standard in order to work.

LRDIMMs, in contrast, require a BIOS update to work, and therefore require standardization to ensure that the BIOS update is applied consistently across motherboard makers.

The major impact of HyperCloud compatibility with RDIMMs is that:

– HyperCloud is an RDIMM – except with internal features which make its load appear lighter and its rank count appear smaller (load reduction, rank multiplication)
.
– HyperCloud requires NO cooperation or corralling of motherboard makers to make it work – in contrast, LRDIMM requires a BIOS update to be implemented by every motherboard maker
.
– While LRDIMMs are a new standard that is incompatible with RDIMM, HyperCloud leverages the RDIMM standard – mainstream, in use and supported by all – and does not require JEDEC endorsement or the cooperation of motherboard makers

There is a reason why HyperCloud does not require a BIOS update – and it is related to IP that Netlist holds in this area. I suspect it has something to do with "Mode C" (which is mentioned in the court documents in Netlist's litigation against Google) – but I cannot definitively say that this is the sole reason for the capability.

Here is an explanation given by Netlist for why HyperCloud requires no BIOS update – while LRDIMM does (the reference to “they do that mainly in software and can’t do the full rank-multiplication like our product does”):

http://78449.choruscall.com/netlist/netlist120228.mp3
Fourth Quarter and Full Year 2011 Conference Call
Tuesday, February 28 5:00pm ET

at the 30:00 minute mark ..

George Santana of Ossetian (?):

Just .. how long do you think NLST has as far as a head start on the 2-rank 32GB ?

Chuck Hong – CEO:

Well the .. the only other way to build a really .. a real 2-rank 32GB is with 8Gbit (DRAM) die from the semiconductor manufacturers.

I don't think anybody even has that on their roadmap – except maybe Samsung.

It looks like 4Gbit (DRAM die) will be the LAST viable .. uh .. monolithic die out in the industry.

So the industry is looking to go into some stacking methodologies that you have heard of 3DS and there are some other competing technologies (Hybrid Memory Cube etc.), so we think effectively we’ll have the only real 32GB 2-rank in the market for DDR3.

And DDR4 when products start stacking, you need rank-multiplication and HyperCloud is really the only product that does rank multiplication on the DIMM itself, so .. as you dig into how other technologies try to do that, they do that mainly in software and can’t do the full rank-multiplication like our product does.

So I think we have a pretty good .. uh .. advantage there.

UPDATE: 07/06/2012 – VMware certifies Netlist as sole memory vendor

VMware certifies Netlist as the sole memory vendor for its products. The Netlist 16GB and 32GB HyperCloud (supplied by IBM/HP) and the Netlist 16GB VLP RDIMM (supplied by IBM) are the only memory products certified for use with VMware:

https://ddr3memory.wordpress.com/2012/07/05/memory-for-vmware-virtualization-servers/
Memory for VMware virtualization servers
July 5, 2012

LRDIMMs have yet to demonstrate benchmark results or stability data, or to clarify whether the BIOS updates required to make LRDIMMs work on motherboards will later require "bug fixes" (i.e. a series of BIOS upgrades rolled out at data centers).

In contrast, Netlist has been demonstrating HyperCloud memory with VMware since 2010.

$65M revenue jump to $500M for DDR3 and $7.5B for DDR4

Just looking at the HyperCloud potential, Netlist estimates attach rates that are more conservative than ones given by Inphi for LRDIMMs.

Netlist estimates an eventual 10% market share for load reduction products for DDR3 for Romley – i.e. for LRDIMM/HyperCloud products.

Despite the performance, latency, price and IP superiority factors mentioned above, Netlist illustrates the opportunity with a conservative 1% estimated attach rate for HyperCloud, and derives a $500M revenue opportunity.

See the section “Netlist conservative $500M revenue estimate” in:

https://ddr3memory.wordpress.com/2012/06/06/market-opportunity-for-load-reduction/
Market opportunity for load reduction
June 6, 2012

Going forward into DDR4, there is going to be increasing dependence on Netlist IP. Netlist IP already underlies LRDIMMs, which copy the load reduction and rank multiplication IP – essentially what distinguishes LRDIMMs from RDIMMs. DDR4 goes further and copies the symmetrical line lengths and decentralized buffer chipset approach of HyperCloud as well.

On DDR4 borrowing from LRDIMM use of Netlist IP in “load reduction” and “rank multiplication”:

https://ddr3memory.wordpress.com/2012/06/08/ddr4-borrows-from-lrdimm-use-of-load-reduction/
DDR4 borrows from LRDIMM use of load reduction
June 8, 2012

https://ddr3memory.wordpress.com/2012/06/07/jedec-fiddles-with-ddr4-while-lrdimm-burns/
JEDEC fiddles with DDR4 while LRDIMM burns
June 7, 2012

Netlist estimates an attach rate of “50% of all servers” for load reduction products for DDR4.

Netlist then uses a very conservative estimate of 10% as its share – with higher density 32GB and 64GB memory modules selling at a $500 average unit price – and estimates a market of about $7.5B:

See the section “Netlist conservative $7.5B revenue estimate for 2014” in:

https://ddr3memory.wordpress.com/2012/06/06/market-opportunity-for-load-reduction/
Market opportunity for load reduction
June 6, 2012
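The arithmetic behind the $7.5B figure is straightforward to reproduce from the numbers Netlist gives in the UBS transcript quoted later in this article (a rough sketch – the rounding down to $7.5B is Netlist's):

```python
# DDR4 market-size arithmetic using the figures from the UBS conference transcript.
servers_needing_load_reduction = 13_000_000  # "13M to 14M units" by 2014
netlist_share = 0.10                         # "if we take 10% of that"
dimms_per_server = 12                        # "12 DIMMs per server, as an average"
avg_unit_price = 500                         # "$500 average unit price"

modules = servers_needing_load_reduction * netlist_share * dimms_per_server
revenue = modules * avg_unit_price
print(f"~{modules/1e6:.1f}M modules -> ~${revenue/1e9:.1f}B")  # ~15.6M modules -> ~$7.8B
# which Netlist rounds to "about a $7.5B market size"
```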

Therefore we arrive at a company with revenue of $65M and new products about to ramp in high volume – selling through IBM (HyperCloud and VLP) and HP (HyperCloud) – with a revenue opportunity from HyperCloud alone of $500M eventually for DDR3 on Romley, growing to $7.5B for DDR4 in 2014.

Why Netlist IP is so important for DDR4

As covered in the articles:

https://ddr3memory.wordpress.com/2012/06/08/ddr4-borrows-from-lrdimm-use-of-load-reduction/
DDR4 borrows from LRDIMM use of load reduction
June 8, 2012

https://ddr3memory.wordpress.com/2012/06/07/jedec-fiddles-with-ddr4-while-lrdimm-burns/
JEDEC fiddles with DDR4 while LRDIMM burns
June 7, 2012

There are specific reasons why Netlist IP is important for DDR4, and why a 10% Netlist share of DDR4 load reduction is not an implausible figure to contemplate.

Here NLST gives an overview of the problems inherent in memory design at higher speeds – and why for DDR4 the movement is away from the LRDIMM approach of a centralized buffer and towards a "distributed buffer architecture". That is, LRDIMMs copy Netlist IP in load reduction and rank multiplication – but DDR4 goes further, copying not only those features but also the symmetrical line lengths and decentralized buffer chipset of HyperCloud.

It also gives a historical perspective on HyperCloud and how it was developed – it arose out of work that Netlist did for Apple servers some years back:

http://www.netlist.com/investors/investors.html
UBS Global Technology and Services Conference
Thursday, November 17, 2011 9:00:00 AM ET
http://cc.talkpoint.com/ubsx001/111511a_im/?entity=63_EIUMYWQ

at the 10:30 minute mark:

Uh .. nice endorsement recently came out as we introduced our 32GB just this week.

And we demonstrated it for the first time at the Supercomputing (SC’11) show.

Uh .. this is an endorsement .. uh .. from one of the engineering vice-presidents over at Hewlett-Packard (VP Engineering at HP) who says his customers are looking for greater memory capacity AND bandwidth, which is what we just talked about.

And that the NLST HyperCloud product helps customers achieve this.

at the 10:55 minute mark:

There are alternate technologies that people are pushing on the market, to try to get more capacity on a server.

One is called LRDIMM – they are “load reduced DIMM” .. so I’d like to compare very quickly so you see where we stand versus that.

The LRDIMM is on the top of this chart, and it contains one very large memory buffer in the middle.

And you see ours by comparison has a standard sized register – although we have some secret sauce in that register.

And 9 isolation devices along the (bottom) – that is called a "distributed architecture".

at the 10:25 minute mark:

So the monolithic architecture on the top, you can see from the chart .. the data paths .. so for a signal to go from the edge of the connector and .. to pull memory out to come back, it has to follow the blue traces .. all the way in to the memory buffer.

Follow the orange traces to the particular DRAM, the orange traces BACK to the memory buffer, and the blue traces all the way back out to the edge (of the memory module) card.

at the 11:55 minute mark:

So those are some fairly significant-sized highways, if you will.

If you are thinking about navigating this as a city, and that .. we call that “latency” ..

So that latency for when you want memory to when you get it, is much greater on a monolithic design, because the highways are so long ..

By contrast, the HyperCloud memory, has very short data paths.

at the 12:15 minute mark:

So what we do in .. one clock, our competition takes 4 to 6 clocks to do.

So that’s significant for those high performance computing applications that not only need high density, but they need to access that data quickly.
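To put the "one clock" versus "4 to 6 clocks" remark into absolute terms (my own back-of-envelope conversion, not from the call):

```python
# Converting buffer latency in clocks to nanoseconds at common DDR3 speeds.
# (The 1-clock vs 4-6-clock figures are from the call above; the conversion is mine.)
for transfer_rate in (1066, 1333, 1600):   # MT/s
    clock_ns = 1000 / (transfer_rate / 2)  # DDR: two transfers per clock
    print(f"DDR3-{transfer_rate}: clock = {clock_ns:.2f} ns, "
          f"extra 3-5 clocks = {3*clock_ns:.1f}-{5*clock_ns:.1f} ns added latency")
```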

at the 12:30 minute mark:

So the distributed architecture was rather .. uh .. aggressive for its time, when we first came out with that.

But what we found since is that as the industry standards bodies (JEDEC) looks ahead for 3 and 4 years, and they look to the next memory density at DDR4 .. which will come out in about 2014 .. they realized that at the higher frequencies .. that the distributed architecture’s really the ONLY way to achieve those speeds .. without inducing such tremendous latency penalties.

at the 12:55 minute mark:

So .. here we show a drawing of the .. uh .. distributed buffer concept that the .. JEDEC is promoting for DDR4 and below it you see our actual DIMM design for DDR3 and you notice the similarities (laughs) .. that they are very .. almost identical, aren’t they ?

at the 13:10 minute mark:

So there's a reason for that .. so as the industry standardizes on that distributed architecture – that's something, again, that we have a lot of IP around .. there's a LOT of know-how as well on how to make .. the .. the buffers along the edges work with the register in the center .. and you really need to design the thing WITH its end-application in mind ..

You can’t just approach it as a semiconductor-only company .. and say I’ll make a chip that in .. because there ARE a lot of timing nuances that you need to understand between the two.

at the 13:40 minute mark:

So we feel VERY well positioned for DDR4 .. we feel this architecture .. we have a significant lead in the industry .. in making this product work.

at the 13:50 minute mark:

And we have been doing that a long time – as I mentioned .. you know .. over 10 years of working directly with our customers engineering teams.

We actually got the idea for this "rank multiplication" back in 2003 through our work with Apple (AAPL). So Apple's very dear to our heart.

Apple was using a PowerPC .. uh .. processor in their Xserve .. uh .. server and there was some rank limitations .. and how many .. how many memory they could access and they wanted a higher density memory and didn’t know how to do it.

at the 14:17 minute mark:

So as we got involved and we were able to .. to figure out we could double these ranks on the DIMM, we could effectively build a larger .. uh .. DIMM size for them and we did.

And we sold several millions of dollars on that, but more importantly, it gave us the ideas of what we could do by building some controller technology onto the memory subsystem .. and building some silicon.

at the 14:35 minute mark:

So we were able to use some programmable logic at first .. in using our ideas, but then as the frequencies increased, we went with our own ASICs (application-specific integrated circuit).

And so we did that for DDR2 and now for DDR3 and we are well positioned for DDR4, so you can see we’ve .. several patents that started way back in 2004 along this way ..

And we continue to innovate .. uh .. with this and just recently we announced a couple of collaboration agreements with some very large .. uh .. customers of ours, as we look not only for the next-generation but but another generation beyond .. as a use (of) HyperCloud technology.

at the 15:10 minute mark:

So now we .. remember we talked about what the market looked like for next year and we just said .. “well if it was just 10%” .. uh .. or 1% rather .. 1% of the market.

We had a $500M market opportunity.

Now let’s look out for 2014 .. because as we increase to DDR4 speeds, the frequency goes WAY up .. and when the frequency goes up, the effect of the bus .. the memory bus is huge.

at the 15:30 minute mark:

So the industry’s estimating they’ll need 50% of all servers .. and it will be about 13M to 14M units, up from 9M today .. uh .. will require some kind of “load reduction” (technology) .. HyperCloud-type technology (or the LRDIMM which is infringing NLST IP – though LRDIMMs have latency issues).

If we take 10% of that, let’s call it 1.3M servers and let’s use 12 DIMMs per server, as an average .. now the densities move up .. so instead of 16GB and 32GB today, we’ll talk 32GB and 64GB .. 3 years from now .. we’re looking at ABOUT a $7.5B market size.

at the 16:05 minute mark:

So .. significant growth .. we think we are well positioned for where the industry NEEDS to go, where it wants to go, and how to get there.

And our technology scales very well .. along that.

at the 16:15 minute mark:

So that wraps the .. HyperCloud IP part of our product line.

How did Netlist get so big – is it too big for its boots ?

Firstly, Netlist has been an inventor of a number of innovations in memory modules:

– first producer of 4-rank memory
– the inventor of VLP which was shipped for IBM blade servers

They were a supplier of memory for Apple, and as the inventor of VLP (very low profile) memory they originally shipped DDR1 VLP for IBM blade servers some years ago.

The reason Netlist looks like a small company today is that as margins were reduced in the commodity memory industry (and companies experienced losses while selling memory modules at low margin), Netlist decided to move AWAY from low margin memory to high margin IP-based memory products.

These products were military-grade flash (high margin), non-volatile memory (NVvault), and IP-based memory modules for next-generation needs:

– load reduction and rank multiplication
– Planar-X for sandwiching multiple PCBs for low latency operation

While Netlist revenues have been small, the company has streamlined itself over the years and delivered EBITDA breakeven (or close to it) for the last couple of quarters.

– a market cap of $65M
– debt of $2.85M
– cash $13.87M
– shares outstanding 28.33M
– high insider ownership – 21.86% (CEO holds a ton of shares)
– institutional ownership – 22.20% (up from 5% last year)

source:
http://finance.yahoo.com/q/ks?s=nlst

All this while waiting for their IP-based products to ramp at high volume when IBM/HP start shipping them with Romley.

This has been the business execution of the company.

The company also holds a significant IP position related to LRDIMMs and DDR4.

Strong IP position that underlies future memory

This IP position has entangled Google (Google vs. Netlist and Netlist vs. Google) and Inphi (Netlist vs. Inphi and the now unilaterally retracted Inphi vs. Netlist).

None of these litigations are constraining Netlist (since Google is just asking for relief – to be left alone, and Inphi has unilaterally retracted its retaliatory suit – possibly because of double-patenting behavior by Inphi at the USPTO).

But this litigation presents a clear and present danger to Inphi (which is the sole source of buffer chipsets for LRDIMMs). Inphi is thus in the position of having delivered a product that is inferior to HyperCloud in performance, while also being accused of having copied Netlist IP – and, to make matters worse, of having based an IPO on the potential of LRDIMMs for Romley (something which may come back to bite them).

Google is under a similar threat – its case involves asking the court for relief from a potential injunction against Google servers. It turns out Google had asked Netlist to consult on a project to improve memory – Google was shown Netlist IP under NDA – but then proceeded to do it on its own. In the process, Google hired subcontractors to do the job of making memory modules for its internal consumption – a consumption that could span hundreds of thousands of servers. Google runs a memory module production division for internal consumption (which many people may not be aware of).

In addition, the JEDEC committee finalizing the DDR4 standard has erred by not securing the relevant IP that would free LRDIMMs from legal threat, and it is now on a path to finalize the DDR4 standard without having secured the appropriate licensing (again). Inphi has investment from Samsung – so there could be company-related interests at work – but these complications put LRDIMMs and DDR4 in a dangerous position (should Netlist vs. Inphi resume and the judge issue an injunction against all infringing product).

Once the DDR4 standard is finalized, JEDEC will be in an even worse negotiating position with regard to Netlist IP – negotiations are best conducted while you still have some ability to avoid licensing. That leverage may not exist for DDR4, since JEDEC may have no choice but to use the LRDIMM/HyperCloud ideas, given the nature of the problems faced at higher frequencies – problems Netlist has already solved with its HyperCloud memory, which looks very similar to what DDR4 now proposes (!).

NOTE: the situation is very different from Rambus where Rambus was front-running the JEDEC activities by patenting in advance of JEDEC decisions. With Netlist and JEDEC, we have a case of alleged leakage of NDA info by TI to JEDEC (circumstantially confirmed by TI keeping out of the LRDIMM space altogether – possibly as part of the settlement with Netlist) – and that leaked info at JEDEC then being the basis of LRDIMMs and DDR4 later.

These issues have been covered in the following articles:

On DDR4 borrowing from LRDIMM use of Netlist IP in “load reduction” and “rank multiplication”:

https://ddr3memory.wordpress.com/2012/06/08/ddr4-borrows-from-lrdimm-use-of-load-reduction/
DDR4 borrows from LRDIMM use of load reduction
June 8, 2012

https://ddr3memory.wordpress.com/2012/06/07/jedec-fiddles-with-ddr4-while-lrdimm-burns/
JEDEC fiddles with DDR4 while LRDIMM burns
June 7, 2012

Netlist – a history of strong settlements

Netlist has a history of strong settlements in its favor. MetaRAM (a precursor of Inphi/LRDIMMs) went out of business after conceding IP to Netlist as compensation. Texas Instruments – allegedly the original leaker to JEDEC of Netlist IP obtained under NDA (IP which JEDEC then used as the basis of proposed designs) – settled with Netlist, and today TI is not a player in the LRDIMM space, even though it is the third largest buffer chipset maker for RDIMMs.

IDTI, the second largest buffer chipset maker, has walked back its earlier enthusiastic support of LRDIMMs and is skipping LRDIMMs for Romley altogether.

The buffer chipset makers and their attitude towards LRDIMMs for Romley has been covered here:

https://ddr3memory.wordpress.com/2012/05/24/lrdimm-buffer-chipset-makers/
LRDIMM buffer chipset makers
May 24, 2012

In addition, Netlist has a strong patent portfolio in load reduction and rank multiplication – which underlie LRDIMMs and DDR4 – with a long string of continuation patents that have incorporated the prior art which Google, Smart Modular and Inphi have raised in their challenges to Netlist patents at the USPTO (in patent reexaminations).

The patent reexamination process is slow – and it has allowed both Google and Inphi to stay the court cases (Google vs. Netlist was a few months short of a jury trial, and Inphi unilaterally retracted Inphi vs. Netlist because of possible weaknesses in Inphi's patent position – a case of possible double patenting which could have invalidated one or two Inphi patents had the suit proceeded).

However, we are seeing progress in the patent reexams related to the Google litigation, and for the Inphi litigation – the ‘537 and ‘274 patent reexams have come through and the USPTO has re-validated both Netlist patents with all claims intact.

This is a powerful statement and bodes poorly for both Google and Inphi – but more immediately for Inphi and LRDIMMs which now face the prospect of having no case when Netlist vs. Inphi resumes. This is because patents which survive reexamination cannot be challenged on the same issues again by the challenger.

In addition, the USPTO has been awarding Netlist a stream of continuation patents which have included the prior art presented in the other reexams.

Google vs. Netlist is not constraining for Netlist because Google makes no claims against Netlist – Google is instead asking for relief from a possible injunction on its servers (which use the Netlist IP).

Netlist – superior execution

However, superiority of IP will only get you so far – as the competition can keep you occupied in the courts.

What has helped Netlist even more than IP superiority is its execution on the ground.

Netlist's position as the inventor of load reduction and rank multiplication has allowed it to understand the problem, while others have simply chosen to copy it (Inphi with LRDIMMs and JEDEC with DDR4).

Netlist has been shipping HyperCloud (in small numbers) to end-users for a while on pre-Romley servers (Westmere) and has a greater familiarity with this technology.

LRDIMMs have not had the luxury of being tested by end-users prior to Romley (in fact LRDIMMs could not work on pre-Romley servers without a BIOS upgrade, which severely limited their viability in the pre-Romley period).

As a result the HyperCloud available for Romley now is superior to LRDIMMs (which suffer from high latency issues and are unable to deliver 3 DPC at 1333MHz on standard Intel PoR servers like the HP DL360p and DL380p and the IBM System x3650 M4 servers that Netlist HyperCloud is available on).

It is not surprising that an inventor has a better understanding of the problems with a technology – for example, Netlist has been saying for years that products by MetaRAM and later Inphi would not perform well because of design mistakes (MetaRAM for its "stacked DRAM", which introduced asymmetrical line lengths and asymmetrical heating issues that make it difficult to tune parameters, and Inphi for its asymmetrical line lengths and centralized buffer chipset). Those assertions turned out to be true as LRDIMMs saw the light of day in early 2012.

– LRDIMMs produced by Inphi have high latency issues
– LRDIMMs are unable to deliver 1333MHz at 3 DPC on standard Romley servers

Perhaps indicative of these issues, IDTI – previously an aggressive competitor of Inphi in the LRDIMM space – suddenly lost interest mid-stream and has skipped the Romley rollout altogether.

Right now, Netlist delivers superior performance over the LRDIMMs, and with 32GB memory modules will be able to produce a cheaper product (because they will use 4Gbit monolithic, while 32GB RDIMMs and 32GB LRDIMMs are based on 4Gbit x 2 DDP memory packages). This is because Netlist has the advantage of leveraging its Planar-X IP, which allows the sandwiching of 2 PCBs to make one memory module (increasing the real estate available on a memory module). Netlist is using Planar-X on the 32GB HyperCloud as well as the 16GB VLP RDIMM that is being sold for Romley-based IBM blade servers.
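As a rough illustration of why a true 2-rank 32GB module is hard to build from 4Gbit monolithic die, and why the two-PCB Planar-X sandwich helps (my own sketch – it assumes x4-organized DRAM with ECC, i.e. 18 packages per physical rank):

```python
# Package count for a 32GB module built from 4Gbit monolithic DRAM
# (assumes x4 devices and a 72-bit ECC rank: 64 data bits + 8 ECC bits).
DIE_GBIT = 4
DEVICE_WIDTH = 4
RANK_WIDTH_BITS = 72

devices_per_rank = RANK_WIDTH_BITS // DEVICE_WIDTH               # 18 packages per rank
data_gb_per_rank = devices_per_rank * DIE_GBIT / 8 * (64 / 72)   # 8 GB of data per rank

physical_ranks = int(32 / data_gb_per_rank)                      # 4 physical ranks
packages = physical_ranks * devices_per_rank                     # 72 packages

print(f"{packages} DRAM packages in {physical_ranks} physical ranks, "
      f"presented as 2 ranks via rank multiplication")
```

Fitting 72 DRAM packages is the board-area problem that the two-PCB sandwich addresses, while rank multiplication keeps the module looking like a 2-rank part to the memory controller.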

For more information about what levels of high memory loading on a 2-socket server demand use of HyperCloud, check out:

https://ddr3memory.wordpress.com/2012/06/29/infographic-memory-buying-guide-for-romley-2-socket-servers/
Infographic – memory buying guide for Romley 2-socket servers
June 29, 2012

UPDATE: 07/03/2012: third-party manufacture of HyperCloud

Licensing and third-party manufacture of HyperCloud

Since we have talked about how LRDIMMs and DDR4 will require HyperCloud licensing, one should also talk about third-party manufacture of HyperCloud (using HyperCloud buffer chipsets – for example from Netlist-nominated reseller Diablo or Toshiba – who have occasionally been mentioned in Netlist SEC filings).

However, realistically – because of the long cycle of almost a year required to test, build and qualify memory at the OEMs – anyone trying to manufacture HyperCloud will have missed the boat on Romley – as well as Ivy Bridge in 2012, placing them sometime in 2013.

That will be awfully close to DDR4, and so the likely scenario is that we may not see anyone attempt to manufacture HyperCloud for DDR3.

For DDR4 it becomes likely that third-parties could produce HyperCloud.

Which means that for 2012-2013 one can expect there to be a continuation of the current HyperCloud vs. LRDIMM standoff.

During this time there may be license cover for LRDIMMs (possible if DDR4 licenses Netlist IP prior to or after finalization of the DDR4 standard in mid-2012), or there may not – in either case it benefits Netlist if Inphi remains healthy, since there will then be a possibility of infringement compensation from Inphi to Netlist.

When DDR4 arrives, we should see licensing of Netlist IP in load reduction and rank multiplication take place.

Thus for this 2012-2013 period, Netlist will be the only one making HyperCloud. Fortunately, the volumes for HyperCloud during this period will be small enough to be satisfied by NLST's own facilities – which are capable of producing millions of memory modules.

Once DDR4 arrives, the volumes may increase by an order of magnitude and would require some third-party manufacture and licensing agreement.

Here is an analyst asking Netlist if there is pressure on them from the OEMs to enable a second-source:

http://seekingalpha.com/article/592411-netlist-s-ceo-discusses-q1-2012-results-earnings-call-transcript
Netlist’s CEO Discusses Q1 2012 Results – Earnings Call Transcript
May 15, 2012

at the 19:45 minute mark:

Rich Kugele – Needham & Co:

Thank you and good afternoon.

A couple of questions.

Uh first, when it comes to .. uh .. customers of such scale as HP and IBM .. uh .. obviously .. since they have signed contracts with you, they are comfortable with your ability to supply, but have they asked for anyone else to be a .. uh .. second-source in any way for these HyperCloud modules.

Have they asked you to license it out in any way.

at the 12:15 minute mark:

Chuck Hong – CEO:

Uh .. Rich .. uh ..

Uh .. we’ve had some of those discussions in the past.

I think as we see traction and the volumes increase over time .. uh .. we’ll get into more serious discussions about a potential second-source.

For now I think they are comfortable with us supporting .. uh .. product.

We’ve got products that have already been shipped into the hubs worldwide .. uh .. for both HP and IBM and .. uh .. we’ve started to .. uh .. fill the pipeline.

So I think we’ll discuss second-sourcing as the volumes increase.

UPDATE: 07/03/2012: what to expect in the near future

What to expect in the near future

A number of things are due. This is based on Netlist comments in conference calls about expected timings for various events.

We may therefore see one or two of these happening in the near future:

– qualification on additional servers at IBM/HP for 16GB HyperCloud

– 32GB HyperCloud as IBM HCDIMM/HP HDIMM due soon – hopefully available in both 1.5V and 1.35V. This will become the mainstream memory for high memory loading (greater than 256GB at 1.5V and greater than 384GB at 1.35V) on Romley according to analysis presented here

– VMware has already certified both 16GB and 32GB HyperCloud for use on its servers

– 32GB IBM VLP RDIMM to arrive slightly later than the 32GB HCDIMM

– something on NVvault DDR3 for general use on Intel Romley servers

– JEDEC may announce the final DDR4 standard – it was due mid-2012. Anyone have a clue when they will deliver ?

– licensing of NLST IP for LRDIMM – related to NLST vs. Inphi. The Inphi patent reexams have gone poorly for Inphi – those will plod along. But with this overhang on LRDIMM, there would be some pressure to get legal cover.

16GB HyperCloud will sell at 3 DPC.

32GB HyperCloud will be a mainstream product i.e. preferable over all other RDIMMs and LRDIMMs at 1 DPC, 2 DPC and 3 DPC.

16GB VLP RDIMM (4-rank) also is a mainstream product i.e. usable at 1 DPC and 2 DPC on the 2 DPC blade servers. And superior to other offerings if Netlist is to be believed (it will certainly be cheaper).

UPDATE: 07/03/2012: sunk factory costs

Sunk factory costs

Netlist is a $65M company operating at breakeven with large sunk costs – for example its factory in China has sunk costs, and as a result an increase in memory module volume does not increase costs that much.

If revenue from new product lines adds even a fraction of the current $16M per quarter, it will be significant.
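A minimal sketch of the operating-leverage point, using purely hypothetical numbers (only the shape matters – fixed factory costs stay flat while unit volume grows, so incremental revenue falls mostly to gross profit):

```python
# Hypothetical illustration of operating leverage with sunk/fixed factory costs.
# All figures below are made up; the real-world data points are in the CC quotes that follow.
fixed_factory_cost_m = 2.0       # $M per quarter of factory labor/overhead (hypothetical)
variable_cost_per_unit = 250     # $ per module, mostly DRAM (hypothetical)
price_per_unit = 400             # $ per module (hypothetical)

for units in (20_000, 40_000, 80_000):
    revenue_m = units * price_per_unit / 1e6
    cost_m = fixed_factory_cost_m + units * variable_cost_per_unit / 1e6
    margin = (revenue_m - cost_m) / revenue_m * 100
    print(f"{units:>6} units: revenue ${revenue_m:.1f}M, gross margin {margin:.0f}%")
```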

From the NLST Q4 2010 conference call:

http://www.netlist.com/investors/investors.html
Netlist Fourth Quarter, Year-End Results Conference Call
Wednesday, March 2nd at 5:00 pm ET
http://viavid.net/dce.aspx?sid=00008211

at the 13:30 minute mark

The year-over-year gross profit dollars and margins improved due to the 105% increase in revenue as well as the increased absorption of manufacturing costs as we produced 88% more units than the year earlier quarter with no related increase in the cost of factory labor and overhead.

From the NLST Q1 2011 conference call:

http://viavid.net/dce.aspx?sid=0000853C
Netlist First Quarter Results Conference Call
Wednesday, May 11th at 5:00 pm ET

at the 11:05 minute mark:

This improvement was due to the 52% increase in revenue, a favorable DRAM cost environment, as well as increased absorption of manufacturing cost, as we produced 64% more units than the year earlier quarter with only a slight 4% increase in the cost of factory labor and overhead.

From the NLST Q2 2011 conference call – 101% more units produced with only a 16% increase in the cost of factory labor and overhead:

http://viavid.net/dce.aspx?sid=00008B06
Netlist – 2011 Second Quarter and Six-Month Results Conference Call
Aug 15, 2011 05:00 PM (ET)

at the 11:30 minute mark:

This improvement was due to the 72% increase in revenue and favorable DRAM cost environment as well as the increased absorption of manufacturing costs, as we (unintelligible) 101% more units than the year earlier quarter, with a 16% increase in the cost of factory labor and overhead.

UPDATE: 07/27/2012 – confirmed HCDIMM similar latency as RDIMMs
UPDATE: 07/27/2012 – confirmed LRDIMM latency and throughput weakness

It has been confirmed that HyperCloud HCDIMM latency is similar to 16GB RDIMM (2-rank) latency. LRDIMM latency and throughput weakness vs. HCDIMM – even with HCDIMM running at the SAME lowered speeds as the LRDIMM – has also been confirmed:

https://ddr3memory.wordpress.com/2012/07/26/latency-and-throughput-figures-for-lrdimms-emerge/
Latency and throughput figures for LRDIMMs emerge
July 26, 2012


12 responses to “Examining Netlist”

  1. What happened to the rule of posting only comments related to the technical aspects?
    (OK, it is your blog after all, so you can write what you want)

    I would say that all the numbers above will be confirmed once we see some sort of a market report that splits out the number of servers that ship with >256GB of RAM.

    Please also keep in mind that:
    – IBM and HP only have 60% market share in the server space
    – System X and Proliant Gen8 are only a certain % of sales of IBM and HP
    – There is a strong momentum to use commodity servers and to put the complexity in the software instead. The same thing is going on in the router and switching area. For example, instead of adding more memory, one adds more machines and then uses distributed processing.
    (I still think that the natural progression of technology means that eventually all servers will ship with 768GB of RAM)

    for HyperCloud,
    once you multiply all these percentages of market share, server type, model breakdown, and >256GB together, the final number is much smaller than you expect.

    Need to see the server market breakdown for Q2. This report will give more answers.

    • quote:
      Need to see the server market breakdown for Q2. This report will give more answers.

      Please let me know as well. Thanks.

      quote:
      What happened to the rule of posting only comments related to the technical aspects?
      (OK, it is your blog after all, so you can write what you want)

      Well, I didn't want to discuss IPHI too much – though I may in the future. That article was about the steep decline in institutional ownership at Inphi and its IPO etc.

      Regarding the market for NLST – those numbers are for “later” in DDR3 and DDR4 in 2014. It will be a slow move up to $7.5B 🙂

      But yes, nobody’s thinking it will be the same NLST going to $7.5B revenue – probably there will be changes along the way. But the numbers are a valid exercise.

      In reality, I cannot see NLST being the sole provider of HyperCloud – while they have capacity for millions of memory modules at their facility in China, this will work for 2012 and 2013 but not for 2014 DDR4 mainstream needs.

      I realistically do not see anyone else building HyperCloud – as the time it will take for them to qualify THEIR version of HyperCloud will probably be a year from now – which means they may as well tackle DDR4.

      They have already missed the boat for Romley – and won’t make it in time for Ivy Bridge either in late 2012.

      So I see a huge gap in this 2012-2013 period – during which HyperCloud will ramp – and LRDIMMs may or may not get legal cover along the way – but there just cannot be a HyperCloud memory module from another memory manufacturer (at least there have been no murmurs along these lines from NLST).

      So I see any realistic manufacturing by third parties to happen only for DDR4 and beyond.

      But this scenario is not unrealistic because the volumes for HyperCloud will be sufficient to be satisfied by NLST facilities – which are capable of millions of memory modules.

      Will there be LRDIMM sales – probably – but it all adds to the infringement. In fact it benefits NLST if IPHI does well – i.e. if IPHI can pay infringement expenses.

      And you have to realize NLST is a $65M company ! And they have controlled costs so that they are breakeven with all the sunk costs of their factory in China – NLST has said their costs increase little with volume – so increasing volume is a welcome problem for them.

      Two things distinguish NLST:

      – its revenue is only $65M while at breakeven – they could sneeze and double that revenue

      – they get $500-$1200 revenue per memory module (16GB/32GB) – compare that to $20 for Inphi

      And you know what all the analysts have said about Inphi’s potential (they sold a whole IPO based on that).

      So as an investment it has huge potential and little downside. Most of the potential that people were waiting for 2-3 years has started executing – the ramp has started (actually it matters not a lot how steep the ramp is – as any ramp is huge for a $65M company).

      But from the point of view of a memory USER – all this matters little. As they trust they will get the best memory for the price eventually. In any case, if a below par product was being sold and nothing else they would still have to buy it.

      So what we have here is a battle between HyperCloud as an RDIMM vs. LRDIMM as a new standard.

      And at least people have a choice – and only because HyperCloud is a regular RDIMM in effect. If it was a separate standard you would only have LRDIMMs to choose from.

      In any case, examining NLST is interesting because of all the different threads in play, which all point in one direction. Performance, price and IP advantages – plus it is RDIMM compatible.

      An examination of NLST is important beyond the investment argument – as it illustrates how a company leveraging its unique IP is able to survive the sharks and emerge (but after much effort).

      If an examination of the IP issues in Oracle vs. Google is important, then so is an examination of these issues in the memory industry.

      However, you will not find much information on these issues – because everybody, NLST included, treads lightly – it is all related to future partnerships – so everything is done on the quiet.

      Result – lack of information. You would think these types of issues would be covered more often.

      • quote:
        Need to see the server market breakdown for Q2. This report will give more answers.

        Do you think these may give a breakdown of number of servers requiring greater than 256GB ?

        That would be great because I have not seen such breakdowns before.

      • I had seen this:

        “Starting Q3 2011, IDC began to track the new form-factor called hyper-scale servers.
        In response to evolving market demand within large scale, web 2.0, hosting, and HPC environments, new server designs have been developed for these markets. Hyper-scale servers are designed for large scale datacenter environments where parallelized workloads are prevalent. The form-factor serves the unique needs of these datacenters with streamlined system designs that focus on performance, energy efficiency, and density. Hyper-scale servers forego the full management features and redundant hardware components found in traditional enterprise servers as these capabilities are accomplished primarily through software.
        Hyper-scale server demand grew 8.7% year over year in Q3 2011 to $428.5 million as unit shipments increased 4.3% to 118 888 servers. Hyper-scale servers now represent 3.4% of all server revenue and 5.7% of all server shipments. 73.6% of all hyper-scale server revenue was generated in the U.S. in the quarter.”

        http://www.xbitlabs.com/news/other/display/20111202022603_IBM_and_HP_Remain_Top_Server_Vendors_amid_Stabilizing_Market_Growth.html

        eventually, they are bound to break out the servers with >256GB. That’s clearly a category on its own.

      • Excellent.

        Netlist needs about 2222 of these servers running 384GB using 16GB HyperCloud per quarter to double revenue beyond the current breakeven level of $16M per quarter.

        16GB HyperCloud sells for $550 – if it gets $300 to Netlist, they need 53000 sold per quarter to double revenue. At 24 DIMMs per server that is 2222 servers.
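Spelling out the arithmetic in that reply (a quick check of the figures above – the $300 net per module is the assumption stated there):

```python
# The arithmetic behind the 53,000 modules / 2,222 servers per quarter figures.
incremental_revenue_target = 16_000_000  # doubling the ~$16M/quarter run rate
net_revenue_per_module = 300             # assumed Netlist take on a $550 16GB HyperCloud
dimms_per_server = 384 // 16             # a 384GB server populated with 16GB modules = 24

modules_needed = incremental_revenue_target / net_revenue_per_module  # ~53,333
servers_needed = modules_needed / dimms_per_server                    # ~2,222

print(f"{modules_needed:,.0f} modules -> {servers_needed:,.0f} servers per quarter")
```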

  2. Pingback: Examining LRDIMMs | ddr3memory

  3. Pingback: Memory for VMware virtualization servers | ddr3memory

  4. Pingback: Is Montage another MetaRAM ? | ddr3memory

  5. Pingback: Infographic – memory buying guide for Romley 2-socket servers | ddr3memory

  6. Pingback: A second-source for HyperCloud ? | ddr3memory

  7. Pingback: HyperCloud to own the 32GB market ? | ddr3memory

  8. Pingback: Inphi to report July 25 | ddr3memory
