Is Montage another MetaRAM ?

Intel Capital invests again

UPDATE: 07/06/2012 – non-viability of LRDIMMs
UPDATE: 07/06/2012 – now a Montage 1333MHz version like Inphi

One memory module maker is departing from usual practice (a memory module maker usually picks ONE of the buffer chipset suppliers in order to minimize SKUs) by producing LRDIMMs based on BOTH an Inphi LRDIMM buffer chipset AND one from Montage.

This might be indicative of a perception at the memory module makers of higher risk associated with LRDIMMs/Inphi.

The Montage-based LRDIMMs however seem to underperform even the Inphi-based LRDIMMs.

The paucity of players in the LRDIMM buffer chipset space has been examined previously:

https://ddr3memory.wordpress.com/2012/06/15/why-are-lrdimms-single-sourced-by-inphi/
Why are LRDIMMs single-sourced by Inphi ?
June 15, 2012

Inphi is effectively the only player, with IDTI having skipped Romley and Texas Instruments not being interested in this space.

Now it seems we have Hynix using not just Inphi for supply of LRDIMM buffer chipsets, but also a company called Montage, a China-based company with executives from IDTI.

Montage as another MetaRAM

Montage is being supported by Intel Capital:

http://www.montage-tech.com/investor.html

For a history of MetaRAM and the Inphi/Intel links to MetaRAM:

https://ddr3memory.wordpress.com/2012/05/30/legal-issues-with-lrdimms-repeating-metaram-2/
LRDIMMs similarities with MetaRAM
May 30, 2012

Montage seems to have a number of employees from IDTI:

http://www.montage-tech.com/managementteam.html

A webpage describing Montage’s LRDIMM buffer chipset:

http://www.montage-tech.com/Product_MB6000.htm
M88MB6000 – Memory Buffer For DDR3 LRDIMM

Montage seems to be a new entrant in the LRDIMM buffer chipset space – their LRDIMM buffer chipset is being used by Hynix:

http://www.intel.com/content/www/us/en/platform-memory/ddr3-lrdimm-e5-family-memory-list.html
LRDIMM System-Level Validation Results

Sk Hynix HMT42GL7BMR4A-H9 16GB 9 C Sk Hynix H5TC4G43BMR-H9A 2Gb x4 1140 Inphi GS02A 1
Sk Hynix HMT42GL7BMR4A-H9MBAC 16GB 9 C Sk Hynix H5TC4G43BMR-H9A 2Gb x4 1207 Montage C0 1
Sk Hynix HMT84GL7MMR4A-H9MBAC 32GB 9 C Sk Hynix H5TC8G43MMR-H9A 4Gb x4 1207 Montage C0 1
1 – DDP (Dual Die Package)

In fact, Hynix is covering its bases by offering BOTH an Inphi-based and a Montage-based version. This is highly unusual, since memory module makers generally tend to pick ONE of the vendors and go with that (to reduce SKU i.e. stock-keeping unit proliferation) – as suggested by Inphi:

http://www.veracast.com/stifel/tech2011/main/player.cfm?eventName=2133_inphic
Stifel Nicolaus
Technology, Communications & Internet Conference 2011
Inphi Corporation
2/10/2011; 4:25 PM
Mr. John Edmunds
Chief Financial Officer

DISCLAIMER: please refer to the original conference call or transcript – only use the following as guidance to find the relevant section

at 24 minute mark ..

(comments on the competitive landscape)

in servers .. because of the qualification cycles. . there really are some incumbent competitors .. like IDT and TXN (Texas Instruments) .. and so the 3 of us tend to split the market.

IDT and Inphi would probably share 80% of the market. TXN would be somewhere in 10-15% range.

TI is not developing an LRDIMM to our knowledge and .. uh .. their interest level seems to wax and wane at times.

We go head to head with IDT – we respect them as competitors and we think market is going to want multiple suppliers.

It’s not a market that somebody from outside can come into easily just because of the long qualification cycles and the fact these are getting deployed across a wide range of SKUs (stock keeping units ?) .. the OEMs that (unintelligible) the memory module makers don’t want to qualify multiple suppliers because they have to deploy them across a wide set of SKUs ..

Montage buffer chipset vs. Inphi

Some observations on the Intel pdf above:

– all the 16GB and 32GB LRDIMMs are still using DDP memory packages (2Gbit x 2 DDP for 16GB and 4Gbit x 2 DDP for 32GB)

– most of the 16GB LRDIMMs and 32GB LRDIMMs are rated at 3 DPC at 1066MHz

– the few 1333MHz LRDIMMs that ARE mentioned are rated at 2 DPC at 1333MHz (and not 3 DPC at 1333MHz)

These results are in line with the info presented here on the speeds that OEMs have reported for LRDIMMs. These results also confirm that the LRDIMMs are unable to deliver 3 DPC at 1333MHz.

The Samsung, Micron, Crucial and Elpida LRDIMMs are ALL based on Inphi LRDIMM buffer chipsets – no sign of IDTI or Texas Instruments – as explained previously:

https://ddr3memory.wordpress.com/2012/06/15/why-are-lrdimms-single-sourced-by-inphi/
Why are LRDIMMs single-sourced by Inphi ?
June 15, 2012

The Montage-based LRDIMMs are all rated at 3 DPC at 1066MHz.

Montage is not being used for the 2 DPC at 1333MHz LRDIMMs.

Since 16GB LRDIMMs are non-viable, and the 32GB LRDIMMs suffer because of:

– inability to deliver 3 DPC at 1333MHz

– high latency issues (due to asymmetrical lines and centralized buffer chipset)

The Montage-based LRDIMMs add little to that capability – they are only rated at 3 DPC at 1066MHz, and do not make an appearance in the 2 DPC at 1333MHz list at all, which is worse performance than usual even for LRDIMMs.

UPDATE: Montage now has a 1333MHz version like Inphi

For more information about the non-viability of 16GB LRDIMMs:

https://ddr3memory.wordpress.com/2012/06/19/why-are-16gb-lrdimms-non-viable/
Why are 16GB LRDIMMs non-viable ?
June 19, 2012

UPDATE: 07/06/2012: non-viability of LRDIMMs

On the non-viability of LRDIMMs in general:

https://ddr3memory.wordpress.com/2012/07/05/examining-lrdimms/
Examining LRDIMMs
July 5, 2012

For more information about the non-viability of 32GB RDIMMs (4-rank):

https://ddr3memory.wordpress.com/2012/06/20/non-viability-of-32gb-rdimms/
Non-viability of 32GB RDIMMs
June 20, 2012

Conclusion

Montage supplied LRDIMM buffer chipsets seem to underperform even the Inphi LRDIMM buffer chipsets.

Yet Hynix has chosen to qualify with multiple buffer chipset suppliers – both Montage and Inphi.

Usually memory module players choose one supplier and stick with it – as it minimizes the SKUs they need to manage.

Hynix's decision to qualify using two sources might be indicative of a perception of higher risk associated with LRDIMMs.

Picking a China-based company may ensure supply of LRDIMMs – at least for the Chinese market (?) – in case Inphi is unable to supply LRDIMM buffer chipsets in the future.

On the risk factors for LRDIMM:

https://ddr3memory.wordpress.com/2012/06/05/lrdimms-future-and-end-user-risk-factors/
LRDIMMs future and end-user risk factors
June 5, 2012

https://ddr3memory.wordpress.com/2012/06/15/why-are-lrdimms-single-sourced-by-inphi/
Why are LRDIMMs single-sourced by Inphi ?
June 15, 2012

UPDATE: 07/06/2012 – now a Montage 1333MHz version like Inphi

Thanks to Carlos Bustamante (see comments below) for pointing out that Montage now has a 1333MHz version listed just like the Inphi for Hynix.

http://www.intel.com/content/dam/www/public/us/en/documents/platform-memory/ddr3-lrdimm-e5-family-memory-list.pdf
LRDIMM System-Level Validation Results

Sk Hynix HMT42GL7CMR4A-H9 16GB 9 C Sk Hynix H5TC4G43CMR-H9A 2Gb x4 1205 Inphi GS02A 1
Sk Hynix HMT42GL7CMR4A-H9MBAC 16GB 9 C Sk Hynix H5TC4G43CMR-H9A 2Gb x4 1207 Montage GS02A 1
Sk Hynix HMT42GL7MMR4A-H9MBAC 32GB 9 C Sk Hynix H5TC4G43MMR-H9A 2Gb x4 1207 Montage C0 1
1 – DDP (Dual Die Package)


31 responses to “Is Montage another MetaRAM ?”

  1. Pingback: Why are LRDIMMs single-sourced by Inphi ? | ddr3memory

  2. thank you for the excellent information.
    I have one question:
    What if the normal user does not put the memory speed at the top of the priority list?

    as in, 28% faster here or there sounds really cool to read but how would a normal user trying to order from IBM or HP figure out that it is worth paying attention to the details and that eventually that 28% is going to make a difference in his application?
    I look on the IBM and HP website and you see rDIMM and LRDIMM all over the place with all these configurations for CPU models and speeds. What is a normal user to do?

    In the case of this post, sure, build it in china, lower speed but so what.
    24 x much lower cost would add up to a lot of savings over 1000 machines to install.

    What is the factor that is going to make people regret not having installed HCDIMMs from the start? Inquiry minds want to know.

    thanks again for all the information, much appreciated.

    • You are right that an average user will not know the difference between the LRDIMM and the HyperCloud (let alone the legal issues and risks associated with LRDIMM).

      However someone buying 384GB on a 2-socket server (populating 3 DPC using 16GB memory modules) SHOULD know that – they will be spending $12,000 (24 DIMMs x $500) or so for memory alone there – a cost which DWARFS the cost of the server itself.

      In fact it could be argued that the choice of memory becomes MORE important when you are loading memory at those levels on a server.
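      The back-of-envelope arithmetic above can be sketched as follows (illustrative figures only – the 24-DIMM count and the $500-per-16GB-module price are the assumptions used in this thread, not quotes):

```python
# Illustrative cost comparison for a fully loaded 2-socket Romley server,
# using the assumed figures from the discussion above (not actual quotes).
dimms_per_socket = 12        # 3 DPC x 4 memory channels per socket
sockets = 2
price_per_16gb_dimm = 500    # USD, assumed ballpark price

dimm_count = dimms_per_socket * sockets            # 24 DIMMs
memory_cost = dimm_count * price_per_16gb_dimm     # $12,000 for memory alone
total_memory_gb = dimm_count * 16                  # 384GB on the server

print(dimm_count, total_memory_gb, memory_cost)    # 24 384 12000
```

      At those numbers the memory spend alone dwarfs the cost of a typical 2-socket server box – which is the point being made above.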

      If the reasons given on this blog seem clearer it is because there are really one or two big issues at the core of the problem:

      – greater memory load on the bus reducing available bandwidth
      – rank limitations

      And “load reduction” and “rank multiplication” are both Netlist IP !!

      So no wonder everybody tiptoes around the main issues – leading to explanations which make no sense or seem ad hoc at best.

      LRDIMM has the issue that you require a BIOS update for the motherboard (pre-Romley), and for Romley you need to do the homework at the OEM to make sure the motherboard is LRDIMM-compliant.

      HyperCloud does not have this problem – it does not require a BIOS update.

      However you will not find this info in a discussion about LRDIMMs by the OEMs.

      Why ? Because it does not help sell product. Or sell the product that they have. It will only lead people to ask questions, then the next set of questions, which will eventually lead to “then why are you selling LRDIMMs”.

      IBM and HP seem to have recognized this and their documentation is very clear (although I agree that documentation has many versions and in the case of HP I have seen typos as well – so this also doesn’t help).

      But the right documents from IBM and HP are very clear about where you would use HyperCloud.

      For example, IBM docs clearly delineate that the new thing on offer is HyperCloud which speeds up if you want 3 DPC. Which is EXACTLY the market for HyperCloud at 16GB.

      Same for HP – which is going further and constraining the sale of HyperCloud to a FIO (factory installed option) at 3 DPC on the DL360p and DL380p.

      When 32GB HCDIMM becomes qualified (estimated mid-2012 by Netlist), I think these documents will become clearer still.

      As at 32GB the discussion of LRDIMM/HyperCloud will dominate how 32GB memory is used (since 3 DPC and 2 DPC both require load reduction – and maybe even 1 DPC, if IBM docs are to be believed).

      32GB LRDIMMs will also be more expensive than 32GB HyperCloud – because LRDIMMs will use 4Gbit x 2 DDP and HyperCloud will use the 4Gbit monolithic (using Planar-X IP).

      In fact both LRDIMMs and RDIMMs will use 4Gbit x 2 DDP – so it is possible that the 32GB HCDIMM may be CHEAPER than both LRDIMMs/RDIMMs.

      quote:
      —-
      as in, 28% faster here or there sounds really cool to read but how would a normal user trying to order from IBM or HP figure out that it is worth paying attention to the details and that eventually that 28% is going to make a difference in his application?
      —-

      quote:
      —-
      In the case of this post, sure, build it in china, lower speed but so what.
      24 x much lower cost would add up to a lot of savings over 1000 machines to install.
      —-

      The IBM HCDIMMs and LRDIMMs are priced similarly – and similar for HP.

      So I think IBM/HP are quite clear in their neutrality on the subject – now the user should decide – and only someone who has not bothered to research what they are buying will pick the LRDIMMs over the HCDIMM.

      But you are right that there are many versions of documents for the server – and only the most recent may have info on the HyperCloud.

      I think the OEMs are also going against a tide of conventional wisdom (Intel support of LRDIMMs). And no one wants to be the one who points out the emperor has no clothes (i.e. problems with LRDIMMs – from execution of product, to IP issues). So every OEM had to make sure they were seen as supporting LRDIMMs. Yet in that environment IBM/HP bucked the trend and offered HyperCloud as well – and they will benefit from that.

      The problem with LRDIMMs is that even if the licensing issue is “cured” by getting licensing from Netlist eventually, that does not cure the performance issues with LRDIMMs for those who will have already bought LRDIMMs.

      quote:
      —-
      What is the factor that is going to make people regret not having installed HCDIMMs from the start? Inquiry minds want to know.
      —-

      The factor that is going to make them regret is if what they don’t know now becomes common knowledge later – and then they will look like fools for not knowing that retroactively.

      The other factor is if the court issues an injunction against Inphi – that is quickly going to get people’s attention.

      However realistically I think they will probably license NLST IP well before that.

      However that may cure the licensing issues for LRDIMM, but it won’t cure the performance issues with LRDIMMs.

    • quote:
      —-
      The IBM HCDIMMs and LRDIMMs are priced similarly – and similar for HP.
      —-

      The 16GB versions are priced similarly.

      The 32GB LRDIMM is $4000+ i.e. very expensive.

      A 32GB HCDIMM is not listed yet but due mid-2012 (according to NLST).

      Both the 32GB LRDIMMs and RDIMMs are based on 4Gbit x 2 DDP memory packages – which should be more expensive than the 32GB HCDIMMs (use 4Gbit monolithic memory packages and leverage NLST Planar-X IP).

  3. Thanks for taking the time for the reply.

    Hence, you predict that, given no other unforeseen technological invention in the DDR space, that by the end of the year or early next year most servers configured with >256GB of memory will be using Hypercloud DIMMs. right?

    Since HP and IBM own over 65% of the server market then that will mean Hypercloud will dominate the DDR3 market until a new technology comes along. (unless DDR4 uses IP from Netlist)

    • quote:
      —-
      Hence, you predict that, given no other unforeseen technological invention in the DDR space, that by the end of the year or early next year most servers configured with >256GB of memory will be using Hypercloud DIMMs. right?
      —-

      Given the constraints I have outlined (IP, execution of product), that is the outcome. And that is what NLST has hinted at – i.e. dominate memory for the next decade (i.e. DDR3 to DDR4).

      NLST says HyperCloud is already a memory that transitions smoothly to DDR4 – as opposed to LRDIMMs, which NLST has called “end-of-life”.

      However, practically speaking – for the servers that HyperCloud is available on now – HP DL360p and DL380p and the IBM x3650 M4 – that is probably true. Anyone buying memory on those servers will have to know what the difference is between HyperCloud and LRDIMMs.

      One has to understand the constraints operating in this space however – and once you understand that the path forward becomes clear.

      There are certain IP constraints – you can ignore that as slow moving.
      There are certain memory package availability constraints – you cannot ignore those (32GB will need a load reduction solution at 3 DPC and 2 DPC or more).
      There are certain performance constraints – LRDIMMs have asymmetrical lines and centralized buffer chipset, cannot run at 1333MHz at 3 DPC and have high latency issues on top of that.

      The last of these was predicted by NLST for some years, but only in early 2012 – when LRDIMMs actually started to see the light of day – did it become apparent that NLST was correct in their analysis of the competition.

      And it was immediately apparent after that – given that LRDIMM was a JEDEC standard, that redesign of memory modules is very tricky, that the OEM qualification window is nearly a year, and that other LRDIMM makers had been deemphasizing LRDIMMs – that there was not going to be any change to the LRDIMM situation from what we were seeing in early 2012.

      And that essentially locked in the trajectory of what LRDIMM would or COULD do for the rest of 2012.

    • quote:
      —-
      And that essentially locked in the trajectory of what LRDIMM would or COULD do for the rest of 2012.
      —-

      Once the trajectory for LRDIMMs is locked in – you can see that the EARLIEST they could do something to remedy (which is hard to see given the JEDEC standard and all that stuff) would be maybe middle of 2013 (or later). By the way, memory module redesign is NOTORIOUS for problems and it is very hard to redesign and be sure it will work well (so add that on). Plus add on the time to make a prototype and have it fail at the OEMs and the time to delivery just gets longer and longer.

      Now that is awfully close to DDR4 – and it would make more sense for folks to instead plan for DDR4.

      So what this means is that realistically speaking the LRDIMM space becomes sort of a dead-space (and NLST has thus correctly labelled it an “end-of-life” product) – as it can’t improve prior to DDR4, and it cannot transition well to DDR4 either (architecturally dissimilar to DDR4).

      I therefore find it amusing (i.e. are they misleading people or maybe the writers do not know ?) when I see an article which talks about LRDIMMs and how they are transitioning to DDR4 – when they have nothing in common with DDR4 architecturally.

      The only thing they have in common is the load reduction and rank multiplication (NLST IP).

      And the other thing that is special about DDR4 is ALSO taken from NLST HyperCloud (symmetrical lines and decentralized buffer chipset).

    • However, when NLST talks about taking over the memory industry for a decade I don’t think they mean actually supplying all the memory for the industry.

      They have talked about a licensing model (resale of NLST buffer chipsets via third party – namely Diablo Technology as one such supplier).

      However, even though NLST has talked about such a licensing strategy, I cannot see how it will help LRDIMM buyers.

      I can see NLST licensing for DDR4.

      But I do not see how NLST licensing will help LRDIMMs – it may make them kosher for use, but it will not cure their architectural problems.

      So I do not see why an end-user would choose to buy an LRDIMM vs. an HCDIMM.

      Unless the HCDIMM has not been qualified on that server, or is not available in the voltage they need (low voltage may not have been qualified on that server, for instance).

      In such cases, it is possible that LRDIMMs may be bought.

      However NLST has said they have targeted the 3 largest server lines – and that targeting of the HP DL360p and DL380p and IBM x3650 M4 may have been specifically for data center/virtualization type users who will be buying 3 DPC – thus that targeting makes sense.

      However at 32GB that market will expand from 3 DPC to 2 DPC and will be more mainstream.

      For this reason, 32GB HCDIMM will eventually have to be qualified on many more servers than just a couple of 3 DPC related ones.

    • quote:
      —-
      For this reason, 32GB HCDIMM will eventually have to be qualified on many more servers than just a couple of 3 DPC related ones.
      —-

      Or as NLST says in their recent CC:

      http://seekingalpha.com/article/592411-netlist-s-ceo-discusses-q1-2012-results-earnings-call-transcript
      Netlist’s CEO Discusses Q1 2012 Results – Earnings Call Transcript
      May 15, 2012

      quote:
      —-
      Once adopted by these kinds of end-market users and ultimately deployed in a variety of memory-hungry applications, we believe that fully populated servers with HCDIMMs will become the standard high-end memory configuration for the industry and become replicated in data centers around the world.

      We’re working to expand our qualification footprint at both IBM and HP as we increase the number of server platforms available to ship with 16-gigabyte HCDIMMs. We’re also working together with our OEM partners to qualify the next level of density 32-gigabyte HCDIMMs in Romley-based servers. We expect this testing to be completed over the next few months.
      —-

      OK, seeing that, it seems they want to expand their 16GB HCDIMMs to other servers also.

  4. I need time to digest all this information.

    What jumps to mind though is the Betamax vs VHS battle.
    In that one, the lesson was that the best technology does not necessarily always win.

    it is very interesting though the fact that you bring out that the cost of the memory is far more than the cost of the server itself!

    thanks again for all the detailed explanations.

    • Yes, the situation with Betamax vs. VHS is quite appropriate to bring up (the inferior technology wins given correct market conditions).

      The problem for LRDIMMs however is: what to do if the 16GB LRDIMMs do not outperform the 16GB RDIMMs ?

      That has nothing to do with HyperCloud. Obviously they can’t say buy the LRDIMMs over the RDIMMs.

      So what happens – stop mentioning 16GB LRDIMMs altogether. For this reason you will not find any info on the choice between 16GB LRDIMMs vs. 16GB RDIMMs.

      And the IBM/HP user guides do not mention existence of a 16GB LRDIMM.

      So at 16GB what happens – IBM and HP only mention 16GB HyperCloud – by default.

      Now what happens at 32GB ?

      At 32GB you will have the choice of 32GB RDIMM, 32GB LRDIMM and 32GB HCDIMM.

      The 32GB RDIMM will evidently underperform (4-rank slowdown at 3 DPC and 2 DPC). So they will immediately be eliminated by the 32GB LRDIMMs.

      The competition now becomes between 32GB LRDIMMs and 32GB HCDIMMs.

      The memory charts will show the speed differences at 3 DPC – 32GB LRDIMMs performing at 1333MHz at 3 DPC. However they will still remain better than the RDIMM at 2 DPC, since they should be able to do 1333MHz at 2 DPC.

      32GB LRDIMMs will be based on 4Gbit x 2 DDP. 32GB HyperCloud on 4Gbit monolithic. So LRDIMMs may be more expensive.

      If the speed is better and the cost is better, a certain portion of buyers will notice.

      In Betamax vs. VHS there was an issue of a “standard” and a critical mass. That is, it is a flip-flop type situation.

      This was exactly the concern with HyperCloud as well – however I ruled out this issue early on.

      The reason was that when a “proprietary” technology appears there usually is a problem – because you want it to work with other stuff out there.

      You need the cooperation of other players so that your product can work with their motherboards.

      The real problem starts when you need the motherboard makers to modify their motherboards to support your product.

      Then you are no longer independent – you have to do a “deal” with them.

      You concede something, they concede something.

      This is exactly the situation with LRDIMMs – thus the need for “standardization”. LRDIMMs could NOT have been pushed out without cooperation from the motherboard makers. The reason is that the BIOS needs to be modified for every motherboard to support LRDIMMs. Each requires testing and effort.

      An independent LRDIMM producer could not have sold LRDIMMs to the motherboard makers without conceding a lot.

      The situation with HyperCloud is none of the above – HyperCloud is plug and play, requires no BIOS modifications. In fact it behaves like a regular RDIMM.

      Do RDIMM makers require tweaking of motherboards ?

      This is a crucial and strategically significant bit of difference.

      For this reason, HyperCloud does not require ANY cooperation from anybody else. It could be sold direct by NLST as a “better RDIMM”.

      In fact Cirrascale called it exactly that:

      quote:
      —-
      http://www.cirrascale.com/press/PR111511.asp
      HyperCloud Achieves Server Memory Speed Breakthrough at SC11
      Demonstration Highlights HyperCloud’s Advantages over commodity RDIMM, LRDIMM
      SAN JOSE, CA—November 15, 2011

      The successful demonstration also highlights the fundamental differences between HyperCloud and industry’s commodity offering, LRDIMM (load reduced dual inline memory module). Unlike LRDIMM’s monolithic signal architecture, HyperCloud’s distributed signal architecture improves performance by eliminating data path delays and system-level latency. Also, while LRDIMM requires a special BIOS configuration, HyperCloud provides seamless plug-and-play operation with past, current and future generations of Intel processors.
      —-

      In fact, HyperCloud is touted as being interoperable with standard RDIMMs. This is a feature LRDIMM does not have.

      quote:
      —-
      it is very interesting though the fact that you bring out that the cost of the memory is far more than the cost of the server itself!
      —-

      Yes, and you can understand why the OEMs would not care to qualify a memory which only reduces the need for server boxes (for virtualization for instance).

      This esp. when they are getting no cut from the memory sale.

      For this reason you can see that IBM and HP have developed a “branded” HyperCloud – this way they get a cut of the significant revenue off the 3 DPC (i.e. fully loaded server) sales.

      And it is probably the “quid pro quo” that allows HyperCloud access to the IBM/HP sales channel, and IBM/HP a cut in return.

      Otherwise, you could buy HyperCloud direct from NLST.

      In fact IBM/HP DO support independent memory – but without the error logging etc. features as on the “HP Smart Memory”.

      For this reason NLST has Intel to thank – by pushing through LRDIMMs at the OEMs, Intel created a sales channel for NLST to enter. So it was Intel and it was Romley being faster (so greater memory per server needed), and it was probably the impending 32GB space where a load reduction and rank multiplication solution was needed else 3 DPC and 2 DPC would experience slowdown.

      Because of these factors, the OEM had to offer LRDIMMs.

      Once offering LRDIMMs, why not offer HyperCloud.

      So LRDIMMs may have helped HyperCloud in a way perhaps.

    • quote:
      —-
      The memory charts will show the speed differences at 3 DPC – 32GB LRDIMMs performing at 1333MHz at 3 DPC. However they will still remain better than the RDIMM at 2 DPC, since they should be able to do 1333MHz at 2 DPC.
      —-

      Sorry, that should be – the IBM/HP user guides (for the servers HCDIMM is available on) show the 32GB LRDIMMs performing at 1066MHz at 3 DPC.

      Regarding Betamax vs. VHS again.

      The crucial difference is that in Betamax vs. VHS – if the competitor gained market share, you would not be able to sell your product as no one wants it.

      With HyperCloud if LRDIMM gains market share it matters not a whit – as they cannot exclude HyperCloud, since HyperCloud is leveraging the (far bigger) “regular RDIMM” standard.

      Since HyperCloud behaves like regular RDIMM.

      Thus there is no “exclusionary” situation.

      LRDIMM absence from the 16GB space is also significant.

      It will allow HyperCloud to gain mindshare as it will be the only load reduction and rank multiplication solution which is viable (as attested by IBM/HP).

      By the time 32GB RDIMMs become higher volume, NLST will not be an unknown player.

      When DDR4 rolls around, NLST will have had a good 1-2 years of sales. And this will be plenty of time for the IP situation to have matured – patent reexams to have asserted their effect on the court cases.

      However, with DDR4 I suspect this time they may try to license NLST IP before the standardization is complete (i.e. not repeat the LRDIMM situation – and perhaps get some protection for LRDIMM in the process).

  5. I agree with your arguments for the market and the technology.

    “Markets can remain irrational longer than you can remain solvent.”
    I believe that this comment applies in this case.

    The question now is whether Netlist will be able to keep its sales and revenue going until such a time as the sales of Hypercloud finally switch on in a massive fashion.

    • I wonder if you have any insight into this question.

      What is the market for 3 DPC use at the 16GB level – i.e. 24 DIMMs in a 2-socket server using 16GB memory modules.

      How many of these do you think IBM/HP may need to sell for the HP DL360p and DL380p and IBM x3650 M4 servers combined ?

      These seem to be data center/virtualization type servers.

      NLST raised nearly $10M (by selling shares, without increasing debt) in order to fund the “filling of the pipeline” for IBM/HP distribution centers.

      Now $10M is like 20,000 memory modules (at $500 a 16GB HCDIMM).

      How long to consume this many memory modules on those IBM/HP servers ?

    • quote:
      —-
      Now $10M is like 20,000 memory modules (at $500 a 16GB HCDIMM).
      —-

      Since these are used at 3 DPC i.e. 24 DIMMs on a 2-socket server.

      This is 20,000/24 DIMMs = 833 servers.

      Is 833 of these servers a lot or what ?

      Remember NLST revenue per quarter is about $16M and is near breakeven.

      This revenue is derived from non-HyperCloud products (like NVvault).
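      The pipeline-fill arithmetic in this reply, as a quick sketch (the $500 ASP and the 24-DIMMs-per-server loading are the assumptions used in this thread):

```python
# Sketch of the pipeline-fill arithmetic discussed above
# (assumed ASP of $500 per 16GB HCDIMM, 3 DPC on a 2-socket server).
raised_usd = 10_000_000      # approximate amount NLST raised
asp_per_module = 500         # USD per 16GB HCDIMM, assumed
dimms_per_server = 24        # 3 DPC x 4 channels x 2 sockets

modules = raised_usd // asp_per_module     # ~20,000 memory modules
servers = modules // dimms_per_server      # ~833 fully loaded servers

print(modules, servers)                    # 20000 833
```

      So the $10M raise covers roughly 833 fully loaded servers’ worth of modules – small against a market measured in millions of units per year.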

  6. I had seen a market study that mentioned sales of 2M servers per quarter in Q3 2011. I will try to find it again.

    I had not seen a breakdown for server sales by memory loading yet.
    My guess is that the number of servers sold will continue to grow for the next few years and the number of servers with >256GB of memory will increase as a % of the total.

    I am not clear on how to guess for how many of those servers shipped will have 24 DIMMs.

    20 000 DIMMs will fit only 833 servers, @ 24DIMMs per server.

    • In the article “Market opportunity for load reduction”, there is an NLST comment about their conservative “attach rate”.

      https://ddr3memory.wordpress.com/2012/06/06/market-opportunity-for-load-reduction/
      Market opportunity for load reduction
      June 6, 2012

      They don’t agree with the high estimates from Inphi etc., but say “certainly in the millions of units .. uh .. come next year”.

      And that was said in late 2011 – adding in the few months of delay in Romley, and we are talking millions of units (run rate) in late 2012 or early 2013.

      If it is millions of units later, can it be 20,000 units now ? Per quarter ?

      And this is just a filling of the IBM/HP pipeline – presumably the whole pipeline maybe some days worth of sales (is it a whole quarter’s worth ?) – initially more maybe since this is a new product.

      I wonder what is a typical pipeline fill worth – about a quarter’s worth of product or some days – 10 days or whatever ?

  7. I prefer not to guess since this is where people choose the numbers that make them feel good.

  8. Pingback: Memory buying guide – when to use RDIMMs ? | ddr3memory

  9. Pingback: Financial institutions retreat from Romley LRDIMM story | ddr3memory

  10. Pingback: Examining LRDIMMs | ddr3memory

  11. Pingback: Examining Netlist | ddr3memory

  12. carlos bustamante

    ddr3
    please check intel qualification report at intel.com/technology/memory. Montage is fully qualified including 1333. Please update your blog.

    • The direct link is:

      http://www.intel.com/content/dam/www/public/us/en/documents/platform-memory/ddr3-lrdimm-e5-family-memory-list.pdf

      It now shows Montage in the 1333MHz section (along with the Inphi).

      Do you have any idea about how these companies intend to deal with the licensing to get legal cover for LRDIMMs ?

      Thanks.

      • carlos bustamante

        Don’t know but am hearing that IDT is also getting into the market. Some $$ definitely going to change hands…..

      • Yes, IDTI has said they are going to skip Romley and target the 1600MHz (I assume they mean at 3 DPC) for Ivy Bridge late 2012.

        They had said they will try to catch the latter part of Romley if they can – which may be hard to do since Inphi will have developed all the relationships.

        Inphi has said they have a 9 months head start over their LRDIMM competitors – i.e. IDTI etc.

      • Do you see the licensing issue as a palpable concern at the LRDIMM makers ?

        If you are talking about IDTI and entry with $$ – then you may be talking about an IDTI licensing or second-sourcing deal with NLST – for LRDIMMs or DDR4. If such a deal emerges, it would make perfect sense for IDTI to have delayed something as important as a Romley launch (which establishes early relationships with OEMs) in order to secure a legitimate product for Ivy Bridge at least.

        While I had suspected that the IDTI delay was suspicious and may be related to their prudence, their mention of an Ivy Bridge target did provoke questions about whether that might include Netlist IP – or be essentially a second-source of Netlist HyperCloud – by using the Diablo/Toshiba resellers of Netlist buffer chipsets. However, it was also possible that IDTI was just pushing back expectations for LRDIMMs as it had been doing in the last few conference calls – and the targeting of Ivy Bridge may suffer a similar delay.

        My expectation of a Netlist licensed product was for DDR4 – as a LRDIMM maker doing deals now would only get product out from OEMs in mid-2013 or later.

        However, if IDTI does have a licensing deal – then their targeting of Ivy Bridge would make perfect sense – and would justify skipping Romley. The products they produce may also go beyond LRDIMMs – a true second-sourcer can produce an RDIMM-compatible HyperCloud (like Netlist) – so why bother with LRDIMMs.

        This would match Netlist’s comments about LRDIMMs as an “end-of-life” product.

        And it would make sense in terms of timing also – as IDTI may have had a head start from many months back – an advantage Inphi does not have (having busied itself in a fruitless challenge of Netlist IP).

        Inphi has said they have a 9 month advantage over other LRDIMM competitors (IDTI).

        If IDTI pulls this trump card they would have a legal, licensed LRDIMM (or better, a second-sourced RDIMM-compatible HyperCloud) available for Ivy Bridge – and that would place IDTI a year or more ahead of Inphi.

        There have been murmurs about acquisition or accumulation of Netlist shares on the stock boards as well – because the NLST price action is suggestive of that (low-volume taking down of the stock price).

      • But IDTI would not be the only beneficiary of such a deal – as Netlist also needs a second-source in the long run.

        But in this industry “the long run” means “now” (because of the 1-year lead time).

        They have been asked about this in a conference call – and Netlist has said that they don’t need it now, but that with volumes they will have to think about it.

        Now, volumes for the 32GB load reduction and rank multiplication product will start to increase in late 2012 and early 2013 and ramp thereafter.

        Netlist will not have the time to establish a second-source then.

        Obviously some second-source will have to be arranged now.

        And IDTI’s sudden delay and targeting of a post-Romley Ivy Bridge in late 2012 fits the bill.

        If IDTI has had this under wraps for a while – i.e. they already have stable product working – this would allow IDTI to participate in the 32GB market, which will only start ramping by late 2012 – so they will still have time to establish themselves, esp. if Inphi is not able to compete in this space.

        Now if they were to market it as an RDIMM-compatible load reduction and rank multiplication product – i.e. a “better RDIMM” (HyperCloud) – that would solve another of the industry’s problems – LRDIMMs.

        This means that by mid-2013, many people will want to forget they ever heard of such a thing as an LRDIMM – as they will be using an RDIMM-compatible load-reduced product (HyperCloud) and be awaiting DDR4 in 2014.

  13. Pingback: Inphi reports Q2 2012 results | ddr3memory
