Non-viability of 32GB RDIMMs

The primacy of load reduction and rank multiplication for 32GB memory modules

UPDATE: added 07/06/2012: Netlist on 8Gbit DRAM die non-availability
UPDATE: added 07/06/2012: Inphi on 8Gbit DRAM die non-availability

32GB RDIMMs are all 4-rank (quad-rank).

A 4-rank module places a high memory load on the memory bus.

While we have seen that 16GB RDIMMs (2-rank) experience a slowdown at 3 DPC, for 32GB RDIMMs (4-rank):

– 4-rank modules will not work at 3 DPC (need “rank multiplication” technology)
.
– 4-rank modules experience a slowdown at 2 DPC (need “load reduction” technology) – IBM docs suggest a slowdown even at 1 DPC

32GB RDIMM dual-rank requires 8Gbit DRAM die

– 16GB 2-rank RDIMMs are made using 4Gbit DRAM die
– 32GB 2-rank RDIMMs would require 8Gbit DRAM die
.
– 8Gbit DRAM die will not be available for a few years, if ever
.
– 32GB RDIMMs will thus remain stuck at 4-rank for the foreseeable future

The non-availability of 8Gbit DRAM die

8Gbit DRAM die will not be available for “2.5 to never” years, and a “$25B investment [is] needed in the DRAM industry” – from Netlist (NLST) comments at the Craig-Hallum conference.

http://www.netlist.com/investors/investors.html
Craig-Hallum 2nd Annual Alpha Select Conference
Thursday, October 6th at 10:40 am ET
http://wsw.com/webcast/ch/nlst/

DISCLAIMER: please refer to the original conference call or transcript – only use the following as guidance to find the relevant section

Question:

at the 26:30 minute mark:

(unintelligible)

Chris Lopes:

Sure, from a competitive standpoint for HyperCloud, there’s really only two ways that we know today to get to the higher density.

One is you stack DRAM and you slow the bus down to talk to that. As long as you can overcome the rank limitation.

So .. so IPHI and I think there are one or two other companies (IDTI ?) trying to build the interface chips to do the load-reduction.

But I think IPHI is the only one out in the market today .. is the primary guy out there.

In terms of just making larger RDIMMs (registered DIMMs), standard RDIMMs, you look at the silicon companies themselves like Samsung, Micron and Hynix and when they will have 8Gbit technology available to build a standard RDIMM to then do what our product does with the 4Gbit technology.

And some analysts are telling us that’s 2.5 to never in years (laughs) to when that happens.

And they’ve got some challenges in doing that – besides the lithography of getting to 10nm, there is an interface change from DDR3 to DDR4.

So how much money do you put into a DDR3 version of an 8Gbit (DRAM) if that market is going to shift to a new transit, new speed and new interface voltages, RIGHT when your chip will be available.

at the 28:05 minute mark:

So that would be kinda Samsung’s problem. Everybody else has just introduced 4Gbit and they are on a 2.5 to 3 year cycle for density.

Even if they could, if they could overcome the technology challenges, TIME to get to 8Gbit is about a 2.5 year window.

So we think we are very well positioned there.

I think in the 16GB (16 gigabyte memory modules) we did not have this advantage.

Because 4Gbit chips (DRAM) when you have plenty of 4Gbit chips – so they can get down in price to obviate the need for 2Gbit rank-doubled.

So that cross-over is starting to happen already.

We don’t see that cross-over happening again – at least for 2.5 years .. if ever (meaning newer higher density chips won’t become too cheap – in fact won’t even be available for 2.5 years).

It IS a more exciting story today than it was when we introduced the product several years ago because of that.

It seems the investment required to go to an 8Gbit DRAM die is huge, and while only Samsung may have the capability to attempt it, that decision is further complicated by the DDR3 to DDR4 transition happening at the same time.

Here are the comments related to the “$25B investment needed” and why Samsung may be the only one who could do this:

at the 07:15 minute mark:

On the supply side there are some very big holes that need filling.

One is silicon itself in the DRAM has a very difficult time migrating to next-generation technologies.

at the 07:30 minute mark:

The physics of DRAM preventing a fast scaling.

It looks like 8Gbit DRAM may be the lowest or the last monolithic die today.

Today 4Gbit (not gigabytes) just hit the market. And it is an estimated $25B investment needed in the DRAM industry to get to the final lithographies needed to get to 8Gbit (DRAM) cost-effectively.

It is really .. Samsung’s probably the only .. only player with the pockets to do that. ‘Cause they’re making money on Galaxy Tabs (Android tablet computer) and everything else seems like .. today.

at the 08:00 minute mark:

So the industry says we still need a solution. INTC’s got a problem, HP’s got a problem, IBM, AMD – all these big guys rely on large amounts of memory being available so that their servers can get to market and do what they’re supposed to do.

UPDATE: added 07/06/2012: Netlist on 8Gbit DRAM die non-availability

Here is another comment from Netlist about the memory roadmap – saying that the 4Gbit DRAM die may be the last viable monolithic die in the industry:

http://78449.choruscall.com/netlist/netlist120228.mp3
Fourth Quarter and Full Year 2011 Conference Call
Tuesday, February 28 5:00pm ET

at the 30:00 minute mark ..

George Santana of Ossetian (?):

Just .. how long do you think NLST has as far as a head start on the 2-rank 32GB ?

Chuck Hong – CEO:

Well the .. the only other way to build a really .. a real 2-rank 32GB is with 8Gbit (DRAM) die from the semiconductor manufacturers.

I don’t think anybody even has that on their roadmap – except maybe Samsung.

It looks like 4Gbit (DRAM die) will be the LAST viable .. uh .. monolithic die out in the industry.

So the industry is looking to go into some stacking methodologies that you have heard of 3DS and there are some other competing technologies (Hybrid Memory Cube etc.), so we think effectively we’ll have the only real 32GB 2-rank in the market for DDR3.

And DDR4 when products start stacking, you need rank-multiplication and HyperCloud is really the only product that does rank multiplication on the DIMM itself, so .. as you dig into how other technologies try to do that, they do that mainly in software and can’t do the full rank-multiplication like our product does.

So I think we have a pretty good .. uh .. advantage there.

UPDATE: added 07/06/2012: Inphi on 8Gbit DRAM die non-availability

http://www.media-server.com/m/acs/163bece6db75b960d5544f630fb06765
Q4 and Full Year 2011 Inphi Corp Earnings Conference Call
Wednesday, February 01, 2012, 5:00 pm EST

at the 42:15 minute mark ..

Sandeep Bajikar of Jeffries and Company:

Ok, and then just a quick followup on that.

So .. uh .. there there is at least one vendor, one memory module vendor that has talked about 64GB .. uh .. LRDIMM modules.

What’s going to be your outlook on the relative proportion of 64GB demand out there compared to, sir, what you have with 32GB ?

outgoing CEO Young Sohn:

I think that’s a really really very good insightful question and there are mainly because that you know the .. uh .. lithography of memory is getting more difficult.

So until now, Moore’s law prevailed and every 18 months you got next-generation memory that showed up – every 18 months to 24 months.

So today, 4Gbit (DRAM) dies is available.

And everybody took time (?) and it came (?).

Now the question is, can they get to 8Gbit (DRAM) die and the .. that seems to be very difficult to do, so the thought (?) in industry is in still taking two .. typical 2 years .. it may take 4 years to get there, and may require different technology to get there.

So given that, LRDIMM is only way you can actually get to 32GB and 64GB without using 8Gbit (DRAM) dies.

So it actually, in a way, LRDIMM prolong the life of the 4Gbit (DRAM) die achieving higher capacity point.

And if you are a at stake (?), I am very optimistic that over years higher capacity points will be needed and those solutions will come with LRDIMM.

Here Inphi suggests a 2-4 year time frame for non-availability of 8Gbit DRAM die – and suggests the 32GB and 64GB space is open to a load reduction and rank multiplication solution.

Performance figures for 32GB RDIMM

HP does not provide figures for 4-rank performance, but borrowing figures from IBM docs for Romley:

http://www.redbooks.ibm.com/abstracts/tips0850.html
IBM System x3650 M4
IBM Redbooks Product Guide

Table 5. Maximum memory speeds:

RDIMM – dual-rank (2-rank) – at 1.5V
– 1 DPC at 1333MHz
– 2 DPC at 1333MHz
– 3 DPC at 1066MHz

RDIMM – quad-rank (4-rank) – at 1.5V
– 1 DPC at 1066MHz
– 2 DPC at 800MHz
– 3 DPC not supported (because 4 ranks x 3 DPC = 12 ranks, which exceeds the 8 ranks per memory channel limit of current systems)

LRDIMM – at 1.5V
– 1 DPC at 1333MHz
– 2 DPC at 1333MHz
– 3 DPC at 1066MHz

HCDIMM – at 1.5V
– 1 DPC at 1333MHz
– 2 DPC at 1333MHz
– 3 DPC at 1333MHz
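
As a quick reference, the table above can be encoded as a simple lookup. This is a minimal sketch in Python – the module-type labels and function name are just illustrative, while the speed figures are taken from IBM Table 5 at 1.5V:

```python
# Minimal sketch: maximum memory speed in MHz by (module type, DPC) at 1.5V,
# per IBM Table 5 above. None = configuration not supported.
MAX_SPEED_MHZ = {
    ("RDIMM 2-rank", 1): 1333, ("RDIMM 2-rank", 2): 1333, ("RDIMM 2-rank", 3): 1066,
    ("RDIMM 4-rank", 1): 1066, ("RDIMM 4-rank", 2): 800,  ("RDIMM 4-rank", 3): None,
    ("LRDIMM",       1): 1333, ("LRDIMM",       2): 1333, ("LRDIMM",       3): 1066,
    ("HCDIMM",       1): 1333, ("HCDIMM",       2): 1333, ("HCDIMM",       3): 1333,
}

def max_speed(module_type: str, dpc: int):
    """Return the maximum memory speed (MHz) for a module type at a given DPC,
    or None if the configuration is not supported."""
    return MAX_SPEED_MHZ.get((module_type, dpc))

if __name__ == "__main__":
    print(max_speed("RDIMM 4-rank", 2))  # 800  - 32GB RDIMM already slows at 2 DPC
    print(max_speed("HCDIMM", 3))        # 1333 - HCDIMM holds full speed at 3 DPC
```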

You can see that 32GB RDIMMs (4-rank) have abysmal performance not only at 3 DPC (where they are not supported at all), but also at 2 DPC (IBM docs show even 1 DPC falling by a speed grade).

This is because of the greater rank count (and load) of the 4-rank memory module.

So “load reduction” and “rank multiplication” (which are Netlist IP – but also copied in LRDIMMs) help to resolve this issue:

– at the 16GB level, these helped solve the problem at 3 DPC (previously discussed)
– at the 32GB level, these will be needed at 3 DPC and 2 DPC, and possibly even at 1 DPC (if IBM docs are to be believed)

Here are more detailed numbers from Dell:

http://www.dell.com/downloads/global/products/pedge/poweredge_12th_generation_server_memory.pdf
Memory for Dell PowerEdge 12th Generation Servers

From the table one can see 4-rank memory experiencing a severe slowdown at higher DPC (as in the IBM docs).

And the effect gets worse at lower voltage (which is what will happen with the move to DDR4 and its lower voltages).

LRDIMMs are not able to deliver 1333MHz at 3 DPC (as in the IBM docs).

Table 3 – PowerEdge memory speeds by type and loading (on pg. 10)

RDIMM – dual-rank (2-rank) – at 1.5V (1600MHz RDIMM)
– 1 DPC at 1600MHz
– 2 DPC at 1600MHz
– 3 DPC at 1066MHz
.
RDIMM – dual-rank (2-rank) – at 1.5V (1333MHz RDIMM)
– 1 DPC at 1333MHz
– 2 DPC at 1333MHz
– 3 DPC at 1066MHz
.
RDIMM – dual-rank (2-rank) – at 1.35V (1333MHz RDIMM)
– 1 DPC at 1333MHz
– 2 DPC at 1333MHz
– 3 DPC not supported (at low voltage)
.
.
RDIMM – quad-rank (4-rank) – at 1.5V
– 1 DPC at 1066MHz
– 2 DPC at 800MHz
– 3 DPC not supported (because 4 ranks x 3 DPC = 12 ranks, which exceeds the 8 ranks per memory channel limit of current systems)
.
RDIMM – quad-rank (4-rank) – at 1.35V
– 1 DPC at 800MHz
– 2 DPC at 800MHz
– 3 DPC not supported (because 4 ranks x 3 DPC = 12 ranks, which exceeds the 8 ranks per memory channel limit of current systems)
.
.
LRDIMM – at 1.5V or 1.35V
– 1 DPC at 1333MHz
– 2 DPC at 1333MHz
– 3 DPC at 1066MHz

The SuperMicro guide explains how you can populate memory (see link below).

From the table one can see 4-rank memory experiencing a severe slowdown at higher DPC (as in the IBM docs):

http://www.supermicro.nl/support/resources/memory/X9_DP_memory_config.pdf
Super Micro Computer, Inc.
Memory Configuration Guide
X9 Series DP Motherboards

See pg. 4 – looking at the “DIMM slots present per CPU” = 12:
.
RDIMM – dual-rank (2-rank) – at 1.5V
– 1 DPC at 1600MHz
– 2 DPC at 1600MHz
– 3 DPC at 1066MHz
.
RDIMM – dual-rank (2-rank) – at 1.35V (low voltage)
– 1 DPC at 1333MHz
– 2 DPC at 1333MHz
– (3 DPC at 1066MHz usually on IBM and HP) – SuperMicro says 3 DPC is “not supported”
.
.
RDIMM – quad-rank (4-rank) – at 1.5V
– 1 DPC at 1066MHz
– 2 DPC at 800MHz
– 3 DPC not supported (because 4 ranks x 3 DPC = 12 ranks, which exceeds the 8 ranks per memory channel limit of current systems)
.
RDIMM – quad-rank (4-rank) – at 1.35V (low voltage)
– 1 DPC at 800MHz
– 2 DPC at 800MHz
– 3 DPC not supported (because 4 ranks x 3 DPC = 12 ranks, which exceeds the 8 ranks per memory channel limit of current systems)
.
.
LRDIMM (4-rank) – at 1.5V
– 1 DPC at 1333MHz
– 2 DPC at 1333MHz
– 3 DPC at 1066MHz
.
LRDIMM (4-rank) – at 1.35V (low voltage)
– 1 DPC at 1333MHz
– 2 DPC at 1066MHz
– 3 DPC at 1066MHz

32GB RDIMM performance at low voltage

32GB RDIMMs, because they are 4-rank, are burdened with a higher memory load, which causes performance issues.

Moving from 1.5V (standard) to 1.35V (low voltage) exacerbates the problem.

Impact of low voltage on performance

The impact of low voltage can be seen in the performance figures from IBM (Romley):

http://www.redbooks.ibm.com/abstracts/tips0850.html
IBM System x3650 M4
IBM Redbooks Product Guide

Table 5. Maximum memory speeds:
.
RDIMM – quad-rank (4-rank) – at 1.5V
– 1 DPC at 1066MHz
– 2 DPC at 800MHz
– 3 DPC not supported (because 4 ranks x 3 DPC = 12 ranks, which exceeds the 8 ranks per memory channel limit of current systems)
.
RDIMM – quad-rank (4-rank) – at 1.35V
– 1 DPC at 800MHz
– 2 DPC at 800MHz
– 3 DPC not supported (because 4 ranks x 3 DPC = 12 ranks, which exceeds the 8 ranks per memory channel limit of current systems)

And from Dell (Romley 12G servers):

http://www.dell.com/downloads/global/products/pedge/poweredge_12th_generation_server_memory.pdf
Memory for Dell PowerEdge 12th Generation Servers

Table 3 – PowerEdge memory speeds by type and loading (on pg. 10)

RDIMM – quad-rank (4-rank) – at 1.5V
– 1 DPC at 1066MHz
– 2 DPC at 800MHz
– 3 DPC not supported (because 4 ranks x 3 DPC = 12 ranks, which exceeds the 8 ranks per memory channel limit of current systems)
.
RDIMM – quad-rank (4-rank) – at 1.35V
– 1 DPC at 800MHz
– 2 DPC at 800MHz
– 3 DPC not supported (because 4 ranks x 3 DPC = 12 ranks, which exceeds the 8 ranks per memory channel limit of current systems)

Thus the move to lower voltage (1.35V) will complicate the already hard problem of memory load for 4-rank memory.

Understanding the “market” for LRDIMMs

At the 32GB memory level, 32GB RDIMMs (4-rank) face significant challenges, which are only resolved by employing “load reduction” and “rank multiplication” techniques (Netlist IP), as is done by 32GB LRDIMMs and 32GB HyperCloud (IBM HCDIMMs/HP HDIMMs).

This is the reason that IDTI predicts the market for LRDIMMs will go from 2%-3% for Romley to 15%-20% for Ivy Bridge (post-Romley).

For IDTI, the growth of LRDIMMs is directly related to the growth of 32GB LRDIMMs alone (since 16GB LRDIMMs are non-viable).

See the section “IDTI – 16GB LRDIMMs non-viable – 32GB LRDIMMs attach rate” in the article:

https://ddr3memory.wordpress.com/2012/06/06/market-opportunity-for-load-reduction/
Market opportunity for load reduction
June 6, 2012

https://ddr3memory.wordpress.com/2012/06/19/why-are-16gb-lrdimms-non-viable/
Why are 16GB LRDIMMs non-viable ?
June 19, 2012

As far as IDTI is concerned, the 32GB LRDIMMs will dominate the 32GB space (because of weakness in the 32GB RDIMM 4-rank) – with the growth of 32GB LRDIMMs mirroring the growth of the 32GB memory segment as a whole.

Market differs for LRDIMMs vs. HyperCloud

When Inphi or IDTI talk about the market for LRDIMMs – they are essentially talking about the market for:

– 32GB LRDIMMs
.
– and not 16GB LRDIMMs – since 16GB LRDIMMs are non-viable vs. 16GB RDIMMs (2-rank)

When Netlist talks about the market for HyperCloud (IBM HCDIMM/HP HDIMM) – they are talking about the market for:

– 32GB HyperCloud
.
– 16GB HyperCloud (since at 3 DPC this trumps 16GB RDIMM (2-rank)).

4-rank non-viability at 3 DPC and slowdown

As is evident from the IBM docs, 4-rank memory modules:

– work at 1 DPC (with a possible speed slowdown ?)
.
– work at 2 DPC with a speed slowdown
.
– will not work at all at 3 DPC

32GB RDIMMs (4-rank) experience a speed slowdown not just at 3 DPC (as 16GB 2-rank RDIMMs do), but already at 2 DPC (and possibly even at 1 DPC – according to IBM docs).

A “load reduction” solution is required to solve this problem.

32GB RDIMMs (4-rank) cannot be populated at more than 2 DPC – at 3 DPC you would exceed the “8 ranks per memory channel” limit of current systems.

A “rank multiplication” solution is required to solve this problem.
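
To make the rank arithmetic concrete, here is a minimal sketch in Python (the function names are illustrative, not from any vendor documentation) of the per-channel rank check that makes 3 DPC impossible for 4-rank RDIMMs on current systems:

```python
RANK_LIMIT_PER_CHANNEL = 8  # rank limit per memory channel on current (Romley-era) systems

def ranks_per_channel(ranks_per_dimm: int, dpc: int) -> int:
    """Total ranks presented to a single memory channel."""
    return ranks_per_dimm * dpc

def population_supported(ranks_per_dimm: int, dpc: int) -> bool:
    """True if the DIMM population stays within the per-channel rank limit."""
    return ranks_per_channel(ranks_per_dimm, dpc) <= RANK_LIMIT_PER_CHANNEL

# 4-rank RDIMM at 3 DPC: 4 x 3 = 12 ranks > 8, so the configuration is not supported
print(population_supported(4, 3))   # False
# 2-rank RDIMM at 3 DPC: 2 x 3 = 6 ranks <= 8, so it works (though at reduced speed)
print(population_supported(2, 3))   # True
# Rank multiplication (as on HyperCloud/LRDIMM) presents fewer virtual ranks to the
# controller - e.g. a 4-rank 32GB module appearing as 2 virtual ranks: 2 x 3 = 6 <= 8
virtual_ranks = 2
print(population_supported(virtual_ranks, 3))  # True
```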

On the need for “load reduction” and “rank multiplication”:

https://ddr3memory.wordpress.com/2012/05/24/the-need-for-high-memory-loading-and-its-impact-on-bandwidth/
The need for high memory loading and it’s impact on bandwidth
May 24, 2012

HyperCloud has IP advantages over LRDIMMs

Netlist holds significant IP in “load reduction” and “rank multiplication”. Netlist is the original inventor of VLP (very low profile) memory for blade servers, as well as an innovator in 4-rank memory modules.

HyperCloud (IBM HCDIMM/HP HDIMM) is based upon Netlist IP in “load reduction” and “rank multiplication”.

LRDIMMs are copying Netlist IP – in fact LRDIMMs face significant risk for this reason. DDR4 goes further and is copying the symmetrical lines and decentralized buffer chipset on the Netlist HyperCloud as well.

On the risk factors for LRDIMM:

https://ddr3memory.wordpress.com/2012/06/05/lrdimms-future-and-end-user-risk-factors/
LRDIMMs future and end-user risk factors
June 5, 2012

https://ddr3memory.wordpress.com/2012/06/15/why-are-lrdimms-single-sourced-by-inphi/
Why are LRDIMMs single-sourced by Inphi ?
June 15, 2012

On DDR4 borrowing from LRDIMM use of Netlist IP in “load reduction” and “rank multiplication”:

https://ddr3memory.wordpress.com/2012/06/08/ddr4-borrows-from-lrdimm-use-of-load-reduction/
DDR4 borrows from LRDIMM use of load reduction
June 8, 2012

https://ddr3memory.wordpress.com/2012/06/07/jedec-fiddles-with-ddr4-while-lrdimm-burns/
JEDEC fiddles with DDR4 while LRDIMM burns
June 7, 2012

HyperCloud has some clear architectural and performance advantages vs. LRDIMMs (which DDR4 will try to remedy).

HyperCloud outperforms LRDIMMs

At both the 16GB and the 32GB level, HyperCloud outperforms LRDIMMs:

– LRDIMMs have higher latency (asymmetrical lines and a centralized buffer chipset)
.
– LRDIMMs unable to achieve 3 DPC at 1333MHz on Romley servers (HP DL360p, DL380p, IBM x3650 M4)
.
– LRDIMMs could face injunction against “infringing product” – since LRDIMMs are copying Netlist IP in “load reduction” and “rank multiplication” (DDR4 faces a similar situation)

On the latency issues with LRDIMMs:

https://ddr3memory.wordpress.com/2012/05/31/lrdimm-latency-vs-ddr4/
LRDIMM latency vs. DDR4
May 31, 2012

32GB HyperCloud cost advantage over LRDIMMs

At the 32GB level, HyperCloud has an additional cost advantage over LRDIMMs and RDIMMs:

– 32GB RDIMMs currently use 4Gbit x 2 DDP memory packages
.
– 32GB LRDIMMs currently ALSO use 4Gbit x 2 DDP memory packages
.
– 32GB HyperCloud (IBM HCDIMM/HP HDIMM) uses 4Gbit monolithic memory packages and leverages Netlist’s Planar-X IP to fit those onto one memory module
.
– Two 4Gbit monolithic memory packages are cheaper than one 4Gbit x 2 DDP memory package.

For this reason 32GB HyperCloud (IBM HCDIMM/HP HDIMM) based on monolithic memory packages will be cheaper than 32GB RDIMMs/32GB LRDIMMs that are produced using DDP memory packages.

Summary

At the 32GB level:

– LRDIMMs/HyperCloud outperform RDIMMs at 2 DPC and 3 DPC (and possibly even 1 DPC)
.
– HyperCloud outperforms LRDIMMs
.
– HyperCloud has a cost advantage over RDIMMs and LRDIMMs (which use DDP memory)
.
– LRDIMMs face risk of recall or non-availability in the future (could impact upgrades)

For this reason, if you are buying 32GB memory modules you would buy:

– 32GB RDIMM (4-rank) if you only need to populate at 1 DPC (though IBM docs list even 1 DPC running a speed grade slower !)
.
– 32GB HyperCloud if you need to populate at 2 DPC or 3 DPC (virtualization/cloud computing/data centers), or if you think you may need to upgrade to 2 DPC or 3 DPC later

For a 2-socket Romley server (2 processors), 4 memory channels per processor and DIMMs populated at 1 DPC:

2 processors x 4 memory channels per processor x 1 DPC = 8 DIMM slots
.
Using 32GB memory modules, that is:
.
8 DIMM slots x 32GB = 256GB

So basically, if you need more than 1 DPC – i.e. more than 256GB in a 2-socket server – you HAVE to use a “load reduction” and “rank multiplication” solution, and HyperCloud trumps LRDIMM.
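
Here is a minimal sketch of that capacity arithmetic in Python (the function name is illustrative) for a 2-socket Romley server:

```python
SOCKETS = 2
CHANNELS_PER_SOCKET = 4  # Romley: 4 memory channels per processor

def total_capacity_gb(module_gb: int, dpc: int) -> int:
    """Total capacity when every channel is populated at the given DPC."""
    return SOCKETS * CHANNELS_PER_SOCKET * dpc * module_gb

for dpc in (1, 2, 3):
    print(f"32GB modules at {dpc} DPC: {total_capacity_gb(32, dpc)}GB")
# 1 DPC -> 256GB, 2 DPC -> 512GB, 3 DPC -> 768GB
```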

Postscript – 32GB RDIMM non-viable at 1 DPC as well

In practice, you would not use 32GB RDIMM (4-rank) at 1 DPC either – since using 16GB RDIMM (2-rank) at 2 DPC will accomplish the same thing at higher speed and lower price.

Thus you would not use 32GB RDIMMs (4-rank) at:

– 1 DPC (non-viable vs. 16GB RDIMM (2-rank) at 2 DPC)
.
– 2 DPC with speed slowdown (non-viable vs. 32GB HyperCloud)
.
– 3 DPC (will not work at all) (non-viable vs. 32GB HyperCloud)

This means 32GB RDIMM (4-rank) are completely non-viable vs. the alternatives (in both price and performance).

It could be argued that 32GB RDIMMs (4-rank) may have value in some special use cases (where a server has only 1 DPC capability, so 16GB 2-rank RDIMMs cannot be used at 2 DPC). However, even in such a scenario the 32GB RDIMM (4-rank) may turn out to be more expensive than the 32GB HyperCloud (which is based on 4Gbit monolithic memory packages instead of the 4Gbit x 2 DDP memory packages used on the 32GB RDIMM).

So at 32GB you would use:

– for 256GB (32GB at 1 DPC) use 16GB RDIMM (2-rank) instead (i.e. 16GB at 2 DPC)
.
– above 256GB (32GB at 2 DPC and 3 DPC) use 32GB HyperCloud
