The need for high memory loading and its impact on bandwidth

The need for high memory loading

The memory modules used in your home desktop computer are UDIMMs (unregistered DIMMs).

You often use 2GB or 4GB of memory on such systems.

Memory requirements on servers for virtualization, CAD, in-memory databases, and high-performance computing are much greater – for example, 256GB, 384GB, or 768GB in a 2-processor server.

As an example, in virtualization, doubling memory capacity lets you double the number of virtual machines on the same server.

You can reduce the server footprint in the data center and the server power footprint (and the associated UPS or generator capacity required for that data center). You support double the VMs for just the extra power and cost of the doubled memory.

This works because processor power increases with every generation. When processor power doubles, the optimal strategy is to double memory capacity as well, so you can double the VMs you support (while keeping the same processing-power-to-memory ratio).
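The scaling argument above can be sketched with some simple arithmetic. The per-VM memory allocation below is an illustrative assumption, not a figure from the post:

```python
# Sketch: doubling server memory alongside a doubled-performance processor
# generation, keeping the same memory allocation per VM.
mem_per_vm_gb = 8             # assumed memory allocated to each VM

old_server_mem_gb = 256       # previous-generation configuration
new_server_mem_gb = 512       # memory doubled to match doubled CPU power

old_vms = old_server_mem_gb // mem_per_vm_gb
new_vms = new_server_mem_gb // mem_per_vm_gb

print(old_vms, new_vms)  # 32 64 -- twice the VMs on one server
```

With memory as the binding constraint, doubling capacity doubles the supported VM count, which is exactly why the processing-power-to-memory ratio tends to stay constant across generations.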

Intel's Romley rollout in March 2012 yielded processors which are much faster than the previous generation.

As a result, users who previously installed 256GB on their servers will want to install much more memory (to keep the processing-power-to-memory ratio the same for virtualization, for instance).

These are the forces driving high memory loading trends.

Unfortunately, the improvements in processor power are not matched by improvements in memory, and problems arise when you install lots of it.

The impact of high memory loading

Most people are familiar with UDIMMs (unregistered DIMMs) – you use them in your home desktop computer.

RDIMMs (registered DIMMs) are used in servers, and include circuitry to improve reliability, offer error correction, etc.

A “2-socket” server has space for 2 processors on the motherboard.

The latest Intel Romley (Sandy Bridge) series of processors has 4 “memory channels” per processor.

Each memory channel can be populated with memory modules at 1, 2, or 3 DIMMs per channel (DPC):

– 1 DPC
– 2 DPC
– 3 DPC (higher electrical load on the memory channel)

If you install memory at 3 DPC you get:

2 processors x 4 memory channels per processor x 3 DIMMs per channel = 24 DIMMs in a 2-socket server

At 3 DPC you place the maximum electrical load on the memory channel.

As electrical load increases, the achievable bandwidth goes down – so, for example, at 3 DPC the standard speed is NOT achievable, only a lower speed grade.

If the standard memory speed is 1333MT/s (megatransfers per second – also sometimes written as 1333MHz), then at 3 DPC this drops to 1066MT/s or 800MT/s.

For example:

– 1 DPC at 1333MHz
– 2 DPC at 1066MHz
– 3 DPC at 800MHz
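The bandwidth cost of those speed downgrades can be sketched as follows, assuming the speed grades in the list above and the standard 64-bit (8-byte) DDR3 channel width:

```python
# Sketch: peak per-channel bandwidth at each DPC loading level.
speed_by_dpc = {1: 1333, 2: 1066, 3: 800}   # speed grade in MT/s
bytes_per_transfer = 8                       # 64-bit DDR3 data bus

for dpc, mts in speed_by_dpc.items():
    gb_s = mts * bytes_per_transfer / 1000   # peak GB/s (decimal)
    print(f"{dpc} DPC: {mts} MT/s -> {gb_s:.1f} GB/s per channel")
```

Going from 1 DPC to 3 DPC triples the capacity on the channel but cuts peak per-channel bandwidth from roughly 10.7 GB/s to 6.4 GB/s – a drop of about 40%.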

This is a basic problem with server memory: if you add lots of memory, it runs slower.
