Wednesday, March 17, 2010

DRAM types

Asynchronous DRAM is the basic form, from which all other types are derived. An asynchronous DRAM chip has power connections, some number of address inputs (typically 12), and a few (typically 1 or 4) bidirectional data lines. There are four active-low control signals:
  • /RAS, the Row Address Strobe. The address inputs are captured on the falling edge of /RAS, and select a row to open. The row is held open as long as /RAS is low.
  • /CAS, the Column Address Strobe. The address inputs are captured on the falling edge of /CAS, and select a column from the currently open row to read or write.
  • /WE, Write Enable. This signal determines whether a given falling edge of /CAS is a read (if high) or write (if low). If low, the data inputs are also captured on the falling edge of /CAS.
  • /OE, Output Enable. This is an additional signal that controls output to the data I/O pins. The data pins are driven by the DRAM chip when /RAS and /CAS are low, /WE is high, and /OE is low. In many applications, /OE can be permanently connected low (output always enabled), but it can be useful when connecting multiple memory chips in parallel.
This interface provides direct control of internal timing. When /RAS is driven low, a /CAS cycle must not be attempted until the sense amplifiers have sensed the memory state, and /RAS must not be returned high until the storage cells have been refreshed. When /RAS is driven high, it must be held high long enough for precharging to complete.
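
To make that sequencing concrete, here is a minimal sketch in Python of one asynchronous read cycle. It is purely illustrative: the timing constants, array contents and function name are invented placeholders, not values or behaviour taken from any real part.

    # Toy model of a single asynchronous DRAM read cycle; the timing
    # constants are illustrative placeholders, not datasheet values.
    T_RCD = 20   # ns: /RAS-to-/CAS delay (sense amplifiers must settle first)
    T_RAS = 50   # ns: minimum time /RAS stays low so the row cells are refreshed
    T_RP  = 30   # ns: precharge time /RAS must then spend high

    def async_read(array, row, col):
        t = 0
        # 1. Row address on the address pins, /RAS falls: the row is opened.
        t += T_RCD                    # no /CAS cycle may start before this
        # 2. Column address on the pins, /WE high (read), /OE low, /CAS falls:
        #    the selected column is driven onto the data pins.
        data = array[row][col]
        # 3. /CAS rises, then /RAS rises, but only after T_RAS has elapsed.
        t = max(t, T_RAS)
        # 4. /RAS must stay high for the precharge time before the next cycle.
        t += T_RP
        return data, t                # data plus a rough cycle time in ns

    mem = [[(r * 16 + c) & 0xFF for c in range(16)] for r in range(16)]
    print(async_read(mem, row=3, col=7))   # -> (55, 80)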

Asynchronous DRAM Types

  1. FPM RAM
  2. EDO RAM
  3. Burst EDO RAM
  4. Video RAM
Fast Page Mode (FPM) DRAM or FPRAM
Fast page mode DRAM is also called FPM DRAM, Page mode DRAM, Fast page mode memory, or Page mode memory.

In page mode, a row of the DRAM can be kept "open" by holding /RAS low while performing multiple reads or writes with separate pulses of /CAS, so that successive reads or writes within the row do not suffer the delay of precharging and re-opening the row. This increases the performance of the system when reading or writing bursts of data.
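
A small sketch of such a burst, in the same toy style as the earlier example (all timing figures are invented placeholders): the row is opened once, several columns are read with separate /CAS pulses, and the row-access and precharge costs are paid only once.

    # Toy fast-page-mode burst: one /RAS cycle, several /CAS pulses.
    T_RAC = 60   # ns: full access time for the first word (placeholder)
    T_PC  = 35   # ns: page-mode /CAS cycle time for each further word
    T_RP  = 30   # ns: precharge once /RAS finally returns high

    def page_mode_read(array, row, cols):
        data = [array[row][c] for c in cols]        # one /CAS pulse per column
        burst_ns = T_RAC + (len(cols) - 1) * T_PC + T_RP
        return data, burst_ns

    mem = [[r ^ c for c in range(16)] for r in range(16)]
    print(page_mode_read(mem, row=2, cols=[4, 5, 6, 7]))
    # -> ([6, 7, 4, 5], 195): four words for little more than the cost of
    #    two full single cycles, instead of four.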

Static column is a variant of page mode in which the column address does not need to be strobed in, but rather, the address inputs may be changed with /CAS held low, and the data output will be updated accordingly a few nanoseconds later.

Nibble mode is another variant in which four sequential locations within the row can be accessed with four consecutive pulses of /CAS. The difference from normal page mode is that the address inputs are not used for the second through fourth /CAS edges; they are generated internally starting with the address supplied for the first /CAS edge.

Extended Data Out (EDO) DRAM
EDO DRAM is similar to Fast Page Mode DRAM with the additional feature that a new access cycle can be started while keeping the data output of the previous cycle active. This allows a certain amount of overlap in operation (pipelining), allowing somewhat improved speed. It was 5% faster than Fast Page Mode DRAM, which it began to replace in 1993.

To be precise, EDO DRAM begins data output on the falling edge of /CAS, but does not stop the output when /CAS rises again. It holds the output valid (thus extending the data output time) until either /RAS is deasserted, or a new /CAS falling edge selects a different column address.

Single-cycle EDO has the ability to carry out a complete memory transaction in one clock cycle; otherwise, each sequential RAM access within the same page takes two clock cycles instead of three once the page has been selected. EDO's speed and capabilities allowed it to partially take over the role of the then-slow L2 caches of PCs: it reduced the large performance loss associated with the lack of an L2 cache while making systems cheaper to build. This also suited notebooks, whose limited form factor and battery life made an L2 cache difficult to accommodate. An EDO system with L2 cache was tangibly faster than the older FPM/L2 combination.

Single-cycle EDO DRAM became very popular on video cards towards the end of the 1990s. It was very low cost, yet nearly as efficient for performance as the far more costly VRAM.

Burst EDO (BEDO) DRAM
An evolution of EDO, Burst EDO DRAM could process four memory addresses in one burst, for a best case of 5-1-1-1, saving an additional three clocks over optimally designed EDO memory. This was achieved by adding an on-chip address counter to keep track of the next address. BEDO also added a pipeline stage, allowing the page-access cycle to be divided into two parts. During a memory-read operation, the first part transferred data from the memory array to the output stage (a second latch); the second part drove the data bus from this latch at the appropriate logic level. Since the data is already in the output buffer, a faster access time is achieved (up to 50% for large blocks of data) than with traditional EDO.
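
The clock saving is easy to see in the usual x-y-y-y burst notation. In the comparison below, the FPM and EDO patterns (5-3-3-3 and 5-2-2-2) are typical figures assumed for illustration; only the BEDO 5-1-1-1 figure and the three-clock saving over EDO come from the text above.

    # Bus clocks needed for a four-word burst under typical timing patterns.
    patterns = {"FPM": (5, 3, 3, 3), "EDO": (5, 2, 2, 2), "BEDO": (5, 1, 1, 1)}
    for name, p in patterns.items():
        print(f"{name}: {'-'.join(map(str, p))} = {sum(p)} clocks per 4-word burst")
    # BEDO's 8 clocks are three fewer than EDO's 11, as stated above.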

Although BEDO DRAM showed additional optimization over EDO, by the time it was available the market had made a significant investment in synchronous DRAM (SDRAM). Even though BEDO RAM was superior to SDRAM in some ways, the latter technology gained significant traction and quickly displaced BEDO.

Video DRAM (VRAM)


VRAM is a dual-ported variant of DRAM that was once commonly used to store the framebuffer in some graphics adapters.

It was invented by F. Dill and R. Matick at IBM Research in 1980, with a patent issued in 1985 (US Patent 4,541,075). The first commercial use of VRAM was in the high resolution graphics adapter introduced in 1986 by IBM with the PC/RT system.

VRAM has two sets of data output pins, and thus two ports that can be used simultaneously. The first port, the DRAM port, is accessed by the host computer in a manner very similar to traditional DRAM. The second port, the video port, is typically read-only and is dedicated to providing a high-speed data channel for the graphics chipset.

Typical DRAM arrays access a full row of bits (i.e. a word line), up to 1024 bits at a time, but use only one or a few of them for actual data, the remainder being discarded. Since DRAM cells are destructively read, each bit accessed must be sensed and re-written, so typically 1024 sense amplifiers are used. VRAM operates by not discarding the excess bits that must be accessed, but by making full use of them in a simple way. If each horizontal scan line of a display is mapped to a full word, then upon reading one word and latching all 1024 bits into a separate row buffer, these bits can subsequently be serially streamed to the display circuitry. This leaves the DRAM array free to be accessed (read or written) for many cycles, until the row buffer is almost depleted. A complete DRAM read cycle is only required to refill the row buffer, leaving most DRAM cycles available for normal accesses.

Such operation is described in the paper "All points addressable raster display memory" by R. Matick, D. Ling, S. Gupta, and F. Dill, IBM Journal of R&D, Vol. 28, No. 4, July 1984, pp. 379-393. To use the video port, the controller first uses the DRAM port to select the row of the memory array that is to be displayed. The VRAM then copies that entire row to an internal row buffer, which is a shift register. The controller can then continue to use the DRAM port for drawing objects on the display. Meanwhile, the controller feeds a clock called the shift clock (SCLK) to the VRAM's video port. Each SCLK pulse causes the VRAM to deliver the next datum, in strict address order, from the shift register to the video port. For simplicity, the graphics adapter is usually designed so that the contents of a row, and therefore the contents of the shift register, correspond to a complete horizontal line on the display.
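
The division of labour between the two ports can be caricatured in a few lines of Python. This is a conceptual sketch only; the class, method names and sizes are invented, and a real part shifts data out in hardware rather than through method calls.

    # Conceptual dual-ported VRAM: a row is copied into a shift register and
    # then streamed out on SCLK while the DRAM port stays free for drawing.
    ROW_BITS = 1024                       # one word line worth of bits

    class ToyVRAM:
        def __init__(self, rows):
            self.array = [[0] * ROW_BITS for _ in range(rows)]
            self.shift_reg = []
        def load_row(self, row):          # one DRAM cycle fills the row buffer
            self.shift_reg = list(self.array[row])
        def sclk(self):                   # each SCLK pulse shifts out one bit
            return self.shift_reg.pop(0) if self.shift_reg else None
        def write(self, row, col, bit):   # the DRAM port stays usable meanwhile
            self.array[row][col] = bit

    v = ToyVRAM(rows=4)
    v.write(0, 5, 1)                      # host draws through the DRAM port
    v.load_row(0)                         # controller selects the display row
    print([v.sclk() for _ in range(8)])   # video port streams [0, 0, 0, 0, 0, 1, 0, 0]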

In the late 1990s, standard DRAM technologies (e.g. SDRAM) became cheap, dense, and fast enough to completely displace VRAM, even though it was only single-ported and some memory bits were wasted.

Synchronous Dynamic RAM (SDRAM)

Synchronous DRAM (SDRAM) differs from asynchronous DRAM in that its operation is synchronized to an externally supplied clock signal. The main synchronous DRAM types are:

  1. SDR SDRAM
  2. DDR SDRAM
  3. DDR2 SDRAM
  4. DDR3 SDRAM
  5. DDR4 SDRAM
  6. Rambus RAM
  7. VC-RAM
  8. SGRAM
  9. GDDR2
  10. GDDR3
  11. GDDR4
  12. GDDR5
SDR SDRAM
Single Data Rate (SDR) SDRAM is the original synchronous form of DRAM, transferring one data word per clock cycle. The "single data rate" label was applied retroactively to distinguish it from the later double data rate (DDR) types.

DDR SDRAM
DDR SDRAM (double data rate synchronous dynamic random access memory) is a class of memory integrated circuit used in computers. It achieves nearly twice the bandwidth of the preceding single data rate (SDR) SDRAM by double pumping (transferring data on both the rising and falling edges of the clock signal) without increasing the clock frequency.

With data being transferred 64 bits at a time, DDR SDRAM gives a transfer rate of (memory bus clock rate) × 2 (for double data rate) × 64 (number of bits transferred) / 8 (number of bits per byte). Thus, with a bus frequency of 100 MHz, DDR SDRAM gives a maximum transfer rate of 1600 MB/s.
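
That formula is easy to verify; a one-line helper (the function name is just for illustration) applied to grades mentioned later in this post:

    # Peak transfer rate = bus clock × 2 (double data rate) × 64 bits / 8.
    def ddr_peak_mb_s(bus_clock_mhz, bus_width_bits=64):
        return bus_clock_mhz * 2 * bus_width_bits // 8

    print(ddr_peak_mb_s(100))   # 1600 MB/s -> sold as PC-1600
    print(ddr_peak_mb_s(133))   # 2128 MB/s -> rounded and sold as PC-2100
    print(ddr_peak_mb_s(200))   # 3200 MB/s -> sold as PC-3200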

JEDEC has set standards for DDR SDRAM speeds, divided into two parts: the first specification is for memory chips and the second is for memory modules. As DDR SDRAM has been superseded by the newer DDR2 SDRAM, it is sometimes referred to as DDR1 SDRAM.



The JEDEC-standardized chip grades are DDR-200, DDR-266, DDR-333, and DDR-400 (sold as PC-1600, PC-2100, PC-2700, and PC-3200 modules respectively), specified as JESD79; DDR-300 [1] is not among them. RAM speeds in between or above these grades are not standardized by JEDEC; most often they are simply manufacturer optimizations using higher-tolerance or overvolted chips.

The package sizes in which DDR SDRAM is manufactured are also standardized by JEDEC.

There is no architectural difference between DDR SDRAM designed for different clock frequencies, e.g. PC-1600 (designed to run at 100 MHz) and PC-2100 (designed to run at 133 MHz). The number simply designates the speed that the chip is guaranteed to run at, hence DDR SDRAM can be run at either lower[2] or higher clock speeds than those for which it was made. These practices are known as underclocking and overclocking respectively.

DDR SDRAM DIMMs for desktop computers have 184 pins (as opposed to 168 pins on SDRAM DIMMs and 240 pins on DDR2 SDRAM DIMMs), and can be differentiated from SDRAM DIMMs by the number of notches (DDR SDRAM has one, SDRAM has two). DDR SO-DIMMs for notebook computers have 200 pins, the same number as DDR2 SO-DIMMs. The two are notched very similarly, so care must be taken during insertion when you are unsure of a correct match. DDR SDRAM operates at a voltage of 2.5 V, compared to 3.3 V for SDRAM, which can significantly reduce power usage. Chips and modules of the DDR-400/PC-3200 grade have a nominal voltage of 2.6 V.

Many newer chipsets use these memory types in dual- or quad-channel configurations, which double or quadruple the effective bandwidth.

DDR2 SDRAM
DDR2 SDRAM or double-data-rate two synchronous dynamic random access memory is a random access memory technology used in electronic engineering for high speed storage of the working data of a computer or other digital electronic device.

It is a part of the SDRAM (synchronous dynamic random access memory) family of technologies, which is one of many DRAM (dynamic random access memory) implementations, and is an evolutionary improvement over its predecessor, DDR SDRAM.

Its primary benefit is the ability to operate the external data bus twice as fast as DDR SDRAM. This is achieved by improved bus signaling, and by operating the memory cells at half the clock rate (one quarter of the data transfer rate), rather than at the clock rate as in the original DDR. DDR2 memory operated at the same external data bus clock rate as DDR provides the same bandwidth but with markedly higher latency, and therefore worse performance.


Like all SDRAM implementations, DDR2 stores memory in memory cells that are activated with the use of a clock signal to synchronize their operation with an external data bus. Like DDR before it, DDR2 cells transfer data both on the rising and falling edge of the clock (a technique called "double pumping"). The key difference between DDR and DDR2 is that in DDR2 the bus is clocked at twice the speed of the memory cells, so four bits of data can be transferred per memory cell cycle. Thus, without speeding up the memory cells themselves, DDR2 can effectively operate at twice the bus speed of DDR.

DDR2's bus frequency is boosted by electrical interface improvements, on-die termination, prefetch buffers and off-chip drivers. However, latency is greatly increased as a trade-off. The DDR2 prefetch buffer is 4 bits deep, whereas it is 2 bits deep for DDR and 8 bits deep for DDR3. While DDR SDRAM has typical read latencies of between 2 and 3 bus cycles, DDR2 may have read latencies between 4 and 6 cycles. Thus, DDR2 memory must be operated at twice the bus speed to achieve the same latency.
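
The relationship between cell clock, prefetch depth and data rate can be restated numerically. The short sketch below just plays back the ratios described above (2n for DDR, 4n for DDR2, 8n for DDR3) for an arbitrary example data rate of 400 MT/s:

    # The same external data rate reached from progressively slower cells.
    DATA_RATE_MTS = 400                      # example transfer rate, MT/s
    for name, prefetch in [("DDR", 2), ("DDR2", 4), ("DDR3", 8)]:
        bus_mhz  = DATA_RATE_MTS // 2        # double-data-rate bus clock
        cell_mhz = DATA_RATE_MTS // prefetch # internal memory-cell clock
        print(f"{name}: cells {cell_mhz} MHz, bus {bus_mhz} MHz, {DATA_RATE_MTS} MT/s")
    # For DDR2 the cells run at half the bus clock (one quarter of the data
    # rate), exactly as described above.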

Another cost of the increased speed is the requirement that the chips are packaged in a more expensive and more difficult to assemble BGA package as compared to the TSSOP package of the previous memory generations such as DDR SDRAM and SDR SDRAM. This packaging change was necessary to maintain signal integrity at higher speeds.

Power savings are achieved primarily due to an improved manufacturing process through die shrinkage, resulting in a drop in operating voltage (1.8 V compared to DDR's 2.5 V). The lower memory clock frequency may also enable power reductions in applications that do not require the highest available speed.

According to JEDEC[1] the maximum recommended voltage is 1.9 volts and should be considered the absolute maximum when memory stability is an issue (such as in servers or other mission critical devices). In addition, JEDEC states that memory modules must withstand up to 2.3 volts before incurring permanent damage (although they may not actually function correctly at that level).

DDR3 SDRAM

In electronic engineering, DDR3 SDRAM or double-data-rate three synchronous dynamic random access memory is a random access memory technology used for high speed storage of the working data of a computer or other digital electronic device.

DDR3 is part of the SDRAM family of technologies and is one of the many DRAM (dynamic random access memory) implementations. DDR3 SDRAM is an improvement over its predecessor, DDR2 SDRAM.

The primary benefit of DDR3 is the ability to transfer I/O data at eight times the speed of the memory cells it contains, thus enabling faster bus speeds and higher peak throughput than earlier memory technologies. However, there is no corresponding reduction in latency, which is therefore proportionally higher. In addition, the DDR3 standard allows for chip capacities of 512 megabits to 8 gigabits, effectively enabling a maximum memory module size of 16 gigabytes.

DDR3 memory promises a power consumption reduction of 30% compared to current commercial DDR2 modules due to DDR3's 1.5 V supply voltage, compared to DDR2's 1.8 V or DDR's 2.5 V. The 1.5 V supply voltage works well with the 90 nanometer fabrication technology used for most DDR3 chips. Some manufacturers further propose using "dual-gate" transistors to reduce leakage of current.[1]

According to JEDEC[2] the maximum recommended voltage is 1.575 volts and should be considered the absolute maximum when memory stability is the foremost consideration, such as in servers or other mission critical devices. In addition, JEDEC states that memory modules must withstand up to 1.975 volts before incurring permanent damage, although they may not actually function correctly at that level.

The main benefit of DDR3 comes from the higher bandwidth made possible by DDR3's 8 bit deep prefetch buffer, in contrast to DDR2's 4 bit prefetch buffer or DDR's 2 bit buffer.

DDR3 modules can transfer data at an effective clock rate of 800–1600 MHz using both rising and falling edges of a 400–800 MHz I/O clock. In comparison, DDR2's current range of effective data transfer rates is 400–800 MHz using a 200–400 MHz I/O clock, and DDR's range is 200–400 MHz based on a 100–200 MHz I/O clock. To date, the graphics card market has been the driver of such bandwidth requirements, where fast data transfer between framebuffers is required.
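
Applying the usual 64-bit DIMM bandwidth rule to those I/O clock figures recovers the PC3-12800 module rating mentioned in the next paragraph (the helper name is only illustrative):

    # Module bandwidth = I/O clock × 2 (double data rate) × 64 bits / 8.
    def dimm_mb_s(io_clock_mhz):
        return io_clock_mhz * 2 * 64 // 8

    print(dimm_mb_s(400))   # DDR3-800  ->  6400 MB/s (PC3-6400)
    print(dimm_mb_s(800))   # DDR3-1600 -> 12800 MB/s (PC3-12800)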

DDR3 prototypes were announced in early 2005. Products in the form of motherboards are appearing on the market as of mid-2007[3] based on Intel's P35 "Bearlake" chipset and memory DIMMs at speeds up to DDR3-1600 (PC3-12800).[4] AMD's roadmap indicates their own adoption of DDR3 in 2008.

DDR3 DIMMs have 240 pins, the same number as DDR2, and are the same size, but are electrically incompatible and have a different key notch location.

DDR4 SDRAM

DDR4 SDRAM will be the successor to DDR3 SDRAM. It was revealed at the Intel Developer Forum in San Francisco, is currently in the design stage, and is expected to be released in 2012.

The new chips are expected to run at 1.2 volts or below, versus the 1.5 volts of DDR3 chips, and to achieve data transfer rates in excess of 2000 million transfers per second.

RDRAM

The first PC motherboards with support for RDRAM debuted in 1999. They supported PC-800 RDRAM, which operated at 400 MHz and delivered 1600 MB/s of bandwidth over a 16-bit bus using a 184-pin RIMM form factor. Data is transferred on both the rising and falling edges of the clock signal, a technique known as double data rate. For marketing reasons the physical clock rate was multiplied by two (because of the DDR operation); therefore, the 400 MHz Rambus standard was named PC-800. This was significantly faster than the previous standard, PC-133 SDRAM, which operated at 133 MHz and delivered 1066 MB/s of bandwidth over a 64-bit bus using a 168-pin DIMM form factor.
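
Both bandwidth figures follow directly from clock rate, transfers per clock and bus width; a quick check (the helper name is only illustrative):

    # bandwidth (MB/s) = clock (MHz) × transfers per clock × bus width / 8
    def bandwidth_mb_s(clock_mhz, transfers_per_clock, bus_bits):
        return clock_mhz * transfers_per_clock * bus_bits // 8

    print(bandwidth_mb_s(400, 2, 16))   # PC-800 RDRAM, 16-bit DDR -> 1600 MB/s
    print(bandwidth_mb_s(133, 1, 64))   # PC-133 SDRAM, 64-bit SDR -> 1064 MB/s
    # The SDRAM figure is usually quoted as 1066 MB/s, from the exact
    # 133.33 MHz clock.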

If a mainboard has a dual- or quad-channel memory subsystem, all of the memory channels must be upgraded simultaneously. Sixteen-bit modules provide one channel of memory, while 32-bit modules provide two channels. Therefore, a dual-channel mainboard accepting 16-bit modules must have RIMMs added or removed in pairs, whereas a dual-channel mainboard accepting 32-bit modules can have RIMMs added or removed singly. The common RDRAM speed grades and their RIMM designations were:


  • PC600: 16-bit, single channel RIMM, specified to operate at 300 MHz clock speed, 1200 MB/s bandwidth
  • PC700: 16-bit, single channel RIMM, specified to operate at 355 MHz clock speed, 1420 MB/s bandwidth
  • PC800: 16-bit, single channel RIMM, specified to operate at 400 MHz clock speed, 1600 MB/s bandwidth
  • PC1066 (RIMM 2100): 16-bit, single channel RIMM specified to operate at 533 MHz clock speed, 2133 MB/s bandwidth
  • PC1200 (RIMM 2400): 16-bit, single channel RIMM specified to operate at 600 MHz clock speed, 2400 MB/s bandwidth
  • RIMM 3200: 32-bit, dual channel RIMM specified to operate at 400 MHz clock speed, 3200 MB/s bandwidth
  • RIMM 4200: 32-bit, dual channel RIMM specified to operate at 533 MHz clock speed, 4200 MB/s bandwidth
  • RIMM 4800: 32-bit, dual channel RIMM specified to operate at 600 MHz clock speed, 4800 MB/s bandwidth
  • RIMM 6400: 32-bit, dual channel RIMM specified to operate at 800 MHz clock speed, 6400 MB/s bandwidth

Video game consoles: Rambus's RDRAM saw use in several video game consoles, beginning in 1996 with the Nintendo 64. The Nintendo console utilized 4 MB of RDRAM running with a 500 MHz clock on a 9-bit bus, providing 500 MB/s of bandwidth. RDRAM allowed the N64 to be equipped with a large amount of memory bandwidth while maintaining a lower cost due to design simplicity, since RDRAM's narrow bus lets circuit board designers use simpler techniques to minimize cost. The memory, however, was disliked for its high random-access latency. In the N64, the RDRAM modules are cooled by a passive heatspreader assembly.

Sony used RDRAM in the PlayStation 2. The PS2 was equipped with 32 MB of the memory and implemented a dual-channel configuration, resulting in 3200 MB/s of available bandwidth. The PlayStation 3 utilizes 256 MB of Rambus's XDR DRAM, which could be considered a successor to RDRAM, on a 64-bit bus at 400 MHz with an octal data rate[1] (cf. double data rate), giving an effective 3.2 GHz transfer rate and a bandwidth of 204.8 Gbit/s (25.6 GB/s).

Video cards: Cirrus Logic implemented RDRAM support in their Laguna graphics chip, with two members of the family: the 2D-only 5462 and the 5464, a 2D chip with 3D acceleration. RDRAM offered a cost advantage while being potentially faster than competing DRAM technologies thanks to its high bandwidth. The chips were used on the Creative Graphics Blaster MA3xx series, among others.


VC-RAM


Virtual Channel Random Access Memory (VC-RAM for short; also known as VC-SDRAM, VCSDRAM, VCDRAM, or VCM) was a proprietary type of SDRAM produced by NEC, but released as an open standard with no licensing fees. VCM lets the various system processes be assigned their own virtual channel, increasing overall system efficiency by avoiding the need for processes to share buffer space. This is accomplished by creating different "blocks" of memory, allowing each individual memory block to interface separately with the memory controller and have its own buffer space. The only motherboards ever able to support VC-RAM were for AMD Athlon and Intel Pentium III processors. A VC-RAM module is physically similar to an SDR SDRAM module, so VC-RAM-capable motherboards can also use standard SDR SDRAM, but the two cannot be mixed.

VC-RAM is faster than ordinary SDRAM because it has significantly lower latencies. The technology was a potential competitor to Rambus RDRAM because VC-RAM was nowhere near as expensive as RDRAM; however, the technology did not catch on. Instead, NEC adopted double data rate (DDR) SDRAM as the successor to SDR SDRAM.

NEC claimed a 3% boost in Quake benchmarks alongside a 20% increase in memory system performance.

GDDR2

The first commercial product to claim use of "DDR2" technology was the NVIDIA GeForce FX 5800 graphics card. However, the GDDR-2 memory used on graphics cards is not DDR2 per se, but rather an early midpoint between DDR and DDR2 technologies; using "DDR2" to refer to GDDR-2 is a colloquial misnomer. In particular, the performance-enhancing doubling of the I/O clock rate is missing, and the memory had severe overheating issues due to its nominal DDR voltages. ATI has since developed the GDDR technology further into GDDR3, which is closer to the DDR2 specification, though with several additions suited for graphics cards.

GDDR3 is now commonly used in modern graphics cards and some tablet PCs. However, further confusion has been added to the mix with the appearance of budget and mid-range graphics cards which claim to use "GDDR2". These cards actually use standard DDR2 chips designed for use as main system memory. These chips cannot achieve the clock speeds that GDDR3 can but are inexpensive enough to be used as memory on mid-range cards.

GDDR3


GDDR3, Graphics Double Data Rate 3, is a graphics card-specific memory technology, designed by ATI Technologies with the collaboration of JEDEC.

It has much the same technological base as DDR2, but the power and heat dissipation requirements have been reduced somewhat, allowing for higher-speed memory modules and simplified cooling systems. Unlike the DDR2 used on graphics cards, GDDR3 is unrelated to the JEDEC DDR3 specification. This memory uses internal terminators, enabling it to better handle certain graphics demands. To improve bandwidth, GDDR3 memory transfers 4 bits of data per pin over 2 clock cycles.

The GDDR3 interface transfers two 32-bit wide data words per clock cycle at the I/O pins. Corresponding to the 4n prefetch, a single write or read access consists of a 128-bit wide, one-clock-cycle data transfer at the internal memory core and four corresponding 32-bit wide, one-half-clock-cycle data transfers at the I/O pins. Single-ended, unidirectional read and write data strobes are transmitted simultaneously with read and write data respectively, in order to capture data properly at the receivers of both the graphics SDRAM and the controller. Data strobes are organized per byte of the 32-bit wide interface.
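
The 4n prefetch bookkeeping can be pictured as plain bit slicing: one 128-bit core access becomes four 32-bit beats at the pins. A toy illustration (the value and function name are invented):

    # Split one 128-bit core word into four 32-bit I/O transfers.
    def prefetch_beats(core_word, core_bits=128, pin_bits=32):
        mask = (1 << pin_bits) - 1
        return [(core_word >> (i * pin_bits)) & mask
                for i in range(core_bits // pin_bits)]

    word = 0x0123456789ABCDEF_FEDCBA9876543210
    print([f"{b:08X}" for b in prefetch_beats(word)])
    # -> ['76543210', 'FEDCBA98', '89ABCDEF', '01234567']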

GDDR4


GDDR4 SDRAM (Graphics Double Data Rate, version 4) is a type of graphics card memory specified by the JEDEC Semiconductor Memory Standard. Its main competitor appears to be Rambus's XDR DRAM. GDDR4 is the memory successor to GDDR3; neither is related to the JEDEC DDR3 memory standard.

GDDR5


GDDR5 (Graphics Double Data Rate, version 5) is a type of graphics card memory whose standards were set out in the GDDR5 specification by JEDEC. GDDR5 is the successor to GDDR4 and, unlike its predecessors, has two parallel DQ links which provide doubled I/O throughput compared to GDDR4. GDDR5 SGRAM is a high-speed dynamic random-access memory designed for applications requiring high bandwidth. It uses an 8n prefetch architecture and a DDR interface to achieve high-speed operation, and can be configured to operate in x32 mode or x16 (clamshell) mode, which is detected during device initialization. The GDDR5 interface transfers two 32-bit wide data words per WCK clock cycle to/from the I/O pins. Corresponding to the 8n prefetch, a single write or read access consists of a 256-bit wide, two-CK-clock-cycle data transfer at the internal memory core and eight corresponding 32-bit wide, one-half-WCK-clock-cycle data transfers at the I/O pins.
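
A quick sanity check of that bookkeeping, assuming (as those figures imply) that WCK runs at twice the CK rate:

    # 8n prefetch: one 256-bit core access becomes eight 32-bit beats,
    # two beats per WCK cycle (double data rate), with WCK = 2 × CK.
    core_bits, pin_bits = 256, 32
    beats      = core_bits // pin_bits     # 8 beats per read or write access
    wck_cycles = beats / 2                 # two beats per WCK cycle
    ck_cycles  = wck_cycles / 2            # WCK runs at twice the CK rate
    print(beats, "beats over", ck_cycles, "CK cycles")   # 8 beats over 2.0 CK cycles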

Did you find this post helpful?