Computers



Random-access memory (RAM) is a form of computer data storage. A random-access memory device allows data items to be read and written in roughly the same amount of time regardless of the order in which the data items are accessed. [1] In contrast, with other direct-access data storage media such as hard disks, CD-RWs, DVD-RWs and the older drum memory, the time required to read and write data items varies significantly depending on their physical locations on the recording medium, owing to mechanical limitations such as media rotation speeds and arm-movement delays.
Today, random-access memory takes the form of integrated circuits, and many types of SRAM are still random access even in the strict sense. RAM is normally associated with volatile types of memory (such as DRAM modules), in which stored information is lost if the power is removed, although many efforts have been made to develop non-volatile RAM chips. [2] Other types of non-volatile memory allow random access for read operations but either do not allow write operations or place limitations on them; these include most types of ROM and a type of flash memory called NOR flash.
Integrated-circuit RAM chips came onto the market in the late 1960s, with the first commercially available DRAM chip, the Intel 1103, introduced in October 1970. [3]


Types Of RAM
1T-SRAM is a pseudo-static random-access memory (PSRAM) technology introduced by MoSys, Inc.,
which offers a high-density alternative to traditional static random access memory (SRAM) in embedded
memory applications. MoSys uses a single-transistor storage cell (bit cell) like dynamic random access
memory (DRAM), but surrounds the bit cell with control circuitry that makes the memory functionally
equivalent to SRAM (the controller hides all DRAM-specific operations such as precharging and refresh).
1T-SRAM (and PSRAM in general) has a standard single-cycle SRAM interface and appears to the
surrounding logic just as an SRAM would.
Due to its one-transistor bit cell, 1T-SRAM is smaller than conventional (six-transistor, or "6T") SRAM, and
closer in size and density to embedded DRAM (eDRAM). At the same time, 1T-SRAM has performance
comparable to SRAM at multi-megabit densities, uses less power than eDRAM and is manufactured in a
standard CMOS logic process like conventional SRAM.
MoSys markets 1T-SRAM as physical IP for embedded (on-die) use in system-on-a-chip (SoC)
applications. It is available on a variety of foundry processes, including Chartered, SMIC, TSMC, and UMC.
Some engineers use the terms 1T-SRAM and "embedded DRAM" interchangeably, as some foundries provide MoSys's 1T-SRAM as "eDRAM". However, other foundries provide 1T-SRAM as a distinct offering.






A-RAM (Advanced Random-Access Memory) is a DRAM based on single-transistor, capacitor-less cells. A-RAM was invented in 2009 at the University of Granada, UGR (Spain) in collaboration with the Centre National de la Recherche Scientifique, CNRS (France). It was conceived by Noel Rodriguez (UGR), Francisco Gamiz (UGR) and Sorin Cristoloveanu (CNRS). A-RAM is compatible with single-gate silicon-on-insulator (SOI), double-gate, FinFET and multiple-gate FET (MuFET) devices.
The conventional 1-transistor + 1-capacitor DRAM cell is used extensively in the semiconductor industry for manufacturing high-density dynamic memories. Beyond the 45 nm node, the DRAM industry needs new concepts that avoid the miniaturization problems of the memory-cell capacitor. The 1T-DRAM family of memories, which includes A-RAM, replaces the storage capacitor with the floating body of an SOI transistor to store the charge.
Diode memory uses diodes and resistors to implement random-access memory for information storage.
The devices have been dubbed "one diode-one resistor" (1D-1R).
Dynamic random-access memory (DRAM) is a type of random-access memory that stores
each bit of data in a separate capacitor within an integrated circuit. The capacitor can be either charged or
discharged; these two states are taken to represent the two values of a bit, conventionally called 0 and 1.
Since even "nonconducting" transistors always leak a small amount, the capacitors will slowly discharge,
and the information eventually fades unless the capacitor charge is refreshed periodically. Because of this
refresh requirement, it is a dynamic memory as opposed to SRAM and other static memory.
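To make the refresh requirement concrete, the following short Python sketch models a single DRAM cell as a leaking capacitor; the leak rate, threshold and refresh interval are illustrative numbers chosen for the example, not values from any real device.

    # Toy model of DRAM charge leakage and refresh (illustrative numbers only).

    LEAK_PER_MS = 0.005       # fraction of charge lost each millisecond (assumed)
    READ_THRESHOLD = 0.5      # sense amplifier decides 1 vs 0 at half charge
    REFRESH_INTERVAL_MS = 64  # order of magnitude of a typical refresh interval

    def simulate(cell_value=1, duration_ms=200, refresh=True):
        charge = float(cell_value)            # 1.0 = fully charged, 0.0 = empty
        for t in range(1, duration_ms + 1):
            charge *= (1.0 - LEAK_PER_MS)     # the capacitor slowly leaks
            if refresh and t % REFRESH_INTERVAL_MS == 0:
                # A refresh reads the cell and writes the sensed value back at full charge.
                charge = 1.0 if charge >= READ_THRESHOLD else 0.0
        return 1 if charge >= READ_THRESHOLD else 0

    print(simulate(refresh=True))   # bit survives: 1
    print(simulate(refresh=False))  # bit fades away: 0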
The main memory (the "RAM") in personal computers is dynamic RAM (DRAM). It is the RAM
in desktop, laptop and workstation computers, as well as some of the RAM of video game consoles.
The advantage of DRAM is its structural simplicity: only one transistor and a capacitor are required per bit,
compared to four or six transistors in SRAM. This allows DRAM to reach very high densities. Unlike flash
memory, DRAM is volatile memory (vs. non-volatile memory), since it loses its data quickly when power is
removed. The transistors and capacitors used are extremely small; billions can fit on a single memory chip.
eDRAM stands for "embedded DRAM", a capacitor-based dynamic random-access memory integrated on the same die or module as an ASIC or processor. The cost per bit is higher than for stand-alone DRAM chips, but in many applications the performance advantages of placing the eDRAM on the same chip as the processor outweigh the cost disadvantage compared with external memory.
Embedding memory on the ASIC or processor allows for much wider buses and higher operation speeds, and owing to the much higher density of DRAM in comparison with SRAM, larger amounts of memory can be installed on smaller chips if eDRAM is used instead of eSRAM. eDRAM requires additional fab process steps compared with embedded SRAM, which raises cost, but the 3× area saving of eDRAM memory offsets the process cost when a significant amount of memory is used in the design.
eDRAM memories, like all DRAM memories, require periodic refreshing of the memory cells, which adds
complexity. However if the memory refresh controller is embedded along with the eDRAM memory, the
remainder of the ASIC can treat the memory like a simple SRAM type such as in 1T-SRAM.
eDRAM is used in IBM's POWER7 processor, [1] Intel's Haswell CPUs with GT3e integrated graphics, [2] and in many game consoles and other devices, including Sony's PlayStation 2, Sony's PlayStation Portable, Nintendo's GameCube, Nintendo's Wii, Nintendo's Wii U, Apple Inc.'s iPhone, Microsoft's Zune HD, and Microsoft's Xbox 360 and Xbox One.

ETA-RAM is a trademark for a novel RAM computer memory technology developed by Eta Semiconductor. [1] ETA-RAM aims to improve on both cost and dissipated power by combining the advantages of DRAM and SRAM: the lower cost of existing DRAMs, together with lower power dissipation and higher performance than SRAMs. The cost advantage comes from a much simpler process technology and from significantly reducing the silicon area of the cells: an ETA-RAM cell requires about the same silicon area as a modern DRAM cell. The improved power dissipation is obtained by reducing the current used to read and write the data bits in the cell and by removing the refresh requirement. At the same time, ETA-RAM offers higher read and write data rates than the standard six-transistor SRAM cell used in cache memory. To combine the advantages of the two RAM types, Eta Semiconductor adopted a new approach based on building static memory cells from a single process structure of minimum dimensions that by itself covers the same function as a conventional SRAM cell. [2] This is possible using a new CMOS technology for manufacturing high-density integrated circuits, invented by the founders of Eta Semiconductor. This technology, called ETA CMOS, defines novel structures that, thanks to metal junctions and the use of stacked gates, simultaneously perform the functions of several conventional transistors.
Ferroelectric RAM (FeRAM, F-RAM or FRAM) is a random-access memory similar in construction to DRAM but using a ferroelectric layer instead of a dielectric layer to achieve non-volatility. FeRAM is one of a growing number of alternative non-volatile random-access memory technologies that offer the same functionality as flash memory. FeRAM's advantages over flash include lower power usage, faster write performance [1] and a much greater maximum number of write-erase cycles (exceeding 10^16 for 3.3 V devices). Disadvantages of FeRAM are much lower storage densities than flash devices, storage capacity limitations, and higher cost.
Hybrid Memory Cube (HMC) is a new type of computer RAM technology developed by Micron Technology. The Hybrid Memory Cube Consortium (HMCC) is backed by several major technology companies including Samsung, Micron Technology, Open-Silicon, ARM, HP, Microsoft, Altera, and Xilinx. [1]

The HMC uses 3D packaging of multiple memory dies, typically 4 or 8 memory dies per package, [2] with the use of through-silicon vias (TSV) and microbumps. It has more data banks than classic DRAM memory of the same size. The memory controller is integrated into the memory package as a separate logic die. [3] The HMC uses standard DRAM cells, but its interface is incompatible with current DDRn (DDR2 or DDR3) implementations. [4]

HMC technology won the Best New Technology award from The Linley Group (publisher of Microprocessor Report magazine) in 2011. [5][6]

The first public specification, HMC 1.0, was published in April 2013. [7] According to it, the HMC uses 16-lane or 8-lane (half-size) full-duplex differential serial links, with each lane's SerDes running at 10, 12.5 or 15 Gbit/s. [8] Each HMC package is called a cube, and cubes can be chained in a network of up to 8 with cube-to-cube links, some cubes using their links as pass-through links. [9] A typical cube package with 4 links has 896 BGA pins and measures 31 × 31 × 3.8 mm. [10]

The raw bandwidth of a single 16-lane link with 10 Gbit/s signalling is 40 GB/s in total (20 GB/s transmit and 20 GB/s receive); cubes with 4 and 8 links are planned. Effective memory bandwidth utilization varies from 33% to 50% for the smallest 32-byte packets, and from 45% to 85% for 128-byte packets. [2]
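The link-bandwidth figure follows from simple arithmetic, sketched below in Python. The lane count and signalling rate come from the specification summary above; the division by 8 (bits per byte) and the neglect of link-level encoding overhead are simplifying assumptions of this illustration.

    # Raw-bandwidth arithmetic for one HMC link (illustrative; ignores link encoding overhead).

    lanes = 16                 # full-size link
    lane_rate_gbit = 10        # Gbit/s per lane (10, 12.5 or 15 per HMC 1.0)

    per_direction_gbit = lanes * lane_rate_gbit      # 160 Gbit/s
    per_direction_gbyte = per_direction_gbit / 8     # 20 GB/s
    aggregate_gbyte = 2 * per_direction_gbyte        # full duplex: 40 GB/s

    print(per_direction_gbyte, aggregate_gbyte)      # 20.0 40.0

    # Effective bandwidth depends on packet size; the figures quoted above are roughly
    # 33-50% utilization for 32-byte packets and 45-85% for 128-byte packets.
    for utilization in (0.33, 0.50, 0.45, 0.85):
        print(round(aggregate_gbyte * utilization, 1), "GB/s usable")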

As reported at the Hot Chips 23 conference in 2011, the first generation of HMC demonstration cubes, with four 50 nm DRAM memory dies and one 90 nm logic die, had a total capacity of 512 MB in a 27 × 27 mm package, consumed 11 W and was powered at 1.2 V. [2]

Engineering samples of second-generation HMC memory chips were announced in September 2013 by Micron, with mass production of HMC expected to start in 2014. [11][12] Samples of the 2 GB HMC (a stack of 4 memory dies, each of 4 Gbit) are packed in a 31 × 31 mm package and have 4 HMC links. Other samples from 2013 have only two HMC links and a smaller package: 16 × 19.5 mm. [13]

Volume production of 2 and 4 GB devices is planned for 2014. [14]
Magnetoresistive random-access memory (MRAM) is a non-volatile random-access
memory technology under development since the 1990s. Continued increases in density of existing memory
technologies – notably flash RAM and DRAM – kept it in a niche role in the market, but its proponents
believe that the advantages are so overwhelming that magnetoresistive RAM will eventually become
dominant for all types of memory, becoming a universal memory. [1]

Nano-RAM is a proprietary computer memory technology from the company Nantero. It is a type
of nonvolatile random access memory based on the position of carbon nanotubes deposited on a chip-like
substrate. In theory, the small size of the nanotubes allows for very high density memories. Nantero also
refers to it as NRAM.
nvSRAM is a type of non-volatile random-access memory (NVRAM). It is similar in operation to static random-access memory (SRAM). The current market for non-volatile memory is dominated by BBSRAMs, or battery-backed static random-access memory. However, BBSRAMs are slow and suffer from RoHS compliance issues; nvSRAMs provide access times of 20 ns or less.
nvSRAM is one of the advanced NVRAM technologies that are fast replacing BBSRAMs, especially for applications that need battery-free solutions and long-term retention at SRAM speeds. nvSRAMs are used in a wide range of situations (networking, aerospace, and medical, among many others) [1] where the preservation of data is critical and where batteries are impractical.
nvSRAM is available in densities from 16 Kbit up to 8 Mbit from both Simtek Corporation [2] and Cypress Semiconductor. [3] Other nvSRAM products from Maxim are essentially BBSRAMs, with a lithium battery built into the SRAM package. nvSRAM is an efficient replacement for BBSRAM, EPROM or EEPROM: it is faster than EPROM and EEPROM solutions, and it avoids the RoHS issue of BBSRAM because no external battery is used.
Phase-change memory (also known as PCM, PCME, PRAM, PCRAM, Ovonic Unified Memory, Chalcogenide RAM and C-RAM) is a type of non-volatile random-access memory. PRAMs exploit the unique behaviour of chalcogenide glass. In the older generation of PCM, heat produced by the passage of an electric current through a heating element, generally made of TiN, is used either to quickly heat and quench the glass, making it amorphous, or to hold it in its crystallization temperature range for some time, thereby switching it to a crystalline state. PCM also has the ability to achieve a number of distinct intermediate states, and therefore to hold multiple bits in a single cell, but the difficulties of programming cells in this way have prevented these capabilities from being implemented in other technologies (most notably flash memory) with the same capability.
Newer PCM technology has been trending in a couple of different directions. Some groups have directed much research towards finding viable material alternatives to Ge2Sb2Te5 (GST), with mixed success, while others have developed the idea of using a GeTe/Sb2Te3 superlattice to achieve non-thermal phase changes by simply changing the coordination state of the germanium atoms with a laser pulse; this new interfacial phase-change memory (IPCM) has had many successes and continues to be the site of much active research. [1]

Leon Chua has argued that all 2-terminal non-volatile memory devices, including phase-change memory, should be considered memristors. [2] Stan Williams of HP Labs has also argued that phase-change memory should be considered a memristor. [3] However, this terminology has been challenged, and the potential applicability of memristor theory to any physically realizable device is open to question. [4][5]


Resistive random-access memory (RRAM or ReRAM) is a type of non-volatile (NV) random-
access (RAM) computer memory that works by changing the resistance across a dielectric solid-state
material often referred to as a memristor. This technology bears some similarities to CBRAM and phase-
change memory (PCM).
CBRAM involves one electrode providing ions which dissolve readily in an electrolyte material, while PCM
involves generating sufficient Joule heating to effect amorphous-to-crystalline or crystalline-to-amorphous
phase changes. On the other hand, RRAM involves generating defects in a thin oxide layer, known as
oxygen vacancies (oxide bond locations where the oxygen has been removed), which can subsequently
charge and drift under an electric field. The motion of oxygen ions and vacancies in the oxide would be
analogous to the motion of electrons and holes in a semiconductor.
RRAM is currently under development by a number of companies, some of which have filed patent applications claiming various implementations of this technology. [1][2][3][4][5][6][7] RRAM has entered commercialization on an initially limited KB-capacity scale. [8]

Although commonly anticipated as a replacement technology for flash memory, the cost and performance benefits of RRAM have not been obvious enough for most companies to proceed with the replacement. A broad range of materials can potentially be used for RRAM. However, the recent discovery [9] that the popular high-k gate dielectric HfO2 can be used as a low-voltage RRAM has greatly encouraged others to investigate further possibilities.
Reduced-latency Dynamic random access memory (RLDRAM) is a type of random access
memory developed by Infineon Technologies AG in 1999. Infineon and Micron Technology, Inc. later agreed
to jointly develop the device to guarantee a second source. RLDRAM memory is a low-latency, high-
bandwidth DRAM designed for networking and L3 cache, high-end commercial graphics, and other
applications that require back-to-back READ/WRITE operations or completely random access. [1]
Static random-access memory (SRAM or static RAM) is a type of semiconductor memory that
uses bistable latching circuitry to store each bit. The term static differentiates it from dynamic RAM (DRAM)
which must be periodically refreshed. SRAM exhibits data remanence, [1] but it is still volatile in the conventional sense that data is eventually lost when the memory is not powered.
Thyristor RAM (T-RAM) is a new (2009) type of DRAM computer memory invented and developed by T-RAM Semiconductor, which departs from the usual designs of memory cells, combining the strengths of DRAM and SRAM: high density and high speed. The technology, which exploits the electrical property known as negative differential resistance and is called the thin capacitively coupled thyristor, [1] is used to create memory cells capable of very high packing densities. Because of this, the memory is highly scalable and already has a storage density several times higher than that of conventional six-transistor SRAM. The next generation of T-RAM memory was expected to have the same density as DRAM.
It was assumed that this type of memory would be used in next-generation processors by AMD, produced at 32 nm and 22 nm, [2] replacing the previously licensed but unused Z-RAM technology.
Zero-capacitor RAM (Z-RAM, a registered trademark) is a novel dynamic random-access memory technology developed by Innovative Silicon, based on the floating-body effect of silicon-on-insulator (SOI) process
technology. Z-RAM has been licensed by Advanced Micro Devices for possible use in
future microprocessors. Innovative Silicon claims the technology offers memory access speeds similar to the
standard six-transistor static random-access memory cell used in cache memory but uses only a
single transistor, therefore affording much higher packing densities.
Video RAM, or VRAM, is a dual-ported variant of dynamic RAM (DRAM), which was once commonly used to
store the framebuffer in some graphics adapters.


[Image: Samsung Electronics Corporation VRAM]
It was invented by F. Dill, D. Ling and R. Matick at IBM Research in 1980, with a patent issued in 1985 (US Patent
4,541,075). The first commercial use of VRAM was in a high-resolution graphics adapter introduced in 1986 by IBM for
the PC/RT system, which set a new standard for graphics displays. Prior to the development of VRAM, dual-ported
memory was quite expensive, limiting higher resolution bitmapped graphics to high-end workstations. VRAM improved
the overall framebuffer throughput, allowing low cost, high-resolution, high-speed, color graphics. Modern GUI-based
operating systems benefitted from this and thus it provided a key ingredient for proliferation of graphic user interfaces
throughout the world at that time.
VRAM has two sets of data output pins, and thus two ports that can be used simultaneously. The first port, the DRAM
port, is accessed by the host computer in a manner very similar to traditional DRAM. The second port, the video port,
is typically read-only and is dedicated to providing a high-throughput, serialized data channel for the graphics chipset. [1]

Typical DRAM arrays normally access a full row of bits (i.e. a word line) at up to 1,024 bits at one time, but only use
one or a few of these for actual data, the remainder being discarded. Since DRAM cells are destructively read, each
row accessed must be sensed, and re-written. Thus, 1,024 sense amplifiers are typically used. VRAM operates by not
discarding the excess bits which must be accessed, but making full use of them in a simple way. If each horizontal
scan line of a display is mapped to a full word, then upon reading one word and latching all 1,024 bits into a separate
row buffer, these bits can subsequently be serially streamed to the display circuitry. This leaves the DRAM array free to be accessed (read or write) for many cycles, until the row buffer is almost depleted. A complete
DRAM read cycle is only required to fill the row buffer, leaving most DRAM cycles available for normal accesses.
Such operation is described in the paper "All points addressable raster display memory" by R. Matick, D. Ling, S.
Gupta, and F. Dill, IBM Journal of R&D, Vol 28, No. 4, July 1984, pp. 379–393. To use the video port, the controller
first uses the DRAM port to select the row of the memory array that is to be displayed. The VRAM then copies that
entire row to an internal row-buffer which is a shift register. The controller can then continue to use the DRAM port for
drawing objects on the display. Meanwhile, the controller feeds a clock called the shift clock (SCLK) to the VRAM's
video port. Each SCLK pulse causes the VRAM to deliver the next data bit, in strict address order, from the shift
register to the video port. For simplicity, the graphics adapter is usually designed so that the contents of a row, and
therefore the contents of the shift register, correspond to a complete horizontal line on the display.
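As a rough illustration of this dual-ported behaviour, here is a minimal Python sketch; the class and method names (load_row, sclk and so on) are invented for the example and do not correspond to any real device interface.

    # Minimal model of a dual-ported VRAM: a DRAM array plus a serial shift register.

    class VRAM:
        def __init__(self, rows, cols):
            self.array = [[0] * cols for _ in range(rows)]  # the DRAM cell array
            self.shift_register = []                        # internal row buffer

        # --- DRAM port: random access by the host / drawing engine ---
        def write(self, row, col, bit):
            self.array[row][col] = bit

        def read(self, row, col):
            return self.array[row][col]

        # --- row transfer: copy one full row into the shift register ---
        def load_row(self, row):
            self.shift_register = list(self.array[row])

        # --- video port: one bit per SCLK pulse, in strict address order ---
        def sclk(self):
            return self.shift_register.pop(0) if self.shift_register else 0

    vram = VRAM(rows=4, cols=8)
    vram.write(2, 0, 1)
    vram.write(2, 7, 1)
    vram.load_row(2)                              # one DRAM cycle fills the row buffer
    vram.write(3, 5, 1)                           # the DRAM port remains free while...
    scanline = [vram.sclk() for _ in range(8)]    # ...the video port streams the line
    print(scanline)                               # [1, 0, 0, 0, 0, 0, 0, 1]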
Through the 1990s, many graphic subsystems used VRAM, with the number of megabits touted as a selling point. In
the late 1990s, synchronous DRAM technologies gradually became affordable, dense, and fast enough to displace
VRAM, even though SDRAM is only single-ported and more overhead is required. Nevertheless, many of the VRAM concepts of internal, on-chip buffering and organization have been used and improved in modern graphics adapters.

ROM
Read-only memory (ROM) is a class of storage medium used in computers and other electronic
devices. Data stored in ROM can only be modified slowly or with difficulty, or not at all, so it is mainly used to
distribute firmware (software that is very closely tied to specific hardware, and unlikely to need frequent
updates).
Strictly speaking, read-only memory refers to memory that is hard-wired, such as a diode matrix and the later mask ROM. Although discrete circuits can in principle be altered, ICs cannot, and are useless if the data is bad. The fact that such memory can never be changed is a large drawback; more recently, "ROM" commonly refers to memory that is read-only in normal operation but that can still be changed in some way.
Other types of non-volatile memory such as erasable programmable read-only memory (EPROM) and electrically erasable programmable read-only memory (EEPROM or Flash ROM) are sometimes referred to, in an abbreviated way, as "read-only memory" (ROM); although these types of memory can be erased and re-programmed multiple times, writing to this memory takes longer and may require different procedures than reading the memory. [1] When used in this less precise way, "ROM" indicates a non-volatile memory which serves functions typically provided by mask ROM, such as storage of program code and non-volatile data.
PROM
Creating ROM chips totally from scratch is time-consuming and very expensive in small quantities. For
this reason, developers created a type of ROM known as programmable read-only memory (PROM).
Blank PROM chips can be bought inexpensively and coded by the user with a programmer.

PROM chips have a grid of columns and rows just as ordinary ROMs do. The difference is that every
intersection of a column and row in a PROM chip has a fuse connecting them. A charge sent through a
column will pass through the fuse in a cell to a grounded row indicating a value of 1. Since all the cells
have a fuse, the initial (blank) state of a PROM chip is all 1s. To change the value of a cell to 0, you use a
programmer to send a specific amount of current to the cell. The higher voltage breaks the connection
between the column and row by burning out the fuse. This process is known as burning the PROM.

PROMs can only be programmed once. They are more fragile than ROMs. A jolt of static electricity can
easily cause fuses in the PROM to burn out, changing essential bits from 1 to 0. But blank PROMs are
inexpensive and are good for prototyping the data for a ROM before committing to the costly ROM
fabrication process.
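The fuse behaviour described above can be summarized in a few lines of Python; the sketch below is purely illustrative, modelling a blank PROM as a grid of intact fuses that read as 1 until they are irreversibly burned to 0.

    # Toy model of a PROM: every fuse starts intact (reads 1); programming burns fuses to 0,
    # and a burned fuse can never be restored.

    class PROM:
        def __init__(self, rows, cols):
            self.fuse_intact = [[True] * cols for _ in range(rows)]  # blank PROM = all 1s

        def read(self, row, col):
            return 1 if self.fuse_intact[row][col] else 0

        def burn(self, row, col):
            # The programming current blows the fuse; this is a one-way operation.
            self.fuse_intact[row][col] = False

    prom = PROM(4, 8)
    prom.burn(0, 3)          # change one cell from 1 to 0
    print(prom.read(0, 3))   # 0
    print(prom.read(0, 4))   # 1 - untouched fuses still read as 1
    # There is no "unburn": a PROM can only be programmed once.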

EPROM
Working with ROMs and PROMs can be a wasteful business. Even though they are inexpensive per chip,
the cost can add up over time. Erasable programmable read-only memory (EPROM) addresses this issue.
EPROM chips can be rewritten many times. Erasing an EPROM requires a special tool that emits a certain
frequency of ultraviolet (UV) light. EPROMs are configured using an EPROM programmer that provides
voltage at specified levels depending on the type of EPROM used.

The EPROM has a grid of columns and rows and the cell at each intersection has two transistors. The two
transistors are separated from each other by a thin oxide layer. One of the transistors is known as the
floating gate and the other as the control gate. The floating gate's only link to the row (wordline) is
through the control gate. As long as this link is in place, the cell has a value of 1. To change the value to 0
requires a process called Fowler-Nordheim tunneling.
Tunneling is used to alter the placement of electrons in the floating gate. Tunneling creates an avalanche
discharge of electrons, which have enough energy to pass through the insulating oxide layer and
accumulate on the gate electrode. When the high voltage is removed, the electrons are trapped on the
electrode. Because of the high insulation value of the silicon oxide surrounding the gate, the stored charge
cannot readily leak away, and the data can be retained for decades. A voltage, usually 10 to 13 volts, is applied to the floating gate; the charge comes from the column (bitline), enters the floating gate and drains to ground.

This charge causes the floating-gate transistor to act like an electron gun. The excited electrons are pushed
through and trapped on the other side of the thin oxide layer, giving it a negative charge. These negatively
charged electrons act as a barrier between the control gate and the floating gate. A device called a cell
sensor monitors the level of the charge passing through the floating gate. If the flow through the gate is
greater than 50 percent of the charge, it has a value of 1. When the charge passing through drops below
the 50-percent threshold, the value changes to 0. A blank EPROM has all of the gates fully open, giving
each cell a value of 1.
To rewrite an EPROM, you must erase it first. To erase it, you must supply a level of energy strong
enough to break through the negative electrons blocking the floating gate. In a standard EPROM, this is
best accomplished with UV light at a wavelength of 253.7 nanometers (2537 angstroms). Because this
particular frequency will not penetrate most plastics or glasses, each EPROM chip has a quartz window
on top of it. The EPROM must be very close to the eraser's light source, within an inch or two, to work
properly.

An EPROM eraser is not selective; it will erase the entire EPROM. The EPROM must be removed from the device it is in and placed under the UV light of the EPROM eraser for several minutes. An EPROM that is left under the light for too long can become over-erased. In such a case, the EPROM's floating gates are charged to the point that they are unable to hold the electrons at all.
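Putting the programming, sensing and UV-erase steps together, a highly simplified Python model of a single EPROM cell might look like the sketch below; the charge values and the 50 percent threshold are stand-ins for the analog behaviour described above, and the method names are invented for the example.

    # Simplified EPROM cell: programming traps charge on the floating gate, which
    # reduces the current the cell sensor sees; a 50% threshold decides 1 vs 0.

    class EPROMCell:
        def __init__(self):
            self.trapped_charge = 0.0        # blank cell: gate open, reads 1

        def program(self):
            # ~10-13 V on the cell drives electrons through the oxide
            # (Fowler-Nordheim tunneling) and leaves them trapped on the floating gate.
            self.trapped_charge = 1.0

        def uv_erase(self):
            # UV light at ~253.7 nm gives the trapped electrons enough energy to escape.
            self.trapped_charge = 0.0

        def read(self):
            current_fraction = 1.0 - self.trapped_charge   # trapped charge blocks the channel
            return 1 if current_fraction > 0.5 else 0

    cell = EPROMCell()
    print(cell.read())   # 1 (blank)
    cell.program()
    print(cell.read())   # 0 (programmed)
    cell.uv_erase()
    print(cell.read())   # 1 again (erased)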

EEPROMs and Flash Memory
Though EPROMs are a big step up from PROMs in terms of reusability, they still require dedicated
equipment and a labor-intensive process to remove and reinstall them each time a change is necessary.
Also, changes cannot be made incrementally to an EPROM; the whole chip must be erased. Electrically
erasable programmable read-only memory (EEPROM) chips remove the biggest drawbacks of EPROMs.

In EEPROMs:
1. The chip does not have to be removed to be rewritten.
2. The entire chip does not have to be completely erased to change a specific portion of it.
3. Changing the contents does not require additional dedicated equipment.
Instead of using UV light, you can return the electrons in the cells of an EEPROM to normal with the
localized application of an electric field to each cell. This erases the targeted cells of the EEPROM, which
can then be rewritten. EEPROMs are changed 1 byte at a time, which makes them versatile but slow. In
fact, EEPROM chips are too slow to use in many products that make quick changes to the data stored on
the chip.

Manufacturers responded to this limitation with Flash memory, a type of EEPROM that uses in-circuit
wiring to erase by applying an electrical field to the entire chip or to predetermined sections of the chip
called blocks. This erases the targeted area of the chip, which can then be rewritten. Flash memory works
much faster than traditional EEPROMs because instead of erasing one byte at a time, it erases a block or
the entire chip, and then rewrites it. The electrons in the cells of a Flash-memory chip can be returned to
normal ("1") by the application of an electric field, a higher-voltage charge.




CPU Manufacturers
1. Intel
2. AMD
3. Nvidia
4. IBM
5. Qualcomm
6. Motorola
7. GlobalFoundries
8. Sun
9. Cyrix
10. Via
11. TI
12. Freescale
13. Transmeta
14. Rise
15. IDT
16. Marvell
17. Samsung Electronics
18. Arm
19. Tilera













Leading Processors
#1 Intel Core i7-3770K
#2 Intel Core i7-3960X Extreme Edition
#3 Intel Core i7-4790K
#4 Intel Core i5-4570





Inventor of CPU
Computers such as the ENIAC had to be physically rewired to perform different tasks, which caused these
machines to be called "fixed-program computers". Since the term "CPU" is generally defined as a device
for software (computer program) execution, the earliest devices that could rightly be called CPUs came with
the advent of the stored-program computer.
The idea of a stored-program computer was already present in the design of J. Presper Eckert and John
William Mauchly's ENIAC, but was initially omitted so that it could be finished sooner. On June 30, 1945,
before ENIAC was made, mathematician John von Neumann distributed the paper entitled First Draft of a
Report on the EDVAC. It was the outline of a stored-program computer that would eventually be completed
in August 1949. [2] EDVAC was designed to perform a certain number of instructions (or operations) of
various types. Significantly, the programs written for EDVAC were to be stored in high-speed computer
memory rather than specified by the physical wiring of the computer. This overcame a severe limitation of
ENIAC, which was the considerable time and effort required to reconfigure the computer to perform a new
task. With von Neumann's design, the program, or software, that EDVAC ran could be changed simply by
changing the contents of the memory. EDVAC, however, was not the first stored-program computer;
the Manchester Small-Scale Experimental Machine, a small prototype stored-program computer, ran its first
program on 21 June 1948 [3] and the Manchester Mark 1 ran its first program during the night of 16–17 June
1949.

Microprocessors
In the 1970s the fundamental inventions by Federico Faggin (Silicon Gate MOS ICs with self-aligned
gates along with his new random logic design methodology) changed the design and implementation of
CPUs forever. Since the introduction of the first commercially available microprocessor (the Intel 4004) in
1970, and the first widely used microprocessor (the Intel 8080) in 1974, this class of CPUs has almost
completely overtaken all other central processing unit implementation methods. Mainframe and
minicomputer manufacturers of the time launched proprietary IC development programs to upgrade their
older computer architectures, and eventually produced instruction setcompatible microprocessors that were
backward-compatible with their older hardware and software. Combined with the advent and eventual
success of the ubiquitous personal computer, the term CPU is now applied almost exclusively
[a]
to
microprocessors. Several CPUs (denoted 'cores') can be combined in a single processing chip.


