Virtual vs Cache

Published on February 2017

Virtual Memory & Cache Memory
Introduction and Difference

Introduction
Virtual Memory
Virtual (or logical) memory is a concept that, when implemented by a computer and its operating system,
allows programmers to use a very large range of memory or storage addresses for stored data. The
computing system maps the programmer's virtual addresses to real hardware storage addresses. Usually,
the programmer is freed from having to be concerned about the availability of data storage.
In addition to managing the mapping of virtual storage addresses to real storage addresses, a computer
implementing virtual memory or storage also manages storage swapping between active storage (RAM)
and hard disk or other high-volume storage devices. Data is transferred in fixed-size units called
"pages", ranging from about a kilobyte (1,024 bytes) up to several megabytes; 4 KiB is a common size.
Moving whole pages at a time reduces the number of individual disk accesses and improves overall system
performance.
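The core of the mapping described above can be sketched in a few lines: a virtual address is split into a page number and an offset, and a page table maps virtual pages to physical frames. The 4 KiB page size and the page-table contents below are illustrative assumptions, not any particular system's layout.

```python
# Sketch of virtual-to-physical address translation (not an OS implementation).
# Assumes a 4 KiB page size and a tiny, hand-made page table.

PAGE_SIZE = 4096  # 4 KiB, a common page size

# Hypothetical page table: virtual page number -> physical frame number
page_table = {0: 5, 1: 2, 2: 7}

def translate(vaddr: int) -> int:
    """Split the address into (page, offset) and look up the frame."""
    page, offset = divmod(vaddr, PAGE_SIZE)
    if page not in page_table:
        # A real system would raise a page fault and load the page from disk.
        raise MemoryError(f"page fault: virtual page {page} not resident")
    return page_table[page] * PAGE_SIZE + offset

# Virtual address 4100 lies in page 1 at offset 4, which maps to frame 2:
# physical address = 2 * 4096 + 4 = 8196
```

Note that the program only ever sees the contiguous virtual addresses; whether the underlying frames are scattered or resident at all is invisible to it.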

Cache Memory
A small amount of high-speed memory residing on or close to the CPU is called Cache Memory. Cache
memory supplies the processor with the most frequently requested data and instructions. Level 1 cache
(primary cache) is the cache closest to the processor. Level 2 cache (secondary cache) is the next level
out; historically it sat on the motherboard, though modern processors integrate it on the CPU die.
Cache memory helps to bridge the gap between the CPU's clock speed and the rate at which RAM can respond
and deliver data. It reduces how often the CPU must stall waiting for data from main memory.

Description
Virtual Memory
Virtual memory is a computer system technique which gives
an application program the impression that it has
contiguous working memory (an address space), while in
fact it may be physically fragmented and may even overflow
on to disk storage.
Developed for multitasking kernels, virtual memory provides
two primary functions:
1. Each process has its own address space, so its code and data need not be
relocated at load time and need not rely on relative
addressing.
2. Each process sees one contiguous block of free
memory at launch; physical fragmentation is hidden from it.
All implementations (excluding emulators) require hardware
support. This is typically in the form of a Memory
Management Unit built into the CPU.
Systems that use this technique make programming of large
applications easier and use real physical memory (e.g. RAM)
more efficiently than those without virtual memory. Virtual
memory differs significantly from memory virtualization in
that virtual memory allows resources to be virtualized as memory for a specific system, as opposed to a
large pool of memory being virtualized as smaller pools for many different systems.
Note that "virtual memory" is more than just "using disk space to extend physical memory size" - that is
merely the extension of the memory hierarchy to include hard disk drives. Extending memory to disk is a
normal consequence of using virtual memory techniques, but could be done by other means such as
overlays or swapping programs and their data completely out to disk while they are inactive. The
defining feature of "virtual memory" is the redefinition of the address space: programs are given contiguous
virtual addresses, so they behave as though they were using large blocks of contiguous memory.
Modern general-purpose computer operating systems generally use virtual memory techniques for
ordinary applications, such as word processors, spreadsheets, multimedia players, accounting, etc.,
except where the required hardware support (a memory management unit) is unavailable. Older
operating systems, such as DOS of the 1980s, or those for the mainframes of the 1960s, generally had no
virtual memory functionality - notable exceptions being the Atlas, B5000 and Apple Computer's Lisa.
Embedded systems and other special-purpose computer systems which require very fast and/or very
consistent response times may forgo virtual memory because it reduces determinism: unpredictable
page-fault exceptions introduce jitter into I/O that small embedded processors often drive directly to keep
cost and power consumption low. Such simple applications also tend to have little use for multitasking
features.

Cache Memory
Pronounced cash, a special high-speed
storage mechanism. It can be either a
reserved section of main memory or an
independent high-speed storage device.
Two types of caching are commonly
used in personal computers: memory caching and disk caching.
A memory cache, sometimes called a cache store or RAM cache, is a portion of memory made of high-speed static RAM (SRAM) instead of the slower and cheaper dynamic RAM (DRAM) used for main
memory. Memory caching is effective because most programs access the same data or instructions over
and over. By keeping as much of this information as possible in SRAM, the computer avoids accessing the
slower DRAM.
Some memory caches are built into the architecture of microprocessors. The Intel 80486 microprocessor,
for example, contains an 8 KB memory cache, and the Pentium has a 16 KB cache. Such internal caches are
often called Level 1 (L1) caches. PCs of that era also came with external cache memory, called Level 2
(L2) caches, sitting between the CPU and the DRAM; modern processors integrate the L2 cache on the chip.
Like L1 caches, L2 caches are composed of SRAM, but they are much larger.
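The benefit of this L1/L2 hierarchy can be estimated with the standard average-memory-access-time (AMAT) calculation. The latencies and hit rates below are assumed round numbers for illustration, not measurements of any real processor.

```python
# Illustrative average memory access time (AMAT) for a two-level cache.
# All latencies (in cycles) and hit rates are assumed values.

l1_hit_time = 1       # L1 access latency
l2_hit_time = 10      # L2 access latency, paid on an L1 miss
mem_time = 100        # main-memory latency, paid on an L2 miss
l1_hit_rate = 0.90    # fraction of accesses served by L1
l2_hit_rate = 0.80    # fraction of L1 misses served by L2

# AMAT = L1 time + (L1 miss rate) * (L2 time + (L2 miss rate) * memory time)
amat = (l1_hit_time
        + (1 - l1_hit_rate) * (l2_hit_time + (1 - l2_hit_rate) * mem_time))
print(round(amat, 2))  # 4.0 cycles under these assumptions
```

Even with these modest hit rates, the average access costs about 4 cycles instead of the 100 a cacheless access to DRAM would take, which is why the hierarchy pays off.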
Disk caching works under the same principle as memory caching, but instead of using high-speed SRAM, a
disk cache uses conventional main memory. The most recently accessed data from the disk (as well as
adjacent sectors) is stored in a memory buffer. When a program needs to access data from the disk, it
first checks the disk cache to see if the data is there. Disk caching can dramatically improve the
performance of applications, because accessing a byte of data in RAM can be thousands of times faster
than accessing a byte on a hard disk.
When data is found in the cache, it is called a cache hit, and the effectiveness of a cache is judged by its
hit rate. Many cache systems use a technique known as smart caching, in which the system can recognize
certain types of frequently used data. The strategies for determining which information should be kept in
the cache constitute some of the more interesting problems in computer science.
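One common replacement strategy is least-recently-used (LRU) eviction. The sketch below, using Python's `OrderedDict`, shows how a cache's hit rate is measured over an access pattern; the capacity and access sequence are made-up illustrations, not any specific system's policy.

```python
from collections import OrderedDict

# Minimal LRU cache sketch, instrumented to count hits and misses.

class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store = OrderedDict()   # insertion order doubles as recency order
        self.hits = self.misses = 0

    def access(self, key):
        if key in self.store:
            self.hits += 1
            self.store.move_to_end(key)          # mark as most recently used
        else:
            self.misses += 1
            self.store[key] = True
            if len(self.store) > self.capacity:
                self.store.popitem(last=False)   # evict least recently used

    @property
    def hit_rate(self) -> float:
        total = self.hits + self.misses
        return self.hits / total if total else 0.0

cache = LRUCache(capacity=3)
for block in [1, 2, 3, 1, 2, 3, 1, 2, 3, 4]:
    cache.access(block)
# The first three accesses miss (cold cache), the next six hit, and the
# access to block 4 misses: hit rate = 6 / 10 = 0.6
```

The repeated 1-2-3 pattern fits the capacity exactly, which is why the hit rate is high; a working set larger than the cache would thrash instead.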



