
Cache (pronounced cash) memory is extremely fast memory that is built into a computer's central processing unit (CPU) or located next to it on a separate chip. The CPU uses cache memory to store instructions that are repeatedly required to run programs, improving overall system speed. The advantage of cache memory is that the CPU does not have to use the motherboard's system bus for data transfer. Whenever data must be passed through the system bus, the data transfer speed slows to the motherboard's capability. The CPU can process data much faster by avoiding the bottleneck created by the system bus.
As it happens, once most programs are open and running, they use very few resources. When these resources are kept in cache, programs can operate more quickly and efficiently. All else being equal, cache is so effective for system performance that a computer running a fast CPU with little cache can post lower benchmarks than a system running a somewhat slower CPU with more cache. Cache built into the CPU itself is referred to as Level 1 (L1) cache. Cache that resides on a separate chip next to the CPU is called Level 2 (L2) cache. Some CPUs have both L1 and L2 cache built in and designate the separate cache chip as Level 3 (L3) cache.
Cache that is built into the CPU is faster than separate cache, running at the speed of the microprocessor itself. However, separate cache is still roughly twice as fast as random access memory (RAM). Cache is more expensive than RAM, but it is well worth getting a CPU and motherboard with built-in cache in order to maximize system performance.
Disk caching applies the same principle to the hard disk that memory caching applies to the CPU. Frequently accessed hard disk data is stored in a separate segment of RAM in order to avoid having to retrieve it from the hard disk over and over. In this case, RAM is faster than the platter technology used in conventional hard disks. This situation will change, however, as hybrid hard disks become ubiquitous. These disks have built-in flash memory caches. Eventually, hard drives may be 100% flash drives; although flash memory is still slower than RAM, it is fast enough to greatly reduce the benefit of RAM disk caching.
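To make the disk-caching idea concrete, here is a minimal sketch of an in-memory read cache in front of a slow disk; the function names, the block-numbering scheme, and the four-block capacity are all hypothetical, and real operating system page caches are far more sophisticated.

```python
from collections import OrderedDict

CACHE_CAPACITY = 4       # hypothetical: number of disk blocks kept in RAM

cache = OrderedDict()    # block number -> block data, kept in LRU order

def read_from_disk(block_no):
    # Stand-in for a slow platter read.
    return f"data-for-block-{block_no}"

def read_block(block_no):
    """Return a disk block, serving repeat reads from the RAM cache."""
    if block_no in cache:
        cache.move_to_end(block_no)   # mark as most recently used
        return cache[block_no]        # cache hit: no disk access needed
    data = read_from_disk(block_no)   # cache miss: take the slow path
    cache[block_no] = data
    if len(cache) > CACHE_CAPACITY:
        cache.popitem(last=False)     # evict the least recently used block
    return data
```

The least-recently-used eviction policy here is one common choice; it keeps recently read blocks resident so that repeated reads avoid the platter entirely.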

Cache memory

Cache memory, also called CPU memory, is random access memory (RAM) that a computer microprocessor can access more quickly than it can access regular RAM. This memory is typically integrated directly into the CPU chip or placed on a separate chip that has a separate bus interconnect with the CPU.

The basic purpose of cache memory is to store program instructions that are frequently re-referenced by software during operation. Fast access to these instructions increases the overall speed of the software program.
As the microprocessor processes data, it looks first in the cache memory; if it finds the instructions there (from a previous reading of data), it does not have to perform a more time-consuming read from larger memory or other data storage devices.
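The payoff of checking the cache first can be expressed as an average access time. The sketch below works through the arithmetic with invented round-number latencies, not figures for any real processor:

```python
# Illustrative latencies only; real values vary widely by processor.
CACHE_HIT_TIME_NS = 1        # time to read from the cache
MEMORY_ACCESS_TIME_NS = 100  # extra time to read main memory on a miss
HIT_RATE = 0.95              # fraction of accesses found in the cache

# Average access time = hit time + miss rate * miss penalty
amat = CACHE_HIT_TIME_NS + (1 - HIT_RATE) * MEMORY_ACCESS_TIME_NS
print(f"Average access time: {amat:.1f} ns")  # 6.0 ns vs 100 ns uncached
```

Even a modest hit rate cuts the average latency dramatically, which is why the look-in-cache-first step matters so much.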
Most programs use very few resources once they have been opened and operated for a time, mainly because frequently re-referenced instructions tend to be cached. This explains why, in system performance measurements, computers with slower processors but larger caches can outperform computers with faster processors but more limited cache space.
Multi-tier or multilevel caching has become popular in server and desktop architectures, with different levels providing greater efficiency through managed tiering. Simply put, the less frequently certain data or instructions are accessed, the lower the cache level to which they are written.

Cache memory levels explained


Cache memory is fast and expensive. Traditionally, it is categorized as "levels" that
describe its closeness and accessibility to the microprocessor:

Level 1 (L1) cache is extremely fast but relatively small, and is usually embedded in the processor chip (CPU).

Level 2 (L2) cache is often more capacious than L1; it may be located on the CPU or on a separate chip or coprocessor with a high-speed alternative system bus interconnecting the cache to the CPU, so as not to be slowed by traffic on the main system bus.

Level 3 (L3) cache is typically specialized memory that works to improve the performance of L1 and L2. It can be significantly slower than L1 or L2, but is usually double the speed of RAM. In the case of multicore processors, each core may have its own dedicated L1 and L2 cache, but share a common L3 cache. When an instruction is referenced in the L3 cache, it is typically elevated to a higher tier cache, as the lookup sketch below illustrates.
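As a rough illustration of this promotion behavior, here is a minimal three-level lookup sketch; the dictionaries, the install-into-L3 policy, and the promotion-on-hit rule are simplifying assumptions, not a model of any particular processor:

```python
# Each cache level is modeled as a dict from address to cached line.
l1, l2, l3 = {}, {}, {}
main_memory = {addr: f"line-{addr}" for addr in range(1024)}

def load(addr):
    for level in (l1, l2, l3):
        if addr in level:
            if level is l3:
                l1[addr] = l3[addr]  # promote an L3 hit to a higher tier
            return level[addr]
    line = main_memory[addr]         # miss in all levels: go to RAM
    l3[addr] = line                  # install at the lowest cache level
    return line
```

Real processors also bound each level's size and evict lines, which this sketch omits for brevity.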

Memory cache configurations


Caching configurations continue to evolve, but memory cache traditionally works
under three different configurations:

Direct mapping, in which each block is mapped to exactly one cache location. Conceptually, this is like rows in a table with three columns: the data block or cache line that contains the actual data fetched and stored, a tag that contains all or part of the address of the fetched data, and a flag bit that indicates whether the row entry holds valid data.

Fully associative mapping is similar to direct mapping in structure, but allows a block to be mapped to any cache location rather than to a pre-specified cache location (as is the case with direct mapping).

Set associative mapping can be viewed as a compromise between direct mapping and fully associative mapping in which each block is mapped to a subset of cache locations. It is sometimes called N-way set associative mapping, which provides for a location in main memory to be cached to any of "N" locations in the L1 cache. A short sketch contrasting the three schemes follows this list.
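The sketch below contrasts how the three schemes choose candidate locations for a block; the eight-line cache and two-way associativity are invented illustrative numbers:

```python
# Illustrative geometry: 8 cache lines, organized as 4 sets of 2 ways.
NUM_LINES = 8
NUM_WAYS = 2
NUM_SETS = NUM_LINES // NUM_WAYS

def direct_mapped_slot(block_addr):
    # Exactly one legal location per memory block.
    return block_addr % NUM_LINES

def set_associative_slots(block_addr):
    # The block may occupy any of the N ways of its set.
    s = block_addr % NUM_SETS
    return [s * NUM_WAYS + way for way in range(NUM_WAYS)]

def fully_associative_slots(block_addr):
    # Any cache line at all is a legal location.
    return list(range(NUM_LINES))
```

Direct mapping makes lookup cheapest but suffers when two hot blocks map to the same line; full associativity avoids such conflicts at the cost of searching every line; set associativity splits the difference.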

Specialized caches
In addition to instruction and data caches, there are other caches designed to provide specialized functions in a system. By some definitions, the L3 cache is a specialized cache because of its shared design. Other definitions separate instruction caching from data caching, referring to each as a specialized cache.
Other specialized memory caches include the translation lookaside buffer (TLB), whose function is to record virtual-address-to-physical-address translations.
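A toy model of that function might look like the following; the page size, the page table contents, and the unbounded TLB are assumptions made for illustration:

```python
PAGE_SIZE = 4096
tlb = {}                         # virtual page number -> physical frame number
page_table = {0: 5, 1: 9, 2: 3}  # hypothetical mappings for one process

def translate(virtual_addr):
    vpn, offset = divmod(virtual_addr, PAGE_SIZE)
    if vpn in tlb:
        frame = tlb[vpn]         # TLB hit: translation already cached
    else:
        frame = page_table[vpn]  # TLB miss: consult the full page table
        tlb[vpn] = frame         # cache the translation for next time
    return frame * PAGE_SIZE + offset
```

A real TLB is a small, fixed-size hardware structure with its own replacement policy; the point here is only the record-and-reuse pattern.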
Still other caches are not, technically speaking, memory caches at all. Disk caches, for
example, may leverage RAM or flash memory to provide much the same kind of data
caching as memory caches do with CPU instructions. If data is frequently accessed
from disk, it is cached into DRAM or flash-based silicon storage technology for faster
access and response.

Specialized caches also exist for such applications as Web browsers, databases, network address binding and client-side Network File System protocol support. These types of caches might be distributed across multiple networked hosts to provide greater scalability or performance to an application that uses them.

Increasing cache size


L1, L2 and L3 caches have been implemented in the past using a combination of processor and motherboard components. Recently, the trend has been toward consolidating all three levels of memory caching on the CPU itself. For this reason, the primary means of increasing cache size has begun to shift from buying a specific motherboard with particular chipsets and bus architectures to buying the right CPU with the right amount of integrated L1, L2 and L3 cache.

Contrary to popular belief, implementing flash or greater amounts of DRAM on a system does not increase cache memory. This can be confusing, since the term memory caching (hard disk buffering) is often used interchangeably with cache memory. The former, using DRAM or flash to buffer disk reads, is intended to improve storage I/O by caching data that is frequently referenced in a buffer ahead of slower-performing magnetic disk or tape. Cache memory, by contrast, provides read buffering for the CPU.


Difference Between Cache Memory and Virtual Memory

Cache Memory vs Virtual Memory


The difference between cache memory and virtual memory lies in the purpose for which the two are used and in their physical existence. Cache memory is a type of memory used to improve the access time of main memory. It resides between the CPU and the main memory, and there can be several levels of caches, such as L1, L2 and L3. The type of hardware used for cache memory is much costlier than the RAM (random access memory) used for main memory because cache memory is much faster. For this reason, the capacity of cache memory is very small. Virtual memory is a memory management technique used to make efficient use of RAM (main memory) while providing a separate memory space for each program that can be even larger than the actual physical RAM capacity. Here the hard disk is used to expand the memory: items in physical RAM are transferred back and forth to the hard disk.

What is Cache Memory?


Cache memory is a type of memory that lies between the CPU (central processing unit) and the RAM (random access memory). The purpose of cache memory is to reduce the CPU's memory access time to the RAM. Cache memory is much faster than RAM, so the access time on cache is much shorter than the access time on RAM. But the cost of memory used for cache memory is much higher than the cost of memory used for RAM, and hence, the capacity of cache memory is very small. The type of memory used for cache memory is called SRAM (static random access memory).
Whenever the CPU wants to access memory, it first checks whether what it needs resides in the cache memory. If it does, the CPU can access it with the least latency. If it does not reside in cache, the requested content is copied from RAM to the cache, and only then does the CPU access it from the cache. When content is copied to the cache, not only the content at the requested memory address but also the nearby content is copied. So the next time, there is a high probability of a cache hit, since most computer programs access nearby data or recently accessed data most of the time. Thanks to the cache, the average memory latency is therefore reduced.
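The copy-the-neighborhood behavior corresponds to fetching whole cache lines. Here is a minimal sketch of line-granularity fills; the 64-byte line size is a typical figure, and the byte-array memory is a stand-in:

```python
LINE_SIZE = 64                   # bytes per cache line (a typical size)
memory = bytes(range(256)) * 16  # stand-in for main memory contents
cache = {}                       # line number -> the bytes of that line

def read_byte(addr):
    line_no, offset = divmod(addr, LINE_SIZE)
    if line_no not in cache:     # miss: fetch the entire line from RAM
        start = line_no * LINE_SIZE
        cache[line_no] = memory[start:start + LINE_SIZE]
    return cache[line_no][offset]

read_byte(130)  # miss: loads bytes 128..191 into the cache
read_byte(131)  # hit: same line, no RAM access needed
```

This is why programs with good spatial locality, such as ones scanning an array in order, see high hit rates.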

In a CPU, there are three types of caches: an instruction cache to store program instructions, a data cache to store data items, and the translation lookaside buffer (TLB) to store memory mappings. For the data cache, there are generally multiple levels: several caches named L1, L2 and L3. The L1 cache is the fastest but smallest cache memory and is closest to the CPU. The L2 cache is slower than L1 but larger, and resides after the L1 cache. Because of this hierarchy, a better average memory access time can be achieved at a lower cost.

What is Virtual Memory?


Virtual memory is a memory management technique used in computer systems. There is no hardware called virtual memory; rather, it is a concept that uses RAM and the hard disk to provide a virtual address space for programs. First, RAM is divided into chunks called pages, which are identified by physical memory addresses. On the hard disk, a special portion is reserved; in Linux it is called the swap, and in Windows it is called a page file. When a program is started, it is given a virtual address space that can be even larger than the actual physical memory. The virtual memory space is also divided into chunks called pages, and each of these virtual pages can be mapped to a physical page. A table called the page table keeps track of this mapping. When physical memory runs out of space, certain physical pages are pushed to that special portion of the hard disk. When a page that has been pushed to the hard disk is needed again, it is brought back into physical memory by moving another selected page from physical memory to the hard disk.
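A toy version of this page-out and page-in cycle might look like the following; the two-frame RAM, the eviction choice, and the data structures are all invented for illustration:

```python
NUM_FRAMES = 2   # hypothetical: physical RAM holds only two pages
frames = {}      # frame number -> virtual page currently resident there
swap = {}        # virtual page -> contents saved on "disk"
page_table = {}  # virtual page -> frame number, or None if swapped out

def touch(page):
    """Make a virtual page resident, evicting another if RAM is full."""
    if page_table.get(page) is not None:
        return                             # already in physical memory
    if len(frames) >= NUM_FRAMES:
        victim_frame, victim_page = next(iter(frames.items()))
        swap[victim_page] = f"contents-of-page-{victim_page}"  # page out
        page_table[victim_page] = None
        del frames[victim_frame]
    frame = next(f for f in range(NUM_FRAMES) if f not in frames)
    swap.pop(page, None)                   # page in from disk if present
    frames[frame] = page
    page_table[page] = frame
```

Real operating systems choose victims with policies such as approximations of least recently used, and they track far more state per page; the sketch shows only the swap-driven exchange described above.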

What is the difference between Cache Memory and Virtual Memory?
Cache memory is a type of memory used to improve the main memory access time. It is a faster type of memory that resides between the CPU and RAM to reduce the average memory access latency. Virtual memory is a memory management concept that gives each program its own virtual memory space, which can be even larger than the real physical RAM available.
Cache memory is a type of hardware memory that actually exists physically. On the other hand, there is no hardware called virtual memory; it is a concept that uses RAM, the hard disk, the memory management unit, and software to provide a virtual type of memory.

Cache memory management is done fully by hardware. Virtual memory is managed by the operating system (software).
Cache memory lies between RAM and the processor; data transfers involve RAM, cache memory, and the processor. Virtual memory, on the other hand, involves transfers of data between RAM and the hard disk.
Cache memories come in small sizes, measured in kilobytes or megabytes. Virtual memory, on the other hand, involves huge sizes, measured in gigabytes.
Virtual memory involves data structures such as page tables that store the mapping between physical memory and virtual memory. This type of data structure is not necessary for cache memory.
Summary:

Cache Memory vs Virtual Memory


Cache memory is used to improve the main memory access time, while virtual memory is a memory management method. Cache memory is actual hardware, but there is no hardware called virtual memory. RAM, the hard disk, and various other hardware, together with the operating system, produce the concept called virtual memory to provide large, isolated virtual memory spaces to each program. The content in the cache memory is managed by hardware, while the content in the virtual memory is managed by the operating system.
