Using paging as the core mechanism to support virtual memory can lead to high performance
overheads. By chopping the address space into small, fixed-sized units (pages), paging requires
a large amount of mapping information. Because that mapping information is generally stored in
physical memory, paging logically requires an extra memory lookup for each virtual address generated
by the program. Going to memory for translation information before every instruction fetch or explicit
load or store is prohibitively slow. And thus our problem: how can we speed up address translation?
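To make that cost concrete, here is a minimal C sketch of paged translation with a simple linear page table. The sizes, the `translate()` helper, and the `page_table` array are illustrative assumptions rather than real MMU hardware, but they show the extra memory reference every access must pay just to fetch the page-table entry (PTE).

```c
#include <stdint.h>
#include <stdio.h>

// Illustrative sizes: a 16-bit address space with 4 KB pages (16 pages total).
#define PAGE_SHIFT  12
#define OFFSET_MASK 0xFFF
#define NUM_PAGES   16

// A minimal page-table entry: a valid bit plus a physical frame number (PFN).
typedef struct { int valid; uint32_t pfn; } pte_t;

// The page table lives in memory; every translation must read it.
static pte_t page_table[NUM_PAGES];

// Translate a virtual address: one memory access for the PTE
// before the actual data access can even begin.
int translate(uint32_t vaddr, uint32_t *paddr) {
    uint32_t vpn = vaddr >> PAGE_SHIFT;   // extract the virtual page number
    pte_t pte = page_table[vpn];          // the EXTRA memory reference
    if (!pte.valid)
        return -1;                        // real hardware would raise a fault
    *paddr = (pte.pfn << PAGE_SHIFT) | (vaddr & OFFSET_MASK);
    return 0;
}

int main(void) {
    page_table[1] = (pte_t){ .valid = 1, .pfn = 7 };
    uint32_t paddr;
    if (translate(0x1ABC, &paddr) == 0)
        printf("VA 0x1ABC -> PA 0x%X\n", (unsigned)paddr);  // prints 0x7ABC
    return 0;
}
```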
When we want to make things fast, the OS usually needs some help. And help often comes from the
OS's old friend: the hardware. To speed address translation, we are going to add what is called a
translation-lookaside buffer, or TLB. A TLB is part of the chip's memory-management unit (MMU),
and is simply a hardware cache of popular virtual-to-physical address translations; thus a better name
would be an address-translation cache. Upon each virtual memory reference, the hardware first
checks the TLB to see if the desired translation is held therein; if so, the translation is performed
quickly without having to consult the page table (which has all translations). Because of their tremendous
performance impact, TLBs in a real sense make virtual memory possible.
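The check-the-TLB-first flow just described can be sketched in a few lines of C, extending the earlier example. This is an illustrative model, not real hardware: `tlb_lookup()`, the modulo placement policy, and the in-line page-table walk on a miss (mimicking a hardware-managed TLB) are simplifying assumptions. The key point is that a hit skips the page-table access entirely.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT  12
#define OFFSET_MASK 0xFFF
#define NUM_PAGES   16
#define TLB_SIZE    4

typedef struct { int valid; uint32_t pfn; } pte_t;
typedef struct { bool valid; uint32_t vpn, pfn; } tlb_entry_t;

static pte_t page_table[NUM_PAGES];   // full set of translations, in memory
static tlb_entry_t tlb[TLB_SIZE];     // small cache of popular translations

// Look up a VPN in the TLB; returns true on a hit.
static bool tlb_lookup(uint32_t vpn, uint32_t *pfn) {
    for (int i = 0; i < TLB_SIZE; i++)
        if (tlb[i].valid && tlb[i].vpn == vpn) { *pfn = tlb[i].pfn; return true; }
    return false;
}

// Translate: consult the TLB first; only on a miss touch the page table.
int translate(uint32_t vaddr, uint32_t *paddr) {
    uint32_t vpn = vaddr >> PAGE_SHIFT, pfn;
    if (!tlb_lookup(vpn, &pfn)) {             // TLB miss: extra memory reference
        pte_t pte = page_table[vpn];
        if (!pte.valid)
            return -1;                        // real hardware would raise a fault
        pfn = pte.pfn;
        // Install the translation so future references to this page hit.
        tlb[vpn % TLB_SIZE] = (tlb_entry_t){ .valid = true, .vpn = vpn, .pfn = pfn };
    }
    *paddr = (pfn << PAGE_SHIFT) | (vaddr & OFFSET_MASK);
    return 0;
}

int main(void) {
    page_table[1] = (pte_t){ .valid = 1, .pfn = 7 };
    uint32_t paddr;
    translate(0x1ABC, &paddr);                // miss: walks the page table
    translate(0x1DEF, &paddr);                // hit: served from the TLB
    printf("VA 0x1DEF -> PA 0x%X\n", (unsigned)paddr);  // prints 0x7DEF
    return 0;
}
```

Because programs tend to touch the same pages repeatedly, most references follow the fast hit path, which is exactly why the TLB pays off so handsomely.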