Translation Lookaside Buffer

Computer Organization and Architecture: Designing for Performance, Ninth Edition

In principle, then, every virtual memory reference can cause two physical memory accesses: one to fetch the appropriate page table entry, and one to fetch the desired data. Thus, a straightforward virtual memory scheme would have the effect of doubling the memory access time. To overcome this problem, most virtual memory schemes make use of a special cache for page table entries, usually called a translation lookaside buffer (TLB). This cache functions in the same way as a memory cache and contains those page table entries that have been most recently used. Figure 8.18 is a flowchart that shows the use of the TLB. By the principle of locality, most virtual memory references will be to locations in recently used pages. Therefore, most references will involve page table entries in the cache. Studies of the VAX TLB have shown that this scheme can significantly improve performance [CLAR85, SATY81].
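To make the idea concrete, the following is a minimal sketch of a TLB modeled as a small, fully associative cache of page table entries searched by virtual page number. The entry count, field names, and the absence of a replacement policy are illustrative assumptions, not details of any particular processor.

    #include <stdbool.h>
    #include <stdint.h>

    #define TLB_ENTRIES 16   /* illustrative size; real TLBs vary widely */

    typedef struct {
        bool     valid;
        uint32_t vpn;    /* virtual page number, used as the tag */
        uint32_t frame;  /* frame number copied from the page table entry */
    } tlb_entry_t;

    static tlb_entry_t tlb[TLB_ENTRIES];

    /* Returns true on a hit and writes the frame number; on a miss the
       caller must walk the page table in memory and then refill the TLB. */
    bool tlb_lookup(uint32_t vpn, uint32_t *frame)
    {
        for (int i = 0; i < TLB_ENTRIES; i++) {
            if (tlb[i].valid && tlb[i].vpn == vpn) {
                *frame = tlb[i].frame;
                return true;   /* hit: no extra memory access for translation */
            }
        }
        return false;          /* miss: the page table must be consulted */
    }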

Note that the virtual memory mechanism must interact with the cache system
(not the TLB cache, but the main memory cache). This is illustrated in Figure 8.19.
A virtual address will generally be in the form of a (page number, offset) pair. First, the
memory system consults the TLB to see if the matching page table entry is present.
If it is, the real (physical) address is generated by combining the frame number with
the offset. If not, the entry is accessed from a page table. Once the real address is
generated, which is in the form of a tag and a remainder, the cache is consulted to
see if the block containing that word is present (see Figure 4.5). If so, it is returned
to the processor. If not, the word is retrieved from main memory.
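The sequence just described can be sketched in code. The helper routines below (page_table_walk, tlb_insert, cache_lookup, memory_read) are hypothetical stand-ins for the hardware steps of Figure 8.19, and the 4 KB page size is an assumption used only to split the address into page number and offset.

    #include <stdbool.h>
    #include <stdint.h>

    #define OFFSET_BITS 12                      /* assume 4 KB pages */
    #define OFFSET_MASK ((1u << OFFSET_BITS) - 1)

    bool     tlb_lookup(uint32_t vpn, uint32_t *frame);     /* as sketched above */
    uint32_t page_table_walk(uint32_t vpn);                 /* may itself go to disk */
    void     tlb_insert(uint32_t vpn, uint32_t frame);      /* refill TLB after a miss */
    bool     cache_lookup(uint32_t paddr, uint32_t *word);  /* main-memory cache probe */
    uint32_t memory_read(uint32_t paddr);                   /* fetch from main memory */

    uint32_t load_word(uint32_t vaddr)
    {
        uint32_t vpn    = vaddr >> OFFSET_BITS;             /* page number part */
        uint32_t offset = vaddr & OFFSET_MASK;
        uint32_t frame;

        /* Step 1: translate. Try the TLB first; fall back to the page table. */
        if (!tlb_lookup(vpn, &frame)) {
            frame = page_table_walk(vpn);
            tlb_insert(vpn, frame);
        }
        uint32_t paddr = (frame << OFFSET_BITS) | offset;   /* real (physical) address */

        /* Step 2: present the real address (tag + remainder) to the cache. */
        uint32_t word;
        if (cache_lookup(paddr, &word))
            return word;                                    /* cache hit */
        return memory_read(paddr);                          /* cache miss: go to memory */
    }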

The reader should be able to appreciate the complexity of the processor hardware involved in a single memory reference. The virtual address is translated into a real address. This involves reference to a page table, which may be in the TLB, in main memory, or on disk. The referenced word may be in cache, in main memory, or on disk. In the latter case, the page containing the word must be loaded into main memory and its block loaded into the cache. In addition, the page table entry for that page must be updated.
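A rough back-of-the-envelope calculation shows why each level of this hierarchy matters. The latencies and hit rates here are illustrative assumptions, not figures from the text: a 1 ns cache hit, a 100 ns main memory access, a 10 ms page fault, a 95% cache hit rate, and a page fault on one reference in a million. Ignoring translation time, the expected cost of a reference is roughly

    1 ns + 0.05 x 100 ns + 0.000001 x 10,000,000 ns  =  1 + 5 + 10  =  about 16 ns

Under these assumptions, even a one-in-a-million page fault contributes as much to the average as all the cache misses combined, which is why keeping a process's working set resident in main memory is so important.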
