What is the write-allocate policy?

In a single-processor system with a cache, there are usually two write policies: write-through and write-back. Both describe what happens on a write hit: write-through updates both the cache and main memory, while write-back updates only the cache and uses a dirty bit to record that the line has been modified; the modified contents are written back to main memory only when that cache line is evicted. What about a write miss, i.e. when the address being written is not in the cache? One option is to write the data directly to main memory; this is called the no-write-allocate policy. The other is to first fetch the block containing the target address from main memory into the cache and then write the cache; this is called the write-allocate policy.
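As a concrete illustration, here is a minimal C sketch, not a real simulator, of how a single direct-mapped cache level could handle a store under the write-through/write-back and write-allocate/no-write-allocate combinations. All names (cache_t, lookup, allocate, and so on) are made up for this example.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define LINE_SIZE 64
#define NUM_LINES 256
#define MEM_SIZE  (1 << 20)

static uint8_t memory[MEM_SIZE];   /* stand-in for main memory */

typedef struct { uint64_t tag; bool valid, dirty; uint8_t data[LINE_SIZE]; } line_t;

typedef struct {
    line_t lines[NUM_LINES];   /* direct-mapped, 256 x 64-byte lines */
    bool   write_back;         /* false = write-through              */
    bool   write_allocate;     /* false = no-write-allocate          */
} cache_t;

static line_t *lookup(cache_t *c, uint64_t addr, bool *hit)
{
    line_t *l = &c->lines[(addr / LINE_SIZE) % NUM_LINES];
    *hit = l->valid && l->tag == addr / LINE_SIZE;
    return l;
}

/* Evict the line that maps to addr (writing it back if dirty), then refill it. */
static line_t *allocate(cache_t *c, uint64_t addr)
{
    bool hit;
    line_t *l = lookup(c, addr, &hit);
    if (l->valid && l->dirty)
        memcpy(&memory[l->tag * LINE_SIZE], l->data, LINE_SIZE);
    l->tag = addr / LINE_SIZE;
    l->valid = true;
    l->dirty = false;
    memcpy(l->data, &memory[l->tag * LINE_SIZE], LINE_SIZE);
    return l;
}

void cache_store(cache_t *c, uint64_t addr, uint8_t val)
{
    bool hit;
    line_t *l = lookup(c, addr, &hit);

    if (!hit) {                        /* ---------- write miss ---------- */
        if (!c->write_allocate) {      /* no-write-allocate:               */
            memory[addr] = val;        /* update memory only, skip cache   */
            return;
        }
        l = allocate(c, addr);         /* write-allocate: fetch the block,
                                          then continue as if it were a hit */
    }

    l->data[addr % LINE_SIZE] = val;   /* ---------- write hit ----------- */
    if (c->write_back)
        l->dirty = true;               /* memory updated only on eviction  */
    else
        memory[addr] = val;            /* write-through: update memory too */
}

int main(void)
{
    cache_t c = { .write_back = true, .write_allocate = true };
    cache_store(&c, 0x1234, 0xAB);     /* miss -> allocate -> update cached copy */
    printf("cached byte = 0x%02X, memory byte = 0x%02X\n",
           c.lines[(0x1234 / LINE_SIZE) % NUM_LINES].data[0x1234 % LINE_SIZE],
           memory[0x1234]);            /* memory still 0 until the line is evicted */
    return 0;
}
```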

In a multiprocessor system with caches, write misses still occur, and both no-write-allocate and write-allocate remain usable. The only question to discuss is what the snooping cache should do when a write miss happens. Suppose P1 performs the write and P2's cache is the one snooping. Whether P1's write is a hit or a miss, P2's cache (the snooping cache) checks whether the address P1 is writing is present in P2's cache. Under a write-invalidate protocol, the snooping cache either marks the corresponding line invalid or does nothing (because it does not hold that line); it certainly never loads a block into its own cache just because another processor is writing to it, since that would serve no purpose.
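Continuing the sketch above, the snoop-side handling under write-invalidate might look like the following. snoop_write and bus_msg_t are hypothetical names, and the code reuses cache_t, line_t, and lookup from the previous sketch; the point is that the snooper only invalidates a line it already holds and never allocates one.

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct { uint64_t addr; } bus_msg_t;   /* write announced by P1 on the bus */

/* Reuses cache_t / line_t / lookup() from the sketch above. */
void snoop_write(cache_t *snooper, bus_msg_t msg)
{
    bool hit;
    line_t *l = lookup(snooper, msg.addr, &hit);

    if (hit)
        l->valid = false;   /* our copy is now stale: invalidate it */
    /* else: we never held the line -- do nothing.  A snooping cache
     * never allocates a line just because another processor wrote to
     * it; that would be pointless. */
}
```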

Based on this analysis, I believe the first sentence of the second paragraph on p. 597 of the original English edition, "Another variant is loading the snooping cache on write misses", is imprecise. Moreover, this is the only place in that paragraph where "snooping cache" appears; it is never mentioned again. So I think the accurate wording should be: "Another variant is loading the cache on write misses".

The text above is reposted from a teacher. If you want diagrams, the Wikipedia article below illustrates this nicely.

http://en.wikipedia.org/wiki/Cache_(computing)

In fact the ARM manual also covers this in detail, though it takes more patience to read.

http://infocenter.arm.com/help/topic/com.arm.doc.ddi0488c/DDI0488C_cortex_a57_mpcore_r1p0_trm.pdf

Write-Back Read-Write-Allocate

This is expected to be the most common and highest performance memory type. Any read or write to this memory type searches the cache to determine if the line is resident. If it is, the line is read or updated. A store that hits a Write-Back cache line does not update main memory. If the required cache line is not in the cache, one or more cache lines is requested from the L2 cache. The L2 cache can obtain the lines from its cache, from another coherent L1 cache, or from memory. The line is then placed in the L1 cache, and the operation completes from the L1 cache.

Write-Back No-Allocate

Use Write-Back No-Allocate memory to access data that might be in the cache because other virtual pages that are mapped to the same Physical Address are Write-Back Read-Write-Allocate. Write-Back No-Allocate memory avoids polluting the caches when accessing large memory structures that are used only one time. The cache is searched and the correct data is delivered or updated if the data resides in one of the caches. However, if the request misses the L1 or L2 cache, the line is not allocated into that cache. For a read that misses all caches, the required data is read to satisfy the memory request, but the line is not added to the cache. For a write that misses in all caches, the modified bytes are updated in memory.

Note

The No-Allocate allocation hint is only a performance hint. The processor might, in some cases, allocate Write-Back No-Allocate memory into the cache.
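To see why No-Allocate exists at all, the following user-space C sketch demonstrates the pollution effect it targets. Ordinary application code cannot choose the memory type (that is set in the page tables by the OS), so this only shows the problem: after one streaming pass over a large buffer with an allocating memory type, a small hot array has been evicted and its next pass runs slower. The buffer sizes and timing method here are arbitrary choices for the example.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define HOT_SIZE    (32u * 1024)           /* fits in a typical L1 D-cache */
#define STREAM_SIZE (64u * 1024 * 1024)    /* far larger than L2           */

static double now_ms(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1e3 + ts.tv_nsec / 1e6;
}

static uint64_t sum(const uint8_t *p, size_t n)
{
    uint64_t s = 0;
    for (size_t i = 0; i < n; i++)
        s += p[i];
    return s;
}

int main(void)
{
    uint8_t *hot    = malloc(HOT_SIZE);
    uint8_t *stream = malloc(STREAM_SIZE);
    if (!hot || !stream)
        return 1;
    memset(hot, 1, HOT_SIZE);              /* touch pages so they are really backed */
    memset(stream, 1, STREAM_SIZE);

    uint64_t sink = sum(hot, HOT_SIZE);    /* warm the hot array                    */

    double t0 = now_ms();
    sink += sum(hot, HOT_SIZE);            /* hot pass: mostly cache hits           */
    double t1 = now_ms();

    sink += sum(stream, STREAM_SIZE);      /* one-time streaming pass: with a
                                              Read-Allocate type it evicts the
                                              hot array                             */
    double t2 = now_ms();
    sink += sum(hot, HOT_SIZE);            /* hot data must now be refetched        */
    double t3 = now_ms();

    printf("hot pass before streaming: %.3f ms\n", t1 - t0);
    printf("hot pass after  streaming: %.3f ms\n", t3 - t2);
    printf("(checksum %llu)\n", (unsigned long long)sink);
    free(hot);
    free(stream);
    return 0;
}
```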

6.4.4 Non-cacheable streaming enhancement

You can enable the CPUACTLR[24], Non-cacheable streaming enhancement bit, only if your memory system meets the requirement that cache line fill requests from the multiprocessor are atomic. Specifically, if the multiprocessor requests a cache line fill on the AXI master read address channel, any given write request from a different master is ordered completely before or after the cache line fill read. This means that after the memory read for the cache line fill starts, writes from any other master to the same cache line are stalled until that memory read completes. Setting this bit enables higher performance for applications with streaming reads from memory types that do not allocate into the cache.

Because it is possible to build an AXI interconnect that does not comply with the specified requirement, the CPUACTLR[24] bit defaults to disabled.

So memory operations going from bus to bus can skip the cache line entirely. Isn't that just DMA?
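For reference, programming CPUACTLR[24] would look roughly like the sketch below. This is implementation-defined, privileged state that boot firmware (EL3) normally owns, and EL1 writes must additionally be permitted by ACTLR_EL2/ACTLR_EL3. The system-register encoding S3_1_C15_C2_0 for CPUACTLR_EL1 is taken from the A57 TRM, but treat this purely as an illustration; do not set the bit unless the interconnect meets the atomic line-fill requirement quoted above.

```c
#include <stdint.h>

/* Sketch only: set CPUACTLR[24] ("Non-cacheable streaming enhancement")
 * on a Cortex-A57.  Must run at a sufficiently privileged exception
 * level on AArch64; register encoding assumed from the A57 TRM. */
static inline void enable_noncacheable_streaming(void)
{
    uint64_t val;

    __asm__ volatile("mrs %0, S3_1_C15_C2_0" : "=r"(val));  /* read CPUACTLR_EL1   */
    val |= (1ULL << 24);                                     /* set bit 24           */
    __asm__ volatile("msr S3_1_C15_C2_0, %0" :: "r"(val));   /* write it back        */
    __asm__ volatile("isb");                                 /* synchronize context  */
}
```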

6.4.7 Preload instruction behavior

The multiprocessor supports the PLD, PLDW, and PRFM prefetch hint instructions. For Normal Write-Back Cacheable memory page, the PLD, PLDW, and PRFM L1 instructions cause the line to be allocated to the L1 data cache of the executing processor. The PLD instruction brings the line into the cache in Exclusive or Shared state and the PLDW instruction brings the line into the cache in Exclusive state. The preload instruction cache, PLDI, is treated as a NOP. PLD and PLDW instructions are performance hints instructions (?) only and might be dropped in some cases.

How should "performance hints instructions" be understood? After all, "hint" means a suggestion or cue.

For now, read it as: these instructions are only hints intended to improve performance; they do not affect correctness, so the processor is free to honour or drop them.
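From C, these hints are usually reached through the compiler builtin shown below; on AArch64, GCC and Clang typically compile __builtin_prefetch() to a PRFM instruction (the AArch64 counterpart of PLD/PLDW). Whether the core honours or drops the prefetch, the loop computes the same result, which is exactly what makes them performance hint instructions. The prefetch distance of 16 elements is an arbitrary example value.

```c
#include <stddef.h>
#include <stdint.h>

void scale(uint32_t *dst, const uint32_t *src, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        if (i + 16 < n) {
            __builtin_prefetch(&src[i + 16], 0, 3);  /* read hint  -> e.g. PRFM PLDL1KEEP */
            __builtin_prefetch(&dst[i + 16], 1, 3);  /* write hint -> e.g. PRFM PSTL1KEEP */
        }
        dst[i] = src[i] * 2;                         /* correct with or without the hints */
    }
}
```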
