PatentTips - Write Combining Buffer for Sequentially Addressed Partial Line Operations

SUMMARY OF THE INVENTION

The present invention pertains to a write combining buffer for use in a
microprocessor. The microprocessor fetches data and instructions which are
stored in an external main memory. The data and instructions are sent over a
bus. The microprocessor then processes the data according to the instructions
received. When the microprocessor completes a task, it writes the data back to
the main memory for storage. In the present invention, a write combining buffer
is used for combining the data of at least two write commands into a single data
set, wherein the combined data set is transmitted over the bus in one clock
cycle rather than two or more clock cycles. Thereby, bus traffic is minimized.

In the currently preferred embodiment, the write combining buffer is
comprised of a single line having a 32-byte data portion, a tag portion, and a
validity portion. The tag entry specifies the address corresponding to the data
currently stored in the data portion. There is one valid bit corresponding to
each byte of the data portion which specifies whether that byte currently
contains useful data. So long as subsequent write operations to the write
combining buffer result in hits, the data is written to the buffer's data
portion. In other words, write hits to the write combining buffer result in the
data being combined with previous write data. But when a miss occurs, the line
is reallocated, and the old data is written to the main memory. Only those bytes
which have been written to, as indicated by the valid bits, are written back to
the main memory. Each time the write combining buffer is allocated, the valid
bits are cleared. Thereupon, the new data and its address are written to the
write combining buffer.
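
A minimal C sketch may make the layout described above concrete. The 32-byte data portion and the per-byte valid bits follow the summary; the type and field names are illustrative assumptions, not taken from the patent.

```c
#include <stdint.h>

#define WC_LINE_BYTES 32  /* size of the data portion described above */

/* One write combining buffer line: tag, 32-byte data portion, and one
 * valid bit per byte of the data portion. */
typedef struct {
    uint32_t tag;                  /* address of the line currently held        */
    uint8_t  data[WC_LINE_BYTES];  /* combined write data                       */
    uint32_t valid;                /* bit i set => byte i contains useful data  */
} wc_buffer_t;

/* Each time the buffer is allocated, the valid bits are cleared and the new
 * address is recorded, as described in the summary. */
static void wc_allocate(wc_buffer_t *wc, uint32_t line_addr)
{
    wc->tag   = line_addr;
    wc->valid = 0;
}
```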

DETAILED DESCRIPTION

Referring to FIG. 1, the computer system upon which a preferred embodiment of
the present invention is implemented is shown as 100. Computer system 100
comprises a bus or other communication means 101 for communicating information,
and a processing means 102 coupled with bus 101 for processing information.
Processor 102 includes, but is not limited to, microprocessors such as the Intel®
architecture microprocessors, PowerPC®, Alpha®, etc. System 100 further
comprises a random access memory (RAM) or other dynamic storage device 104
(referred to as main memory), coupled to bus 101 for storing information and
instructions to be executed by processor 102. Main memory 104 also may be used
for storing temporary variables or other intermediate information during
execution of instructions by processor 102. Computer system 100 also comprises a
read only memory (ROM) and/or other static storage device 106 coupled to bus 101
for storing static information and instructions for processor 102, and a data
storage device 107 such as a magnetic disk or optical disk and its corresponding
disk drive. Data storage device 107 is coupled to bus 101 for storing
information and instructions.


Referring now to FIG. 2, a block diagram illustrating an exemplary processor
102 incorporating the teachings of the present invention is shown. The exemplary
processor 102 comprises an execution unit 201, a bus controller 202, a data
cache controller 203, a data cache unit 204, and an instruction fetch and issue
unit 205 with an integrated instruction cache 206. The elements 201-206 are
coupled to each other as illustrated. Together they cooperate to fetch, issue,
execute, and save execution results of instructions in a pipelined manner.


In the currently preferred embodiment, a write combining buffer 208 is
implemented as part of the data cache unit 204. The write combining buffer 208
collects write operations which belong to the same cache line address. Several
small write operations (e.g., string moves, string copies, bit block transfers
in graphics applications, etc.) are combined by the write combining buffer 208
into a single, larger write operation. It is this larger write which is
eventually sent over the bus, thereby maximizing the efficiency of bus
transmission. In one embodiment, the write combining buffer 208 resides within
the fill buffer 211.

The write combining function is an architectural extension to the cache
protocol. Special micro-operations (a uop is a simple instruction comprising its
micro-opcode, source fields, destination, immediates, and flags) are defined for
string store operations and for stores to the USWC memory type, causing the data
cache unit to default to a write combining protocol instead of the standard
protocol. Write combining is allowed for the following memory types: Uncached
Speculatable Write Combining (USWC), Writeback (WB), and Restricted Caching
(RC). The USWC memory type is intended for situations where use of the cache
(RC). The USWC memory type is intended for situations where use of the cache
should be avoided for performance reasons. The USWC memory type is also intended
for situations where data must eventually be flushed out of the processor, but
where delaying and combining writes is permissible for a short time. The WB
memory type is conventional writeback cached memory. The RC memory type is
intended for use in frame buffers. It is essentially the same as the WB memory
type, except that no more than a given (e.g., 32K) amount of RC memory will ever
be cached. The write combining protocol still maintains coherency with external
writes; it avoids the coherence penalty by deferring coherence actions until
eviction.
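
As a rough illustration of the rule above, the following sketch (continuing the earlier struct example) encodes the memory types and the check for whether write combining is permitted. The enum names, including the plain uncached MEM_UC used only as a contrast, are assumptions.

```c
#include <stdbool.h>

/* Memory types named above; only USWC, WB, and RC allow write combining. */
typedef enum { MEM_UC, MEM_USWC, MEM_WB, MEM_RC } mem_type_t;

static bool wc_allowed(mem_type_t type)
{
    return type == MEM_USWC || type == MEM_WB || type == MEM_RC;
}
```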

An implementation of the current invention is possible whereby there are
multiple WC buffers, permitting interleaved writes to different addresses to be
combined. This would create a weakly ordered memory model, however, and
therefore could not be used by some existing programs.

The currently preferred embodiment has only one WC buffer, evicting when a miss
occurs. This permits the WC buffer to be used when write ordering is required;
for example, it permits the invention to be used in the existing Intel®
Architecture block memory instructions REP STOSX and REP MOVSX.

The currently preferred embodiment uses a structure that already exists in
the Intel® Architecture microprocessor, the fill buffers. The fill buffers are a
set of several (4) cache lines with byte granularity valid and dirty bits, used
by the out-of-order microprocessor to create a non-blocking cache. The WC buffer
is a single fill buffer marked to permit WC stores to be merged. When evicted,
the WC fill buffer waits for the normal fill buffer eviction process.

In the currently preferred embodiment, only one write combining buffer is
implemented. Physically, any fill buffer can be used as the write combining
buffer. Since only one logical write combining buffer is provided, when a second
write combining buffer is needed, an eviction process is initiated. During
eviction, one of the following actions can occur. If all the bytes are written,
and the write combining buffer is of cacheable (i.e., RC or WB) type, then the
data cache unit requests an AllocM transaction to the bus. An AllocM transaction
is a bus transaction that causes all other processors to discard stale
copies of the cache line without supplying the data. When this transaction is
completed, the line is placed in the cache. If all the bytes are not written,
and the write combining buffer is of a cacheable (i.e., RC or WB) type, then the
data cache unit requests a read-for-ownership (RFO) transaction to the bus. The
RFO transaction entails a read directing any other processor to supply data and
relinquish ownership. Thereupon, the line is placed in the cache. If all the
bytes are written and the write combining buffer is of the USWC type, then the
data cache unit requests a writeback transaction to the bus. If all the bytes
are not written, and the write combining buffer is of the USWC type, then the
data cache unit evicts the write combining buffer. The eviction is performed as
a sequence of up to four partial writes of four sets of data. The data cache
unit supplies eight byte enables to the bus with each set. If a data set does
not contain any written bytes, the data cache unit does not issue a partial
write for that set.
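
The four eviction cases above can be summarized in a small decision function. This is a sketch under the stated assumptions (32 per-byte valid bits, the memory types from the earlier enum); the transaction names simply mirror the text.

```c
/* Bus transaction chosen when the write combining buffer is evicted. */
typedef enum {
    TXN_ALLOCM,          /* fully written, cacheable: others discard stale copies, no data supplied */
    TXN_RFO,             /* partially written, cacheable: read for ownership, then place in cache   */
    TXN_WRITEBACK,       /* fully written, USWC: writeback of the whole line                        */
    TXN_PARTIAL_WRITES   /* partially written, USWC: up to four partial writes with byte enables    */
} evict_txn_t;

static evict_txn_t wc_eviction_transaction(const wc_buffer_t *wc, mem_type_t type)
{
    bool all_written = (wc->valid == 0xFFFFFFFFu);           /* all 32 bytes valid */
    bool cacheable   = (type == MEM_WB || type == MEM_RC);

    if (cacheable)
        return all_written ? TXN_ALLOCM : TXN_RFO;
    return all_written ? TXN_WRITEBACK : TXN_PARTIAL_WRITES; /* USWC */
}
```

In the partial-write case, a real implementation would additionally skip any set whose valid bits are all clear, as the text notes.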

FIG. 3 shows a more detailed block diagram of the data cache unit 300. The
data cache unit 300 includes a level 1 data cache 301 and a write combining
buffer 302. The level 1 data cache 301 is a standard SRAM writeback cache
memory. In a writeback configuration, the CPU updates the cache during a write
operation. The actual main memory is updated when the line is discarded from the
cache. Level 1 data cache 301 includes a data cache RAM portion 303 which is
used to store copies of data or instructions. A separate tag RAM portion 304 is
used as a directory of entries in data RAM portion 303. A number of tags
corresponding to each entry are stored in tag RAM 304. A tag is that portion of
the line address that is used by the cache's address mapping algorithm to
determine whether the line is in the cache.


Write combining buffer 302 is comprised of a single line having a data
portion 305, a tag portion 306, and a validity portion 307. Data portion 305 can
store up to 32 bytes of user data. Not every byte need contain data. For
example, the execution unit may choose to store data in alternating bytes. The
validity portion 307 is used to store valid bits corresponding to each data byte
of data portion 305. The valid bits indicate which of the bytes of data portion
305 contain useful data. In the above example wherein data is stored in
alternating bytes, every other valid bit is set. In this manner, when the line
in the write combining buffer 302 is written to the level 1 data cache 301, only
those bytes containing valid data are stored.
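
Continuing the sketch, writing a byte into the buffer sets the corresponding valid bit; after writing only the even-numbered bytes (the alternating-byte example above), every other valid bit is set.

```c
/* Merge one byte into the write combining buffer and mark it valid. */
static void wc_write_byte(wc_buffer_t *wc, unsigned offset, uint8_t value)
{
    wc->data[offset]  = value;
    wc->valid        |= 1u << offset;   /* this byte now contains useful data */
}

/* After wc_write_byte() on offsets 0, 2, 4, ..., 30 only,
 * wc->valid == 0x55555555 (every other valid bit set). */
```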

When data is being written to the data cache unit 300, there are three
possible scenarios that can occur. First, there could be a level 1 data cache
hit. A cache hit is defined as a data or instruction cycle in which the
information being read or written is currently stored in that cache. In this
situation, the data is directly copied to the level 1 data cache 301. For
example, a write combine store byte uop (i.e., a WC Stob instruction) having an
address <addr1> and data <data1> falls in this scenario because the tag RAM 304
of level 1 data cache 301 currently contains a tag of <addr1>. Thus, <data1>
is stored in the data portion 303 of the level 1 data cache 301.

In the second scenario, the write operation results in a hit of the write
combining buffer 302. In this case, the data is stored in the write combining
data portion 305. For each byte that is written, the corresponding valid bit is
set. For example, a write combine store byte uop (i.e., WC Stob) having an
address of <addr2> has its data <data2> written to the data portion
305 of write combining buffer 302 because there is a miss of <addr2> in
the level 1 data cache 301, and there is a hit of <addr2> in the write
combining buffer 302. Any subsequent write operations that fall within the
32-byte data field will be written to the write combining buffer 302 until that
line eventually is evicted and a new address (i.e., tag) is assigned. For
example, suppose that the tag of the write combining buffer contains the address
0x12340. Subsequently, a write combine store word uop (i.e., WC Stow) to 0x12346
is received. Since the 0x12346 address falls within the 32-byte range beginning
at 0x12340, that word is stored in the write combining buffer. In contrast, if a
WC Stow request to address 0x12361 is received, the write combining buffer must
be reallocated because that address falls outside the 32-byte boundary.
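
The 32-byte boundary test in the example above amounts to comparing line-aligned addresses. A minimal sketch, assuming the tag holds a 32-byte-aligned line address:

```c
#define WC_LINE_MASK (~(uint32_t)(WC_LINE_BYTES - 1))  /* clears the low 5 bits */

/* A store hits the write combining buffer when its address falls in the same
 * 32-byte line as the buffer's tag. */
static bool wc_hit(const wc_buffer_t *wc, uint32_t addr)
{
    return (addr & WC_LINE_MASK) == (wc->tag & WC_LINE_MASK);
}

/* With the tag at 0x12340:
 *   0x12346 & ~0x1f == 0x12340  -> hit, the word is merged into the buffer
 *   0x12361 & ~0x1f == 0x12360  -> miss, the buffer must be reallocated
 */
```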

In the third scenario, there is a complete miss to both the level 1 data
cache 301 and the write combining buffer 302. For this scenario, the contents in
the write combining buffer 302 are purged to the main memory (not shown). All of
the valid bits are then cleared. The new data is stored in the data portion 305;
its address is stored in the tag portion 306; and the valid bits corresponding
to those bytes of data which were written are set. For example, a write combine
store byte uop (i.e., WC Stob) having an address of <addr3> will result in
a miss of both the level 1 data cache 301 and the write combining buffer 302.
Hence, the <data2> currently stored in write combining buffer 302 is
written to the main memory at a location specified by <addr2>. Thereupon,
<data3> can be stored in the data portion 305 of write combining buffer
302. Its address <addr3> is stored in the tag portion 306, and the
appropriate valid bit(s) are set. It should be noted that the execution of the
write combining procedure is transparent to the application program.
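
The three scenarios can be pulled together into a single store path, continuing the earlier sketches. The level 1 cache and memory interfaces below (l1_hit, l1_store_byte, flush_to_memory) are hypothetical stand-ins, declared only so the sketch is self-contained.

```c
/* Hypothetical interfaces to the level 1 cache and main memory. */
bool l1_hit(uint32_t addr);
void l1_store_byte(uint32_t addr, uint8_t value);
void flush_to_memory(const wc_buffer_t *wc);  /* write back only the valid bytes */

/* Handle a write combine store of one byte, following the three scenarios. */
static void wc_store(wc_buffer_t *wc, uint32_t addr, uint8_t value)
{
    if (l1_hit(addr)) {                          /* scenario 1: level 1 cache hit */
        l1_store_byte(addr, value);
        return;
    }
    if (!wc_hit(wc, addr)) {                     /* scenario 3: miss both         */
        flush_to_memory(wc);                     /* purge old contents            */
        wc_allocate(wc, addr & WC_LINE_MASK);    /* new tag, valid bits cleared   */
    }
    wc_write_byte(wc, addr & (WC_LINE_BYTES - 1), value);  /* scenario 2: combine */
}
```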

In one embodiment, the DCU 300 includes the fill buffers 308. Fill buffers
308 are comprised of multiple lines 309. Each of these multiple lines 309 is
divided into state, data, tag, and validity fields. The state for one of these
lines can be write combining (WC) 311.
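
A rough C view of a fill buffer line as described here, with a state field that can mark the line as the write combining buffer; the state names other than WC are assumptions.

```c
/* One fill buffer line: state, data, tag, and validity fields. */
typedef enum { FB_FREE, FB_PENDING, FB_WC } fb_state_t;  /* FB_WC = write combining (311) */

typedef struct {
    fb_state_t state;
    uint8_t    data[WC_LINE_BYTES];
    uint32_t   tag;
    uint32_t   valid;   /* byte-granularity valid bits, as in the WC buffer */
} fill_buffer_line_t;
```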

FIG. 4 is a flow chart showing the steps of the write combining procedure of
the present invention. The processor continues its execution process until a
write combine (WC) store request is encountered, step 401. A WC store request
can be generated in several ways. In one instance, a special write combine uop
is used to indicate that a store is to be write combined. In another instance, a
particular block of memory is designated for applying the write combining
feature. In other words, any store having an address that falls within that
particular block of memory is designated as a write combine store.
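
A store can thus be tagged for write combining either by the uop itself or by an address check against the designated block. A minimal sketch of the latter, with hypothetical block bounds:

```c
/* True when addr lies inside the memory block designated for write combining.
 * wc_block_base and wc_block_limit are hypothetical configuration values.    */
static bool is_wc_store(uint32_t addr, uint32_t wc_block_base, uint32_t wc_block_limit)
{
    return addr >= wc_block_base && addr < wc_block_limit;
}
```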


Once a write combine store request is received, a determination is made as to
whether the request results in a write combining buffer hit, step 402. This is
accomplished by comparing the current tag of the write combining buffer with the
store's address. If there is a hit (i.e., the two addresses match), the store is
made to the write combining buffer, step 403. The corresponding valid bit(s) are
set, step 404. The processor then waits for the next write combine store
request, step 401. So long as subsequent stores are to that same line (i.e., the
WC store falls within the 32-byte range), stores continue to be made to the
write combining buffer.

Otherwise, if it is determined in step 402 that the store results in a miss
to the write combining buffer, then steps 405-410 are performed. In step 405,
the processor generates a request for allocation. An allocation refers to the
assignment of a new value to a tag. This occurs during a line fill operation,
wherein the information is transferred into the cache from the next outer level
(e.g., from the write combining buffer to the level 1 cache, level 2 cache, or
main memory). The old WC buffer (if any) is marked as being obsolete, step 406.
The write combining buffer is allocated with the new address, step 407. The
valid bits are all cleared, step 408. Next, steps 403 and 404 described above
are executed. Furthermore, following step 406, the old contents of the WC buffer
are written to memory, step 409. Thereupon, the old WC buffer can be reused,
step 410.

FIG. 5 shows the write combining buffer 302 of FIG. 3 in greater detail. It
can be seen that the write combining buffer 302 is relatively small. It is
comprised of a single line. In FIG. 5, data is written to bytes 0, 2, . . . 29,
and 30, as indicated by the shading. Hence, bits 0, 2, . . . 29, and 30 of the
validity field 307 are set. All the other valid bits are cleared. The tag field
306 specifies the address corresponding to the data stored in data field 305.


In alternative embodiments, multiple write combining buffers may be utilized.
Furthermore, the present invention can be applied to non-cached systems as well
as to systems with single or multiple caches, and to both write-through and
writeback caches.
