فایل هلپ

Download hub for files, research papers, projects, theses, and phone firmware files


Firmware file for a 7-inch Chinese tablet with the A23 CPU

Exclusive from فایل هلپ: firmware file for a 7-inch Chinese tablet with the A23 CPU, available for download via a direct, high-speed link.

This file has been tested on most tablet models with the A23 CPU and works without any problems.

Download via direct link

Download a research paper on the CPU cache (in English)

Exclusive from فایل هلپ: a research paper on the CPU cache, in English, available for download via a direct, high-speed link.
CPU cache
 
[Figure: diagram of a CPU memory cache]

A CPU cache is a cache used by the central processing unit of a computer to reduce the average time to access memory. The cache is a smaller, faster memory which stores copies of the data from the most frequently used main memory locations. As long as most memory accesses are to cached memory locations, the average latency of memory accesses will be closer to the cache latency than to the latency of main memory.
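The effect on average latency can be illustrated with the standard average memory access time (AMAT) formula. The latencies and miss rate below are illustrative assumptions, not figures from the text:

```python
# AMAT = hit_time + miss_rate * miss_penalty.
# The numbers used here are illustrative assumptions.
def amat(hit_time_ns, miss_rate, miss_penalty_ns):
    return hit_time_ns + miss_rate * miss_penalty_ns

# A 1 ns cache with a 5% miss rate in front of 100 ns main memory:
print(amat(1.0, 0.05, 100.0))  # 6.0 ns -- much closer to the cache latency
```

Even a modest hit rate pulls the average latency far below the main-memory latency, which is the whole point of the cache.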

The diagram to the right shows two memories. Each location in each memory has a datum (a cache line), which in different designs ranges in size from 8 to 512 bytes. The size of the cache line is usually larger than the size of the usual access requested by a CPU instruction, which ranges from 1 to 16 bytes. Each location in each memory also has an index, which is a unique number used to refer to that location. The index for a location in main memory is called an address. Each location in the cache has a tag, which contains the index of the datum in main memory which has been cached. In a CPU's data cache, these entries are called cache lines or cache blocks.
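The tag-and-index scheme can be sketched by splitting an address into its fields. This sketch assumes a direct-mapped cache with illustrative parameters (a 64-byte line, 256 lines); real designs vary:

```python
# Splitting a main-memory address into tag, index, and block offset,
# assuming a direct-mapped cache. Parameters are illustrative.
LINE_SIZE = 64    # bytes per cache line (within the 8-512 byte range above)
NUM_LINES = 256   # number of lines in the cache

OFFSET_BITS = LINE_SIZE.bit_length() - 1   # 6 bits select a byte in the line
INDEX_BITS = NUM_LINES.bit_length() - 1    # 8 bits select a cache line

def split_address(addr):
    offset = addr & (LINE_SIZE - 1)
    index = (addr >> OFFSET_BITS) & (NUM_LINES - 1)
    tag = addr >> (OFFSET_BITS + INDEX_BITS)
    return tag, index, offset

print(split_address(0x12345))  # (4, 141, 5): tag, index, offset
```

The tag stored with each cache line is exactly the high-order part of the address, which is what the lookup compares against.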

When the processor wishes to read or write a location in main memory, it first checks whether that memory location is in the cache. This is accomplished by comparing the address of the memory location to all tags in the cache that might contain that address. If the processor finds that the memory location is in the cache, we say that a cache hit has occurred, otherwise we speak of a cache miss. In the case of a cache hit, the processor immediately reads or writes the data in the cache line. The proportion of accesses that result in a cache hit is known as the hit rate, and is a measure of the effectiveness of the cache.

In the case of a cache miss, most caches allocate a new entry, which comprises the tag just missed and a copy of the data from memory. The reference can then be applied to the new entry just as in the case of a hit. Misses are comparatively slow because they require the data to be transferred from main memory. This transfer incurs a delay since main memory is much slower than cache memory, and also incurs the overhead for recording the new data in the cache before it is delivered to the processor.
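The hit/miss check and miss allocation described above can be sketched as a minimal direct-mapped cache. The sizes and access pattern are illustrative assumptions:

```python
# A minimal direct-mapped cache sketch: a hit compares the stored tag;
# a miss allocates a new entry with the tag and a copy of the data from
# (simulated) main memory. Parameters are illustrative.
class DirectMappedCache:
    def __init__(self, num_lines=4, line_size=16):
        self.num_lines = num_lines
        self.line_size = line_size
        self.lines = [None] * num_lines  # each entry: (tag, line data)
        self.hits = self.accesses = 0

    def read(self, addr, memory):
        self.accesses += 1
        block = addr // self.line_size
        index = block % self.num_lines
        tag = block // self.num_lines
        entry = self.lines[index]
        if entry is not None and entry[0] == tag:    # cache hit
            self.hits += 1
        else:                                        # miss: allocate new entry
            start = block * self.line_size
            self.lines[index] = (tag, memory[start:start + self.line_size])
        return self.lines[index][1][addr % self.line_size]

    def hit_rate(self):
        return self.hits / self.accesses

memory = bytes(range(256))
cache = DirectMappedCache()
for addr in [0, 1, 2, 64, 0, 1]:   # nearby repeated accesses hit
    cache.read(addr, memory)
print(cache.hit_rate())  # 0.5
```

Note how addresses 0 and 64 map to the same index and evict each other: this conflict behavior is what associativity (next section) mitigates.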

Contents
1 Some details of operation
2 Associativity
3 Cache misses
4 Address translation
4.1 Virtual indexing and virtual aliases
4.2 Virtual tags and vhints
4.3 Page coloring
5 Cache hierarchy in a modern processor
5.1 Specialized caches
5.1.1 Victim cache
5.1.2 Trace cache
5.1.3 Harvard architecture
5.2 Multi-level caches
5.3 Example: the K8
5.4 More hierarchies
6 Implementation
7 See also
8 References
9 External links
 


Some details of operation
In order to make room for the new entry on a cache miss, the cache generally has to evict one of the existing entries. The heuristic that it uses to choose the entry to evict is called the replacement policy. The fundamental problem with any replacement policy is that it must predict which existing cache entry is least likely to be used in the future. Predicting the future is difficult, especially for hardware caches which use simple rules amenable to implementation in circuitry, so there are a variety of replacement policies to choose from and no perfect way to decide among them. One popular replacement policy, LRU, replaces the least recently used entry.
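The LRU policy mentioned above can be sketched for a small fully associative cache; the capacity and tags are illustrative assumptions:

```python
from collections import OrderedDict

# LRU replacement sketch: on eviction, the least recently used tag goes.
# Capacity and the access sequence are illustrative.
class LRUCache:
    def __init__(self, capacity=3):
        self.capacity = capacity
        self.entries = OrderedDict()   # tag -> data, oldest first

    def access(self, tag, data):
        if tag in self.entries:
            self.entries.move_to_end(tag)      # mark as most recently used
            return self.entries[tag]
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)   # evict least recently used
        self.entries[tag] = data
        return data

cache = LRUCache()
for tag in ["a", "b", "c", "a", "d"]:   # "b" is least recent when "d" arrives
    cache.access(tag, tag.upper())
print(list(cache.entries))  # ['c', 'a', 'd']
```

Hardware implements LRU (or cheaper approximations of it) with a few state bits per set rather than an ordered list, but the eviction decision is the same.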

When data is written to the cache, it must at some point be written to main memory as well. The timing of this write is controlled by what is known as the write policy. In a write-through cache, every write to the cache causes a write to main memory. Alternatively, in a write-back or copy-back cache, writes are not immediately mirrored to memory. Instead, the cache tracks which locations have been written over (these locations are marked dirty). The data in these locations is written back to main memory when that data is evicted from the cache. For this reason, a miss in a write-back cache will often require two memory accesses to service: one to read the new location from memory and the other to write the dirty location to memory.
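The write-back behavior, including the dirty marking and the extra memory access on eviction, can be sketched with a single cache line. All names and the backing-store representation are illustrative assumptions:

```python
# Write-back sketch: writes mark the line dirty; dirty data reaches main
# memory only when the line is evicted. A one-line "cache" keeps it minimal.
class WriteBackLine:
    def __init__(self, memory):
        self.memory = memory          # backing store: addr -> value
        self.tag = None
        self.data = None
        self.dirty = False
        self.memory_writes = 0

    def _fill(self, addr):
        if self.tag is not None and self.dirty:
            self.memory[self.tag] = self.data   # write the dirty line back
            self.memory_writes += 1
        self.tag = addr
        self.data = self.memory.get(addr, 0)    # read the new location
        self.dirty = False

    def write(self, addr, value):
        if self.tag != addr:
            self._fill(addr)   # a miss may cost two accesses: read + write-back
        self.data, self.dirty = value, True

cache = WriteBackLine(memory={})
cache.write(0, 10)
cache.write(0, 11)   # repeated writes stay in the cache, no memory traffic
cache.write(4, 20)   # evicting the dirty line triggers one memory write
print(cache.memory_writes, cache.memory[0])  # 1 11
```

A write-through cache would instead perform a memory write on every `write` call, trading extra traffic for never holding dirty state.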

Includes 29 pages in Word format.


Download via direct link

Download the research paper on the CPU cache (in English)