Cache
A "short term memory".

A [Cache] is a (relatively) tiny amount of (relatively) fast memory, which is used to accelerate access to (relatively) slow memory by temporarily storing frequently accessed parts of the data from the slow memory in the fast memory. This is called "caching". Accesses to data are transparently checked to see whether the requested location is already stored in the [Cache], i.e. whether it is "cached". If so, the request can be satisfied from the fast memory, which saves a significant amount of time. This is called a "cache hit". If the requested location is not in the [Cache], it has to be fetched from the slow memory; this is called a "cache miss".

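The following is a minimal sketch (in Python) of the hit/miss logic just described. The names cached_read, slow_read and backing_store are purely illustrative stand-ins for the fast memory, the slow memory and its contents; a real [Cache] lives in hardware or in the operating system, not in application code.

    # Illustrative backing store standing in for the slow memory.
    backing_store = {"location-1": "some data", "location-2": "more data"}

    cache = {}  # the (relatively) small, fast memory

    def slow_read(location):
        # Placeholder for an expensive access, e.g. an actual disk read.
        return backing_store[location]

    def cached_read(location):
        if location in cache:         # cache hit: served from fast memory
            return cache[location]
        value = slow_read(location)   # cache miss: go to the slow memory
        cache[location] = value       # remember it for the next access
        return value

A second cached_read("location-1") right after the first is a cache hit and never touches the backing store.
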
For example, in the case of a HardDisk [Cache] in [RAM], a cache hit is about 1,000 times faster than actually having to access the HardDisk. In the case of a [CPU]'s memory [Cache], the disparity is not quite so huge, but a cache miss is still 30-100 times slower than a cache hit. Because the clock speed of [CPU]s is increasing at a much faster rate than that of [DRAM], current [CPU]s have up to 3 levels of [Cache], each slower (and therefore cheaper) but much larger than the previous level. This way, only about 1-3% of all memory accesses actually have to be served directly from [DRAM].

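A back-of-the-envelope calculation shows why such hit rates matter. The numbers below are illustrative assumptions, not measurements: a 1 ns cache hit, a 60 ns [DRAM] access (within the 30-100 times range above), and a 98% hit rate.

    # Simplified model of average access time with a 98% hit rate,
    # assuming (illustratively) 1 ns per hit and 60 ns per miss.
    hit_time = 1       # ns
    miss_time = 60     # ns
    hit_rate = 0.98

    average = hit_rate * hit_time + (1 - hit_rate) * miss_time
    print(average)     # 2.18 ns -- much closer to the cache than to DRAM
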
Many different strategies exist for the various concerns in cache handling, such as how to reduce caching overhead, and which elements to expire (and when) in order to make room for other locations. See WriteThrough, WriteBack, [LRU], [LFU], and ReadAhead.
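
As an illustration of one such expiry policy, here is a minimal sketch of an [LRU] (least recently used) cache in Python: when the cache is full, the entry that has gone unused the longest is evicted. The class and method names are of course just for this example.

    from collections import OrderedDict

    class LRUCache:
        def __init__(self, capacity):
            self.capacity = capacity
            self.entries = OrderedDict()

        def get(self, key):
            if key not in self.entries:
                return None                       # cache miss
            self.entries.move_to_end(key)         # mark as most recently used
            return self.entries[key]

        def put(self, key, value):
            if key in self.entries:
                self.entries.move_to_end(key)
            self.entries[key] = value
            if len(self.entries) > self.capacity:
                self.entries.popitem(last=False)  # evict least recently used

With a capacity of 2, putting "a" and "b", then reading "a", then putting "c" evicts "b" rather than "a", because "a" was used more recently.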