A "short-term memory".

A Cache is a (relatively) tiny amount of (relatively) fast memory, which is used to accelerate access to (relatively) slow memory by temporarily storing frequently accessed parts of the data from the slow memory in the fast memory. This is called "caching". Accesses to data are transparently checked for whether the requested location has been stored in the Cache, i.e. whether it is "cached". If so, the request can be satisfied from the fast memory, providing significant time savings. This is called a "cache hit". Failure to do so is called a "cache miss".
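The transparent hit/miss check described above can be sketched in a few lines of Python. This is an illustrative toy, not a real cache implementation: `slow_read` stands in for any expensive lookup (disk, DRAM, network), and the wrapper names are invented for the example.

```python
def make_cached(slow_read):
    """Wrap a slow lookup function with a simple dict-based cache."""
    cache = {}

    def read(key):
        if key in cache:            # cache hit: serve from fast memory
            return cache[key]
        value = slow_read(key)      # cache miss: fall through to slow memory
        cache[key] = value          # remember it for future accesses
        return value

    return read
```

The caller just calls `read(key)` and never sees whether the answer came from the cache or from the slow backing store, which is what makes the caching "transparent".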

F.ex, in the case of a HardDisk Cache in RAM, a cache hit is about 1,000 times faster than having to actually access the HardDisk. In the case of a CPU's memory Cache, the disparity is not quite so huge, but a cache miss is still 30-100 times slower than a cache hit. Because the clock speed of CPUs is increasing at much faster rates than that of DRAM, current CPUs have up to 3 layers of Caches, each slower (and therefore cheaper) but much larger than the previous level. This way, only about 1-3% of all memory accesses actually have to be served directly from DRAM.

Many different strategies exist for the various concerns in cache handling, such as how to reduce caching overhead, which elements to expire (and when) in order to make room for other locations, and so on. See WriteThrough, WriteBack, LRU, LFU, and ReadAhead.
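As one concrete example of an expiry strategy, here is a minimal sketch of LRU (Least Recently Used) eviction in Python, built on the standard library's `OrderedDict`. The class and its capacity are assumptions of the example; real caches add concerns like write policies and concurrency that are omitted here.

```python
from collections import OrderedDict

class LRUCache:
    """Toy LRU cache: when full, expire the entry accessed longest ago."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()   # keeps keys in access order

    def get(self, key):
        if key not in self.entries:
            return None                      # cache miss
        self.entries.move_to_end(key)        # mark as most recently used
        return self.entries[key]             # cache hit

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False) # evict least recently used
```

LFU would instead track an access count per entry and evict the least frequently used one; the bookkeeping differs, but the shape of the cache is the same.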