Performance concept - Caching
Caching improves performance by keeping frequently used data in high-speed memory, reducing the time it takes to access that data. Some data sources are slower than others; the approximate time to retrieve data from various sources is shown below.
Component | Time it takes to fetch 512 bytes of data (ns)
--- | ---
CPU cache | 16
Main memory | 80
Hard disk | 800 + 12,000,000 (12 ms) seek time
Flash SSD disk | 3,000
Network interface | 50,000
8-speed DVD | 300,000 + seek time and speed changes
Especially in situations where retrieving data takes relatively long (for instance reading from hard disk, CD-ROM or the network), caching can improve performance significantly.
In the case of hard disks, data must first be located on the disk before it can be read. Disks are mechanical devices: the read head must be positioned above the correct track of the disk platter, and the system must then wait for the desired information to spin under the read head. This so-called seek time can take a long time: about 12 ms. Once the data is located, streaming it is much faster: reading 512 bytes of data (a typical disk block) takes only about 0.8 µs.
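The impact of seek time can be illustrated with a quick calculation, using the approximate figures above (a back-of-the-envelope sketch; real drives vary):

```python
# Approximate figures from the text above (in nanoseconds).
SEEK_NS = 12_000_000      # ~12 ms to position the read head
STREAM_NS = 800           # ~0.8 us to stream one 512-byte block

def read_time_ns(blocks: int, seeks: int) -> int:
    """Total time to read `blocks` disk blocks with `seeks` head movements."""
    return seeks * SEEK_NS + blocks * STREAM_NS

# Reading 100 scattered blocks (one seek each) vs 100 sequential blocks:
scattered = read_time_ns(blocks=100, seeks=100)   # 1,200,080,000 ns (~1.2 s)
sequential = read_time_ns(blocks=100, seeks=1)    # 12,080,000 ns (~12 ms)
print(scattered // sequential)                    # roughly 99x slower
```

The seek time dominates so completely that avoiding seeks (which is exactly what caching and read-ahead do) matters far more than raw streaming speed.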
To speed up reading data from disk, all disk drives contain cache memory. This cache memory stores data recently read from the disk, plus a number of the disk blocks following the ones that were read. When the same data is read again, or (more likely) the data in the next disk block is needed, it is fetched from high-speed cache memory, without the seek time overhead.
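The drive's read-ahead behaviour can be sketched as follows (a simplified model: the block numbers, the prefetch count and the `read_from_platter` function are illustrative, not a real drive's firmware):

```python
class ReadAheadCache:
    """Simplified model of a disk drive's read-ahead cache."""

    def __init__(self, read_from_platter, prefetch=4):
        self._read = read_from_platter   # slow: incurs seek time
        self._prefetch = prefetch        # how many following blocks to fetch
        self._cache = {}                 # block number -> block data

    def read_block(self, n):
        if n in self._cache:             # cache hit: no seek needed
            return self._cache[n]
        # Cache miss: one seek, then stream the block plus the next few.
        for b in range(n, n + self._prefetch + 1):
            self._cache[b] = self._read(b)
        return self._cache[n]

# Illustrative slow read; a real drive would access the platter here.
reads = []
def platter(n):
    reads.append(n)
    return f"data-{n}"

cache = ReadAheadCache(platter)
cache.read_block(10)     # miss: reads blocks 10..14 in one streaming run
cache.read_block(11)     # hit: served from cache memory, no seek
print(len(reads))        # -> 5: one physical access run, not two
```

The second request never touches the platter, which is precisely where the 12 ms seek penalty is avoided.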
The same principle applies to DVD drives (and CD-ROM drives, Blu-ray drives, etc.). Here seek time includes not only the steps described above, but also the adjustment of the disc's rotation speed. When the read head (the laser reading the disc) moves from the beginning to the end of the disc (or the other way around), the rotation speed changes accordingly: when data on the inner circles of the disc is read, the disc spins at a higher speed than when data near the edge is read. The drive's motor must adjust the speed, and this takes a considerable amount of time.
While networking connections are much faster nowadays, cache memory is used here as well. And all CPUs today use internal caching too.
Caching can be implemented in several ways: disk caching, web proxies, Operational Data Stores, web front-end servers and even in-memory databases.
The best known example of using caching to increase performance is disk caching. Disk caching can be implemented in the storage component itself (for instance cache on the physical disks or cache implemented in the disk controller), but also in the operating system. A general rule of thumb is that adding memory to servers usually improves performance. This is because operating systems use all otherwise unused memory as disk cache. Over time, this memory fills with the results of previous disk requests and pre-fetched disk blocks, speeding up data access.
Another example of caching is the use of web proxies. When users browse the Internet, instead of fetching all requested data from the Internet every time, data that was accessed earlier can be cached in a proxy server and fetched from there. This has two benefits: the user gets the data faster than if it were retrieved from a distant web server, and bandwidth to the Internet is freed up for all other users, as the data does not have to be fetched again.
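A caching proxy can be sketched as a thin wrapper around a fetch function (a toy model: no size limit, no HTTP cache-header handling, and the `slow_fetch` origin function is illustrative):

```python
import time

def make_caching_proxy(fetch, ttl_seconds=300):
    """Wrap a fetch function with a simple URL cache."""
    cache = {}  # url -> (timestamp, body)

    def cached_fetch(url):
        entry = cache.get(url)
        if entry and time.time() - entry[0] < ttl_seconds:
            return entry[1]              # hit: no Internet traffic at all
        body = fetch(url)                # miss: fetch from the origin server
        cache[url] = (time.time(), body)
        return body

    return cached_fetch

# Illustrative origin fetch; a real proxy would do an HTTP request here.
origin_calls = []
def slow_fetch(url):
    origin_calls.append(url)
    return f"<html>{url}</html>"

proxy = make_caching_proxy(slow_fetch)
proxy("http://example.com/")   # miss: goes to the origin server
proxy("http://example.com/")   # hit: served from the proxy's cache
print(len(origin_calls))       # -> 1
```

Real proxies additionally honour the origin server's cache headers (expiry, validation) so that stale pages are not served; the time-to-live here is a crude stand-in for that.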
An Operational Data Store (ODS) is a replica of part of a database, built for a specific use. Instead of querying the main database, frequently used information is retrieved from a separate, small ODS database, so the performance of the main database is not degraded. A good example is a bank's website. Most users want to see their current balance when they log in (and perhaps the last ten transactions on their account). When every balance change is stored not only in the bank's main database, but also in a small ODS database, the website only needs to access the ODS to provide users with the data they most likely need. This not only speeds up the user experience, but also decreases the load on the main database.
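The write-through pattern behind this can be sketched in a few lines (dictionaries stand in for the two databases, and the account number is made up; in practice these would be separate database systems kept in sync):

```python
# Sketch of write-through to an ODS.
main_db = {}   # full transaction history per account (the "big" database)
ods = {}       # only what the website needs: the current balance

def post_transaction(account, amount):
    # Every change is written to the main database AND to the ODS.
    main_db.setdefault(account, []).append(amount)
    ods[account] = ods.get(account, 0) + amount

def website_balance(account):
    # The website reads from the small ODS, never touching the main DB.
    return ods.get(account, 0)

post_transaction("NL01BANK0123", 100)
post_transaction("NL01BANK0123", -40)
print(website_balance("NL01BANK0123"))  # -> 60
```

The cost is a small extra write on every transaction; the benefit is that the vast majority of reads never reach the main database.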
In web facing environments, storing the most used (parts of) pages (like the JPG pictures used on the landing page) at the web front-end server lowers the amount of traffic to back-end systems enormously. Reverse proxies can be used to cache the most wanted data as well.
In special circumstances, even complete databases can be run from memory instead of from disk. These so-called in-memory databases are used in situations where performance is crucial (like in real-time SCADA systems). Of course, special arrangements must be made to ensure data is not lost when a power failure occurs.
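SQLite, for example, can run a database entirely in RAM (a minimal illustration; dedicated in-memory database products add persistence mechanisms such as snapshots and transaction logs to survive power failures):

```python
import sqlite3

# ":memory:" keeps the whole database in RAM: no disk I/O at all.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (sensor TEXT, value REAL)")
conn.execute("INSERT INTO readings VALUES ('pump_1', 3.14)")
value = conn.execute(
    "SELECT value FROM readings WHERE sensor = 'pump_1'").fetchone()[0]
print(value)   # -> 3.14
conn.close()   # the contents are lost once the connection closes
```

The last line shows the trade-off the text mentions: without extra arrangements, everything in an in-memory database disappears when the process (or the power) goes away.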
This entry was posted on Tuesday 19 April 2011