A cache is a hardware or software component that stores data so that future requests for that data can be served faster. (Source: https://en.wikipedia.org/wiki/Cache_(computing))
Most websites and web applications need to execute code to generate the HTML the browser displays. Depending on the code, this can be very slow, especially when a lot of calculations are needed.
For example, a simple WordPress website runs several database queries, loops over the results, assembles everything into the correct structure and markup, and only then sends all the data (at once) back to the browser.
With proper caching in place, the above will only happen once, for all visitors, until it needs to be refreshed. Not only will this make responses much quicker for visitors, it will also reduce the load on the server, which in turn makes it faster whenever some content is not cached yet.
Of course, the above is a simplified view of how caching works; in practice there can be multiple cache layers, even at the same time. Caching can happen at the server, at the client, and in between, usually complementing each other, but sometimes also working against each other.
Don’t worry, this won’t be a very technical explanation of how it all works; that would require a full post per cache layer. In essence, however, they all work the same.
When the browser makes a request (entering a URL or clicking a link), it goes to the server. A cache system catches this request and checks whether it already knows what should be returned as a response. If it does, it short-circuits the normal request and sends the stored data directly back to the browser.
If it has no cached response, it waits until the server has handled the request and catches the response on its way back to the browser. At that moment it stores both the request and the response in its own storage. This way, the next time the exact same request comes by, it can send back the response it has stored.
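The flow above can be sketched in a few lines. This is an illustrative toy, not a real cache product; `expensive_handler` is a made-up stand-in for the server doing its real work (queries, templating, and so on):

```python
cache = {}  # maps a request key (here: the URL) to a stored response

def expensive_handler(url):
    # Stands in for the server generating the page (queries, markup, ...)
    return f"<html>page for {url}</html>"

def cached_request(url):
    if url in cache:                   # cache hit: skip the server entirely
        return cache[url]
    response = expensive_handler(url)  # cache miss: let the server respond
    cache[url] = response              # store it for the next identical request
    return response

first = cached_request("/about")   # miss: the handler runs
second = cached_request("/about")  # hit: served straight from the cache
```

The key point is that the second identical request never reaches `expensive_handler` at all.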
There are many reasons why a cached response may no longer be correct and needs to be removed or replaced by an updated version.
Most cache systems have a maximum lifetime (often called a TTL, time to live) during which data is considered valid; after that, the data is deleted and an updated response is awaited. This can be as short as milliseconds or as long as years if needed or wanted.
In addition to a maximum lifetime, most cache systems have the option to “purge” their cache, either per item or everything at once. Some systems even allow the client, such as a browser, to request an uncached response, or to accept a cached one only when it was created within a specific timeframe.
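A minimal sketch of lifetime and purging, with invented names, might look like this:

```python
import time

class TTLCache:
    """Toy cache with a maximum lifetime (TTL) and a purge option."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (stored_at, value)

    def get(self, key):
        entry = self.store.get(key)
        if entry is None:
            return None
        stored_at, value = entry
        if time.time() - stored_at > self.ttl:  # expired: drop it, report a miss
            del self.store[key]
            return None
        return value

    def set(self, key, value):
        self.store[key] = (time.time(), value)

    def purge(self, key=None):
        # Purge one item, or everything at the same time
        if key is None:
            self.store.clear()
        else:
            self.store.pop(key, None)

cache = TTLCache(ttl_seconds=60)
cache.set("/home", "<html>...</html>")
cache.purge("/home")  # per-item purge: the next get() is a miss again
```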
Now that we know what cache is and how it works, let's look at "a couple" of methods which are commonly used in PHP web development.
Whenever a piece of PHP code is executed, the server needs to read all the files from the hard drive, parse them, and compile them before it can run the code. With opcode caching, the compiled code is stored in the server’s memory for quicker access, since reading from memory is a lot faster than reading from a drive.
Some well-known engines are OPcache, APC, and XCache.
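As a concrete illustration, a typical OPcache configuration in `php.ini` might look like the following; the values are example settings, not recommendations:

```ini
; Example php.ini settings for OPcache (values are illustrative)
opcache.enable=1
opcache.memory_consumption=128      ; MB of shared memory for compiled scripts
opcache.max_accelerated_files=10000 ; how many scripts can be cached
opcache.validate_timestamps=1       ; check for changed files on disk
opcache.revalidate_freq=60          ; ...but at most once every 60 seconds
```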
Access to the database can be slow, especially with more complex queries. With an object cache, queries and their results are stored in memory or a separate location so the data can be returned even faster.
Redis and Memcached are common systems used for this.
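The pattern can be sketched as follows. This is an illustrative toy: the dictionary stands in for Redis or Memcached, and `run_query` is a made-up placeholder for a real database call:

```python
object_cache = {}  # stands in for Redis/Memcached
query_count = 0    # counts how often the "database" is actually hit

def run_query(sql):
    # Placeholder for a real (slow) database query
    global query_count
    query_count += 1
    return [("row1",), ("row2",)]

def cached_query(sql):
    if sql not in object_cache:
        object_cache[sql] = run_query(sql)  # miss: hit the database once
    return object_cache[sql]                # hit: return the stored result

cached_query("SELECT * FROM posts")
cached_query("SELECT * FROM posts")  # second call is served from memory
```

The query text works as the cache key here, so only the first identical query ever reaches the database.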
With page caching, the name says it all: the complete (HTML) page is stored in a cache so everything is returned at once, without the need to execute the code or run (heavy) database queries.
For example Nginx and Redis can do this.
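As an illustration, a full-page cache in Nginx for a PHP backend can be configured roughly like this; the paths, zone name, and lifetimes are example values, not a drop-in configuration:

```nginx
# Illustrative Nginx full-page cache for PHP-FPM responses
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=pagecache:10m
                   max_size=100m inactive=60m;

server {
    location ~ \.php$ {
        fastcgi_pass unix:/run/php/php-fpm.sock;
        include fastcgi_params;

        fastcgi_cache pagecache;                 # use the zone defined above
        fastcgi_cache_key "$scheme$request_method$host$request_uri";
        fastcgi_cache_valid 200 10m;             # cache successful pages for 10 minutes
    }
}
```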
Web server accelerators, sometimes referred to as reverse proxies or Application Delivery Controllers, do the same as page caching, except they work before the request hits the web server software itself. This means even less work for the server and thus more resources available for other requests.
Varnish is one of the most common tools.
A CDN works differently from the above types of caching, because it is (usually) not handled by the same server the website/application runs on; instead, cached copies are served from servers geographically closer to the visitor.
By setting a lifetime (maximum age) on an asset at the server, the browser will not only download the file but also store it locally. This way, the next time the same file is required, it simply uses the stored version instead of downloading it again from the server.
All browsers do this by default, but it does require proper settings on the server (or CDN).
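For example, the server (or CDN) can send a response header like the one below, telling the browser it may reuse the file for up to a year; the exact values depend on how often the asset changes:

```http
HTTP/1.1 200 OK
Content-Type: text/css
Cache-Control: public, max-age=31536000, immutable
```

Long lifetimes like this are typically combined with versioned file names, so a changed asset gets a new URL and the stale copy is never used.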
Prefetching is not really a type of caching, but it certainly speeds up requesting and receiving data. By requesting data before it is actually needed, at a moment when it doesn’t hurt performance, the data is already present when it is needed and the browser doesn’t have to wait for it.
There are two common types of prefetching: DNS prefetch and content prefetch.
With DNS prefetch, only the DNS records are retrieved: the IP address of the (usually external) host is resolved and stored ahead of time. This lookup normally takes some time, so doing it beforehand makes loading files from that location a bit faster.
Content prefetch fetches files and assets before they are required and places them in the browser cache for quicker access later. In some cases even full pages are loaded in advance, making the transition from the current page to the next almost instant.
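In HTML, both types can be requested with `<link>` hints; the host and path below are placeholders:

```html
<!-- DNS prefetch: resolve the (external) host name ahead of time -->
<link rel="dns-prefetch" href="//cdn.example.com">

<!-- Content prefetch: fetch an asset or page the visitor will likely need next -->
<link rel="prefetch" href="/next-page.html">
```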
In short: all of them. Each cache system works differently but can work together with the other layers without a problem. Just keep in mind that some systems require complex configuration specific to your website or application. However, a lot is already configured for you when using a good hosting provider, plugins/extensions for your CMS/framework, and standard setups.
One thing to keep in mind is the complexity of your project and how dynamic it has to be. Caching is most useful for static content, especially page caching and CDNs. When you have a lot of dynamic content (like a search page with many different filters), you’ll be better off with higher-level caching such as an object cache.