Squid is a popular open source proxy server and web cache. It has a variety of uses, from speeding up a web server by caching repeated requests, to caching web, DNS, and other network lookups for a group of people sharing network resources. It is primarily designed to run on Unix-like systems.
Caching is a way to store requested Internet objects (i.e., data available via the HTTP, FTP, and gopher protocols) on a system closer to the requesting site. Web browsers can then use the local Squid cache as a proxy HTTP server, reducing access time as well as bandwidth consumption. This is often useful for ISPs to increase speed to their customers, and LANs that share an Internet connection. Because it is also a proxy (i.e. it behaves like a client on behalf of the real client), it provides some anonymity and security.
A client program (e.g. a browser) either explicitly specifies the proxy server it wants to use (typical for ISP customers), or it can use a proxy without any extra configuration: "transparent caching", in which case all outgoing HTTP requests are intercepted by Squid and all responses are cached. The latter is typically a corporate set-up (all clients on the same LAN).
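As a rough illustration of the interception side, a transparent-caching set-up under Squid 2.5 might look like the following squid.conf fragment (the port number is an assumption; the network device or firewall must additionally be configured to redirect port-80 traffic to the Squid port, which is outside the scope of this sketch):

```
# squid.conf (Squid 2.5 syntax) - accept intercepted HTTP traffic
http_port 3128
httpd_accel_host virtual
httpd_accel_port 80
httpd_accel_with_proxy on
httpd_accel_uses_host_header on
```

Here httpd_accel_uses_host_header tells Squid to reconstruct the destination from the Host header, since intercepted clients do not know they are talking to a proxy.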
Squid has some features that can further help anonymize connections, such as disabling or modifying specific header fields in a client's HTTP requests; see the documentation for header_replace for further details.
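A minimal sketch of such anonymization in squid.conf (Squid 2.5 syntax; the particular headers and replacement string are illustrative choices, not recommendations):

```
# squid.conf - drop identifying request headers,
# then substitute a generic User-Agent for the denied one
header_access From deny all
header_access Referer deny all
header_access User-Agent deny all
header_replace User-Agent Mozilla/5.0 (compatible)
```

Note that header_replace only takes effect for headers that have been denied with header_access.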
The set-up described above, in which the contents of an unlimited number of web servers are cached for a limited number of clients, is the classical one. Another set-up is the 'reverse proxy' or 'webserver acceleration' mode (using httpd_accel_host). In this set-up, the cache serves an unlimited number of clients on behalf of a limited number of web servers, possibly just one.
Suppose slow.example.com is a 'real' web server and www.example.com is a Squid cache server that 'accelerates' it. The first time any page is requested from www.example.com, the cache server fetches the actual page from slow.example.com; for the next hour, day, or year (depending on the cache configuration), every subsequent request is served from this stored copy directly by the accelerator. The result is less traffic on the source server, which means less CPU usage, less memory usage, and less bandwidth.
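Using the hostnames from the example above, an accelerator along these lines could be sketched in squid.conf as follows (Squid 2.x syntax; treat this as an outline rather than a complete configuration):

```
# squid.conf (Squid 2.x) - accelerate a single backend web server
http_port 80
httpd_accel_host slow.example.com
httpd_accel_port 80
httpd_accel_single_host on
```

Squid listens on port 80 as if it were the web server itself and forwards cache misses to slow.example.com.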
It is possible for a single Squid server to serve both as a normal and a reverse proxy simultaneously.
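In Squid 2.x, combining the two roles hinges on one directive (shown here in isolation; it would be added to an accelerator configuration like the one above):

```
# squid.conf - also accept ordinary proxy requests
# while running in accelerator mode
httpd_accel_with_proxy on
```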
Squid can run on the following operating systems:
- Mac OS X
- OSF and Digital Unix
- SCO Unix
Wikimedia servers use Squid to cache frequently requested pages and reduce load on the main database and web servers.
As of February 2005, the current stable version is 2.5; a 3.0 development version is also available.
- Squid Cache - official project homepage
- Squid + PF - Transparent proxying with Squid and PF.
- Logfile Analysis - Squid-Cache list of logfile analyzers
- ViSolve Squid Support: manual, configuration tips, ...
- Squidguard - A flexible plugin for advanced filtering.
- DansGuardian - Smart filtering, can be used together with Squid.
- Calamaris - Squid logfile report
- Squeezer2 - Squid logfile report
Check cache behaviour
- web-caching.com: check page cacheability
- analyze.forret.com: analyze HTTP headers and compare to Squid policy
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.