
BLU Discuss list archive



Re: Fwd: Fwd: Squid configuration/grokking



 Thanks Kristian, you ROCK! This is exactly what I was looking for; 
I was missing the regex placement stuff. 
Please pass my thanks on to Cory, 

Richard 


On Wed, 2007-10-10 at 23:52 -0700, Kristian Erik Hermansen wrote: 

> ---------- Forwarded message ---------- 
> From: Cory Coager <[hidden email]> 
> Date: Oct 10, 2007 9:16 PM 
> Subject: Re: Fwd: Squid configuration/grokking 
> To: Kristian Erik Hermansen <[hidden email]> 
> 
> 
> 
>  Kristian Erik Hermansen wrote: 
>  Any ideas on this for the guy? 
> 
> 
> ---------- Forwarded message ---------- 
> From: TheBlueSage <[hidden email]> 
> Date: Wed, 10 Oct 2007 12:25:47 -0400 
> Subject: Squid configuration/grokking 
> To: [hidden email] 
> 
> Hi Folks, 
> 
> Is there anyone out there with Squid knowledge, specifically using Squid 
> as an accelerated caching web server? I am having ... understanding ... 
> issues ... 
> 
> I have set up Squid in the following manner, to act as a caching 
> accelerator in front of our web server. We serve a lot of mashup pages 
> that take a long time to put together but, once constructed, never 
> change. Serving from a cached page should therefore make delivery 
> really quick after the first request. 
> 
> 1. Put Squid in front of the web server and have it answer incoming 
> requests on port 80. 
> 2. If Squid has a copy of a requested page, send the copy. 
> 3. If not, request it from the real origin server (Apache on port 
> 81), send the page, and cache it at the same time (sketched below). 
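> 
> Roughly, that looks like this in squid.conf (the hostname is a 
> placeholder, Apache is assumed to be on the same host, and this is the 
> Squid 2.6 accelerator syntax): 
> 
>  # answer incoming requests on port 80 in accelerator mode 
>  http_port 80 accel defaultsite=www.example.com vhost 
>  # on a cache miss, fetch from the real Apache on port 81 (no ICP) 
>  cache_peer 127.0.0.1 parent 81 0 no-query originserver default 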
> 
> Conundrum 
> ----------------- 
> The basic setup seems to be cool, but I run into issues with the fact 
> that some of my content has varying expiry times. For example, we take 
> XML feeds from a supplier and output content derived from the feeds. 
> However, the feeds come in bursts. This means that for several days there 
> is no change, and the cache should spit out the copy it has. Then, when 
> the feed comes, it updates the pages on an 'every 30 seconds' basis. I 
> still want those pages cached, with 30-second expiry times, as we will 
> serve them many times in that 30-second gap. Then, once the feed stops, the 
> last page should be cached until the feed starts again... at some 
> undetermined time in the future. 
> Most of what I have read says to use ACLs for what should and should not 
> be cached (along the lines of the snippet below), but not much explains 
> how to do the above ... 
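> 
> The usual ACL example I keep finding looks something like this (the 
> path is made up), but that only lets me switch caching off entirely 
> for matching URLs, not vary the expiry: 
> 
>  acl feed_pages urlpath_regex ^/feeds/ 
>  cache deny feed_pages 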
> 
> Has anyone out there hit this before? Or if there is a better solution 
> than Squid, I am all ears. I have no special attachment to Squid; it was 
> chosen through familiarity with its name as much as for any other reason. 
> 
> thanks for any help, 
> 
> Richard 
> 
> 
> 
> 
>  Yup...I did this at work.  I'm using Squid version 2.6, and I believe 
> older versions use totally different directive names for this stuff. 
> Here are the relevant pieces of the config: 
> 
> http_port <squidip>:80 accel defaultsite=www.example.com vhost vport 
> cache_peer <websiteip> parent 80 0 no-query originserver default 
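> 
> (Briefly: 'accel' puts Squid into accelerator mode, 'vhost' and 'vport' 
> make it honour the Host header and port for virtual hosting, 
> 'originserver' tells Squid the parent is a real web server rather than 
> another proxy, and 'no-query' disables ICP queries to it.) 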
> 
>  As far as what gets cached and what doesn't, it's all controlled by 
> 'refresh_pattern'.  This directive uses regular expressions to apply 
> filters, and you also pass min, percent, and max values to set the 
> age of the cached content.  You can also use overrides with this to 
> force more or less caching, but be careful, as this breaks RFCs.  Read 
> the example config from Squid; it has great detail on this 
> configuration setting. 
> 
>  Examples: 
>  refresh_pattern -i [.]gif$ 2880 100% 20160 
>  refresh_pattern -i [.]php$ 0 20% 20160 
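> 
> One more thing worth knowing: the first refresh_pattern whose regex 
> matches the URL wins, so put specific rules above the catch-all. Also, 
> min and max are in minutes, so one minute is the tightest window 
> refresh_pattern can give you; for a true 30-second expiry, have Apache 
> send 'Cache-Control: max-age=30' on those pages, which Squid honours 
> over these heuristics. A sketch for your feed pages (the /feeds/ path 
> is just a placeholder) might be: 
> 
>  # hypothetical rule for the feed-derived pages; keep it above the 
>  # catch-all so it matches first 
>  refresh_pattern -i ^http://www.example.com/feeds/ 0 0% 1 
>  # Squid's shipped default catch-all; must come last 
>  refresh_pattern . 0 20% 4320 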
> 
> 
> 
> -- 
> Kristian Erik Hermansen 
> 

