Miscellaneous questions
Augustin Amann
augustin at waw.com
Mon Mar 17 10:04:14 CET 2008
Dag-Erling Smørgrav wrote:
> "Michael S. Fischer" <michael at dynamine.net> writes:
>
>> Dag-Erling Smørgrav <des at linpro.no> writes:
>>
>>> I think the default timeout on backend connections may be a little
>>> short, though.
>>>
>> I assume this is the thread_pool_timeout parameter?
>>
>
> No, that's how long an idle worker thread is kept alive. I don't think
> the backend timeout is configurable; I think it's hardcoded to five
> seconds.
>
>
>> I'm dealing with a situation where the working set of cacheable
>> responses is larger than the RAM size of a particular Varnish
>> instance. (I don't want to go to disk because it will incur at least
>> a 10ms penalty.) I also want to maximize the hit ratio.
>>
>
> My knee-jerk reaction would be "add more RAM, or add more servers"
>
>
>> One good way to do this is to put a pass-only Varnish instance (i.e.,
>> a content switch) in front of a set of intermediate backends (Varnish
>> caching proxies), each of which is assigned to cache a subset of the
>> possible URI namespace.
>>
>> However, in order to do this, the content switch must make consistent
>> decisions about which cache to direct the incoming requests to. One
>> good way of doing that is implementing a hash function H(U) -> V,
>> where U is the request URI, and V is the intermediate-level proxy.
>>
> That's actually a pretty good idea... Could you open a ticket for it?
>
> DES
>
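The H(U) -> V mapping described above can be sketched in a few lines. This is only an illustrative sketch, not anything from Varnish itself: the backend list and the `pick_backend` helper are hypothetical names, and any stable hash would do in place of MD5.

```python
import hashlib

# Hypothetical list of intermediate Varnish caches (V in the mapping).
BACKENDS = ["varnish-1:6081", "varnish-2:6081", "varnish-3:6081"]

def pick_backend(uri, backends=BACKENDS):
    """Deterministically map a request URI (U) to one backend cache (V).

    The same URI always hashes to the same backend, so each cache ends
    up holding a consistent subset of the URI namespace.
    """
    digest = hashlib.md5(uri.encode("utf-8")).digest()
    index = int.from_bytes(digest[:4], "big") % len(backends)
    return backends[index]
```

Note that a plain modulo scheme like this reshuffles most URIs when the backend list changes size; a consistent-hashing ring would limit that churn, at the cost of a little more code.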
I'm thinking about the same idea of a reverse-proxy cache cluster for work.
One way to do it is to use HAProxy (haproxy.1wt.eu), which implements
such a hash function. You could put it in front of Varnish and use its
URI-based balancing algorithm. That should work fine for this job,
although it would be great to have this in Varnish directly.
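As a rough illustration of the HAProxy setup suggested above, a `listen` section using `balance uri` hashes the request URI so each URI is consistently routed to the same intermediate Varnish cache. The section name, addresses, and ports here are made up for the example:

```
listen varnish-cluster
    bind :80
    balance uri                     # hash the URI; same URI -> same backend
    server cache1 10.0.0.1:6081 check
    server cache2 10.0.0.2:6081 check
```

With `balance uri`, adding or removing a server remaps part of the URI space, so expect some cache misses on the affected backends after a topology change.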
Augustin.