using parallel varnishes

David Birdsong david.birdsong at gmail.com
Fri Jun 18 07:09:41 CEST 2010


On Thu, Jun 17, 2010 at 10:02 PM, Don Faulkner <dfaulkner at pobox.com> wrote:
> I like the setup. But for some reason I think it needs to be:
>
> web server -> load balancer -> cache -> load balancer -> ssl endpoint
>
ok, put haproxy behind the cache again, or nginx... or let varnish do
the load balancing itself.  it depends on how your web servers need
to be balanced.

> Because the caches aren't load balancers, they aren't really balancing between the various servers that might serve a vhost.
>
> I guess my question is "which comes first, the load balancer or the cache?" and of course, why?
if you have multiple caches, then definitely use a load balancer.
otherwise your cache hit ratio is prone to be poor and out of your
control.  use an lb algorithm that hashes on some key.  most hardware
load balancers let you balance on the uri.  haproxy provides
consistent hashing (search those two words if you're not familiar),
which preserves the key distribution as well as possible across cache
server failures.
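
for example, a rough haproxy sketch -- the server names and addresses
here are made up, and it assumes a haproxy version that supports
'hash-type consistent':

frontend http_in
    bind *:80
    default_backend varnish_caches

backend varnish_caches
    # hash on the full uri so a given url always maps to the same cache
    balance uri
    # consistent hashing: losing one cache only remaps that cache's keys
    hash-type consistent
    server cache1 10.0.0.11:6081 check
    server cache2 10.0.0.12:6081 check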

...then load balance behind the cache again using whatever mechanism
makes the most sense for your specific backends.  for me, that often
means simply a forward proxy to *help* find the content somewhere
using dns lookups, etc.  other times it means balancing on least
connections.  varnish offers some of these balancing mechanisms
internally.
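
for the least-connections case, a minimal haproxy sketch (again,
placeholder names and addresses):

backend web_servers
    # each new request goes to the server with the fewest open connections
    balance leastconn
    server web1 10.0.0.21:80 check
    server web2 10.0.0.22:80 check
    server web3 10.0.0.23:80 check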



> --
> Don Faulkner, KB5WPM
> All that is gold does not glitter. Not all those who wander are lost.
>
> On Jun 17, 2010, at 5:43 PM, David Birdsong wrote:
>
>> On Thu, Jun 17, 2010 at 3:31 PM, Don Faulkner <dfaulkner at pobox.com> wrote:
>>> I would like to hear more about how you're combining varnish and haproxy, and what you're trying to achieve.
>>>
>>> I'm just getting started with varnish, but I've used haproxy before.
>>>
>>> I'm trying to construct a cluster that provides caching, load balancing, and ssl termination to sit in front of my web infrastructure. In thinking about this, I seem to be caught in an infinite loop.
>>>
>>> I've seen several threads suggesting that the "right" way to build the web pipeline is this:
>>>
>>> web server -> cache -> load balancer -> ssl endpoint -> (internet & clients)
>>>
>>> But, in this case, all I have the load balancer doing is balancing between the various caches.
>> Is there something you don't like about this setup?
>>
>>> On the other hand, if I reverse this and put the cache in front, then I'm caching the output of the load balancers, and there's no load balancing for the caches.
>>>
>>> I obviously haven't thought this through enough. Could someone pry me out of my loop?
>>> --
>>> Don Faulkner, KB5WPM
>>> All that is gold does not glitter. Not all those who wander are lost.
>>>
>>> On Jun 17, 2010, at 1:50 PM, Ken Brownfield wrote:
>>>
>>>> Seems like that will do the job.
>>>>
>>>> You might also want to look into haproxy's consistent hashing, which should provide cache "distribution" over an arbitrary pool.  Doing it in varnish would get pretty complicated as you add more varnishes, and the infinite-loop potential is a little unnerving (to me anyway :)
>>>>
>>>> We wanted redundant caches in a similar way (but for boxes with ~1T of cache) and set up a test config with haproxy that seems to work, but we haven't put real-world load on it yet.
>>>> --
>>>> Ken
>>>>
>>>> On Jun 17, 2010, at 6:54 AM, Martin Boer wrote:
>>>>
>>>>> Hello all,
>>>>>
>>>>> I want to have 2 servers running varnish in parallel so that if one fails, the other still contains all cacheable data and the backend servers won't be overloaded.
>>>>> Could someone check to see if I'm on the right track?
>>>>>
>>>>> This is how I figure it should be working.
>>>>> I don't know how large 'weight' can be, but with varnish caching > 90% of requests, that impact on the backends would be affordable.
>>>>> Regards,
>>>>> Martin Boer
>>>>>
>>>>>
>>>>> director via_other_varnish random {
>>>>>   .retries = 5;
>>>>>   {
>>>>>     .backend = other_server;
>>>>>     .weight = 9;
>>>>>   }
>>>>>   # use the regular backends if the other varnish instance fails.
>>>>>   {
>>>>>     .backend = backend_1;
>>>>>     .weight = 1;
>>>>>   }
>>>>>   {
>>>>>     .backend = backend_2;
>>>>>     .weight = 1;
>>>>>   }
>>>>>   {
>>>>>     .backend = backend_3;
>>>>>     .weight = 1;
>>>>>   }
>>>>> }
>>>>>
>>>>> director via_backends random {
>>>>>   {
>>>>>     .backend = backend_1;
>>>>>     .weight = 1;
>>>>>   }
>>>>>   {
>>>>>     .backend = backend_2;
>>>>>     .weight = 1;
>>>>>   }
>>>>>   {
>>>>>     .backend = backend_3;
>>>>>     .weight = 1;
>>>>>   }
>>>>> }
>>>>>
>>>>>
>>>>> sub vcl_recv {
>>>>>   if (req.http.X-through-varnish) {
>>>>>     # the other varnish already forwarded this request,
>>>>>     # so send it on to the real backends
>>>>>     set req.backend = via_backends;
>>>>>     remove req.http.X-through-varnish;
>>>>>   } else {
>>>>>     # try the other varnish first
>>>>>     set req.http.X-through-varnish = "1";
>>>>>     set req.backend = via_other_varnish;
>>>>>   }
>>>>> ..
>>>>>
>>>>>


