abnormally high load?
Jeremy Hinegardner
jeremy at hinegardner.org
Wed Aug 12 23:32:00 CEST 2009
On Wed, Aug 12, 2009 at 12:25:11PM -0700, Ken Brownfield wrote:
> My first guess is that you're seeing varnish spawn a lot of threads because
> your back-end isn't keeping up with the miss rate. My second guess is that
> these misses are large files that are taking a long time for clients to
> download, therefore piling up active client connections (and thus worker
> threads).
On the backends, I'm seeing extremely idle systems, and for a short spurt last
night I changed the config to just 'pass' everything, and the backends were
mostly able to handle the load; I may be able to tune them to handle all of
it. That may be a route I take: use varnish only to determine the right
backend for each request and pass everything through.
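For that route-and-pass approach, a minimal sketch in Varnish 2.x VCL might look like the following. The backend names, addresses, and the URL-based selection are placeholders for illustration only; our actual config does the backend selection in inline C.

```vcl
backend tyrant01 { .host = "10.0.0.1"; .port = "1978"; }
backend tyrant02 { .host = "10.0.0.2"; .port = "1978"; }

sub vcl_recv {
    # Placeholder selection logic; the real version hashes the key in inline C.
    if (req.url ~ "^/shard-a/") {
        set req.backend = tyrant01;
    } else {
        set req.backend = tyrant02;
    }
    # Never cache: varnish acts purely as a router here.
    pass;
}
```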
The vast majority of our files are in the 1K-4K range; on very rare occasion
we get a few that are 1M or so. All of the machines involved here are on the
same gigabit LAN.
> I'm guessing your load is going high because you're swapping? In top, are
> your CPUs saturated, or fairly idle?
The CPUs are saturated, with 0 swap, and virtually 0 iowait.
> If you're seeing CPU saturation, this is possibly an internal Varnish
> issue. Your VCL seems sane, but we haven't seen the inline C.
I could very well be breaking things with our VCL and inline C. Here's our
complete varnish configuration, the inline C, and the small lib the inline C uses:
http://gist.github.com/166767
If this is bending varnish in a manner that it shouldn't be, feel free to
shake your head and ask 'how could you do something like that?'. So far it is
working well, except for this load issue.
> Hope it helps,
Yup, so far every comment has helped some, I have a few things I'm going to
change today and see if it alleviates the issue.
enjoy,
-jeremy
>
> On Aug 12, 2009, at 11:08 AM, Jeremy Hinegardner wrote:
>
>> On Wed, Aug 12, 2009 at 05:41:52PM +0100, Rob S wrote:
>>> Jeremy Hinegardner wrote:
>>>> Hi all,
>>>>
>>>> I'm trying to figure out if this is a normal situation or not. We have a
>>>> varnish instance in front of 12 tokyo tyrant instances with some inline C
>>>> in the VCL to determine which backend to talk to.
>>>>
>>>>
>>>
>>> If you restart varnish during one of these spikes, does it instantly
>>> disappear? I've seen this happen (though only spiking to about 12), and
>>> this is when Varnish has munched through far more memory than we've
>>> allocated it. This problem is one I've been looking into with Ken
>>> Brownfield, and touches on
>>> http://projects.linpro.no/pipermail/varnish-misc/2009-April/002743.html
>>> and
>>> http://projects.linpro.no/pipermail/varnish-misc/2009-June/002840.html
>>>
>>> Do any of these tie up with your experience?
>>
>> Possibly; the correlation I can see with those instances is this section
>> of our VCL:
>>
>> sub vcl_recv {
>>     ...
>>     } else if ( req.request == "PUT" || req.request == "PURGE" ) {
>>         purge( "req.url == " req.url );
>>         if ( req.request == "PUT" ) {
>>             pass;
>>         } else {
>>             error 200 "PURGE Success";
>>         }
>>     }
>>     ...
>> }
>>
>> We do a consistent stream of PUT operations; it's probably 10-15% of all
>> our operations. So our ban list would get fairly large, I'm guessing?
>>
>> I'm not seeing evidence of a memory leak, and the pmap of the process does
>> show
>> 4G in the varnish_storage.bin mapping.
>>
>> I've attached the output of 'varnishstat -1' if that helps. This is after
>> I've
>> diverted some traffic around varnish because of the load.
>>
>> If this purge() is the culprit, then should I make this change?
>>
>> sub vcl_recv {
>>     ...
>>     } else if ( req.request == "PUT" || req.request == "PURGE" ) {
>>         lookup;
>>     }
>>     ...
>> }
>>
>> sub vcl_hit {
>>     if ( req.request == "PUT" || req.request == "PURGE" ) {
>>         set obj.ttl = 0s;
>>         if ( req.request == "PURGE" ) {
>>             error 200 "PURGE Success";
>>         }
>>         pass;
>>     }
>> }
>>
>> sub vcl_miss {
>>     if ( req.request == "PUT" ) {
>>         pass;
>>     }
>>     if ( req.request == "PURGE" ) {
>>         error 404 "Not in cache.";
>>     }
>> }
>>
>> enjoy,
>>
>> -jeremy
--
========================================================================
Jeremy Hinegardner jeremy at hinegardner.org