From kavinsethu at gmail.com Mon Nov 21 12:37:52 2022
From: kavinsethu at gmail.com (learner)
Date: Mon, 21 Nov 2022 18:07:52 +0530
Subject: Need a clarification on using Varnish cache to store docker images.
Message-ID: 

Hi Team,

I hope this is the right place to ask questions; otherwise, please redirect me to the right place.
I would like to understand whether we can use Varnish as a caching proxy in front of a Docker registry. I hope Varnish is able to cache Docker images.
If yes, could you please share the right resources to explore further?

-- 
Kind Regards,
Kavinnath
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From guillaume.quintard at gmail.com Mon Nov 21 22:50:53 2022
From: guillaume.quintard at gmail.com (Guillaume Quintard)
Date: Mon, 21 Nov 2022 14:50:53 -0800
Subject: Need a clarification on using Varnish cache to store docker images.
In-Reply-To: 
References: 
Message-ID: 

Hello Kavinnath,

The short answer is "yes, but". Varnish can happily front a docker registry, since it's all just HTTP, but there are a few caveats:
- remember that docker, by default, doesn't like plaintext connections, so you'll probably want to use a TLS terminator in front of Varnish (https://hub.docker.com/_/hitch)
- docker images can be quite big, so make sure to size Varnish properly.
- if you want to front the main docker registry in particular, you'll have to deal with the token dance (https://docs.docker.com/registry/spec/auth/token/) it requires. It's not a problem in itself, but it requires some VCL to get it working.

If you have more questions, this mailing list will work very well for asynchronous messaging, but know that there's also an IRC channel (https://varnish-cache.org/support/#irc-channel) as well as a Discord server (https://discord.gg/EuwdvbZR6d) if you want something more synchronous.

Cheers,

-- 
Guillaume Quintard

On Mon, Nov 21, 2022 at 4:39 AM learner wrote:

> Hi Team,
>
> I hope this is the right place to ask questions; otherwise, please redirect me
> to the right place.
> I would like to understand whether we can use Varnish as a caching proxy in
> front of a Docker registry. I hope Varnish is able to cache Docker images.
> If yes, could you please share the right resources to explore further?
>
> --
> Kind Regards,
> Kavinnath
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
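To make the caveats in the reply above a little more concrete, here is a rough, untested VCL sketch of what fronting a registry can look like. The backend address, the blob-matching regex and the choice to drop the Authorization header for digest-addressed blobs are illustrative assumptions only, not a recommendation; a real deployment still has to handle the token flow linked above and should only strip authentication for content that is genuinely public.

    vcl 4.1;

    # Hypothetical local registry; adjust host/port to your setup.
    backend registry {
        .host = "127.0.0.1";
        .port = "5000";
    }

    sub vcl_recv {
        # Only GET/HEAD can be cached; pushes, token requests, etc.
        # go straight to the registry.
        if (req.method != "GET" && req.method != "HEAD") {
            return (pass);
        }
        # Blobs are addressed by sha256 digest and therefore immutable,
        # which makes them the natural thing to cache.
        if (req.url ~ "^/v2/.*/blobs/sha256:") {
            # Authorization normally makes a request uncacheable; only
            # drop it if the registry content is really public.
            unset req.http.Authorization;
            return (hash);
        }
        return (pass);
    }

    sub vcl_backend_response {
        if (bereq.url ~ "^/v2/.*/blobs/sha256:") {
            # Digest-addressed blobs never change, so a long TTL is safe.
            set beresp.ttl = 7d;
        }
    }

Manifests and tag lookups are where the token dance really matters, so in this sketch they are simply passed through.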
From info at infoverse.ca Tue Nov 22 03:11:07 2022
From: info at infoverse.ca (InfoVerse Inc.)
Date: Mon, 21 Nov 2022 22:11:07 -0500
Subject: Varnish-cache as private CDN
Message-ID: 

Hello list,

I am working on a design to use Varnish-Cache as a private CDN. The solution is for a small regional ISP in a remote region that wants to provide fast cached content to its users and minimize access to the Internet.

Since this is an ISP, the users accessing the Internet can be routed to varnish cache servers; however, in the event of a "miss" the content should be fetched from the Internet. This is a different requirement than the traditional backend server.

How can this be achieved with Varnish? I have done a bit of research on backends and directors, but they all require a server or group of servers whose content can be cached.

Is it possible to configure multiple Varnish storage servers as backends? The storage servers will fetch data from the Internet in case of a miss. Is this a workable solution?

Looking forward to a solution.

Thanks
InfoVerse
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From guillaume.quintard at gmail.com Tue Nov 22 05:03:34 2022
From: guillaume.quintard at gmail.com (Guillaume Quintard)
Date: Mon, 21 Nov 2022 21:03:34 -0800
Subject: Varnish-cache as private CDN
In-Reply-To: 
References: 
Message-ID: 

Hi!

So, the big question is: do you own the content/domains that the users will access?

If yes, there's absolutely no problem: route to Varnish, let it cache, and you're done. There are certain vmods, like vmod_dynamic or vmod_reqwest, that will allow you to dynamically find a backend based on a hostname.

If you don't own the content, it isn't advisable to try and cache it, like, at all. Let's say for example you want to use Varnish to cache content for facebook.com, and let's assume you can hijack the DNS response to send your users to Varnish instead of to the actual facebook servers.

If the request Varnish receives is HTTPS (encrypted), well, you're out of luck, because you won't have the certificates to pretend to be facebook.com; your users will realize it and bail out. The only way around it is to try something like what Kazakhstan did a few years back [1], but I don't think that would fly in Canada. If you're thinking "wait, can't I just cache the response without decrypting it?", nope, because the whole connection is encrypted, and either you see everything (you have the certificate/key), or nothing (you don't have them). In that latter case, the best you can do is blindly redirect the connection to the facebook server, but then you are just an HTTPS proxy, and caching isn't relevant.

If we are talking about plaintext HTTP, and ignoring that your browser and any website worth its salt (including facebook.com) will fight you very hard and try to go encrypted, you have another issue: you need to know what's cacheable, and that's a doozy. There's no universal rule for what's cacheable, and whatever set of rules you come up with, I'll bet I can find a website that'll break them. And the price of failure is super high too: imagine you start sending the same cached bank statement to everybody; people will sue you into the ground.

So, all in all, meh, I wouldn't worry about it. And it's not just Varnish, it's any caching solution: you just can't "cache the internet".

Sorry if that reads like a very long-winded way of saying "NO", but as I've had to answer this question many times over the years, I thought I'd hammer that point home once and for all :-)

[1]: https://en.wikipedia.org/wiki/Kazakhstan_man-in-the-middle_attack

-- 
Guillaume Quintard

On Mon, Nov 21, 2022 at 7:13 PM InfoVerse Inc. wrote:

> Hello list,
>
> I am working on a design to use Varnish-Cache as a private CDN. The
> solution is for a small regional ISP in a remote region that wants to
> provide fast cached content to its users and minimize access to the
> Internet.
>
> Since this is an ISP, the users accessing the Internet can be routed to
> varnish cache servers; however, in the event of a "miss" the content should
> be fetched from the Internet. This is a different requirement than the
> traditional backend server.
>
> How can this be achieved with Varnish? I have done a bit of research on
> backends and directors, but they all require a server or group of servers
> whose content can be cached.
>
> Is it possible to configure multiple Varnish storage servers as backends?
> The storage servers will fetch data from the Internet in case of a miss. Is > this a workable solution? > > Looking forward to a solution. > > Thanks > InfoVerse > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From luke.rotherfield at function.london Tue Nov 22 18:00:32 2022 From: luke.rotherfield at function.london (Luke Rotherfield) Date: Tue, 22 Nov 2022 13:00:32 -0500 Subject: Varnish 6.0.11 panic help Message-ID: <977EEFFA-A5B5-4C70-AE66-60788269373A@function.london> Hi Guys I am really struggling to debug this panic and wondered if you could give any hints were I might start looking for answers. I have been trawling the GitHub repo and have seen several people recommend upping the thread_pool_stack value. Here is the panic and a few other bits of data that I think are useful based on the issues I have been reading: varnishd[15482]: Child (15492) Panic at: Tue, 22 Nov 2022 16:10:26 GMT Wrong turn at cache/cache_main.c:284: Signal 11 (Segmentation fault) received at 0xffff887fe000 si_code 2 version = varnish-6.0.11 revision a3bc025c2df28e4a76e10c2c41217c9864e9963b, vrt api = 7.1 ident = Linux,4.14.290-217.505.amzn2.aarch64,aarch64,-junix,-smalloc,-sfile,-sdefault,-hcritbit,epoll now = 3120063.050288 (mono), 1669133426.954528 (real) Backtrace: 0x43f5b4: /usr/sbin/varnishd() [0x43f5b4] 0x4a968c: /usr/sbin/varnishd(VAS_Fail+0x54) [0x4a968c] 0x43a350: /usr/sbin/varnishd() [0x43a350] 0xffff90fd7668: linux-vdso.so.1(__kernel_rt_sigreturn+0) [0xffff90fd7668] 0xffff90c8f540: /lib64/libc.so.6(memset+0x100) [0xffff90c8f540] 0x4b88f4: /usr/sbin/varnishd(deflateReset+0x48) [0x4b88f4] 0x42e924: /usr/sbin/varnishd(VGZ_NewGzip+0x88) [0x42e924] 0x42ebc0: /usr/sbin/varnishd() [0x42ebc0] 0x42df28: /usr/sbin/varnishd(VFP_Open+0x98) [0x42df28] 0x42b950: /usr/sbin/varnishd() [0x42b950] thread = (cache-worker) thr.req = (nil) { }, thr.busyobj = 0xffff3d040020 { end = 0xffff3d050000, retries = 0, sp = 0xffff3c241a20 { fd = 28, vxid = 32945, t_open = 1669133426.953970, t_idle = 1669133426.953970, ws = 0xffff3c241a60 { id = \"ses\", {s, f, r, e} = {0xffff3c241aa0, +96, (nil), +344}, }, transport = HTTP/1 { state = HTTP1::Proc } client = 172.31.47.149 50812 :80, }, worker = 0xffff832326c8 { ws = 0xffff83232770 { id = \"wrk\", {s, f, r, e} = {0xffff83231e00, +0, (nil), +2040}, }, VCL::method = BACKEND_RESPONSE, VCL::return = deliver, VCL::methods = {BACKEND_FETCH, BACKEND_RESPONSE}, }, vfc = 0xffff3d041f30 { failed = 0, req = 0xffff3d040640, resp = 0xffff3d040ab8, wrk = 0xffff832326c8, oc = 0xffff3b250640, filters = { gzip = 0xffff3d04a740 { priv1 = (nil), priv2 = 0, closed = 0 }, V1F_STRAIGHT = 0xffff3d04a660 { priv1 = 0xffff3d042600, priv2 = 674132, closed = 0 }, }, obj_flags = 0x0, }, ws = 0xffff3d040058 { id = \"bo\", {s, f, r, e} = {0xffff3d041f78, +34832, (nil), +57472}, }, ws_bo = 0xffff3d0425e8, http[bereq] = 0xffff3d040640 { ws = 0xffff3d040058 { [Already dumped, see above] }, hdrs { \"GET\", \"/build/admin/css/oro.css?v=7c08a284\", \"HTTP/1.1\", \"X-Forwarded-Proto: https\", \"X-Forwarded-Port: 443\", \"Host: london.paperstage.doverstreetmarket.com \", \"X-Amzn-Trace-Id: Root=1-637cf472-4faa5da7246eda4f0a477811\", \"User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36\", \"X-Amz-Cf-Id: 
WVvRZ7m_mF9MaQ88b0rJd_vEKVJo5YqQr6Povp9ODasHVw2FQop36w==\", \"Via: 3.0 c1685d59e35fdb859ab8a1f97feb5652.cloudfront.net (CloudFront)\", \"Cookie: BAPID=mdmh5umu6v6l1qejvo0lbhse54; __utma=58316727.170235450.1647004079.1669126271.1669131591.29; __utmb=58316727.22.10.1669131591; __utmc=58316727; __utmt=1; __utmt_b=1; __utmz=58316727.1665659005.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); _csrf=5Inqz08yS1ACLDnJDFG8xoavis88G4BgHx4WpD6RuQM; _ga=GA1.2.170235450.1647004079; _ga_QNDMHZ1SBD=GS1.1.1667387720.14.1.1667387720.60.0.0; _gcl_au=1.1.255577917.1662482672; _landing_page=%2F; _orig_referrer=; _shopify_y=1531218a-8c0c-4153-86e1-37e06cc3a103; _y=1531218a-8c0c-4153-86e1-37e06cc3a103; df_preview=%7B%22preview_id%22%3A%221755%22%2C%22preview_type%22%3A%22version%22%2C%22preview_code%22%3A%22d82194c2b3c4f22164ca71d225673ff6%22%7D\", \"Accept-Language: en-US,en;q=0.9,fr;q=0.8\", \"Accept: text/css,*/*;q=0.1\", \"Referer: https://london.paperstage.doverstreetmarket.com/doverstreetadmin/pages/settings/1755/en\ ", \"Accept-Encoding: gzip, deflate, br\", \"pragma: no-cache\", \"cache-control: no-cache\", \"sec-gpc: 1\", \"sec-fetch-site: same-origin\", \"sec-fetch-mode: no-cors\", \"sec-fetch-dest: style\", \"X-Forwarded-For: 32.217.249.1, 15.158.35.113, 172.31.47.149\", \"X-Varnish: 32947\", }, }, http[beresp] = 0xffff3d040ab8 { ws = 0xffff3d040058 { [Already dumped, see above] }, hdrs { \"HTTP/1.1\", \"200\", \"OK\", \"Server: nginx/1.20.0\", \"Date: Tue, 22 Nov 2022 16:10:26 GMT\", \"Content-Type: text/css\", \"Content-Length: 674132\", \"Last-Modified: Tue, 22 Nov 2022 15:28:13 GMT\", \"Connection: keep-alive\", \"Expires: Wed, 22 Nov 2023 16:10:26 GMT\", \"Cache-Control: max-age=31536000, public, no-transform\", \"Accept-Ranges: bytes\", \"X-Url: /build/admin/css/oro.css?v=7c08a284\", \"X-Host: london.paperstage.doverstreetmarket.com \", }, }, objcore[fetch] = 0xffff3b250640 { refcnt = 2, flags = {busy, hfm, private}, exp_flags = {}, boc = 0xffff3b260160 { refcnt = 2, state = req_done, vary = (nil), stevedore_priv = (nil), }, exp = {1669133426.954405, -1.000000, 300.000000, 0.000000}, objhead = 0xffff907d01d0, stevedore = (nil), }, http_conn = 0xffff3d042600 { fd = 30 (@0xffff3be704e4), doclose = NULL, ws = 0xffff3d040058 { [Already dumped, see above] }, {rxbuf_b, rxbuf_e} = {0xffff3d042660, 0xffff3d0427a8}, {pipeline_b, pipeline_e} = {0xffff3d0427a8, 0xffff3d04a660}, content_length = 674132, body_status = length, first_byte_timeout = 180.000000, between_bytes_timeout = 180.000000, }, flags = {do_gzip, do_stream, do_pass, uncacheable}, director_req = 0xffff8ffc0190 { vcl_name = default, health = healthy, admin_health = probe, changed = 1669131098.893551, type = backend { display_name = boot.default, ipv4 = 127.0.0.1, port = 8080, hosthdr = 127.0.0.1, n_conn = 1, }, }, director_resp = director_req, vcl = { name = \"boot\", busy = 5, discard = 0, state = auto, temp = warm, conf = { syntax = \"40\", srcname = { \"/etc/varnish/default.vcl\", \"Builtin\", }, }, }, }, vmods = { std = {Varnish 6.0.11 a3bc025c2df28e4a76e10c2c41217c9864e9963b, 0.0}, }, param.show thread_pool_stack 200 thread_pool_stack Value is: 128k [bytes] (default) Minimum is: 128k yum info jemalloc Loaded plugins: extras_suggestions, langpacks, priorities, update-motd 223 packages excluded due to repository priority protections Installed Packages Name : jemalloc Arch : aarch64 Version : 3.6.0 Release : 1.el7 Size : 374 k Repo : installed From repo : epel Summary : General-purpose scalable concurrent malloc implementation URL : 
http://www.canonware.com/jemalloc/ License : BSD Description : General-purpose scalable concurrent malloc(3) implementation. : This distribution is the stand-alone "portable" implementation of jemalloc. Any insight you can offer would be greatly appreciated. I am going to try setting the thread_pool_stack to 1024k today, however this error is pretty intermittent so I may not know if it has resolved anything for a day or so. Kind Regards Luke Rotherfield Senior Developer https://function.london -------------- next part -------------- An HTML attachment was scrubbed... URL: From dridi at varni.sh Wed Nov 23 08:55:58 2022 From: dridi at varni.sh (Dridi Boukelmoune) Date: Wed, 23 Nov 2022 08:55:58 +0000 Subject: Varnish 6.0.11 panic help In-Reply-To: <977EEFFA-A5B5-4C70-AE66-60788269373A@function.london> References: <977EEFFA-A5B5-4C70-AE66-60788269373A@function.london> Message-ID: On Tue, Nov 22, 2022 at 6:03 PM Luke Rotherfield wrote: > > Hi Guys > > I am really struggling to debug this panic and wondered if you could give any hints were I might start looking for answers. I have been trawling the GitHub repo and have seen several people recommend upping the thread_pool_stack value. Here is the panic and a few other bits of data that I think are useful based on the issues I have been reading: > > varnishd[15482]: Child (15492) Panic at: Tue, 22 Nov 2022 16:10:26 GMT > > Wrong turn at cache/cache_main.c:284: > Signal 11 (Segmentation fault) received at 0xffff887fe000 si_code 2 > version = varnish-6.0.11 revision a3bc025c2df28e4a76e10c2c41217c9864e9963b, vrt api = 7.1 > ident = Linux,4.14.290-217.505.amzn2.aarch64,aarch64,-junix,-smalloc,-sfile,-sdefault,-hcritbit,epoll > now = 3120063.050288 (mono), 1669133426.954528 (real) > Backtrace: > 0x43f5b4: /usr/sbin/varnishd() [0x43f5b4] > 0x4a968c: /usr/sbin/varnishd(VAS_Fail+0x54) [0x4a968c] > 0x43a350: /usr/sbin/varnishd() [0x43a350] > 0xffff90fd7668: linux-vdso.so.1(__kernel_rt_sigreturn+0) [0xffff90fd7668] > 0xffff90c8f540: /lib64/libc.so.6(memset+0x100) [0xffff90c8f540] > 0x4b88f4: /usr/sbin/varnishd(deflateReset+0x48) [0x4b88f4] > 0x42e924: /usr/sbin/varnishd(VGZ_NewGzip+0x88) [0x42e924] > 0x42ebc0: /usr/sbin/varnishd() [0x42ebc0] > 0x42df28: /usr/sbin/varnishd(VFP_Open+0x98) [0x42df28] > 0x42b950: /usr/sbin/varnishd() [0x42b950] > thread = (cache-worker) > thr.req = (nil) { > }, > thr.busyobj = 0xffff3d040020 { > end = 0xffff3d050000, > retries = 0, > sp = 0xffff3c241a20 { > fd = 28, vxid = 32945, > t_open = 1669133426.953970, > t_idle = 1669133426.953970, > ws = 0xffff3c241a60 { > id = \"ses\", > {s, f, r, e} = {0xffff3c241aa0, +96, (nil), +344}, > }, > transport = HTTP/1 { > state = HTTP1::Proc > } > client = 172.31.47.149 50812 :80, > }, > worker = 0xffff832326c8 { > ws = 0xffff83232770 { > id = \"wrk\", > {s, f, r, e} = {0xffff83231e00, +0, (nil), +2040}, > }, > VCL::method = BACKEND_RESPONSE, > VCL::return = deliver, > VCL::methods = {BACKEND_FETCH, BACKEND_RESPONSE}, > }, > vfc = 0xffff3d041f30 { > failed = 0, > req = 0xffff3d040640, > resp = 0xffff3d040ab8, > wrk = 0xffff832326c8, > oc = 0xffff3b250640, > filters = { > gzip = 0xffff3d04a740 { > priv1 = (nil), > priv2 = 0, > closed = 0 > }, > V1F_STRAIGHT = 0xffff3d04a660 { > priv1 = 0xffff3d042600, > priv2 = 674132, > closed = 0 > }, > }, > obj_flags = 0x0, > }, > ws = 0xffff3d040058 { > id = \"bo\", > {s, f, r, e} = {0xffff3d041f78, +34832, (nil), +57472}, > }, > ws_bo = 0xffff3d0425e8, > http[bereq] = 0xffff3d040640 { > ws = 0xffff3d040058 { > [Already dumped, see above] > }, > hdrs { 
> \"GET\", > \"/build/admin/css/oro.css?v=7c08a284\", > \"HTTP/1.1\", > \"X-Forwarded-Proto: https\", > \"X-Forwarded-Port: 443\", > \"Host: london.paperstage.doverstreetmarket.com\", > \"X-Amzn-Trace-Id: Root=1-637cf472-4faa5da7246eda4f0a477811\", > \"User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36\", > \"X-Amz-Cf-Id: WVvRZ7m_mF9MaQ88b0rJd_vEKVJo5YqQr6Povp9ODasHVw2FQop36w==\", > \"Via: 3.0 c1685d59e35fdb859ab8a1f97feb5652.cloudfront.net (CloudFront)\", > \"Cookie: BAPID=mdmh5umu6v6l1qejvo0lbhse54; __utma=58316727.170235450.1647004079.1669126271.1669131591.29; __utmb=58316727.22.10.1669131591; __utmc=58316727; __utmt=1; __utmt_b=1; __utmz=58316727.1665659005.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); _csrf=5Inqz08yS1ACLDnJDFG8xoavis88G4BgHx4WpD6RuQM; _ga=GA1.2.170235450.1647004079; _ga_QNDMHZ1SBD=GS1.1.1667387720.14.1.1667387720.60.0.0; _gcl_au=1.1.255577917.1662482672; _landing_page=%2F; _orig_referrer=; _shopify_y=1531218a-8c0c-4153-86e1-37e06cc3a103; _y=1531218a-8c0c-4153-86e1-37e06cc3a103; df_preview=%7B%22preview_id%22%3A%221755%22%2C%22preview_type%22%3A%22version%22%2C%22preview_code%22%3A%22d82194c2b3c4f22164ca71d225673ff6%22%7D\", > \"Accept-Language: en-US,en;q=0.9,fr;q=0.8\", > \"Accept: text/css,*/*;q=0.1\", > \"Referer: https://london.paperstage.doverstreetmarket.com/doverstreetadmin/pages/settings/1755/en\", > \"Accept-Encoding: gzip, deflate, br\", > \"pragma: no-cache\", > \"cache-control: no-cache\", > \"sec-gpc: 1\", > \"sec-fetch-site: same-origin\", > \"sec-fetch-mode: no-cors\", > \"sec-fetch-dest: style\", > \"X-Forwarded-For: 32.217.249.1, 15.158.35.113, 172.31.47.149\", > \"X-Varnish: 32947\", > }, > }, > http[beresp] = 0xffff3d040ab8 { > ws = 0xffff3d040058 { > [Already dumped, see above] > }, > hdrs { > \"HTTP/1.1\", > \"200\", > \"OK\", > \"Server: nginx/1.20.0\", > \"Date: Tue, 22 Nov 2022 16:10:26 GMT\", > \"Content-Type: text/css\", > \"Content-Length: 674132\", > \"Last-Modified: Tue, 22 Nov 2022 15:28:13 GMT\", > \"Connection: keep-alive\", > \"Expires: Wed, 22 Nov 2023 16:10:26 GMT\", > \"Cache-Control: max-age=31536000, public, no-transform\", > \"Accept-Ranges: bytes\", > \"X-Url: /build/admin/css/oro.css?v=7c08a284\", > \"X-Host: london.paperstage.doverstreetmarket.com\", > }, > }, > objcore[fetch] = 0xffff3b250640 { > refcnt = 2, > flags = {busy, hfm, private}, > exp_flags = {}, > boc = 0xffff3b260160 { > refcnt = 2, > state = req_done, > vary = (nil), > stevedore_priv = (nil), > }, > exp = {1669133426.954405, -1.000000, 300.000000, 0.000000}, > objhead = 0xffff907d01d0, > stevedore = (nil), > }, > http_conn = 0xffff3d042600 { > fd = 30 (@0xffff3be704e4), > doclose = NULL, > ws = 0xffff3d040058 { > [Already dumped, see above] > }, > {rxbuf_b, rxbuf_e} = {0xffff3d042660, 0xffff3d0427a8}, > {pipeline_b, pipeline_e} = {0xffff3d0427a8, 0xffff3d04a660}, > content_length = 674132, > body_status = length, > first_byte_timeout = 180.000000, > between_bytes_timeout = 180.000000, > }, > flags = {do_gzip, do_stream, do_pass, uncacheable}, > director_req = 0xffff8ffc0190 { > vcl_name = default, > health = healthy, > admin_health = probe, changed = 1669131098.893551, > type = backend { > display_name = boot.default, > ipv4 = 127.0.0.1, > port = 8080, > hosthdr = 127.0.0.1, > n_conn = 1, > }, > }, > director_resp = director_req, > vcl = { > name = \"boot\", > busy = 5, > discard = 0, > state = auto, > temp = warm, > conf = { > syntax = \"40\", > srcname = { > 
\"/etc/varnish/default.vcl\", > \"Builtin\", > }, > }, > }, > }, > vmods = { > std = {Varnish 6.0.11 a3bc025c2df28e4a76e10c2c41217c9864e9963b, 0.0}, > }, > > param.show thread_pool_stack > 200 > thread_pool_stack > Value is: 128k [bytes] (default) > Minimum is: 128k > > > yum info jemalloc > Loaded plugins: extras_suggestions, langpacks, priorities, update-motd > 223 packages excluded due to repository priority protections > Installed Packages > Name : jemalloc > Arch : aarch64 > Version : 3.6.0 > Release : 1.el7 > Size : 374 k > Repo : installed > From repo : epel > Summary : General-purpose scalable concurrent malloc implementation > URL : http://www.canonware.com/jemalloc/ > License : BSD > Description : General-purpose scalable concurrent malloc(3) implementation. > : This distribution is the stand-alone "portable" implementation of jemalloc. > > > Any insight you can offer would be greatly appreciated. I am going to try setting the thread_pool_stack to 1024k today, however this error is pretty intermittent so I may not know if it has resolved anything for a day or so. Hi Luke, At first glance this does not look like something you could work around with a larger thread stack. If you have a github account, we can move the discussion there: https://github.com/varnishcache/varnish-cache/issues/3867 Best, Dridi