From uday.polu at indiamart.com Thu Jun 1 01:47:46 2023
From: uday.polu at indiamart.com (Uday Kumar)
Date: Thu, 1 Jun 2023 07:17:46 +0530
Subject: Need Assistance in Configuring Varnish to Retain and Ignore Unique Parameter in Request URL while caching
In-Reply-To: References: Message-ID:

> Does it need to be unique? can't we just get away with
> "aaaaaaaaaaaaaaaaaaaa"?

Our Requirements:
1. The new parameter should be *appended* to the already existing parameters in the query string. (It should not replace the entire query string.)
2. The parameter value *must be unique for each request* (ideally, unique randomness is preferred).
3. Allowed characters are alphanumeric and *URL safe* [letters can be lowercase or uppercase].
4. Characters can be repeated within the parameter value, e.g. Gn4lT*Y*gBgpPaRi6hw6*Y*S (here, Y is repeated), but as mentioned above the value must be unique as a whole.

Ex: The parameter value for the 1st request can be "Gn4lT*Y*gBgpPaRi6hw6*Y*S", for the 2nd request "G34lTYgBgpPaRi6hyaaF", and so on.

Thanks & Regards
Uday Kumar
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From guillaume.quintard at gmail.com Thu Jun 1 05:42:25 2023
From: guillaume.quintard at gmail.com (Guillaume Quintard)
Date: Wed, 31 May 2023 22:42:25 -0700
Subject: Need Assistance in Configuring Varnish to Retain and Ignore Unique Parameter in Request URL while caching
In-Reply-To: References: Message-ID:

Thanks, so, to make things clean you are going to need to use a couple of vmods, which means being able to compile them first:
- https://github.com/otto-de/libvmod-uuid as Geoff offered
- https://github.com/Dridi/libvmod-querystring that will allow easy manipulation of the querystring

Unfortunately, the install-vmod tool that is bundled into the Varnish docker image isn't able to cleanly compile/install them.
I'll have a look this week-end if I can, or at least I'll open a ticket on https://github.com/varnish/docker-varnish

But, if you are able to install those two, then your life is easy:
- once you receive a request, you can start by creating a unique ID, which'll be the VCL equivalent of `uuidgen | sed -E 's/(\w+)-(\w+)-(\w+)-(\w+).*/\1\2\3\4/'` (without having tested it, probably `regsub(uuid.uuid_v4(), "(\w+)-(\w+)-(\w+)-(\w+).*", "\1\2\3\4")`)
- then just add/replace the parameter in the querystring with vmod_querystring

and...that's about it?

Problem is getting the vmods to compile/install, which I can help with this week-end. There's black magic that you can do using regex to manipulate the querystring, but it's a terrible idea.

-- Guillaume Quintard

On Wed, May 31, 2023 at 6:48 PM Uday Kumar wrote:
> [...]
-------------- next part --------------
An HTML attachment was scrubbed...
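[Editorial note: since the requirements above ask for a purely alphanumeric value, a variant of the suggestion in this thread that also strips the UUID's hyphens may be useful. This is an untested sketch, assuming vmod_uuid is compiled and loaded; the header name X-Unique-Param is made up for illustration.]

```vcl
vcl 4.1;

import uuid;

sub vcl_recv {
    # uuid_v4() returns something like "f3b4e2d0-1a2b-4c3d-8e9f-0a1b2c3d4e5f".
    # regsuball() removes the hyphens, then regsub() keeps the first 20
    # characters, leaving a 20-character alphanumeric (hex) value.
    set req.http.X-Unique-Param =
        regsub(regsuball(uuid.uuid_v4(), "-", ""), "^(.{20}).*", "\1");
}
```

Twenty hex characters carry roughly 80 bits from the UUID, so collisions remain astronomically unlikely for this use case.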
URL: 

From geoff at uplex.de Thu Jun 1 07:15:43 2023
From: geoff at uplex.de (Geoff Simmons)
Date: Thu, 1 Jun 2023 09:15:43 +0200
Subject: Need Assistance in Configuring Varnish to Retain and Ignore Unique Parameter in Request URL while caching
In-Reply-To: References: Message-ID:

On 6/1/23 03:47, Uday Kumar wrote:
>
> Our Requirements:
[...]
> 3. Allowed Characters are Alphanumeric which are *URL safe* [can be
> lowercase, uppercase in case of alphabets]
[...]

This time you didn't mention a requirement of exactly 20 characters. If that's not strictly required, it's worth pointing out that the hyphen character '-' is also URL safe. So in that case you could just use the standard 36-character representation of a UUID, with its four hyphens. It would save you the trouble of regex matching and substitution, as Guillaume spelled out.

Best,
Geoff
-- 
** * * UPLEX - Nils Goroll Systemoptimierung
Scheffelstraße 32
22301 Hamburg
Tel +49 40 2880 5731
Mob +49 176 636 90917
Fax +49 40 42949753
http://uplex.de
-------------- next part --------------
A non-text attachment was scrubbed...
Name: OpenPGP_signature
Type: application/pgp-signature
Size: 840 bytes
Desc: OpenPGP digital signature
URL: 

From uday.polu at indiamart.com Thu Jun 1 13:09:26 2023
From: uday.polu at indiamart.com (Uday Kumar)
Date: Thu, 1 Jun 2023 18:39:26 +0530
Subject: Need Assistance in Configuring Varnish to Retain and Ignore Unique Parameter in Request URL while caching
In-Reply-To: References: Message-ID:

Thanks for the prompt response!
Thanks & Regards
Uday Kumar

On Thu, Jun 1, 2023 at 11:12 AM Guillaume Quintard <guillaume.quintard at gmail.com> wrote:
> [...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From guillaume.quintard at gmail.com Mon Jun 5 06:23:48 2023
From: guillaume.quintard at gmail.com (Guillaume Quintard)
Date: Sun, 4 Jun 2023 23:23:48 -0700
Subject: Need Assistance in Configuring Varnish to Retain and Ignore Unique Parameter in Request URL while caching
In-Reply-To: References: Message-ID:

Hi all,

Turns out install-vmod works, just needed to grab the right tarballs and have the right dependencies installed. Here's the Dockerfile I used:

FROM varnish:7.3

USER root
RUN set -e; \
    EXTRA_DEPS="autoconf-archive libossp-uuid-dev"; \
    apt-get update; \
    apt-get -y install $VMOD_DEPS $EXTRA_DEPS libossp-uuid16 libuuid1 /pkgs/*.deb; \
    # vmod_querystring
    install-vmod https://github.com/Dridi/libvmod-querystring/releases/download/v2.0.3/vmod-querystring-2.0.3.tar.gz; \
    # vmod_uuid
    install-vmod https://github.com/otto-de/libvmod-uuid/archive/refs/heads/master.tar.gz; \
    apt-get -y purge --auto-remove $VMOD_DEPS $EXTRA_DEPS varnish-dev; \
    rm -rf /var/lib/apt/lists/*
USER varnish

and here's the VCL:

vcl 4.1;

import querystring;
import uuid;

backend default {
    .host = "localhost";
    .port = "4444";
}

sub vcl_init {
    new qf = querystring.filter(sort = true);
    qf.add_string("myparam");
}

# clear the param from the url as it goes in
sub vcl_recv {
    # clear myparam from the incoming url
    set req.url = qf.apply(mode = drop);
}

# add the querystring parameter back if we go to the backend
sub vcl_backend_fetch {
    # create the unique string
    set bereq.http.mynewparam = regsub(uuid.uuid_v4(), "^(.{20}).*", "\1");
    # add our own myparam
    if (bereq.url ~ "\?") {
        set bereq.url = bereq.url + "&myparam=" + bereq.http.mynewparam;
    } else {
        set bereq.url = bereq.url + "?myparam=" + bereq.http.mynewparam;
    }
}

It's a bit crude, but it fulfills your requirements. Make sure you test it though.

-- Guillaume Quintard

On Thu, Jun 1, 2023 at 6:10 AM Uday Kumar wrote:
> [...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From uday.polu at indiamart.com Mon Jun 5 11:31:04 2023
From: uday.polu at indiamart.com (Uday Kumar)
Date: Mon, 5 Jun 2023 17:01:04 +0530
Subject: Need Assistance in Configuring Varnish to Retain and Ignore Unique Parameter in Request URL while caching
In-Reply-To: References: Message-ID:

Hello Guillaume,

Thanks for the update!

> (It's done by default if you don't have a vcl_hash section in your VCL)
> We can tweak it slightly so that we ignore the whole querystring:
> sub vcl_hash {
>     hash_data(regsub(req.url, "\?.*",""));
>     if (req.http.host) {
>         hash_data(req.http.host);
>     } else {
>         hash_data(server.ip);
>     }
>     return (lookup);
> }

Would like to discuss the above suggestion.

*FYI:*
*In our current vcl_hash subroutine, we didn't have any return (lookup) statement in production, and the code is as below:*

#Working
sub vcl_hash {
    hash_data(req.url);
    hash_data(req.http.Accept-Encoding);
}

The above code is *working without any issues in production even without a return (lookup)* statement.

For our new requirement, *to ignore the parameter in the URL while caching,* as per your suggestion we have made changes to the vcl_hash subroutine; the new code is as below.
#Not Working
sub vcl_hash {
    set req.http.hash-url = regsuball(req.url, "traceId=.*?(\&|$)", "");
    hash_data(req.http.hash-url);
    unset req.http.hash-url;
    hash_data(req.http.Accept-Encoding);
}

The above code is *not hashing the URL while ignoring traceId (not as expected)*, *but if I add return (lookup) at the end of the subroutine it works as expected.*

#Working Code
sub vcl_hash {
    set req.http.hash-url = regsuball(req.url, "traceId=.*?(\&|$)", "");
    hash_data(req.http.hash-url);
    unset req.http.hash-url;
    hash_data(req.http.Accept-Encoding);
    *return (lookup);*
}

*I have a few doubts to be clarified:*
1. May I know what difference the return (lookup) statement makes?
2. Will there be any side effects with the modified code if I use return (lookup)? (Because the original code was not causing any issue even without return (lookup) in production.)

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From guillaume.quintard at gmail.com Mon Jun 5 14:28:02 2023
From: guillaume.quintard at gmail.com (Guillaume Quintard)
Date: Mon, 5 Jun 2023 07:28:02 -0700
Subject: Need Assistance in Configuring Varnish to Retain and Ignore Unique Parameter in Request URL while caching
In-Reply-To: References: Message-ID:

Hi,

Relevant documentation:
- https://varnish-cache.org/docs/trunk/users-guide/vcl-hashing.html
- https://www.varnish-software.com/developers/tutorials/varnish-builtin-vcl/
- https://varnish-cache.org/docs/trunk/users-guide/vcl-built-in-code.html

Essentially: if you don't use a return statement, then the built-in VCL code is executed, and so the logic will be different with and without that statement. You wrote that the code isn't working, but don't explain further, which makes it hard to debug; my best guess is that you're hashing too much because of the built-in code. One thing you can do is this:

```
sub vcl_deliver {
    set resp.http.req-hash = req.hash;
    ...
}
```

That will allow you to see objects get the same hash, or a different one.
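[Editorial note: for context on the built-in VCL behavior discussed here, when vcl_hash falls through without a return statement, Varnish appends the built-in vcl_hash, which looks roughly like this (see the built-in VCL links above):]

```vcl
sub vcl_hash {
    # the built-in code hashes the full, unmodified request URL
    hash_data(req.url);
    if (req.http.host) {
        hash_data(req.http.host);
    } else {
        hash_data(server.ip);
    }
    return (lookup);
}
```

So the "#Not Working" version above ends up hashing the full req.url (traceId included) a second time via the built-in code, which is why objects never share a hash; the explicit return (lookup) prevents the built-in code from running.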
On that topic, I'm pretty certain that hashing the Accept-Encoding header is useless and will fragment your cache needlessly, as Varnish already takes that header into account implicitly. Note that the VCL I shared in my last email doesn't have a vcl_hash function because it relies entirely on modifying the url before it is hashed by the built-in VCL.

Hope that helps.

-- Guillaume Quintard

On Mon, Jun 5, 2023 at 4:31 AM Uday Kumar wrote:
> [...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jb-varnish at wisemo.com Tue Jun 6 12:05:25 2023
From: jb-varnish at wisemo.com (Jakob Bohm)
Date: Tue, 6 Jun 2023 14:05:25 +0200
Subject: Mysterious no content result, from an URL with pass action
In-Reply-To: References: <9d1f9ed6-1b52-88ba-9cdf-b1d613fe4069@wisemo.com> Message-ID:

Hi all,

Just a quick update. After changing the hitch-to-varnish connection from AF_UNIX to TCP, rerunning the experiment with tcpdump active revealed that varnish 7.2.1 seemed to silently ignore HTTP/2 requests whenever my browser chose that over HTTP/1.x. Turning off HTTP/2 in hitch seems to make things work.

I'm still surprised that varnishd drops HTTP/2 over proxyv2 silently, with no logging that a connection was dropped, and in such a way that web browsers interpret it as an empty page. Feels very similar to my earlier issue that failure to bind to a specified listen address was not shown to the sysadmin starting varnishd.

Now it's time to upgrade to 7.3.0 and improve the configuration.
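[Editorial note: as Dridi points out later in this thread, varnishd only speaks HTTP/2 when the corresponding feature flag is enabled. A hitch-in-front-of-varnish setup needs roughly the following; the listen address, VCL path, and the hitch `alpn-protos` line are illustrative placeholders, not a definitive configuration:]

```
# varnishd: enable the h2 feature so PROXY-delivered HTTP/2 is accepted
varnishd \
    -a proxy_in=127.0.0.1:6086,PROXY \
    -p feature=+http2 \
    -f /etc/varnish/default.vcl

# hitch.conf: only advertise h2 via ALPN if varnishd has the flag above
# alpn-protos = "h2, http/1.1"
```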
On 2023-05-10 10:43, Jakob Bohm wrote:
> On 2023-05-10 06:49, Guillaume Quintard wrote:
>> Hi Jakob,
>>
>> (Sorry i didn't see that email sooner, it was in my spam folder)
>>
>> Looking at the log, I'm not sure what varnish should be loud about :-)
>> 204 is a success code, and more importantly it's generated by the
>> backend, so varnish is happily passing it along.
>>
>> At the http level, everything looks about right, but I can guess from
>> your apparent irritation that something is wrong one level up, let's try
>> to debug that.
>>
>> What kind of response are you expecting, if not a 204? And maybe,
>> what is that endpoint supposed to do? Given that the method was GET,
>> and that there's no body, my only guess is that there's something
>> happening with the TeamCity-AgentSessionId header, maybe?
>> Is the 27 seconds processing time expected?
>>
>
> Expecting uncachable results that vary with time and are only
> sometimes 204, and the response time is also somewhat unexpected, but
> is not clearly logged (only a Varnish expert like you can decrypt that
> it is 27 seconds). It is also unclear if Varnish is always receiving
> those responses from the backend.
>
> I also expected some other URLs in the log, but don't see them.
>
> Maybe I should find another day to run the experiments again.
>
>> Cheers,
>>
>> On Tue, May 9, 2023, 15:12 Jakob Bohm wrote:
>>
>>     Dear Varnish mailing list,
>>
>>     When testing varnish as a reverse proxy for multiple services
>>     including a local JetBrains TeamCity instance, requests to that
>>     teamcity server get corrupted into "204 No Content" replies.
>>
>>     Once again, Varnish fails to say why it is refusing to do its job.
>>     Any sane program should explicitly and loudly report any fatal error
>>     that stops it working. Loudly means the sysadmin or other user
>>     invoking the program receives the exact error message by default
>>     instead of something highly indirect, hidden behind a debug option
>>     or otherwise highly non-obvious.
>>
>>     Here's a relevant clip from the VCL:
>>
>>     # Various top comments
>>     vcl 4.1;
>>
>>     import std;
>>     import proxy;
>>
>>     # Backend sending requests to the teamcity main server
>>     backend teamcity {
>>         .host = "2a01:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx";
>>         .port = "8111";
>>     }
>>
>>     # IP ranges allowed to access the build server and staging server
>>     acl buildtrust {
>>         "127.0.0.0"/8;
>>         "::"/128;
>>         "various others"/??;
>>     }
>>
>>     # IP ranges allowed to attempt login to things that use our common login
>>     #    database
>>     acl logintrust {
>>         "various others"/??;
>>     }
>>
>>     sub vcl_recv {
>>         # Happens before we check if we have this in cache already.
>>         #
>>         # Typically you clean up the request here, removing cookies you don't need,
>>         # rewriting the request, etc.
>>         if (proxy.is_ssl()) {
>>             set req.http.Scheme = "https";
>>             set req.http.ssl-version = proxy.ssl_version();
>>             set req.http.X-Forwarded-Proto = "https";
>>             set req.http.X-SSL-cipher = proxy.ssl_cipher();
>>             std.log("TLS-SSL-VERSION: " + proxy.ssl_version());
>>         } else {
>>             set req.http.X-Forwarded-Proto = req.http.Scheme;
>>             unset req.http.ssl-version;
>>             unset req.http.X-SSL-cipher;
>>             std.log("TLS-SSL-VERSION: none");
>>         }
>>         unset req.http.X-SSL-Subject;
>>         unset req.http.X-SSL-Issuer;
>>         unset req.http.X-SSL-notBefore;
>>         unset req.http.X-SSL-notAfter;
>>         unset req.http.X-SSL-serial;
>>         unset req.http.X-SSL-certificate;
>>
>>         set req.http.X-Forwarded-For = client.ip;
>>
>>         call vcl_req_host;
>>
>>         if (req.url ~ "^/something") {
>>             set req.backend_hint = be1;
>>         } else if (req.url !~ "^/somethingelse" &&
>>                    !(client.ip ~ logintrust) &&
>>                    !(client.ip ~ buildtrust)) {
>>             # Treat as unknown by redirecting to public website
>>             if ((req.url ~ "^/yeatanother") ||
>>                 (req.url ~ "^/yetsomeother")) {
>>                 return (synth(752));
>>             } else if (req.url ~ "^/yetsomethird") {
>>                 return (synth(753));
>>             }
>>             return (synth(751));
>>         } else if (req.http.Scheme && req.http.Scheme != "https") {
>>             # See example at https://www.varnish-software.com/developers/tutorials/redirect/
>>             return (synth(750));
>>         } else if (req.url ~ "^/somethingelse") {
>>             set req.backend_hint = be1;
>>         } else if (req.url ~ "^/somethingfourth") {
>>             set req.backend_hint = be2;
>>         } else if (req.url ~ "^/somethingfifth") {
>>             set req.backend_hint = be2;
>>         } else if (!(client.ip ~ buildtrust)) {
>>             # Treat as unknown by redirecting to public website
>>             if ((req.url ~ "^/yeatanother") ||
>>                 (req.url ~ "^/yetsomeother")) {
>>                 return (synth(752));
>>             } else if (req.url ~ "^/yetsomethird") {
>>                 return (synth(753));
>>             }
>>             return (synth(751));
>>         } else if (req.url ~ "^/teamcity") {
>>             set req.backend_hint = teamcity;
>>             return (pass);
>> #       } else if (req.http.host ~ "^somethingsixths") {
>> #           set req.backend_hint = be4;
>>         } else {
>>             set req.backend_hint = be5;
>>         }
>>         call vcl_req_method;
>>         call vcl_req_authorization;
>>         call vcl_req_cookie;
>>         return (hash);
>>     }
>>
>>     sub vcl_backend_response {
>>         # Happens after we have read the response headers from the backend.
>>         #
>>         # Here you clean the response headers, removing silly Set-Cookie headers
>>         # and other mistakes your backend does.
>>
>>         # The Java webserver in teamcity is incompatible with varnish connection
>>         #    pooling
>>         if (beresp.backend == teamcity) {
>>             if (beresp.http.Connection &&
>>                 beresp.http.Connection !~ "keep-alive") {
>>                 set beresp.http.Connection += ", close";
>>             } else {
>>                 set beresp.http.Connection = "close";
>>             }
>>         }
>>     }
>>
>> +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>>     First 43 lines of varnishlog -v 2>&1
>>
>>     *              << BeReq    >>  9
>>     -            9 Begin          b bereq 8 pass
>>     -            9 VCL_use        b boot
>>     -            9 Timestamp      b Start: 1681484803.177212 0.000000 0.000000
>>     -            9 BereqMethod    b GET
>>     -            9 BereqURL       b /teamcity/app/agents/v1/commands/next
>>     -            9 BereqProtocol  b HTTP/1.1
>>     -            9 BereqHeader    b TeamCity-AgentSessionId: L6juFAAt1awJDt6UKToPIxQq7wpBF89C
>>     -            9 BereqHeader    b User-Agent: TeamCity Agent 2021.2.3
>>     -            9 BereqHeader    b Host: vmachine.example.com
>>     -            9 BereqHeader    b Via: 1.1 vmachine (Varnish/7.2)
>>     -            9 BereqHeader    b Scheme: https
>>     -            9 BereqHeader    b ssl-version: TLSv1.3
>>     -            9 BereqHeader    b X-Forwarded-Proto: https
>>     -            9 BereqHeader    b X-SSL-cipher: TLS_AES_256_GCM_SHA384
>>     -            9 BereqHeader    b X-Forwarded-For: 192.168.2.112
>>     -            9 BereqHeader    b X-Varnish: 9
>>     -            9 VCL_call       b BACKEND_FETCH
>>     -            9 VCL_return     b fetch
>>     -            9 Timestamp      b Fetch: 1681484803.177227 0.000014 0.000014
>>     -            9 Timestamp      b Connected: 1681484803.177603 0.000390 0.000375
>>     -            9 BackendOpen    b 24 teamcity 2a01:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx 8111 2a01:yyyy:yyyy:yyyy::yyyy 59548 connect
>>     -            9 Timestamp      b Bereq: 1681484803.177645 0.000432 0.000042
>>     -            9 BerespReason   b No Content
>>     -            9 Timestamp      b Beresp: 1681484830.672487 27.495274 27.494842
>>     -            9 BerespProtocol b HTTP/1.1
>>     -            9 BerespStatus   b 204
>>     -            9 BerespReason   b No Content
>>     -            9 BerespHeader   b TeamCity-Node-Id: MAIN_SERVER
>>     -            9 BerespHeader   b Date: Fri, 14 Apr 2023 15:07:10 GMT
>>     -            9 VCL_call       b BACKEND_RESPONSE
>>     -            9 BerespHeader   b Connection: close
>>     -            9 VCL_return     b deliver
>>     -            9 Timestamp      b Process: 1681484830.672563 27.495350 0.000075
>>     -            9 Filters        b
>>     -            9 Storage        b malloc Transient
>>     -            9 Fetch_Body     b 0 none -
>>     -            9 BackendClose   b 24 teamcity close Backend/VCL requested close
>>     -            9 Timestamp      b BerespBody: 1681484830.672926 27.495713 0.000362
>>     -            9 Length         b 0
>>     -            9 BereqAcct      b 345 0 345 85 0 85
>>     -            9 End            b

Enjoy

Jakob

-- 
Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark. Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

From dridi at varni.sh Tue Jun 6 15:08:39 2023
From: dridi at varni.sh (Dridi Boukelmoune)
Date: Tue, 6 Jun 2023 15:08:39 +0000
Subject: Mysterious no content result, from an URL with pass action
In-Reply-To: References: <9d1f9ed6-1b52-88ba-9cdf-b1d613fe4069@wisemo.com> Message-ID:

On Tue, Jun 6, 2023 at 12:07 PM Jakob Bohm wrote:
>
> Hi all,
>
> Just a quick update,
>
> [...]
>
> I'm still surprised that varnishd drops HTTP/2 over proxyv2 silently
> with no logging that a connection was dropped, and in such a way that
> web browsers interpret it as an empty page. Feels very similar to
> my earlier issue that failure to bind to a specified listen address
> was not shown to the sysadmin starting varnishd.

Did you enable http2 support?

https://varnish-cache.org/docs/7.2/reference/varnishd.html#feature

I don't like the idea that we silently close sessions; could you please open a github issue explaining what you observe and how to reproduce it?

> Now it's time to upgrade to 7.3.0 and improve the configuration.

I don't think we improved anything in that area in the 7.3.0 release.

Dridi

From uday.polu at indiamart.com Mon Jun 12 20:30:34 2023
From: uday.polu at indiamart.com (Uday Kumar)
Date: Tue, 13 Jun 2023 02:00:34 +0530
Subject: Issue with passing Cache-Control: no-cache header to Tomcat during cache misses
Message-ID:

Hello,

When a user refreshes (F5) or performs a hard refresh (Ctrl+F5) in their browser, the browser includes the *Cache-Control: no-cache* header in the request.
However, in our *production Varnish setup*, we have implemented a check that treats *requests with Cache-Control: no-cache as cache misses*, meaning it bypasses the cache and goes directly to the backend server (Tomcat) to fetch the content.

*Example:*
In the vcl_recv subroutine of default.vcl:

sub vcl_recv {
    #other Code
    # Serve fresh data from backend on F5 and CTRL+F5 from the user
    if (req.http.Cache-Control ~ "(no-cache|max-age=0)") {
        set req.hash_always_miss = true;
    }
    #other Code
}

However, we've noticed that the *Cache-Control: no-cache header is not being passed* to Tomcat even when there is a cache miss. We're unsure why this is happening and would appreciate your assistance in understanding the cause.

*Expected Functionality:*
If the request contains the *Cache-Control: no-cache header, then it should be passed to Tomcat* at the backend.

Thanks & Regards
Uday Kumar
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From guillaume.quintard at gmail.com Mon Jun 12 20:43:20 2023
From: guillaume.quintard at gmail.com (Guillaume Quintard)
Date: Mon, 12 Jun 2023 22:43:20 +0200
Subject: Issue with passing Cache-Control: no-cache header to Tomcat during cache misses
In-Reply-To: References: Message-ID:

Hi Uday,

Can you provide us with a log of the transaction please? You can run this on the Varnish server:

varnishlog -g request -q 'ReqHeader:Cache-Control'

And you should see something as soon as you send a request with that header to Varnish. Note that we need the backend part of the transaction, so please don't truncate the block.

Kind regards,

-- Guillaume Quintard

On Mon, Jun 12, 2023 at 10:33 PM Uday Kumar wrote:
> [...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From uday.polu at indiamart.com Wed Jun 14 08:59:07 2023
From: uday.polu at indiamart.com (Uday Kumar)
Date: Wed, 14 Jun 2023 14:29:07 +0530
Subject: Issue with passing Cache-Control: no-cache header to Tomcat during cache misses
In-Reply-To: References: Message-ID:

Hi Guillaume,

Thanks for the response.

> Can you provide us with a log of the transaction please?

I have sent a *Request* to VARNISH which contains the *Cache-Control: no-cache header*; we have made sure the request with the *cache-control header* is a MISS with a check in the *vcl_recv subroutine*, so it's a *MISS* as expected.
*The problem as mentioned before: * *Cache-Control: no-cache header is not being passed to the Backend even though its a MISS.*

*Please find below the transaction log of Varnish.*

* << Request >> 2293779
- Begin req 2293778 rxreq
- Timestamp Start: 1686730406.463326 0.000000 0.000000
- Timestamp Req: 1686730406.463326 0.000000 0.000000
- ReqStart IPAddress 61101
- ReqMethod GET
- ReqURL someURL
- ReqProtocol HTTP/1.1
- ReqHeader Host: IP:Port
- ReqHeader Connection: keep-alive
- ReqHeader Pragma: no-cache
- *ReqHeader Cache-Control: no-cache*
- ReqHeader Upgrade-Insecure-Requests: 1
- ReqHeader User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36
- ReqHeader Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7
- ReqHeader Accept-Encoding: gzip, deflate
- ReqHeader Accept-Language: en-US,en;q=0.9
- ReqHeader X-Forwarded-For: IPAddress
- VCL_call RECV
- VCL_Log URL:someURL
- ReqURL someURL
- ReqHeader X-contentencode: gzip, deflate
- VCL_Log HTTP_X_Compression:gzip, deflate
- VCL_return hash
- ReqUnset Accept-Encoding: gzip, deflate
- ReqHeader Accept-Encoding: gzip
- VCL_call HASH
- ReqHeader hash-url: someURL
- ReqUnset hash-url: someURL
- ReqHeader hash-url: someURL
- VCL_Log hash-url: someURL
- ReqUnset hash-url: someURL
- VCL_return lookup
- VCL_call MISS
- VCL_return fetch
*- Link bereq 2293780 fetch*
- Timestamp Fetch: 1686730406.515526 0.052200 0.052200
- RespProtocol HTTP/1.1
- RespStatus 200
- RespReason OK
- RespHeader add_in_varnish_logs: ResultCount:66|McatCount:10|traceId:96z3uIgBXHUiXRNoegNA
- RespHeader Content-Type: text/html;charset=UTF-8
- RespHeader Content-Encoding: gzip
- RespHeader Vary: Accept-Encoding
- RespHeader Date: Wed, 14 Jun 2023 08:13:25 GMT
- RespHeader Server: Intermesh Caching Servers/2.0.1
- RespHeader X-Varnish: 2293779
- RespHeader Age: 0
- RespHeader Via: 1.1 varnish (Varnish/5.2)
- VCL_call DELIVER
- RespHeader X-Edge: MISS
- VCL_Log addvg:ResultCount:66|McatCount:10|traceId:96z3uIgBXHUiXRNoegNA
- RespUnset add_in_varnish_logs: ResultCount:66|McatCount:10|traceId:96z3uIgBXHUiXRNoegNA
- VCL_return deliver
- Timestamp Process: 1686730406.515554 0.052228 0.000029
- RespHeader Accept-Ranges: bytes
- RespHeader Transfer-Encoding: chunked
- RespHeader Connection: keep-alive
- Timestamp Resp: 1686730406.518064 0.054738 0.002510
- ReqAcct 569 0 569 331 36932 37263
- End

*** << BeReq >> 2293780*
-- Begin bereq 2293779 fetch
-- Timestamp Start: 1686730406.463456 0.000000 0.000000
-- BereqMethod GET
-- BereqURL someURL
-- BereqProtocol HTTP/1.1
-- BereqHeader Host: IP:Port
-- BereqHeader Pragma: no-cache
-- BereqHeader Upgrade-Insecure-Requests: 1
-- BereqHeader User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36
-- BereqHeader Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7
-- BereqHeader Accept-Language: en-US,en;q=0.9
-- BereqHeader X-Forwarded-For: IPAddress
-- BereqHeader X-contentencode: gzip, deflate
-- BereqHeader Accept-Encoding: gzip
-- BereqHeader X-Varnish: 2293780
-- VCL_call BACKEND_FETCH
-- BereqUnset Accept-Encoding: gzip
-- BereqHeader Accept-Encoding: gzip, deflate
-- BereqUnset X-contentencode: gzip, deflate
-- VCL_return fetch
-- BackendOpen 27 reload_2023-06-07T091359.node66 127.0.0.1 8984 127.0.0.1 39154
-- BackendStart 127.0.0.1 8984
-- Timestamp Bereq: 1686730406.463621 0.000165 0.000165
-- Timestamp Beresp: 1686730406.515400 0.051944 0.051779
-- BerespProtocol HTTP/1.1
-- BerespStatus 200
-- BerespReason OK
-- BerespHeader Server: Apache-Coyote/1.1
-- BerespHeader add_in_varnish_logs: ResultCount:66|McatCount:10|traceId:96z3uIgBXHUiXRNoegNA
-- BerespHeader Content-Type: text/html;charset=UTF-8
-- BerespHeader Transfer-Encoding: chunked
-- BerespHeader Content-Encoding: gzip
-- BerespHeader Vary: Accept-Encoding
-- BerespHeader Date: Wed, 14 Jun 2023 08:13:25 GMT
-- TTL RFC 120 10 0 1686730407 1686730407 1686730405 0 0
-- VCL_call BACKEND_RESPONSE
-- BerespUnset Server: Apache-Coyote/1.1
-- BerespHeader Server: Caching Servers/2.0.1
-- TTL VCL 120 604800 0 1686730407
-- TTL VCL 86400 604800 0 1686730407
-- VCL_return deliver
-- Storage malloc s0
-- ObjProtocol HTTP/1.1
-- ObjStatus 200
-- ObjReason OK
-- ObjHeader add_in_varnish_logs: ResultCount:66|McatCount:10|traceId:96z3uIgBXHUiXRNoegNA
-- ObjHeader Content-Type: text/html;charset=UTF-8
-- ObjHeader Content-Encoding: gzip
-- ObjHeader Vary: Accept-Encoding
-- ObjHeader Date: Wed, 14 Jun 2023 08:13:25 GMT
-- ObjHeader Server: Caching Servers/2.0.1
-- Fetch_Body 2 chunked stream
-- Gzip u F - 36932 291926 80 211394 295386
-- BackendReuse 27 reload_2023-06-07T091359.node66
-- Timestamp BerespBody: 1686730406.518050 0.054594 0.002650
-- Length 36932
-- BereqAcct 574 0 574 276 36932 37208
-- End

Thanks & Regards Uday Kumar On Tue, Jun 13, 2023 at 2:13 AM Guillaume Quintard < guillaume.quintard at gmail.com> wrote: > Hi Uday, > > Can you provide us with a log of the transaction please? You can run this > on the Varnish server: > > varnishlog -g request -q 'ReqHeader:Cache-Control' > > And you should see something as soon as you send a request with that > header to Varnish. Note that we need the backend part of the transaction, > so please don't truncate the block. > > Kind regards, > > -- > Guillaume Quintard > > > On Mon, Jun 12, 2023 at 10:33 PM Uday Kumar > wrote: > >> Hello, >> >> When a user refreshes(F5) or performs a hard refresh(ctrl+F5) in their >> browser, the browser includes the *Cache-Control: no-cache* header in >> the request.
>> However, in our* production Varnish setup*, we have implemented a check >> that treats* requests with Cache-Control: no-cache as cache misses*, >> meaning it bypasses the cache and goes directly to the backend server >> (Tomcat) to fetch the content. >> >> *Example:* >> in vcl_recv subroutine of default.vcl: >> >> sub vcl_recv{ >> #other Code >> # Serve fresh data from backend while F5 and CTRL+F5 from user >> if (req.http.Cache-Control ~ "(no-cache|max-age=0)") { >> set req.hash_always_miss = true; >> } >> #other Code >> } >> >> >> However, we've noticed that the *Cache-Control: no-cache header is not >> being passed* to Tomcat even when there is a cache miss. >> We're unsure why this is happening and would appreciate your assistance >> in understanding the cause. >> >> *Expected Functionality:* >> If the request contains *Cache-Control: no-cache header then it should >> be passed to Tomcat* at Backend. >> >> Thanks & Regards >> Uday Kumar >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dridi at varni.sh Wed Jun 14 12:54:16 2023 From: dridi at varni.sh (Dridi Boukelmoune) Date: Wed, 14 Jun 2023 12:54:16 +0000 Subject: Issue with passing Cache-Control: no-cache header to Tomcat during cache misses In-Reply-To: References: Message-ID: On Wed, Jun 14, 2023 at 9:02?AM Uday Kumar wrote: > > Hi Guillaume, > > Thanks for the response. > > Can you provide us with a log of the transaction please? > > I have sent a Request to VARNISH which Contains Cache-Control: no-cache header, we have made sure the request with cache-control header is a MISS with a check in vcl_recv subroutine, so it's a MISS as expected. > > The problem as mentioned before: > Cache-Control: no-cache header is not being passed to the Backend even though its a MISS. 
There is this in the code: > H("Cache-Control", H_Cache_Control, F ) // 2616 14.9 We remove this header when we create a normal fetch task, hence the F flag. There's a reference to RFC2616 section 14.9, but this RFC has been updated by newer documents. Also that section is fairly long and I don't have time to dissect it, but I suspect the RFC reference is only here to point to the Cache-Control definition, not the F flag. I suspect the rationale for the F flag is that on cache misses we act as a generic client, not just on behalf of the client that triggered the cache miss. If you want pass-like behavior on a cache miss, you need to implement it in VCL: - store cache-control in a different header in vcl_recv - restore cache-control in vcl_backend_fetch if applicable Please note that you open yourself to malicious clients forcing no-cache on your origin server upon cache misses. Come to think of it, we should probably give Pragma both P and F flags. Dridi From uday.polu at indiamart.com Thu Jun 15 09:32:58 2023 From: uday.polu at indiamart.com (Uday Kumar) Date: Thu, 15 Jun 2023 15:02:58 +0530 Subject: Issue with passing Cache-Control: no-cache header to Tomcat during cache misses In-Reply-To: References: Message-ID: > There is this in the code: > > * > H("Cache-Control", H_Cache_Control, F ) * // > 2616 14.9 > > We remove this header when we create a normal fetch task, hence > the F flag. There's a reference to RFC2616 section 14.9, but this RFC > has been updated by newer documents. > Where can I find details about the above code, could not find it in RFC 2616 14.9! Thanks & Regards, Uday Kumar -------------- next part -------------- An HTML attachment was scrubbed...
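Dridi's stash-and-restore suggestion above can be sketched in VCL roughly like this (an untested sketch; the X-Cache-Control header name is only an illustrative placeholder):

```vcl
sub vcl_recv {
    # Stash the client's Cache-Control before Varnish strips it
    # from the backend request (the F flag discussed above).
    if (req.http.Cache-Control) {
        set req.http.X-Cache-Control = req.http.Cache-Control;
    }
}

sub vcl_backend_fetch {
    # Restore it on the backend request so the backend (Tomcat) sees it.
    if (bereq.http.X-Cache-Control) {
        set bereq.http.Cache-Control = bereq.http.X-Cache-Control;
        unset bereq.http.X-Cache-Control;
    }
}
```

As noted above, this lets any client force no-cache onto the origin on a cache miss, so consider restricting it (for example, to trusted clients).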
URL: From jonatan.reiners at umusic.com Thu Jun 15 11:38:50 2023 From: jonatan.reiners at umusic.com (Reiners, Jonatan) Date: Thu, 15 Jun 2023 11:38:50 +0000 Subject: Universal Music is looking for varnish experts as Backend Engineer Message-ID: Dear Varnish Enthusiasts, I am Jonatan the Head of Engineering for the ALPHA project at Universal Music Germany. I saw your profile and thought that you would be a great fit for our team. We are looking for a new team member joining us as Backend Engineer. Alpha is an internal platform for projects with artists we as a team are supporting our colleagues to deliver the best service for the artists that are working with UMG. We are a self-organizing, empowered development team that you can become part of. Every team member takes part in delivering the best experience for our users and you can make a difference here. We are running a self-managed Kubernetes, our main API is a majestic monolith in Python/Django. We use Varnish to deliver a wealth of media assets with custom watermarking. This is the foundation to serve our global users. I would love to hear back from anyone who is interested in working with us. Best regards, Jonatan For more information, you can just reach out to me or connect via our job ad. https://www.universal-music.de/company/jobs-karriere/professionals/senior-backend-engineer-python-all-genders-2263 -- Jonatan Reiners Head of Engineering Asset & Information Management Universal Music Beats & Bytes a division of Universal Music GmbH Stralauer Allee 1 D-10245 Berlin Office: +49.(0)30.52007.2260 Web: http://www.universal-music.de [signature_656556639] -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image002.png Type: image/png Size: 263 bytes Desc: image002.png URL: From jonatan.reiners at umusic.com Thu Jun 15 11:55:36 2023 From: jonatan.reiners at umusic.com (Reiners, Jonatan) Date: Thu, 15 Jun 2023 11:55:36 +0000 Subject: Universal Music is looking for varnish experts as Backend Engineer In-Reply-To: References: Message-ID: Please disregard the previous text, there was a mistake. Obviously I didn't see anyone's profile. Dear Varnish Enthusiasts, I am Jonatan, the Head of Engineering for the ALPHA project at Universal Music Germany. We are looking for a new team member joining us as Backend Engineer. Alpha is an internal platform for projects with artists; as a team we are supporting our colleagues to deliver the best service for the artists that are working with UMG. We are a self-organizing, empowered development team that you can become part of. Every team member takes part in delivering the best experience for our users and you can make a difference here. We are running a self-managed Kubernetes, our main API is a majestic monolith in Python/Django. We use Varnish to deliver a wealth of media assets with custom watermarking. This is the foundation to serve our global users. I would love to hear back from anyone who is interested in working with us. Best regards, Jonatan For more information, you can just reach out to me or connect via our job ad. https://www.universal-music.de/company/jobs-karriere/professionals/senior-backend-engineer-python-all-genders-2263 -- Jonatan Reiners Head of Engineering Asset & Information Management Universal Music Beats & Bytes a division of Universal Music GmbH Stralauer Allee 1 D-10245 Berlin Office: +49.(0)30.52007.2260 Web: http://www.universal-music.de [signature_656556639] -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 263 bytes Desc: image002.png URL: From dridi at varni.sh Thu Jun 15 12:47:27 2023 From: dridi at varni.sh (Dridi Boukelmoune) Date: Thu, 15 Jun 2023 12:47:27 +0000 Subject: Issue with passing Cache-Control: no-cache header to Tomcat during cache misses In-Reply-To: References: Message-ID: On Thu, Jun 15, 2023 at 9:33 AM Uday Kumar wrote: > > >> There is this in the code: >> >> > H("Cache-Control", H_Cache_Control, F ) // 2616 14.9 >> >> We remove this header when we create a normal fetch task, hence >> the F flag. There's a reference to RFC2616 section 14.9, but this RFC >> has been updated by newer documents.
> > Where can I find details about the above code, could not find it in RFC 2616 14.9! This is from include/tbl/http_headers.h in the Varnish code base. I'm not going to break it down in detail, but that's basically where we declare well-known headers and when to strip them when we perform a req->bereq or beresp->resp transition. In this case, we strip the cache-control header from the initial bereq when it is a cache miss. Dridi From guillaume.quintard at gmail.com Thu Jun 15 13:00:07 2023 From: guillaume.quintard at gmail.com (Guillaume Quintard) Date: Thu, 15 Jun 2023 15:00:07 +0200 Subject: Issue with passing Cache-Control: no-cache header to Tomcat during cache misses In-Reply-To: References: Message-ID: Adding to what Dridi said, and just to be clear: the "cleaning" of those well-known headers only occurs when the req object is copied into a bereq, so there's nothing preventing you from stashing the "cache-control" header into "x-cache-control" during vcl_recv, and then copying it back to "cache-control" during vcl_backend_fetch. -------------- next part -------------- An HTML attachment was scrubbed... URL: From justinl at arena.net Thu Jun 15 16:57:46 2023 From: justinl at arena.net (Justin Lloyd) Date: Thu, 15 Jun 2023 16:57:46 +0000 Subject: Purging cached std.fileread() contents Message-ID: I'm trying to test a simple Varnish setup with no backend to serve a single index.html file. This is for use on a maintenance page web server when the main web site is down, more specifically, behind an AWS ALB with two target groups, one with the main web servers and the other with two lightweight maintenance page servers. I figured that it'd be nice to just leave out an Nginx configuration since a single static file is all that's needed (images referenced in the HTML are from other sources). I'm using std.fileread("/var/www/html/index.html") to set the resp.body.
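Such a backend-less setup can be sketched roughly like this (an untested sketch; "backend default none" assumes Varnish 6.4 or later, and the file path is the one from the message above):

```vcl
vcl 4.1;

import std;

# No real backend: every request is answered synthetically.
backend default none;

sub vcl_recv {
    return (synth(200, "OK"));
}

sub vcl_synth {
    set resp.http.Content-Type = "text/html; charset=utf-8";
    # std.fileread() reads the file once and keeps the contents in memory.
    set resp.body = std.fileread("/var/www/html/index.html");
    return (deliver);
}
```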
The documentation for std.fileread() says it is cached indefinitely, so how do I get Varnish to re-read the file when it gets updated without having to restart Varnish? The reason for this is that the maintenance page lists the ETA when the web site should be back up, so if I need to extend the maintenance, I have a manual script that creates and deploys a new version of the file to the maintenance page servers. Justin -------------- next part -------------- An HTML attachment was scrubbed... URL: From geoff at uplex.de Thu Jun 15 17:42:28 2023 From: geoff at uplex.de (Geoff Simmons) Date: Thu, 15 Jun 2023 19:42:28 +0200 Subject: Purging cached std.fileread() contents In-Reply-To: References: Message-ID: <17aaf633-7bc0-4e2e-f3c6-ad6098954bf7@uplex.de> On 6/15/23 18:57, Justin Lloyd wrote: > > The documentation for std.fileread() says it is cached indefinitely, so > how do I get Varnish to re-read the file when it gets updated without > having to restart Varnish? "Cached indefinitely" means just what it says. The VMOD saves the file contents in memory on the first invocation of std.fileread(), and never reads the file again. We have a VMOD that reads file contents and then monitors the file for changes. The new contents are used after the change: https://code.uplex.de/uplex-varnish/libvmod-file Best, Geoff -- ** * * UPLEX - Nils Goroll Systemoptimierung Scheffelstra?e 32 22301 Hamburg Tel +49 40 2880 5731 Mob +49 176 636 90917 Fax +49 40 42949753 http://uplex.de -------------- next part -------------- A non-text attachment was scrubbed... 
Name: OpenPGP_signature Type: application/pgp-signature Size: 840 bytes Desc: OpenPGP digital signature URL: From guillaume.quintard at gmail.com Thu Jun 15 17:52:22 2023 From: guillaume.quintard at gmail.com (Guillaume Quintard) Date: Thu, 15 Jun 2023 19:52:22 +0200 Subject: Purging cached std.fileread() contents In-Reply-To: <17aaf633-7bc0-4e2e-f3c6-ad6098954bf7@uplex.de> References: <17aaf633-7bc0-4e2e-f3c6-ad6098954bf7@uplex.de> Message-ID: Piling on here, there's also one in rust! https://github.com/gquintard/vmod_fileserver On Thu, Jun 15, 2023, 19:44 Geoff Simmons wrote: > On 6/15/23 18:57, Justin Lloyd wrote: > > > > The documentation for std.fileread() says it is cached indefinitely, so > > how do I get Varnish to re-read the file when it gets updated without > > having to restart Varnish? > > "Cached indefinitely" means just what it says. The VMOD saves the file > contents in memory on the first invocation of std.fileread(), and never > reads the file again. > > We have a VMOD that reads file contents and then monitors the file for > changes. The new contents are used after the change: > > https://code.uplex.de/uplex-varnish/libvmod-file > > > Best, > Geoff > -- > ** * * UPLEX - Nils Goroll Systemoptimierung > > Scheffelstra?e 32 > 22301 Hamburg > > Tel +49 40 2880 5731 > Mob +49 176 636 90917 > Fax +49 40 42949753 > > http://uplex.de > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From uday.polu at indiamart.com Thu Jun 15 18:21:28 2023 From: uday.polu at indiamart.com (Uday Kumar) Date: Thu, 15 Jun 2023 23:51:28 +0530 Subject: Issue with passing Cache-Control: no-cache header to Tomcat during cache misses In-Reply-To: References: Message-ID: Thanks Dridi and Guillaume for clarification! 
On Thu, Jun 15, 2023, 18:30 Guillaume Quintard wrote: > Adding to what Dridi said, and just to be clear: the "cleaning" of those > well-known headers only occurs when the req object is copied into a beteq, > so there's nothing preventing you from stashing the "cache-control" header > into "x-cache-control" during vcl_recv, and then copying it back to > "cache-control" during vcl_backend_response. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From justinl at arena.net Thu Jun 15 18:25:12 2023 From: justinl at arena.net (Justin Lloyd) Date: Thu, 15 Jun 2023 18:25:12 +0000 Subject: Purging cached std.fileread() contents In-Reply-To: References: <17aaf633-7bc0-4e2e-f3c6-ad6098954bf7@uplex.de> Message-ID: Thank you both for the responses. However, I have a strong aversion to using miscellaneous 3rd party unsupported software, especially for something so relatively trivial, and something that has to be compiled and monitored for updates, that I'd think would be a native feature. It seems like it'd just be simpler to stick with a basic Nginx backend. (That said, I've actually been developing an alternate design from ec2 instances to a Lambda, but that has its own set of complexities, even for something as simple as a single, static maintenance page.) Justin From: varnish-misc On Behalf Of Guillaume Quintard Sent: Thursday, June 15, 2023 10:52 AM To: Geoffrey Simmons Cc: varnish-misc at varnish-cache.org Subject: Re: Purging cached std.fileread() contents Piling on here, there's also one in rust! https://github.com/gquintard/vmod_fileserver On Thu, Jun 15, 2023, 19:44 Geoff Simmons > wrote: On 6/15/23 18:57, Justin Lloyd wrote: > > The documentation for std.fileread() says it is cached indefinitely, so > how do I get Varnish to re-read the file when it gets updated without > having to restart Varnish? "Cached indefinitely" means just what it says. 
The VMOD saves the file contents in memory on the first invocation of std.fileread(), and never reads the file again. We have a VMOD that reads file contents and then monitors the file for changes. The new contents are used after the change: https://code.uplex.de/uplex-varnish/libvmod-file Best, Geoff -- ** * * UPLEX - Nils Goroll Systemoptimierung Scheffelstra?e 32 22301 Hamburg Tel +49 40 2880 5731 Mob +49 176 636 90917 Fax +49 40 42949753 http://uplex.de _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc -------------- next part -------------- An HTML attachment was scrubbed... URL: From danielkarp at gmail.com Thu Jun 22 00:27:13 2023 From: danielkarp at gmail.com (Daniel Karp) Date: Wed, 21 Jun 2023 20:27:13 -0400 Subject: Question about changes to ESI processing, possible feature request Message-ID: Hi, this is, I think, my first post to a varnish mailing list--I hope this is the right place for this. Varnish 7.3 changes the way it handles errors for Edge-side includes. Previously, the body of an ESI response would be included in the parent regardless of the status code of the ESI response. 7.3 changed this behavior to more closely match the ESI specification (https://www.w3.org/TR/esi-lang/). Now, if there is an error, the parent request will fail as well. The "onerror" attribute for ESIs is also supported (with the appropriate feature flag), allowing the request to continue when it would otherwise fail because of a failed ESI. But it isn't clear to me from the docs what happens when an ESI fails and include is set to "continue". The changelog says "any and all ESI:include objects will be delivered, no matter what their status might be." The user guide says "However, it is possible to allow individual , , etc... 
] } Some of those requests might be 404 responses, and our 404's have "null" in the body (I know, a bit hacky, but it works) so that the JSON remains valid. If the ESI were silently removed, that would break the JSON in our responses. But there is a better solution in the ESI Specs: The "alt" attribute. If varnish were to silently remove the include, and also support the alt attribute, this would be the cleanest solution. In that case, we could set the alt attribute for our ESIs to an endpoint that returns a 200 response and "null", where appropriate. I hope this is clear! From dridi at varni.sh Thu Jun 22 05:45:34 2023 From: dridi at varni.sh (Dridi Boukelmoune) Date: Thu, 22 Jun 2023 05:45:34 +0000 Subject: Question about changes to ESI processing, possible feature request In-Reply-To: References: Message-ID: On Thu, Jun 22, 2023 at 12:28?AM Daniel Karp wrote: > > Hi, this is, I think, my first post to a varnish mailing list--I hope > this is the right place for this. Welcome! > Varnish 7.3 changes the way it handles errors for Edge-side includes. > Previously, the body of an ESI response would be included in the > parent regardless of the status code of the ESI response. 7.3 changed > this behavior to more closely match the ESI specification > (https://www.w3.org/TR/esi-lang/). Now, if there is an error, the > parent request will fail as well. The "onerror" attribute for ESIs is > also supported (with the appropriate feature flag), allowing the > request to continue when it would otherwise fail because of a failed > ESI. > > But it isn't clear to me from the docs what happens when an ESI fails > and include is set to "continue". > The changelog says "any and all ESI:include objects will be delivered, > no matter what their status might be." > The user guide says "However, it is possible to allow individual > The Pull request for this feature says: "This changes the default > behavior of includes, matching the ESI language specification." 
> The ESI specification itself says: "If an ESI Processor can fetch > neither the src nor the alt, it returns a HTTP status code greater > than 400 with an error message, unless the onerror attribute is > present. If it is, and onerror="continue" is specified, ESI Processors > will delete the include element silently." > > What is the new behavior? Will the body of the response be included > with onerror set to "continue" (which reproduces the previous > behavior), or will the element be silently removed? It is a good question and the short answer is that nothing is removed. If the processing of an ESI fragment fails in VCL, and there is no onerror="continue" kicking in, ESI delivery just stops where the parent response was, in the middle of its body delivery. Past VCL execution, we have the aforementioned body delivery, and if your include fails for any reason, the _partial_ include body was part of the parent response delivery, but again, it is interrupted. In each case, if onerror="continue" was in effect you would likely get a 503 guru meditation in the middle of your overall response for the former and a missing gap for the latter. > If it is the former, then that shouldn't be a problem for our use > case, although it is not great for conforming to the ESI specs. But if > the element is silently removed--or if that change is discussed to > better conform to the standards, I have a feature request. :) In other words, not removed, either replaced by a synthetic response or truncated. > We use ESI extensively for our JSON API, returning arrays of results. > We might have something like: > {foo: [ > , > , > etc... > ] } > Some of those requests might be 404 responses, and our 404's have > "null" in the body (I know, a bit hacky, but it works) so that the > JSON remains valid. If the ESI were silently removed, that would break > the JSON in our responses. 
I have seen and done much worse, this is actually an interesting representation of the resource in the "not found" state. Partial delivery could mean that you get "nu" instead of "null" and roll with it because of onerror="continue". > But there is a better solution in the ESI Specs: The "alt" attribute. > If varnish were to silently remove the include, and also support the > alt attribute, this would be the cleanest solution. In that case, we > could set the alt attribute for our ESIs to an endpoint that returns a > 200 response and "null", where appropriate. I don't think we support the alt attribute, or I remember pretty badly since I introduced the initial onerror support. It has been mentioned very recently so maybe there could be a move in this direction. An include with src and alt attributes would likely not be streamed, ruling out the "partial fragment" scenario. > I hope this is clear! I seriously doubt my answer was :) Cheers, Dridi From uday.polu at indiamart.com Wed Jun 28 13:25:33 2023 From: uday.polu at indiamart.com (Uday Kumar) Date: Wed, 28 Jun 2023 18:55:33 +0530 Subject: Unexpected Cache-Control Header Transmission in Dual-Server API Setup Message-ID: Hello All, Our application operates on a dual-server setup, where each server is dedicated to running a distinct API. *Technical specifications:* Framework: Spring-boot v2.4 (Java 1.8) Runtime Environment: Tomcat Version: Apache Tomcat/7.0.42 Server1 runs API-1 and Server2 runs API-2. Both servers are equipped with an installed Varnish application. When either API is accessed, the request is processed through the Varnish instance associated with the respective server. *Issue Description:* In a typical scenario, a client (browser) sends a request to API-1, which is handled by the Varnish instance on Server1. After initial processing, API-1 makes a subsequent request to API-2 on Server2. 
The Request Flow is as follows: *Browser --> Varnish on Server1 --> Tomcat on Server1 --> Varnish on Server2 --> Tomcat on Server2* *Assuming, the request from Browser will be a miss at Server1 Varnish so that the request reaches Tomcat(Backend) on server1.* In cases where the browser *does not include any cache-control headers in the request* (e.g., no-cache, max-age=0), the Server1 Varnish instance correctly *does not receive any cache-control headers*. *However, when API-1 calls API-2, we observe that a cache-control: no-cache and p**ragma: no-cache headers are being transmitted to the Varnish instance on Server2*, despite the following conditions: 1. We are not explicitly sending any cache-control header in our application code during the call from API-1 to API-2. 2. Our application does not use the Spring-security dependency, which by default might add such a header. 3. The cache-control header is not being set by the Varnish instance on Server2. This unexpected behavior of receiving a cache-control header at Server2's Varnish instance when invoking API-2 from API-1 is the crux of our issue. We kindly request your assistance in understanding the cause of this unexpected behavior. Additionally, we would greatly appreciate any guidance on how to effectively prevent this issue from occurring in the future. Thanks & Regards Uday Kumar -------------- next part -------------- An HTML attachment was scrubbed... URL: From guillaume.quintard at gmail.com Wed Jun 28 15:02:16 2023 From: guillaume.quintard at gmail.com (Guillaume Quintard) Date: Wed, 28 Jun 2023 08:02:16 -0700 Subject: Unexpected Cache-Control Header Transmission in Dual-Server API Setup In-Reply-To: References: Message-ID: Hi Uday, That one should be quick: Varnish doesn't add cache-control headers on its own. So, from what I understand it can come from two places: - either the VCL in varnish1 - something in tomcat1 It should be very easy to check with varnishlog's. 
Essentially, run "varnishlog -g request -q 'ReqHeader:uday'" on both varnish nodes and send a curl request like "curl http://varnish1/some/request/not/in/cache.html -H 'uday: true'". You should see the request going through both varnishes and should be able to pinpoint what created the header. Or at least identify whether it's a varnish thing or not. Kind regards For a reminder on varnishlog: https://docs.varnish-software.com/tutorials/vsl-query/ On Wed, Jun 28, 2023, 06:28 Uday Kumar wrote: > Hello All, > > Our application operates on a dual-server setup, where each server is > dedicated to running a distinct API. > > *Technical specifications:* > Framework: Spring-boot v2.4 (Java 1.8) > Runtime Environment: Tomcat > Version: Apache Tomcat/7.0.42 > Server1 runs API-1 and Server2 runs API-2. Both servers are equipped with > an installed Varnish application. When either API is accessed, the request > is processed through the Varnish instance associated with the respective > server. > > *Issue Description:* > In a typical scenario, a client (browser) sends a request to API-1, which > is handled by the Varnish instance on Server1. After initial processing, > API-1 makes a subsequent request to API-2 on Server2. > > The Request Flow is as follows: > *Browser --> Varnish on Server1 --> Tomcat on Server1 --> Varnish on > Server2 --> Tomcat on Server2* > > *Assuming, the request from Browser will be a miss at Server1 Varnish so > that the request reaches Tomcat(Backend) on server1.* > > In cases where the browser *does not include any cache-control headers in > the request* (e.g., no-cache, max-age=0), the Server1 Varnish instance > correctly *does not receive any cache-control headers*. > > *However, when API-1 calls API-2, we observe that cache-control: > no-cache and pragma: no-cache headers are being transmitted to the > Varnish instance on Server2*, despite the following conditions: > > 1. 
We are not explicitly sending any cache-control header in our > application code during the call from API-1 to API-2. > 2. Our application does not use the Spring-security dependency, which by > default might add such a header. > 3. The cache-control header is not being set by the Varnish instance on > Server2. > > This unexpected behavior of receiving a cache-control header at Server2's > Varnish instance when invoking API-2 from API-1 is the crux of our issue. > > We kindly request your assistance in understanding the cause of this > unexpected behavior. Additionally, we would greatly appreciate any guidance > on how to effectively prevent this issue from occurring in the future. > > Thanks & Regards > Uday Kumar > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From uday.polu at indiamart.com Wed Jun 28 17:02:53 2023 From: uday.polu at indiamart.com (Uday Kumar) Date: Wed, 28 Jun 2023 22:32:53 +0530 Subject: Unexpected Cache-Control Header Transmission in Dual-Server API Setup In-Reply-To: References: Message-ID: Hi Guillaume, You are right! varnish is not adding any cache-control headers. *Observations when trying to replicate the issue locally:* I was trying to replicate the issue using Local Machine by creating a Spring Boot Application that acts as API-1 and tried hitting API-2 that's on Server2. 
*Request Flow:* Local Machine ----> Server2 varnish --> Server2 Tomcat *Point-1:* When using the *integrated Tomcat (Tomcat 9) of Spring Boot*, the issue was *not* replicable [*just ran the application in IntelliJ*] (meaning, the cache-control header is *not* being transmitted to Varnish of Server2) *Point-2:* When *Tomcat 9 was explicitly installed in my local machine* and I built the *corresponding WAR of API-1 and used it to hit API-2* that's on Server2, *the issue got replicated* (meaning, *cache-control: no-cache, pragma: no-cache is being transmitted to Varnish of Server2*) Any insights? Thanks & Regards Uday Kumar On Wed, Jun 28, 2023 at 8:32 PM Guillaume Quintard < guillaume.quintard at gmail.com> wrote: > Hi Uday, > > That one should be quick: Varnish doesn't add cache-control headers on its > own. > > So, from what I understand it can come from two places: > - either the VCL in varnish1 > - something in tomcat1 > > It should be very easy to check with varnishlog. Essentially, run > "varnishlog -g request -q 'ReqHeader:uday'" on both varnish nodes and send > a curl request like "curl http://varnish1/some/request/not/in/cache.html > -H 'uday: true'" > > You should see the request going through both varnish and should be able > to pinpoint what created the header. Or at least identify whether it's a > varnish thing or not. > > Kind regards > > For a reminder on varnishlog: > https://docs.varnish-software.com/tutorials/vsl-query/ > > > On Wed, Jun 28, 2023, 06:28 Uday Kumar wrote: > >> Hello All, >> >> Our application operates on a dual-server setup, where each server is >> dedicated to running a distinct API. >> >> *Technical specifications:* >> Framework: Spring-boot v2.4 (Java 1.8) >> Runtime Environment: Tomcat >> Version: Apache Tomcat/7.0.42 >> Server1 runs API-1 and Server2 runs API-2. Both servers are equipped >> with an installed Varnish application. 
When either API is accessed, the request >> is processed through the Varnish instance associated with the respective >> server. >> >> *Issue Description:* >> In a typical scenario, a client (browser) sends a request to API-1, which >> is handled by the Varnish instance on Server1. After initial processing, >> API-1 makes a subsequent request to API-2 on Server2. >> >> The Request Flow is as follows: >> *Browser --> Varnish on Server1 --> Tomcat on Server1 --> Varnish on >> Server2 --> Tomcat on Server2* >> >> *Assuming, the request from Browser will be a miss at Server1 Varnish so >> that the request reaches Tomcat(Backend) on server1.* >> >> In cases where the browser *does not include any cache-control >> headers in the request* (e.g., no-cache, max-age=0), the Server1 Varnish >> instance correctly *does not receive any cache-control headers*. >> >> *However, when API-1 calls API-2, we observe that cache-control: >> no-cache and pragma: no-cache headers are being transmitted to the >> Varnish instance on Server2*, despite the following conditions: >> >> 1. We are not explicitly sending any cache-control header in our >> application code during the call from API-1 to API-2. >> 2. Our application does not use the Spring-security dependency, which by >> default might add such a header. >> 3. The cache-control header is not being set by the Varnish instance on >> Server2. >> >> This unexpected behavior of receiving a cache-control header at Server2's >> Varnish instance when invoking API-2 from API-1 is the crux of our issue. >> >> We kindly request your assistance in understanding the cause of this >> unexpected behavior. Additionally, we would greatly appreciate any guidance >> on how to effectively prevent this issue from occurring in the future. 
>> >> Thanks & Regards >> Uday Kumar >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From guillaume.quintard at gmail.com Wed Jun 28 17:06:42 2023 From: guillaume.quintard at gmail.com (Guillaume Quintard) Date: Wed, 28 Jun 2023 10:06:42 -0700 Subject: Unexpected Cache-Control Header Transmission in Dual-Server API Setup In-Reply-To: References: Message-ID: Not really: I have no Tomcat expertise, and Tomcat is where the issue should be fixed. That being said, if you can't prevent Tomcat from adding the header, then you can use the VCL on varnish2 to scrub the headers ("unset req.http.cache-control;"). -- Guillaume Quintard On Wed, Jun 28, 2023 at 10:03 AM Uday Kumar wrote: > Hi Guillaume, > > You are right! > varnish is not adding any cache-control headers. > > > *Observations when trying to replicate the issue locally:* > I was trying to replicate the issue using Local Machine by creating a > Spring Boot Application that acts as API-1 and tried hitting API-2 that's > on Server2. > > *Request Flow:* Local Machine ----> Server2 varnish --> Server2 Tomcat > > *Point-1:* When using the *integrated Tomcat (Tomcat 9) of Spring Boot*, the > issue was *not* replicable [*just ran the application in IntelliJ*] (meaning, the > cache-control header is *not* being transmitted to Varnish of Server2) > > *Point-2:* When *Tomcat 9 was explicitly installed in my local machine* > and I built the *corresponding WAR of API-1 and used it to hit API-2* > that's on Server2, *the issue got replicated* (meaning, *cache-control: > no-cache, pragma: no-cache is being transmitted to Varnish of Server2*) > > > Any insights? 
> > Thanks & Regards > Uday Kumar > > > On Wed, Jun 28, 2023 at 8:32?PM Guillaume Quintard < > guillaume.quintard at gmail.com> wrote: > >> Hi Uday, >> >> That one should be quick: Varnish doesn't add cache-control headers on >> its own. >> >> So, from what I understand it can come from two places: >> - either the VCL in varnish1 >> - something in tomcat1 >> >> It should be very easy to check with varnishlog's. Essentially, run >> "varnishlog -H request -q 'ReqHeader:uday'" on both varnish nodes and send >> a curl request like "curl http://varnish1/some/request/not/in/cache.html >> -H "uday: true" >> >> You should see the request going through both varnish and should be able >> to pinpoint what created the header. Or at least identify whether it's a >> varnish thing or not. >> >> Kind regards >> >> For a reminder on varnishlog: >> https://docs.varnish-software.com/tutorials/vsl-query/ >> >> >> On Wed, Jun 28, 2023, 06:28 Uday Kumar wrote: >> >>> Hello All, >>> >>> Our application operates on a dual-server setup, where each server is >>> dedicated to running a distinct API. >>> >>> *Technical specifications:* >>> Framework: Spring-boot v2.4 (Java 1.8) >>> Runtime Environment: Tomcat >>> Version: Apache Tomcat/7.0.42 >>> Server1 runs API-1 and Server2 runs API-2. Both servers are equipped >>> with an installed Varnish application. When either API is accessed, the >>> request is processed through the Varnish instance associated with the >>> respective server. >>> >>> *Issue Description:* >>> In a typical scenario, a client (browser) sends a request to API-1, >>> which is handled by the Varnish instance on Server1. After initial >>> processing, API-1 makes a subsequent request to API-2 on Server2. 
>>> >>> The Request Flow is as follows: >>> *Browser --> Varnish on Server1 --> Tomcat on Server1 --> Varnish on >>> Server2 --> Tomcat on Server2* >>> >>> *Assuming, the request from Browser will be a miss at Server1 Varnish so >>> that the request reaches Tomcat(Backend) on server1.* >>> >>> In cases where the browser *does not include any cache-control >>> headers in the request* (e.g., no-cache, max-age=0), the Server1 >>> Varnish instance correctly *does not receive any cache-control headers*. >>> >>> *However, when API-1 calls API-2, we observe that a cache-control: >>> no-cache and p**ragma: no-cache headers are being transmitted to the >>> Varnish instance on Server2*, despite the following conditions: >>> >>> 1. We are not explicitly sending any cache-control header in our >>> application code during the call from API-1 to API-2. >>> 2. Our application does not use the Spring-security dependency, which by >>> default might add such a header. >>> 3. The cache-control header is not being set by the Varnish instance on >>> Server2. >>> >>> This unexpected behavior of receiving a cache-control header at >>> Server2's Varnish instance when invoking API-2 from API-1 is the crux of >>> our issue. >>> >>> We kindly request your assistance in understanding the cause of this >>> unexpected behavior. Additionally, we would greatly appreciate any guidance >>> on how to effectively prevent this issue from occurring in the future. >>> >>> Thanks & Regards >>> Uday Kumar >>> _______________________________________________ >>> varnish-misc mailing list >>> varnish-misc at varnish-cache.org >>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >>> >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From uday.polu at indiamart.com Wed Jun 28 17:26:16 2023 From: uday.polu at indiamart.com (Uday Kumar) Date: Wed, 28 Jun 2023 22:56:16 +0530 Subject: Unexpected Cache-Control Header Transmission in Dual-Server API Setup In-Reply-To: References: Message-ID: Okay thank you! On Wed, Jun 28, 2023, 22:36 Guillaume Quintard wrote: > Not really, I have no tomcat expertise, which is where the issue should be > fixed. That being said, if you can't prevent tomcat from adding the header, > then you can use the VCL on varnish2 to scrub the headers ("unset > req.http.cache-control;"). > > -- > Guillaume Quintard > > > On Wed, Jun 28, 2023 at 10:03?AM Uday Kumar > wrote: > >> Hi Guillaume, >> >> You are right! >> varnish is not adding any cache-control headers. >> >> >> *Observations when trying to replicate the issue locally:* >> I was trying to replicate the issue using Local Machine by creating a >> Spring Boot Application that acts as API-1 and tried hitting API-2 that's >> on Server2. >> >> *Request Flow:* Local Machine ----> Server2 varnish --> Server2 Tomcat >> >> Point-1: When using* integrated tomcat (Tomcat 9) the spring-boot* issue >> was *not *replicable [*Just ran Application in intellij*] (meaning, the >> cache-control header is *not *being transmitted to Varnish of Server2) >> >> *Point-2:* When *Tomcat 9 was explicitly installed in my local machine* >> and built the* corresponding war of API-1 and used this to hit API-2* >> that's on Server2, *Now issue got replicated* (meaning, *cache-control: >> no-cache, pragma: no-cache is being transmitted to Varnish of Server2*) >> >> >> Any insights? >> >> Thanks & Regards >> Uday Kumar >> >> >> On Wed, Jun 28, 2023 at 8:32?PM Guillaume Quintard < >> guillaume.quintard at gmail.com> wrote: >> >>> Hi Uday, >>> >>> That one should be quick: Varnish doesn't add cache-control headers on >>> its own. 
>>> >>> So, from what I understand it can come from two places: >>> - either the VCL in varnish1 >>> - something in tomcat1 >>> >>> It should be very easy to check with varnishlog's. Essentially, run >>> "varnishlog -H request -q 'ReqHeader:uday'" on both varnish nodes and send >>> a curl request like "curl http://varnish1/some/request/not/in/cache.html >>> -H "uday: true" >>> >>> You should see the request going through both varnish and should be able >>> to pinpoint what created the header. Or at least identify whether it's a >>> varnish thing or not. >>> >>> Kind regards >>> >>> For a reminder on varnishlog: >>> https://docs.varnish-software.com/tutorials/vsl-query/ >>> >>> >>> On Wed, Jun 28, 2023, 06:28 Uday Kumar wrote: >>> >>>> Hello All, >>>> >>>> Our application operates on a dual-server setup, where each server is >>>> dedicated to running a distinct API. >>>> >>>> *Technical specifications:* >>>> Framework: Spring-boot v2.4 (Java 1.8) >>>> Runtime Environment: Tomcat >>>> Version: Apache Tomcat/7.0.42 >>>> Server1 runs API-1 and Server2 runs API-2. Both servers are equipped >>>> with an installed Varnish application. When either API is accessed, the >>>> request is processed through the Varnish instance associated with the >>>> respective server. >>>> >>>> *Issue Description:* >>>> In a typical scenario, a client (browser) sends a request to API-1, >>>> which is handled by the Varnish instance on Server1. After initial >>>> processing, API-1 makes a subsequent request to API-2 on Server2. 
>>>> >>>> The Request Flow is as follows: >>>> *Browser --> Varnish on Server1 --> Tomcat on Server1 --> Varnish on >>>> Server2 --> Tomcat on Server2* >>>> >>>> *Assuming, the request from Browser will be a miss at Server1 Varnish >>>> so that the request reaches Tomcat(Backend) on server1.* >>>> >>>> In cases where the browser *does not include any cache-control >>>> headers in the request* (e.g., no-cache, max-age=0), the Server1 >>>> Varnish instance correctly *does not receive any cache-control headers* >>>> . >>>> >>>> *However, when API-1 calls API-2, we observe that a cache-control: >>>> no-cache and p**ragma: no-cache headers are being transmitted to the >>>> Varnish instance on Server2*, despite the following conditions: >>>> >>>> 1. We are not explicitly sending any cache-control header in our >>>> application code during the call from API-1 to API-2. >>>> 2. Our application does not use the Spring-security dependency, which >>>> by default might add such a header. >>>> 3. The cache-control header is not being set by the Varnish instance on >>>> Server2. >>>> >>>> This unexpected behavior of receiving a cache-control header at >>>> Server2's Varnish instance when invoking API-2 from API-1 is the crux of >>>> our issue. >>>> >>>> We kindly request your assistance in understanding the cause of this >>>> unexpected behavior. Additionally, we would greatly appreciate any guidance >>>> on how to effectively prevent this issue from occurring in the future. >>>> >>>> Thanks & Regards >>>> Uday Kumar >>>> _______________________________________________ >>>> varnish-misc mailing list >>>> varnish-misc at varnish-cache.org >>>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >>>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL:
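
To close the loop on Guillaume's suggestion ("unset req.http.cache-control;"), the scrub could look roughly like the following VCL fragment on the Server2 Varnish. This is a sketch, not tested against the reported setup: the backend address is a placeholder for Server2's Tomcat, and the extra Pragma unset is an assumption based on the second header mentioned in the thread.

```vcl
vcl 4.1;

# Placeholder backend: Tomcat running on Server2.
backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

sub vcl_recv {
    # Scrub the headers that Tomcat on Server1 adds to the API-1 -> API-2
    # request, so that no later VCL logic (and no backend) ever sees them.
    unset req.http.Cache-Control;
    unset req.http.Pragma;
}
```

Since the unsets run at the top of vcl_recv, they take effect before any cache lookup or pass decision on this node; the rest of the configuration behaves as if the client had sent no caching directives at all.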