
List:       nginx
Subject:    RE: 2 locations, 2 _different_ cache valid settings, but same cache & pass-through
From:       randyorbs <randyorbs () protonmail ! com>
Date:       2020-03-29 6:48:31
Message-ID: l2tTaQQs_5D1qZflhyLZVnlumwvp-Zv7Xnh0J4Q_7Hbr1GcTQhUuZvcPbo_uUriROmrnM3xFSO5jwE5loIaVzVroCOzU8liBvK0lQtXNr3I= () protonmail ! com

> Just a quick idea...

Thank you for the idea. I was able to quickly demo it out and play around with it a bit.

You mentioned /bar objects being older than desired with this approach, which I agree is a problem, but there is another issue as well.

If I reconfigure /bar to, say, 60 minutes, then /bar would not benefit from /foo's more frequent requests to http://myhost.io/go; that is, /bar would use its cache for 60 minutes and miss out on all the fresh data /foo is seeing. I want both /foo and /bar to use the most recent data either has received.

I admit, my arbitrary choice of 5 and 10 minutes for my example was not ideal.

Thanks again.


------- Original Message -------
On Friday, March 27, 2020 4:03 PM, Reinis Rozitis <r@roze.lv> wrote:

> > What I need is a cache of data that is aware that the validity of its data is
> > dependent on who - foo or bar - is retrieving it. In practice, this means that
> > requests from both foo and bar may be responded to with cache data from
> > the other's previous request, but most likely the cache will be populated with
> > responses to foo's requests to the benefit of bar, which is fine.
> > I too get charged for hitting /myhost.io/go when my clients hit /foo and /bar.
> > It would be great if I could config my way into a solution to my use-case
> > using NGINX.
> 
> Just a quick idea (though maybe there is a more elegant way).
> 
> Depending on how you charge your clients (e.g. whether it matters which client actually made the request to the origin), instead of using the same proxy_cache for both you could create two separate caches (one for 5 min, the other for 10 min) and chain them.
> Something like:
> 
> location /foo {
>     proxy_pass http://myhost.io/go;
>     proxy_cache 5min_shared_cache;
>     proxy_cache_valid any 5m;
> }
> 
> location /bar {
>     proxy_pass http://localhost/foo;   # obviously you need to adjust/rewrite the URIs
>     proxy_cache 10min_shared_cache;
>     proxy_cache_valid any 10m;
> }
> 
> So if there is a request to /foo, it is forwarded to the origin and cached for 5 minutes. If there is a request to /bar, a subrequest is made to /foo; if a cached object exists in the 5-minute cache, you get the object without making a request to the origin, and the object is then saved for 10 minutes. If there is no object in the 5-minute cache, a single request to the origin populates both caches.
> There are some drawbacks, of course. The /bar requests will always also populate the 5-minute /foo cache, so every object is saved twice = extra disk space.
> Also, depending on whether or not you ignore the (origin) Expires headers, the object in the 10-minute cache could end up older than 10 minutes: you might fetch the object from the 5-minute cache at age 4:59 (sadly nginx doesn't add an Age header) and then add 10 minutes on top of it. That may or may not be a problem if you can adjust the times a bit, e.g. add only ~7 minutes when /foo responds with a cache HIT status.
> Going the other way around (/foo -> /bar) would also be possible, but with extra complexity (such as checking that the object is not too old); for example, using Lua you could implement the whole proxy and file-store logic.
> rr
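For anyone trying Reinis's suggestion: the two cache zones it references would also have to be declared at the http{} level with proxy_cache_path. Something like the following (only the zone names come from the example above; the paths and sizes are my own guesses):

```nginx
# Declared at http{} level; paths and sizes are placeholders.
proxy_cache_path /var/cache/nginx/5min  keys_zone=5min_shared_cache:10m  max_size=1g;
proxy_cache_path /var/cache/nginx/10min keys_zone=10min_shared_cache:10m max_size=1g;
```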


_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

