= Reduce memcache network lookups =

Each Swift proxy server normally has a memcache instance running on the same box, and these instances form a single memcache pool. The proxy checks this cache pool for account and container info (amongst other things). While this is faster than going to disk for the info, it's a common operation, and because keys are hashed across the pool each lookup has a (1 - 1/N) chance of going to a memcache instance on a different proxy server, where N is the number of proxy servers.
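To make the (1 - 1/N) figure concrete, here's a small stand-alone Python sketch (the server-selection logic is simplified and is not Swift's actual memcache ring) that hashes sample keys across N servers and counts how many land somewhere other than the local instance; with 10 proxies, roughly 90% of lookups have to cross the network.

<pre>
# Stand-alone sketch (not Swift code) of why roughly (1 - 1/N) of lookups
# leave the box: the memcache client picks a server by hashing the key
# across the pool, so only about 1/N of keys map to the local instance.
import hashlib


def server_for_key(key, num_servers):
    # Simplified stand-in for a memcache client's key -> server mapping.
    digest = hashlib.md5(key.encode('utf-8')).hexdigest()
    return int(digest, 16) % num_servers


def remote_fraction(num_servers, local_index=0, samples=100000):
    # Fraction of sample keys that hash to a server other than "ours".
    remote = sum(1 for i in range(samples)
                 if server_for_key('AUTH_test/container-%d' % i,
                                   num_servers) != local_index)
    return remote / float(samples)


if __name__ == '__main__':
    for n in (2, 5, 10):
        # Prints roughly 0.50, 0.80, 0.90 -- i.e. 1 - 1/N
        print('%2d proxies: %.2f remote' % (n, remote_fraction(n)))
</pre>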

We already have an LRU cache that can be used as a decorator to memoize functions. We could use this to store account and container info in the local proxy, similar to https://gist.github.com/notmyname/c4ca69c1ebf85079b673b6b153bb5bb9
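As a rough illustration of the approach, below is a minimal, self-contained sketch of an in-process LRU cache with a short maximum age sitting in front of the shared memcache lookup. The decorator class, the fetch_info() stand-in, and the 10-second bound are all hypothetical; the LRU-cache decorator we already have (or the gist above) would fill the same role.

<pre>
# Minimal sketch: an in-process LRU cache with a max age, layered in front of
# the shared memcache pool. All names and numbers here are illustrative.
import time
from collections import OrderedDict


class LocalInfoCache(object):
    """Decorator: memoize a single-argument lookup with LRU + max-age limits."""

    def __init__(self, maxsize=1000, maxtime=10):
        self.maxsize = maxsize      # max entries held in this process
        self.maxtime = maxtime      # seconds before an entry must be refreshed
        self._data = OrderedDict()  # key -> (timestamp, value)

    def __call__(self, func):
        def wrapped(key):
            now = time.time()
            entry = self._data.get(key)
            if entry is not None and now - entry[0] < self.maxtime:
                self._data.move_to_end(key)       # keep it "recently used"
                return entry[1]
            value = func(key)                     # fall through to memcache
            self._data[key] = (now, value)
            self._data.move_to_end(key)
            while len(self._data) > self.maxsize:
                self._data.popitem(last=False)    # evict least recently used
            return value
        return wrapped


def fetch_info(path):
    # Stand-in for the existing memcache (or backend) account/container lookup.
    return {'path': path, 'status': 200}


@LocalInfoCache(maxsize=1000, maxtime=10)
def get_info_cached(path):
    return fetch_info(path)
</pre>

With a bound of a few seconds, repeated requests for the same account or container on one proxy would be answered from process memory and only re-checked against memcache once the entry ages out, which keeps the staleness window small and predictable.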

Obviously this needs to be profiled to see whether there's any benefit at all (I haven't done that yet). Tiering caches also has likely side effects on when the cache is invalidated and on how quickly data can be expected to be accurate across the cluster. Features like ratelimiting or quotas might be adversely affected, while normal reads and writes may be improved.

Want to talk more? Find notmyname on IRC.