int newVersion = specialFooCache.getCacheVersion();
// We only want one thread to refresh the cache, and we
// want the other ones to keep using the old data structures.
if (newVersion != cacheVersion && !inProgress) {
    synchronized (lock) {
        // The first thread gets in because cacheVersion has not been updated yet;
        // all other threads evaluate to false.
        int checkVersion = specialFooCache.getCacheVersion();
        if (checkVersion != cacheVersion) {
            // reload the cache and update cacheVersion here
        }
    }
}
But I was (a) stuck and (b) had other code to refactor under the gun. So I didn't think about it much until today in the code review. I'm really glad I code reviewed with John, because he is a relentless simplifier who lives by the 'less code' motto.
Right away I could tell that the whole recheck approach wasn't sitting well with him. The more I explained, the more concerned he looked, until he spotted a potential race condition: if the cache version incremented again while the cache was being reloaded, threads that had piled up outside the synchronized block would immediately reload even when they didn't have to. That didn't affect the integrity of the in-memory data structures, but it was definitely a bad use of CPU. He was also insistent that there had to be an easier way, and under his direction this is that easier way:
int newVersion = 0;
// the lock is held only briefly here
synchronized (lock) {
    newVersion = specialFooCache.getCacheVersion();
    // leave right away if nothing changed or a reload is already in progress
    if (cacheVersion == newVersion || inProgress) {
        return;
    }
    inProgress = true;
}
try {
    // load from cache
} catch (Exception e) {
    ...
} finally {
    synchronized (lock) {
        // set the version
        cacheVersion = newVersion;
        // we are done here
        inProgress = false;
    }
}
A few things to note:
(1) This solution is simpler and more performant, in addition to being correct. The lock is not held during the reload, and once it is released, other threads exit the function immediately if a reload is in progress.
(2) The key thing I learned from watching John solve the problem is that he wasn't happy until it was dead simple. When I don't take this approach, or abandon it under pressure, there are some potentially costly ramifications. The original solution had a complexity smell around it that I chose to ignore. The problem with ignoring that smell is that it comes back to haunt you after you've forgotten why you wrote the code in the first place. The solution is, of course, to write code that is simple enough to re-understand months from now, or simple enough for anyone not familiar with it to understand.
(3) The lock at the bottom is not strictly necessary, since (a) the variables being set are volatile and (b) only one thread will get this far. However, because only one thread gets this far, holding it is not a performance issue, and it actually lends some clarity to the resetting of the test conditions (see the sketch after these notes).
(4) I still feel like this is somewhat of a work in progress... I'm not convinced that the lock in (3) is really required (even though we both signed off on it). But I do feel a lot better about the current algorithm. That said,
(5) I love code reviews. The knowledge transfer is huge. I'm always looking for ways to strip my code down, and the insight I get from a good reviewer (like John) raises my game tremendously.
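For context, here is a minimal, self-contained sketch of how the refactored method might hang together. The class, interface, and method names (FooCacheHolder, SpecialFooCache, refreshIfNeeded, reloadFromCache) are placeholders of mine, not necessarily what's in the real code; the point is just that cacheVersion and inProgress are volatile, that lock is a dedicated monitor object, and that the flow mirrors the snippet above.

// Hypothetical interface standing in for the real cache dependency.
interface SpecialFooCache {
    int getCacheVersion();
}

public class FooCacheHolder {

    // Assumed volatile, per note (3) above.
    private volatile int cacheVersion = 0;
    private volatile boolean inProgress = false;
    private final Object lock = new Object();

    private final SpecialFooCache specialFooCache;

    public FooCacheHolder(SpecialFooCache specialFooCache) {
        this.specialFooCache = specialFooCache;
    }

    // Safe to call from many threads: at most one thread does the reload,
    // the rest return immediately.
    public void refreshIfNeeded() {
        int newVersion;
        synchronized (lock) {
            newVersion = specialFooCache.getCacheVersion();
            if (cacheVersion == newVersion || inProgress) {
                return;
            }
            inProgress = true;
        }
        try {
            reloadFromCache();
        } catch (Exception e) {
            // log and fall through; the finally block still resets the flags,
            // exactly as in the snippet above
        } finally {
            synchronized (lock) {
                cacheVersion = newVersion;
                inProgress = false;
            }
        }
    }

    private void reloadFromCache() {
        // rebuild the in-memory data structures from specialFooCache
    }
}

Any caller, on any thread, can simply invoke refreshIfNeeded() from whatever trigger it likes (a timer, a request path); only one of them ever pays for the reload, and the lock itself is only held long enough to read the version and flip the flags.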