Some fixes/cleanup for cluster table size caching logic.
The previous code was not keeping the cached size up to date, which
resulted in the DB client believing that the consumer cluster memory
size was larger than it actually was. Eventually this wound up in a
state where the LRU importanceMap was almost constantly empty and the
code kept throwing away clusters whenever the next cluster was loaded,
due to the misconception about how much cluster memory was actually in
use.
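
For illustration, here is a minimal sketch of the invariant the fix is
after: the cached byte count must be decremented on every replacement
and eviction, otherwise it only ever grows and drifts above the real
memory use. All names here (ClusterCache, putCluster, etc.) are
hypothetical and not the project's actual API.

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch, not the project's actual code: an LRU cluster
// cache whose cached total size is adjusted on every insert,
// replacement, and eviction, so it cannot drift upward over time.
class ClusterCache {
    private final long maxBytes;   // memory budget for cached clusters
    private long cachedBytes = 0;  // must mirror the map contents exactly

    // accessOrder = true gives LRU iteration order (eldest first)
    private final LinkedHashMap<Long, byte[]> clusters =
            new LinkedHashMap<>(16, 0.75f, true);

    ClusterCache(long maxBytes) {
        this.maxBytes = maxBytes;
    }

    void putCluster(long id, byte[] data) {
        byte[] old = clusters.put(id, data);
        if (old != null) {
            cachedBytes -= old.length;  // replacing: drop the old size first
        }
        cachedBytes += data.length;

        // Evict least-recently-used clusters until we fit the budget again.
        Iterator<Map.Entry<Long, byte[]>> it = clusters.entrySet().iterator();
        while (cachedBytes > maxBytes && it.hasNext()) {
            Map.Entry<Long, byte[]> eldest = it.next();
            cachedBytes -= eldest.getValue().length; // keep size in sync
            it.remove();
        }
    }

    byte[] getCluster(long id) {
        return clusters.get(id); // marks the entry as recently used
    }

    long cachedBytes() {
        return cachedBytes;
    }
}
```

The failure mode described above corresponds to skipping the
decrements: the budget check then sees a stale, inflated size and evicts
on nearly every load even though the cache is in fact almost empty.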
The code is still not perfect/totally functional - I was still able to
reproduce the thrashing situation with A6 model imports, but not as
heavily as before. With these changes I was able to import models with
far more initial conditions stored than was previously possible.