Expiration and Eviction (In-Role Cache for Azure Cache)
Updated: July 17, 2010
|For guidance on choosing the right Azure Cache offering for your application, see Which Azure Cache offering is right for me?.|
Microsoft Azure Cache does not retain cached objects in memory permanently. In addition to being explicitly removed from the cache by using the Remove method, cached objects may also expire or be evicted by the cache cluster.
Cache expiration allows the cache cluster to automatically remove cached objects from the cache. When you use the Put or Add methods, you can set an optional time-out value for a specific cached object that determines how long it resides in the cache. If no time-out value is provided when the object is cached, the object uses the default expiration time. This default varies depending on whether you are using role-based caching or Shared Caching.
When using role-based caching, you have three options for expiration:
Expiration is disabled (None). Items remain in the cache until they are evicted or the cache cluster is restarted.
Items expire a set period of time after they are created (Absolute).
Items expire a set period of time after they were last accessed (Sliding). Each time the object is accessed, the sliding time window resets, which keeps frequently used items in the cache longer.
|Note the behavior of sliding expiration when it is used in combination with local cache. A read that is served from the local cache does not access the object on the cache cluster, so it is possible for the item to expire on the server even while it is being read locally.|
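The difference between the absolute and sliding modes can be sketched in a few lines. The following is an illustrative Python model, not the In-Role Cache API; the class and parameter names (ExpiringCache, default_ttl, sliding) are hypothetical, with the default time-out standing in for the named-cache default expiration.

```python
import time

class ExpiringCache:
    """Illustrative sketch (not the In-Role Cache API): a cache supporting
    absolute and sliding expiration per item."""

    def __init__(self, default_ttl=600.0, clock=time.monotonic):
        self.default_ttl = default_ttl  # stands in for the named-cache default
        self._clock = clock             # injectable clock for testing
        self._items = {}                # key -> (value, deadline, ttl, sliding)

    def put(self, key, value, ttl=None, sliding=False):
        # Per-object time-out overrides the default, as with the Put/Add overloads.
        ttl = self.default_ttl if ttl is None else ttl
        self._items[key] = (value, self._clock() + ttl, ttl, sliding)

    def get(self, key):
        value, deadline, ttl, sliding = self._items[key]
        now = self._clock()
        if now >= deadline:             # expired: remove and report a miss
            del self._items[key]
            raise KeyError(key)
        if sliding:                     # sliding: each access resets the window
            self._items[key] = (value, now + ttl, ttl, sliding)
        return value
```

Under sliding expiration, an item read more often than its time-out interval never expires; under absolute expiration, it is removed at its deadline regardless of access.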
In Shared Caching, expiration is always absolute, and there is no way to set a default expiration time: items expire after 48 hours. However, you can use the Put and Add method overloads to set explicit expiration times in code. Note that the ASP.NET providers automatically use these overloads to provide explicit time-outs for session state and output caching. In either case, when your cache size exceeds the limits of your Shared Caching offering, the least recently used items in the cache are evicted.
When cached objects are locked for concurrency purposes, they are not removed from the cache even if they are past their expiration time. As soon as they are unlocked, they are removed immediately if their expiration time has passed.
To prevent instant removal when you unlock expired objects, the Unlock method also supports extending the expiration of the cached object.
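This lock interaction can be modeled with a short sketch. The following Python is illustrative only, not the DataCache API; the names (LockAwareCache, get_and_lock, extend_ttl) are hypothetical stand-ins for the locking and Unlock-with-extension behavior described above.

```python
import time

class LockAwareCache:
    """Sketch (hypothetical, not the DataCache API): expired items that are
    locked stay in the cache; unlocking removes them immediately unless the
    unlock extends their expiration."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._items = {}            # key -> [value, deadline, locked]

    def put(self, key, value, ttl):
        self._items[key] = [value, self._clock() + ttl, False]

    def get_and_lock(self, key):
        item = self._items[key]
        item[2] = True              # locked items are protected from expiration
        return item[0]

    def unlock(self, key, extend_ttl=None):
        item = self._items[key]
        item[2] = False
        if extend_ttl is not None:
            item[1] = self._clock() + extend_ttl  # extension prevents instant removal
        elif self._clock() >= item[1]:
            del self._items[key]                  # expired while locked: removed now

    def contains(self, key):
        item = self._items.get(key)
        return item is not None and (item[2] or self._clock() < item[1])
```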
Local Cache Invalidation
There are two complementary types of invalidation for local cache: time-out-based invalidation and notification-based invalidation.
|After objects are stored in the local cache, your application can use those objects until they are invalidated, regardless of whether those objects are updated by another client on the cache cluster. For this reason, it is best to enable local cache for data that changes infrequently.|
After objects are downloaded to the local cache, they remain there until they reach the object time-out value specified in the cache client configuration settings. When they reach this time-out value, the objects are invalidated so that they can be refreshed from the cache cluster the next time they are requested.
If your cache client has enabled local cache, you can also use cache notifications to automatically invalidate your locally cached objects. By shortening the lifetime of those objects on an "as needed" basis, you can reduce the possibility that your application is using stale data.
|Notifications are not supported in Shared Caching.|
When you use cache notifications, your application checks with the cache cluster at a regular interval to see whether any new notifications are available. This interval, called the polling interval, is 300 seconds by default and is specified in units of seconds in the application configuration settings. Note that even with notification-based invalidation, time-outs still apply to items in the local cache. This makes notification-based invalidation complementary to time-out-based invalidation.
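The interplay of the two invalidation mechanisms can be sketched as follows. This is an illustrative Python model, not the cache client; the names (LocalCache, object_ttl, poll_interval) are hypothetical, and the poll callback stands in for fetching pending notifications from the cluster.

```python
import time

class LocalCache:
    """Sketch of local-cache invalidation (illustrative, not the real cache
    client): locally stored items are dropped when their time-out elapses,
    and a periodic notification poll can also invalidate them."""

    def __init__(self, object_ttl, poll_interval, fetch, poll, clock=time.monotonic):
        self._ttl = object_ttl
        self._poll_interval = poll_interval   # 300 s by default on the real client
        self._fetch = fetch                   # key -> value from the cache cluster
        self._poll = poll                     # () -> iterable of invalidated keys
        self._clock = clock
        self._local = {}                      # key -> (value, local deadline)
        self._next_poll = clock() + poll_interval

    def get(self, key):
        now = self._clock()
        if now >= self._next_poll:            # polling interval elapsed:
            for stale in self._poll():        # apply pending notifications
                self._local.pop(stale, None)
            self._next_poll = now + self._poll_interval
        entry = self._local.get(key)
        if entry is not None and now < entry[1]:
            return entry[0]                   # served locally; cluster not touched
        value = self._fetch(key)              # refresh from the cache cluster
        self._local[key] = (value, now + self._ttl)
        return value
```

Note that between polls a locally cached item can still be returned even though a notification for it is pending, which is why time-outs remain the backstop.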
For more information and examples, see Local Cache (In-Role Cache for Azure Cache).
To maintain the memory capacity available for the cache on each cache host, least recently used (LRU) eviction is supported. A memory threshold is used to make sure that memory is evenly distributed across all cache hosts in the cluster. This threshold is determined by two factors: the amount of available physical memory on each machine and the percentage of caching memory reserved on each machine.
When memory consumption exceeds the memory threshold, objects are evicted from memory, whether or not they have expired, until the memory pressure is relieved. Objects cached subsequently may be rerouted to other machines in the cache cluster to maintain an optimal distribution of memory.
|If you disable eviction, you run the risk of throttling. In this condition, memory has exceeded the threshold, but there is no ability to alleviate the memory shortage. Clients that attempt to add items to the cache in this state receive an exception until it is resolved. Note that Shared Caching does not support disabling eviction on a cache.|
Specifying Expiration and Eviction Settings
Expiration and eviction behavior are configured at the named cache level in the cluster configuration settings.
The following methods allow you to override the default settings that are in the cache:
The Add and Put methods provide overloads that allow you to specify an expiration time-out value only for the object you add to the cache.
The PutAndUnlock and Unlock methods provide overloads that allow you to extend an object's expiration after unlocking it.
The ResetObjectTimeout method allows you to explicitly extend an object's lifetime, overriding the expiration settings of the cache.
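The override behavior these methods provide can be sketched as follows. This Python model is illustrative, not the DataCache API; the names (ConfiguredCache, reset_object_timeout, ttl_remaining) are hypothetical stand-ins for the per-object overloads and the lifetime-extension method described above.

```python
import time

class ConfiguredCache:
    """Sketch (hypothetical names, not the DataCache API): the named-cache
    default time-out applies unless a put supplies its own, and a reset
    helper extends a live object's lifetime."""

    def __init__(self, default_ttl, clock=time.monotonic):
        self._default_ttl = default_ttl      # from the named-cache configuration
        self._clock = clock
        self._items = {}                     # key -> (value, deadline)

    def put(self, key, value, ttl=None):
        # A per-object time-out overrides the configured default.
        ttl = self._default_ttl if ttl is None else ttl
        self._items[key] = (value, self._clock() + ttl)

    def reset_object_timeout(self, key, new_ttl):
        # A fresh lifetime starts now, overriding the cache's settings.
        value, _ = self._items[key]
        self._items[key] = (value, self._clock() + new_ttl)

    def ttl_remaining(self, key):
        return self._items[key][1] - self._clock()
```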
Regardless of the expiration or eviction settings, if a cache cluster is restarted all objects in the cache are cleared. Your application code must reload the cache from a data source if the data is not found in the cache. This is often referred to as a cache-aside programming pattern.
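The cache-aside pattern mentioned above can be sketched in a few lines. This Python helper is illustrative; the function and parameter names are hypothetical, and a plain dictionary stands in for the cache client.

```python
def get_with_cache_aside(cache, key, load_from_source):
    """Cache-aside sketch: try the cache first; on a miss (including after a
    cluster restart cleared everything), reload from the authoritative data
    source and repopulate the cache."""
    value = cache.get(key)
    if value is None:
        value = load_from_source(key)   # authoritative data source
        cache[key] = value              # repopulate so later reads hit
    return value
```

With this pattern, a cleared cache is only a performance event, not a correctness problem: every miss falls through to the data source.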