Concurrency Model in Azure In-Role Cache
Updated: February 13, 2015
|For guidance on choosing the right Azure Cache offering for your application, see Which Azure Cache offering is right for me?.|
The caching architecture allows any cache client to access any cached data, as long as the clients have the appropriate network access and configuration settings. This presents a concurrency challenge: multiple clients can attempt to update the same cached object at the same time.
To help your application deal with such conflicts, In-Role Cache supports both optimistic and pessimistic concurrency models.
In the optimistic concurrency model, updates to cached objects do not take locks. Instead, when the cache client gets an object from the cache, it also obtains and stores the current version of that object. When an update is required, the cache client sends the new value for the object along with the saved version. The system updates the object only if the version sent matches the current version of the object in the cache. Every update to an object changes its version number, which prevents an update from overwriting someone else's changes.
The example in this topic illustrates how optimistic concurrency maintains data consistency.
In this example, two cache clients (cacheClientA and cacheClientB) try to update the same cached object, which has the key RadioInventory.
Time Zero: Both Clients Retrieve the Same Object
At time zero (T0), both cache clients retrieve a DataCacheItem object that captures the cached object they intend to update, together with additional information associated with that cached object, such as version and tag information. This is illustrated in the following code example.
//cacheClientA pulls the FM radio inventory from cache
DataCacheFactory clientACacheFactory = new DataCacheFactory();
DataCache cacheClientA = clientACacheFactory.GetCache("catalog");
DataCacheItem radioInventoryA = cacheClientA.GetCacheItem("RadioInventory");

//cacheClientB pulls the same FM radio inventory from cache
DataCacheFactory clientBCacheFactory = new DataCacheFactory();
DataCache cacheClientB = clientBCacheFactory.GetCache("catalog");
DataCacheItem radioInventoryB = cacheClientB.GetCacheItem("RadioInventory");
|Although this example obtains the version information by using the GetCacheItem method to retrieve the DataCacheItem object, it is also possible to use the Get method to obtain the DataCacheItemVersion object associated with the retrieved cache item.|
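For example, a sketch of the Get-based approach might look like the following (Get has an overload that returns the item's DataCacheItemVersion through an out parameter; the variable names are illustrative):

//retrieve the value and its version in a single call
DataCacheItemVersion inventoryVersion;
object inventoryValue = cacheClientA.Get("RadioInventory", out inventoryVersion);
//inventoryVersion can later be passed to Put for an optimistic update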
Time One: The First Update Succeeds
At time one (T1), cacheClientA updates the cached object RadioInventory with a new value. When cacheClientA executes the Put method, the version associated with the RadioInventory cache item increments. At this time, cacheClientB has an out-of-date cache item. This is illustrated in the following example.
//at time T1, cacheClientA updates the FM radio inventory
int newRadioInventoryA = 155;
cacheClientA.Put("RadioInventory", newRadioInventoryA, radioInventoryA.Version);
Time Two: The Second Update Fails
At time two (T2), cacheClientB tries to update the RadioInventory cached object by using what is now an out-of-date version number. To prevent the changes from cacheClientA from being overwritten, the Put method call from cacheClientB fails. The cache client throws a DataCacheException object with the ErrorCode property set to CacheItemVersionMismatch. This is illustrated in the following code example.
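(A minimal sketch of that failing update; the new inventory value and the conflict-handling comments are illustrative.)

//at time T2, cacheClientB tries to update the FM radio inventory
//with a version that is now out-of-date
int newRadioInventoryB = 130;
try
{
    cacheClientB.Put("RadioInventory", newRadioInventoryB, radioInventoryB.Version);
}
catch (DataCacheException dce)
{
    if (dce.ErrorCode == DataCacheErrorCode.CacheItemVersionMismatch)
    {
        //the version did not match: re-read the item to get the
        //current value and version, then retry the update
    }
}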
In the pessimistic concurrency model, the client explicitly locks objects to perform operations. Other operations that request locks are rejected (the system does not block requests) until the locks are released. When an object is locked, a lock handle is returned as an output parameter. The lock handle is required to unlock the object. If the client application ends before it frees a locked object, lock time-outs ensure that the lock is eventually released. Locked objects never expire, but they can expire immediately after they are unlocked if they are already past their expiration time.
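As a sketch of how these methods fit together, assuming the same catalog cache as in the optimistic example (the time-out and inventory values are illustrative):

//lock the item; other GetAndLock requests for this key are rejected
//until the lock is released or the time-out elapses
DataCacheLockHandle lockHandle;
object lockedInventory = cacheClientA.GetAndLock(
    "RadioInventory", TimeSpan.FromSeconds(30), out lockHandle);

//update the item and release the lock in a single operation
cacheClientA.PutAndUnlock("RadioInventory", 142, lockHandle);

//alternatively, release the lock without changing the value:
//cacheClientA.Unlock("RadioInventory", lockHandle);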
For more information about the methods used with the pessimistic concurrency model, see Concurrency Methods.
|Transactions spanning operations are not supported.|
|The application that uses cache is responsible for determining the order of the locks and detecting deadlocks, if any.|
|Locked objects in the cache can still be replaced by any cache client with the Put method. Cache-enabled applications are responsible for consistently using PutAndUnlock for items that use the pessimistic concurrency model.|
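For example, the following sketch shows how a plain Put call can defeat a lock (the clients and values are illustrative):

//cacheClientA locks the item for update
DataCacheLockHandle lockHandle;
object lockedValue = cacheClientA.GetAndLock(
    "RadioInventory", TimeSpan.FromSeconds(30), out lockHandle);

//cacheClientB can still replace the locked item with Put,
//silently discarding whatever cacheClientA intended to write
cacheClientB.Put("RadioInventory", 99);

//to respect the lock, cacheClientB would instead call GetAndLock
//(which is rejected while the item is locked) and PutAndUnlock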