Concurrency Models (Velocity)
The Microsoft project code named "Velocity" architecture allows any cache client to openly access any cached data if those clients have the appropriate network access and configuration settings. This presents a challenge to both security and concurrency.
To mitigate security risks, all cache clients, cache servers, and the primary data source server should be members of the same corporate domain, and should be deployed within the perimeter of the corporate firewall. It is also highly recommended to secure your application configuration files on the cache clients.
To help your application deal with concurrency issues, "Velocity" supports optimistic and pessimistic concurrency models. For information about the methods available to align to these models, see Concurrency Methods (Velocity).
Optimistic Concurrency Model
In the optimistic concurrency model, updates to cached objects do not take locks. Instead, the cache client first reads the version of the object to be updated and then sends that version information together with the updated object to the cache for an update. The system only updates the object if the version sent matches the current version of the object. Every update to an object changes its version number, which prevents the update from overwriting someone else’s changes.
The example in this topic illustrates how optimistic concurrency maintains data consistency.
In this example, two cache clients (cacheClientA and cacheClientB) on two separate application servers try to update the same cached object, which is named RadioInventory.
Time Zero: Both Clients Retrieve the Same Object
At time zero (T0), both cache clients retrieve a DataCacheItem object that captures the cached object they intend to update, together with additional information associated with that cached object, such as version and tag information. This is illustrated in the following diagram and code example.
//cacheClientA pulls the FM radio inventory from cache
DataCacheFactory clientACacheFactory = new DataCacheFactory();
DataCache cacheClientA = clientACacheFactory.GetCache("catalog");
DataCacheItem radioInventoryA =
    cacheClientA.GetCacheItem("RadioInventory", "electronics");

//cacheClientB pulls the same FM radio inventory from cache
DataCacheFactory clientBCacheFactory = new DataCacheFactory();
DataCache cacheClientB = clientBCacheFactory.GetCache("catalog");
DataCacheItem radioInventoryB =
    cacheClientB.GetCacheItem("RadioInventory", "electronics");
Time One: The First Update Succeeds
At time one (T1), cacheClientA updates the cached object RadioInventory with a new value. When cacheClientA executes the Put method, the version associated with the RadioInventory cache item increments. At this time, cacheClientB has an out-of-date cache item. This is illustrated in the following diagram and code example.
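Continuing the example, the T1 update can be sketched as follows. This sketch assumes the version-aware Put overload that takes the key, the new value, the DataCacheItemVersion from the retrieved item, and the region name; the new inventory count (155) is illustrative:

```csharp
// cacheClientA writes a new inventory value, passing the version it
// captured at T0. The versions match, so the Put succeeds and the
// server increments the version of the RadioInventory cache item.
cacheClientA.Put("RadioInventory", 155,
    radioInventoryA.Version, "electronics");
```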
Time Two: The Second Update Fails
At time two (T2), cacheClientB tries to update the RadioInventory cached object by using what is now an out-of-date version number. To prevent cacheClientA's changes from being overwritten, the cacheClientB Put method call fails. This is illustrated in the following diagram and code example.
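A sketch of the failing T2 update follows. It assumes the same version-aware Put overload and that a version mismatch surfaces as a DataCacheException (the exact exception type may differ by release); the inventory value (130) is illustrative:

```csharp
try
{
    // cacheClientB passes the version it captured at T0, which is now
    // stale because cacheClientA already updated the item at T1.
    cacheClientB.Put("RadioInventory", 130,
        radioInventoryB.Version, "electronics");
}
catch (DataCacheException)
{
    // The version did not match, so the update was rejected. Re-read
    // the item to get the current value and version, then retry.
    radioInventoryB =
        cacheClientB.GetCacheItem("RadioInventory", "electronics");
}
```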
Pessimistic Locking Model
In the pessimistic locking model, the client explicitly locks objects before operating on them. Other operations that request a lock on the same object are rejected, not blocked, until the lock is released. When an object is locked, a lock handle is returned as an output parameter; that handle is required to unlock the object. If the client application ends before releasing a locked object, a lock timeout releases the lock. Locked objects never expire, but they can expire immediately after they are unlocked if their expiration time has already passed.
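The lock, update, and release flow described above can be sketched as follows. The GetAndLock, PutAndUnlock, and DataCacheLockHandle names and the exact parameter order are assumptions based on the cache client API, and the 30-second lock timeout is illustrative:

```csharp
DataCacheLockHandle lockHandle;

// Lock the item. Other callers that request a lock on this key are
// rejected (not blocked) until the lock is released or times out.
object inventory = cacheClientA.GetAndLock("RadioInventory",
    TimeSpan.FromSeconds(30),   // timeout releases the lock if the client dies
    out lockHandle, "electronics");

// ... modify the object ...

// Write the new value and release the lock in a single call. To release
// the lock without updating, call Unlock with the same lock handle instead.
cacheClientA.PutAndUnlock("RadioInventory", inventory,
    lockHandle, "electronics");
```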
Transactions that span multiple operations are not supported. The application that uses the cache is responsible for determining the order in which it takes locks and for detecting any deadlocks.