August 2014

Volume 29 Number 8

Microsoft Azure: Use Distributed Cache in Microsoft Azure

Iqbal Khan | August 2014

Microsoft Azure is rapidly becoming the cloud choice for .NET applications. Besides its rich set of cloud features, Azure provides full integration with the Microsoft .NET Framework. It’s also a good choice for Java, PHP, Ruby and Python apps. Many of the applications moving to Azure are high-traffic, so they demand an environment that supports high scalability. An in-memory distributed cache can be an important component of such an environment.

This article will cover distributed caching in general and what it can provide.

The features described here relate to general-purpose in-memory distributed caches, not specifically to Azure Cache or NCache for Azure. For .NET applications deployed in Azure, an in-memory distributed cache has three primary benefits:

  1. Application performance and scalability
  2. Caching ASP.NET session state, view state and page output
  3. Sharing runtime data with events

Application Performance and Scalability

Azure makes it easy to scale an application infrastructure. For example, you can easily add more Web roles, worker roles or virtual machines (VMs) when you anticipate higher transaction load. Despite that flexibility, data storage can be a bottleneck that keeps you from scaling your app.

This is where an in-memory distributed cache can be helpful. It lets you cache as much data as you want, and it can reduce expensive database reads by as much as 90 percent. This also reduces transactional pressure on the database, which can then perform faster and take on a greater transaction load.

Unlike a relational database, an in-memory distributed cache scales in a linear fashion. It generally won’t become a scalability bottleneck, even though 90 percent of the read traffic might go to the cache instead of the database. All data in the cache is distributed to multiple cache servers. You can easily add more cache servers as your transaction load increases. Figure 1 shows how to direct apps to the cache.

Figure 1 Using In-Memory Distributed Cache in .NET Apps

// Check the cache before going to the database
Customer Load(string customerId)
{
  // The key will be like "Customers:CustomerID:1000"
  string key = "Customers:CustomerID:" + customerId;
  Customer cust = (Customer)Cache[key];
  if (cust == null)
  {
    // Item not found in the cache; therefore, load from the database
    cust = LoadCustomerFromDb(customerId);
    // Now, add this object to the cache for future reference
    Cache.Insert(key, cust, null,
      Cache.NoAbsoluteExpiration,
      Cache.NoSlidingExpiration,
      CacheItemPriority.Default, null);
  }
  return cust;
}

An in-memory distributed cache can be faster and more scalable than a relational database. Figure 2 shows some performance data to give you an idea. As you can see, the scalability is linear. Compare this with your relational database or your ASP.NET Session State storage and you’ll see the benefit.

Figure 2 Performance Numbers for a Typical Distributed Cache

Cluster Size     Reads per second   Writes per second
2-node cluster   50,000             32,000
3-node cluster   72,000             48,000
4-node cluster   72,000             64,000
5-node cluster   120,000            80,000
6-node cluster   144,000            96,000

Caching ASP.NET Session State, View State and Page Output

Using in-memory distributed cache in Azure also helps with ASP.NET Session State, ASP.NET View State and ASP.NET Output Cache. You’ll need to store ASP.NET Session State somewhere. This can become a major scalability bottleneck. In Azure, you can store ASP.NET Session State in a SQL database, Azure Table Storage or an in-memory distributed cache.

A SQL database isn’t ideal for storing session state. Relational databases were never really designed for Blob storage, and an ASP.NET Session State object is stored as a Blob. This can cause performance issues and become a scalability bottleneck.

Similarly, Azure Table Storage isn’t ideal for Blob storage. It’s intended for storing structured entities. Although it’s more scalable than a SQL database, it’s still not ideal for storing ASP.NET Session State.

An in-memory distributed cache is better suited for storing ASP.NET Session State in Azure. It’s faster and more scalable than the other two options. It also replicates sessions so there’s no data loss if a cache server goes down. If you store sessions in a separate dedicated caching tier, then Web roles and Web server VMs become stateless, which is good because you can bring them down without losing any session data.

While keeping ASP.NET Session State in a cache is ideal from a performance standpoint, if the cache goes down, your entire app goes down, and whatever was in the sessions goes with it. The new Redis Cache session state provider for Azure will give you a way to detect such failures and at least surface them to the user cleanly.

Figure 3 shows a representative web.config entry for storing ASP.NET Session State in an in-memory distributed cache. The provider name and type are placeholders; use the custom session state provider your caching product supplies.

Figure 3 Configure ASP.NET Session State Storage in a Distributed Cache

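<!-- web.config -->
<system.web>
  <sessionState mode="Custom"
    customProvider="DistributedCache" timeout="20">
    <providers>
      <!-- Hypothetical provider type; use the session state
           provider your caching product supplies -->
      <add name="DistributedCache"
        type="Vendor.Web.DistributedCache.DistCacheSessionStateProvider,
          Vendor.Web.DistributedCache"
        cacheName="default"/>
    </providers>
  </sessionState>
</system.web>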

Although the ASP.NET MVC framework removes the need for ASP.NET View State, the majority of ASP.NET applications haven’t yet moved to ASP.NET MVC and, therefore, still require ASP.NET View State.

ASP.NET View State can be a major bandwidth burden and cause a noticeable drop in your ASP.NET application response times. That’s because ASP.NET View State can be hundreds of kilobytes per user, and it travels to and from the browser unnecessarily on every postback. If this ASP.NET View State is cached on the Web server end and only a unique identifier is sent to the browser, it can improve response times and reduce bandwidth consumption.

In Azure, where your ASP.NET application is running in multiple Web roles or VMs, the least disruptive place to cache this ASP.NET View State is in an in-memory distributed cache. That way, you can get at it from any Web server. Here’s how you can configure the ASP.NET View State for storage in an in-memory distributed cache:

<!-- /App_Browsers/Default.browser -->
<browsers>
  <browser refID="Default" >
    <controlAdapters>
      <adapter
      controlType="System.Web.UI.Page"
      adapterType="DistCache.Adapters.PageAdapter"/>       
    </controlAdapters>
  </browser>
</browsers>
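
The adapter registered here is vendor-supplied, but it helps to see roughly what such an adapter does. The following is a minimal sketch, not any product’s actual code: it plugs in a PageStatePersister that stores the page state in the distributed cache under a GUID and sends only that GUID to the browser in a hidden field. The DistCache class and the hidden-field name are illustrative assumptions.

// Minimal sketch of a view-state page adapter (illustrative only);
// "DistCache" stands in for your distributed cache client API
namespace DistCache.Adapters
{
  public class PageAdapter : System.Web.UI.Adapters.PageAdapter
  {
    public override PageStatePersister GetStatePersister()
    {
      return new DistCachePageStatePersister(Page);
    }
  }

  public class DistCachePageStatePersister : PageStatePersister
  {
    private const string KeyField = "__VIEWSTATE_CACHEKEY";

    public DistCachePageStatePersister(Page page) : base(page) { }

    public override void Save()
    {
      // Cache the state server-side; send only a small key to the browser
      string key = Guid.NewGuid().ToString("N");
      DistCache.Insert(key, new Pair(ViewState, ControlState));
      Page.ClientScript.RegisterHiddenField(KeyField, key);
    }

    public override void Load()
    {
      // On postback, look up the state cached for this page
      string key = Page.Request.Form[KeyField];
      Pair state = (Pair)DistCache.Get(key);
      if (state != null)
      {
        ViewState = state.First;
        ControlState = state.Second;
      }
    }
  }
}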

ASP.NET also provides an output cache framework that lets you cache page output that’s not likely to change. That way you don’t have to execute the page next time. This saves CPU resources and speeds up ASP.NET response time. In a multi-server deployment, the best place to cache page output is within a distributed cache so it will be accessible from all Web servers. Fortunately, ASP.NET Output Cache has a provider-based architecture so you can easily plug in an in-memory distributed cache (see Figure 4).

Figure 4 Configure ASP.NET Output Cache for In-Memory Distributed Cache

<!-- web.config -->
<system.web>
  <caching>
    <outputCache defaultProvider="DistributedCache">
      <providers>
        <add name="DistributedCache"
          type="Vendor.Web.DistributedCache.DistCacheOutputCacheProvider,
            Vendor.Web.DistributedCache"
          cacheName="default"
          dataCacheClientName="default"/>
      </providers>
    </outputCache>
  </caching>
</system.web>
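
The provider type named in Figure 4 is vendor code, but any such provider is simply a class derived from System.Web.Caching.OutputCacheProvider, the extensibility point ASP.NET 4 introduced for output caching. Here’s a minimal sketch of what one might look like; DistCache again stands in for a vendor’s cache client API:

// Minimal sketch of an output cache provider (illustrative only)
public class DistCacheOutputCacheProvider :
  System.Web.Caching.OutputCacheProvider
{
  public override object Get(string key)
  {
    return DistCache.Get(key);
  }
  public override object Add(string key, object entry, DateTime utcExpiry)
  {
    // Add only if not already present; return the existing entry otherwise
    object existing = DistCache.Get(key);
    if (existing != null)
      return existing;
    DistCache.Insert(key, entry, utcExpiry);
    return entry;
  }
  public override void Set(string key, object entry, DateTime utcExpiry)
  {
    DistCache.Insert(key, entry, utcExpiry);
  }
  public override void Remove(string key)
  {
    DistCache.Remove(key);
  }
}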

Runtime Data Sharing Through Events

Another reason to consider using in-memory distributed cache in Azure is runtime data sharing. Applications typically do runtime data sharing in the following ways:

  1. Polling relational databases to detect data changes
  2. Using database events (such as SqlDependency or OracleDependency)
  3. Using message queues such as MSMQ

These approaches all provide basic functionality, but each has performance and scalability issues. Polling is usually a bad idea because it involves many unnecessary database reads. Database events add their own overhead: the database is already a scalability bottleneck, and raising events makes it choke even more quickly under heavy transaction load.

Message queues specialize in sequenced data sharing and persisting events to permanent storage. They’re good for situations where the recipients might not receive events for a long time or where applications are distributed across the WAN. However, when it comes to a high-transaction environment, message queues might not perform or scale like an in-memory distributed cache.

So if you have a high-transaction environment where multiple applications need to share data at run time without any sequencing and you don’t need to persist events for a long time, you might want to consider using an in-memory distributed cache for runtime data sharing. An in-memory distributed cache lets you share data at run time in a variety of ways, all of which are asynchronous:

  1. Item-level events on update and remove
  2. Cache- and group/region-level events
  3. Continuous Query-based events
  4. Topic-based events (for publish/subscribe model)

The first three capabilities are essentially different ways to monitor data changes within the cache. Your application registers callbacks for each of these. The distributed cache is responsible for “firing the event” whenever the corresponding data in the cache changes. This results in your application callback being called.
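
The registration mechanics vary from cache to cache, but the ASP.NET Cache API used in Figure 1 illustrates the pattern: the last argument to Cache.Insert takes a CacheItemRemovedCallback that the cache invokes when the item is removed or expires. Here’s a minimal example; OnCustomerRemoved is just an illustrative handler name:

// Register an item-level callback at insert time
Cache.Insert(key, cust, null,
  Cache.NoAbsoluteExpiration,
  Cache.NoSlidingExpiration,
  CacheItemPriority.Default,
  new CacheItemRemovedCallback(OnCustomerRemoved));

// Invoked by the cache when the item is removed, expires or is evicted
void OnCustomerRemoved(string key, object value,
  CacheItemRemovedReason reason)
{
  // Reload the item, log the change or notify interested code here
}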

When a specific cached item is updated or removed, an item-level event fires. Cache- and group/region-level events fire when data in that “container” is added, updated or removed. A Continuous Query consists of search criteria that define a dataset in the distributed cache. The distributed cache fires events whenever you add, update or remove data within this dataset. You can use this to monitor cache changes:

string queryString = "SELECT Customers WHERE this.City = ?";
Hashtable values = new Hashtable();
values.Add("City", "New York");
Cache cache = CacheManager.GetCache(cacheName);
ContinuousQuery cQuery = new ContinuousQuery(queryString, values);
cQuery.RegisterAddNotification(
  new CQItemAddedCallback(cqItemAdded));
cQuery.RegisterUpdateNotification(
  new CQItemUpdatedCallback(cqItemUpdated));
cQuery.RegisterRemoveNotification(
  new CQItemRemovedCallback(cqItemRemoved));
// Register the continuous query with the cache server
cache.RegisterCQ(cQuery);

Topic-based events are general purpose, and aren’t tied to any data changes in the cache. In this case, a cache client is responsible for “firing the event.” The distributed cache becomes something like a message bus and transports that event to all other clients connected to the cache.

With topic-based events, your applications can share data in a publish/subscribe model, where one application publishes data and fires a topic-based event. Other applications wait for that event and start consuming that data once it’s received.
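
Topic-based APIs differ from vendor to vendor, so the following is only a hypothetical sketch of the publish/subscribe pattern; Topic, GetTopic, Publish, Subscribe and ProcessOrder are illustrative names rather than any particular product’s API:

// Hypothetical topic-based messaging API (all names illustrative)
void PublishOrder(Cache cache, Order order)
{
  Topic topic = cache.GetTopic("OrderPlaced");
  // Fires a topic-based event delivered to all subscribed clients
  topic.Publish(order);
}

void SubscribeToOrders(Cache cache)
{
  Topic topic = cache.GetTopic("OrderPlaced");
  // The callback runs in each subscriber when a message arrives
  topic.Subscribe(delegate(object message)
  {
    ProcessOrder((Order)message);
  });
}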

Distributed Cache Architecture

High-traffic apps can’t afford downtime. For these apps running in Azure, there are three important aspects of in-memory distributed cache:

  1. High availability
  2. Linear scalability
  3. Data replication and reliability

Cache elasticity is an essential aspect of maintaining your in-memory distributed cache. Many in-memory distributed caches achieve elasticity and high availability with the following:

Self-healing peer-to-peer cache cluster: All cache servers form a cache cluster. The more powerful caches form a self-healing peer-to-peer cluster that adjusts itself whenever nodes are added or removed; others form master/slave clusters. Peer-to-peer is dynamic and lets you add or remove cache servers without stopping the cache. Master/slave clusters are limited because cache operations are hampered if one or more of the designated nodes goes down. Some caches, such as Memcached, don’t form any cache cluster and, therefore, aren’t considered elastic.

Connection failover: Cache clients are the apps running on app servers and Web servers that access the cache servers. The connection failover capability lives within the cache clients: if any cache server in the cluster goes down, the cache client keeps working by finding other servers in the cluster.

Dynamic configuration: Both cache servers and cache clients have this capability. Instead of requiring the cache clients to hardcode configuration details, cache servers propagate this information to the cache clients at run time, including any changes.

Caching Topologies

In many cases, you’re caching data that doesn’t exist in the database, such as ASP.NET Session State. Therefore, losing data can be quite painful. Even where data exists in the database, losing a lot of it when a cache node goes down can severely affect app performance.

Therefore, it’s better if your in-memory distributed cache replicates your data. However, replication does have performance and storage costs. A good in-memory cache provides a set of caching topologies to handle different types of data storage and replication needs:

Mirrored Cache: This topology has one active and one passive cache server. All reads and writes are made against the active node and updates are asynchronously applied to the mirror. This topology is normally used when you can only spare one dedicated cache server and share an app/Web server as the mirror.

Replicated Cache: This topology has two or more active cache servers. The entire cache is replicated to all of them. All updates are synchronous—they’re applied to all cache servers as one operation. Read transactions scale linearly as you add more servers. The disadvantage is that adding more nodes doesn’t increase storage or update transaction capacity.

Partitioned Cache: This topology has the entire cache partitioned, with each cache server containing one partition. Cache clients usually connect to all cache servers so they can directly access data in the desired partition. Your storage and transaction capacity grows as you add more servers so there’s linear scalability. There’s no replication, though, so you might lose data if a cache server goes down.

Partitioned-Replicated Cache: This is like a Partitioned Cache, except each partition is replicated to at least one other cache server. You don’t lose any data if a cache server goes down. The partition is usually active and the replica is passive. Your application never directly interacts with the replica. This topology provides the benefits of a Partitioned Cache like linear scalability, plus data reliability. There is a slight performance and storage cost associated with the replication.

Client Cache (Near Cache): Cache clients usually run on app/Web servers, so accessing the cache typically involves network traffic. Client Cache (also called Near Cache) is a local cache that keeps frequently used data close to your app and saves network trips. This local cache is also connected and synchronized with the distributed cache. Client Cache can be InProc (meaning inside your application process) or OutProc.

Deploying Distributed Cache in Azure

In Azure, you have multiple distributed cache options, including Microsoft Azure Cache, NCache for Azure and Memcached. Each cache provides a different set of options. These are the most common deployment options for a single region:

  1. In-Role Cache (co-located or dedicated)
  2. Cache Service
  3. Cache VMs (dedicated)
  4. Cache VMs across WAN (multi-regions)

In-Role Cache You can deploy an in-role cache on a co-located or dedicated role in Azure. Co-located means your application is also running on that VM; dedicated means the VM runs only the cache. Although a good distributed cache provides elasticity and high availability, there’s overhead associated with adding or removing cache servers from the cache cluster. Your preference should be a stable cache cluster: add or remove cache servers only when you want to scale or reduce your cache capacity, or when a cache server goes down.

The in-role cache is more volatile than other deployment options because Azure can easily start and stop roles. In a co-located role, the cache is also sharing CPU and memory resources with your applications. For one or two instances, it’s OK to use this deployment option. It’s not suitable for larger deployments, though, because of the negative performance impact.

You can also consider using a dedicated in-role cache. Bear in mind this cache is deployed as part of your cloud service and is only visible within that service. You can’t share this cache across multiple apps. Also, the cache runs only as long as your service is running. So, if you need to have the cache running even when you stop your application, don’t use this option.

Microsoft Azure Cache and NCache for Azure both offer the in-role deployment option. You can make Memcached run this configuration with some tweaking, but you lose data if a role is recycled because Memcached doesn’t replicate data.

Cache Service In a Cache Service, the distributed cache is deployed independent of your service, and offers you a cache-level view. You allocate a certain amount of memory and CPU capacity and create your cache. The benefit of a Cache Service is its simplicity. You don’t install and configure any of the distributed cache software yourself. As a result, your cache management effort is reduced because Azure is managing the distributed cache cluster for you. Another benefit is that you can share this cache across multiple applications.

The drawback of a Cache Service is your limited access. You can’t control or manipulate the cache VMs within the cluster like you would in an on-premises distributed cache. Also, you can’t deploy any server-side code such as Read-through, Write-through, cache loader and so on. You can’t control the number of cache VMs in the cluster because it’s handled by Azure. You can only choose among Basic, Standard and Premium deployment options. Microsoft Azure Cache provides a Cache Service deployment option, whereas NCache for Azure does not.

Cache VMs Another option is to deploy your distributed cache yourself on VMs in Azure. This gives you total control over your distributed cache. When you deploy your application on Web roles, worker roles or dedicated VMs, you can also deploy the client portion of the distributed cache (the libraries). You can also install the cache client through Windows Installer when you create your role. This gives you more installation options, such as an OutProc Client Cache (Near Cache).

Then you can allocate separate dedicated VMs as your cache servers, install your distributed cache software and build your cache cluster based on your needs. These dedicated cache VMs are stable and keep running even when your application stops. Your cache client talks to the cache cluster through a TCP protocol. For a Cache VM, there are two deployment scenarios you can use:

  1. Deploy within your virtual network: Your application roles/VMs and the cache VMs are all within the same virtual network. There are no endpoints between your application and the distributed cache. As a result, cache access is fast. The cache is also totally hidden from the outside world and, therefore, more secure.
  2. Deploy in separate virtual networks: Your application roles/VMs and the cache VMs are in different virtual networks. The distributed cache is in its own virtual network and exposed through endpoints. As a result, you can share the cache across multiple applications and multiple regions, and you still have far more control over your Cache VMs than you would with a Cache Service.

In both Cache VM deployment options, you have full access to all cache servers. This lets you deploy server-side code such as read-through, write-through and cache loader, just like you would in an on-premises deployment. You also have more monitoring information, because you have full access to the cache VM. Microsoft Azure Cache doesn’t provide the Cache VM option, whereas it’s available in NCache for Azure and Memcached.

Microsoft plans to have its managed cache in general availability by July, which will replace the existing shared cache service. It will most likely not have an Azure portal presence and will require a Windows PowerShell command line to create and manage.

You can install NCache for Azure on dedicated VMs and access it from your Web and Worker Roles. You can also install NCache for Azure on Web and Worker Roles, but that’s not a recommended strategy. NCache for Azure doesn’t come as a cache service. Microsoft Azure also provides Redis Cache Service, which is available through the management portal.

Cache VMs Across WAN If you have a cloud service that’s deployed in multiple regions, consider deploying your Cache VMs across the WAN. The Cache VM deployment in each region is the same as in the previous option. However, this scenario brings two additional requirements:

  1. Multi-site ASP.NET sessions: You can transfer an ASP.NET session from one site to another if a user is rerouted. This is a frequent occurrence for applications deployed in multiple active sites. They may reroute traffic to handle overflows or because they’re bringing one site down.
  2. WAN replication of cache: You can replicate all cache updates from one site to another. This can be an active-passive or active-active multi-site deployment. In active-passive, updates are replicated in one direction. In active-active, they’re bidirectional. The cache is kept synchronized across multiple sites through WAN replication, but keep in mind it consumes bandwidth.

Important Caching Features

When you use an in-memory distributed cache, it will handle a lot of data. A basic in-memory distributed cache only provides a hashtable (key, value) interface. An enterprise-level distributed cache may provide the following features as well:

Search features: Instead of always finding data based on a key, it’s a lot easier to search the cache based on some other logical data. For example, many distributed caches let you use groups and tags to logically group cached items. Many also let you search the cache through LINQ, which is popular for object searching in the .NET Framework. Some also provide their own Object Query Language (OQL), a SQL-like querying language with which you can search the cache. The ability to search a distributed cache based on attributes other than keys makes it look and feel like a relational database. Figure 5 shows how you might execute such a search.

Figure 5 Configure a LINQ Search of a Distributed Cache

public IList<Customer> GetCustomers(string city)
{
  // Use LINQ to search the distributed cache
  IQueryable<Customer> customers =
    new DistCacheQuery<Customer>(cache);
  var result = from customer in customers
               where customer.City == city
               select customer;
  // Copy the matching customers into a list and return it
  IList<Customer> custList = new List<Customer>();
  foreach (Customer cust in result) {
    custList.Add(cust);
  }
  return custList;
}

Read-through and write-through/write-behind: Read-through and write-through handlers simplify your application code because they move a large chunk of your persistence code into the cache cluster. Your application simply treats the cache as its data store. This is another way of reusing code across multiple applications.

The cache calls read-through whenever your application tries to fetch something that isn’t in the cache (this is called a “miss”). When a miss happens, the cache calls the read-through handler and it fetches the data from your database (see Figure 6). The distributed cache puts it in the cache and returns it to your application.

Figure 6 Read-Through Handler for a Distributed Cache

public class SqlReadThruProvider : IReadThruProvider
{
  // Called upon startup to initialize the connection
  public void Start(IDictionary parameters) { ... }
  // Called at the end to close the connection
  public void Stop() { ... }
  // Responsible for loading an object from the external data source
  public object Load(string key, ref CacheDependency dep)
  {
    string sql = "SELECT * FROM Customers WHERE CustomerID = @ID";
    SqlCommand cmd = new SqlCommand(sql, _connection);
    cmd.Parameters.Add("@ID", System.Data.SqlDbType.VarChar);
    // Extract the actual customer ID from "key" ("Customers:CustomerID:1000")
    int keyFormatLen = "Customers:CustomerID:".Length;
    string custId = key.Substring(keyFormatLen);
    cmd.Parameters["@ID"].Value = custId;
    // Fetch the row from the table
    SqlDataReader reader = cmd.ExecuteReader();
    // Copy data from "reader" into the "cust" object
    Customers cust = new Customers();
    FillCustomers(reader, cust);
    // Specify a SqlCacheDependency for this object
    dep = new SqlCacheDependency(cmd);
    return cust;
  }
}

The cache also calls your write-through handler whenever you update the cache and want it to automatically update the database. Your write-through handler runs on one or more cache servers in the cluster and talks to your database. If the write-through is called asynchronously after a delay and not as part of the cache update transaction, however, this operation is called write-behind. Write-behind improves application performance because you don’t have to wait for the database update to be completed.
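
For a concrete picture, here’s a write-through handler sketched along the same lines as the read-through handler in Figure 6. The IWriteThruProvider interface and its Save method are assumptions modeled on that figure, not a standard API:

// Hypothetical write-through handler, mirroring Figure 6
public class SqlWriteThruProvider : IWriteThruProvider
{
  // Called upon startup to initialize the connection
  public void Start(IDictionary parameters) { ... }
  // Called at the end to close the connection
  public void Stop() { ... }
  // Called by the cache, synchronously for write-through or after
  // a delay for write-behind, to persist an updated object
  public void Save(string key, object value)
  {
    Customers cust = (Customers)value;
    string sql = "UPDATE Customers SET City = @City " +
      "WHERE CustomerID = @ID";
    SqlCommand cmd = new SqlCommand(sql, _connection);
    cmd.Parameters.AddWithValue("@City", cust.City);
    cmd.Parameters.AddWithValue("@ID", cust.CustomerId);
    cmd.ExecuteNonQuery();
  }
}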

Synchronize cache with relational database: Most data within a distributed cache comes from your application database. That means there are now two copies of the data, the master copy in the database and one in the distributed cache. If you have applications directly updating data in the database, but don’t have access to your in-memory distributed cache, you end up with stale data in the cache.

Some distributed caches provide database synchronization to ensure the cache never has stale data. This synchronization is either event-driven (using database notifications such as SqlDependency) or polling-based. Event-driven is closer to real time, but has more overhead because it creates a separate SqlDependency in the database server for each cached item. Polling is more efficient because one database call can synchronize thousands of items. But there’s usually a delay in synchronization, in order to avoid flooding the database with unnecessary polling calls. So your choice is between closer to real-time database synchronization or more efficient polling-based synchronization with a slight delay.

Handling relational data in a distributed cache: An in-memory distributed cache is a key-value object store, but most data cached within comes from a relational database. This data has one-to-many, one-to-one and many-to-many relationships. Relational databases provide referential integrity constraints and cascaded updates and deletes to enforce these relationships. The cache needs something similar as well.

CacheDependency lets you have one cached item depend on another. If the other cached item is updated or deleted, the original cached item is automatically deleted. This operates like a cascading delete. It’s useful for handling one-to-one and one-to-many relationships between objects in the cache. Some in-memory distributed caches have implemented this feature as well. Figure 7 shows how you would configure CacheDependency.

Figure 7 Use CacheDependency to Manage Relationships in the Cache

public void CacheCustomerAndOrder(Customer cust, Order order)
{
  Cache cache = HttpRuntime.Cache;
  // Customer has one-to-many with Order. Cache the customer first,
  // then cache the Order with a CacheDependency on the customer
  string custKey = "Customer:CustomerId:" + cust.CustomerId;
  cache.Add(custKey, cust, null,
    Cache.NoAbsoluteExpiration,
    Cache.NoSlidingExpiration,
    CacheItemPriority.Default, null);
  // CacheDependency ensures the order is removed if the
  // customer is updated or removed
  string[] keys = new string[1];
  keys[0] = custKey;
  CacheDependency dep = new CacheDependency(null, keys);
  string orderKey = "Order:CustomerId:" + order.CustomerId
    + ":ProductId:" + order.ProductId;
  // This caches the Order object with a CacheDependency on the customer
  cache.Add(orderKey, order, dep,
    Cache.NoAbsoluteExpiration,
    Cache.NoSlidingExpiration,
    CacheItemPriority.Default, null);
}

Wrapping Up

Microsoft Azure is a powerful cloud platform and a scalable environment. An in-memory distributed cache can be an important component of this environment. If you’ve written your application in the .NET Framework, then you should consider using a .NET distributed cache. If it’s in Java, you have various Java-based distributed caching solutions. If you have a combination of .NET and Java applications, use a distributed cache that supports both and provides data portability. Most caches are totally focused on .NET or Java, although some do support both.

For PHP, Ruby, Python and other applications, you can use Memcached, which supports all these environments. Memcached isn’t an elastic cache, however, and therefore has high-availability and data-reliability limitations. Either way, keep in mind that a distributed cache is a sensitive part of your production environment. You must thoroughly evaluate all the caching solutions available for your environment and select the one that best meets your needs.


Iqbal Khan is the technology evangelist for Alachisoft, which provides NCache distributed cache for .NET and Microsoft Azure. You can reach him at iqbal@alachisoft.com.

Jeremiah Talkar is a Microsoft Azure tech evangelist within the Microsoft Developer Platform Evangelism Corporate Team with 26 years of experience in product engineering, consulting and sales. Reach him at jtalkar@microsoft.com.

Thanks to the following Microsoft technical experts for reviewing this article: Scott Hunter and Trent Swanson