
About In-Role Cache for Azure Cache

Updated: February 28, 2014

In-Role Cache for Microsoft Azure Cache supports hosting Cache services on Microsoft Azure roles. In this model, the cache is part of your cloud service. One role within the cloud service is selected to host In-Role Cache, and the running instances of that role combine their memory resources to form a cache cluster. This private cache cluster is available only to the roles within the same deployment. There are two main deployment topologies for In-Role Cache: co-located and dedicated. Co-located roles host other non-caching application code and services in addition to Cache. Dedicated roles are worker roles that are used only for Cache. The following topics discuss these Cache topologies in more detail.

For a step-by-step walkthrough of role-based In-Role Cache, see How to Use Azure Caching. For downloadable samples, see In-Role Cache Samples (Azure Cache).

Benefits of Role-based In-Role Cache

There are several benefits to hosting In-Role Cache within a Microsoft Azure role. The following list provides an overview of these benefits along with a more detailed explanation of each.

No quotas or throttling

Your application is the only consumer of the cache. There are no predefined quotas or throttling. Physical capacity (memory and other physical resources) is the only limiting factor.

Isolation, flexibility, and control

Co-located and dedicated topologies maximize your resources. You have as much control over In-Role Cache as you do over your own application.

Lower cost

There is no premium for cache. Pay only for the web/worker roles on which In-Role Cache runs. In a co-located scenario, you have already paid for the role.


Scalability

Scale In-Role Cache in the same way that you scale your application: change the virtual machine size or the number of running instances of the role. It is also possible to create large caches (greater than 100 GB).

Visual Studio integration

In-Role Cache integrates with Visual Studio to make it easy to add Cache to your application. There is also full fidelity with the compute emulator for debugging your application before deployment.

Memcache support

Memcache binary and text protocols are now supported for easy migration of memcache-based applications to Microsoft Azure.


More Cache features

The ability to host In-Role Cache within a cloud service deployment increases the number of supported Cache features. These features include named caches, regions, tagging, high availability, local cache with notifications, and greater API symmetry with Microsoft AppFabric 1.1 for Windows Server.


Performance

Microsoft Azure already locates the role instances of a cloud service in close proximity to one another to improve performance. Hosting In-Role Cache on a role in your cloud service takes advantage of this proximity to deliver low latency and high throughput.

In-Role Cache Concepts

This section provides an overview of three key concepts related to role-based In-Role Cache.

  1. Cache Cluster

  2. Named Caches

  3. Cache Clients

Cache Cluster

Microsoft Azure roles have one or more instances. Each instance is a virtual machine that is configured to host the specified role. When a role that has In-Role Cache enabled runs on multiple instances, a cache cluster is formed. A cache cluster is a distributed caching service that uses the combined memory of all of the machines in the cluster. Applications can add and retrieve items from the cache cluster without having to know which machine the items are stored on. If high availability is enabled, a backup copy of each item is automatically stored on a different virtual machine instance.
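As a sketch of this transparency, the following snippet stores and retrieves an item without referencing any particular cache instance. It assumes the Microsoft.ApplicationServer.Caching client assemblies are referenced and a dataCacheClient configuration section exists; the key and value names are illustrative only.

```csharp
using Microsoft.ApplicationServer.Caching;

// Create a factory from the dataCacheClient configuration section,
// then obtain the default cache of the cluster.
DataCacheFactory factory = new DataCacheFactory();
DataCache cache = factory.GetDefaultCache();

// The cluster decides which instance stores the item; the client
// does not need to know where the item physically lives.
cache.Put("orderId", 42);
object value = cache.Get("orderId");
```

If high availability is enabled for the cache, the backup copy on a second instance is likewise managed by the cluster without any change to this client code.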

Only one cache cluster is supported for each cloud service. Although it is technically possible to set up multiple cache clusters in a cloud service by specifying a separate storage account for each role, this configuration is not supported.

When you enable In-Role Cache on a Microsoft Azure role, you specify the amount of memory that can be used for caching. In a co-located scenario, you choose a percentage of the available memory on the virtual machines that host the role. In a dedicated scenario, all of the available memory on the virtual machines is used for caching. However, the available memory is always less than the total physical memory on the virtual machine, because of operating system memory requirements.

Therefore, the total amount of Cache memory is the memory reserved for Cache on each instance multiplied by the number of running instances of the role. For example, a co-located role that reserves 1 GB of memory per instance and runs four instances provides approximately 4 GB of cache memory. You can scale the total Cache memory up or down by increasing or decreasing the number of running instances of that role.

When scaling down running instances of the role that hosts In-Role Cache, reduce the instance count by no more than three at a time. After that change completes, you can remove up to three additional running instances; repeat until you reach the required number of running instances. Scaling back simultaneously by more than three instances causes cache cluster instability.

Each cache cluster maintains shared information about the cluster's runtime state in Microsoft Azure storage. During development, you can use the Microsoft Azure storage emulator. Deployed roles must specify a valid Microsoft Azure storage account. In Visual Studio, you can specify the appropriate storage account on the Caching tab of the role properties.
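As an illustration, the storage account is typically recorded as a setting in the service configuration file. The fragment below is a sketch that assumes the standard Caching plugin setting name and uses the storage emulator value for development; the role name is hypothetical.

```xml
<!-- ServiceConfiguration.cscfg (fragment, illustrative) -->
<Role name="CachingRole1">
  <ConfigurationSettings>
    <!-- During development the storage emulator can be used. -->
    <!-- Deployed roles must specify a real storage account connection string. -->
    <Setting name="Microsoft.WindowsAzure.Plugins.Caching.ConfigStoreConnectionString"
             value="UseDevelopmentStorage=true" />
  </ConfigurationSettings>
</Role>
```

Before deployment, replace the emulator value with the connection string of a valid Microsoft Azure storage account.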

Named Caches

Every cache cluster has at least one cache, named default. With role-based In-Role Cache, you can also create additional named caches, each with its own settings. The following screenshot shows the Named Cache Settings section of the Caching tab in the Visual Studio role settings.

Caching Properties for Named Caches

In Visual Studio, click the Add Named Cache button to add additional named caches. In the previous example, two additional caches were added, NamedCache1 and NamedCache2. Each cache has different settings. Change the settings by selecting and modifying the specific fields in the table.

Named caches provide flexibility to application designers, because each named cache has its own properties. For example, one cache could enable high availability to protect critical data. Other caches might not require this setting, and because high availability doubles the memory required for each cached item, it is a better use of resources to enable it only on the caches that require it. There are other similar scenarios where multiple caches with varying properties could be used to meet application requirements.

Cache Clients

A cache client is any application code that stores and retrieves items from the cache cluster. With In-Role Cache on roles, cache clients must be part of the same Cache role or incorporated into other roles in the deployment. Configure cache clients by using the application or web configuration files. For more information, see How to: Prepare Visual Studio to Use In-Role Cache (Azure Cache). The following example shows the dataCacheClient element in a configuration file.

  <dataCacheClient name="default">
    <autoDiscover isEnabled="true" identifier="CachingRole1" />
    <!--<localCache isEnabled="true" sync="TimeoutBased" objectCount="100000" ttlValue="300" />-->
  </dataCacheClient>

In the previous example, the autoDiscover element has an identifier attribute set to CachingRole1. This identifier names the role that has In-Role Cache enabled and provides the location of the cache cluster. The cache client then uses CachingRole1 automatically in any Cache operations.

Once the cache client has been configured, it can access any cache by name. The following example accesses the NamedCache1 cache and adds an item to it.

DataCache cache = new DataCache("NamedCache1", "default");
cache.Put("testkey", "testobject");

The DataCache constructor takes two parameters: the cache name and the dataCacheClient section name. For information about the cache name, see the previous section on Named Caches.
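To continue the example, retrieving the item uses the same named cache. The following sketch reuses the names from the snippet above; the cast is needed because Get returns Object.

```csharp
// Retrieve the item added above from NamedCache1.
// Get returns null if the key is not present in the cache.
DataCache cache = new DataCache("NamedCache1", "default");
string value = (string)cache.Get("testkey");
```

Because the item may have been evicted or expired, production code should check the result for null before using it.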


© 2014 Microsoft