This documentation is archived and is no longer maintained.

Memcached Wrapper for Azure In-Role Cache

Updated: August 25, 2015

Microsoft recommends that all new development use Azure Redis Cache. For current documentation and guidance on choosing an Azure Cache offering, see Which Azure Cache offering is right for me?

Memcache is a distributed, in-memory caching solution used to help speed up large-scale web applications by taking pressure off the database. Memcache is used by many of the internet’s biggest websites and has been combined with other technologies in innovative ways.

Azure supports the Memcache protocol so that customers with existing Memcache implementations can easily migrate to Azure. If an application already uses Memcache, there is no need to replace that code with new code.

Running Azure Caching with Memcache is a better option than, for example, running just Memcache itself in a worker role. This is because Azure Caching offers value-added features such as graceful shutdown, high availability (HA), local caching (in the client shim), notifications (in the client shim), data consistency, and easy scale-up and scale-down that is transparent to clients, to name a few. For example, the server hashing scheme and partition management in Azure Caching with Memcache help with load balancing and preserve data consistency.

Azure Caching supports the Memcache wire protocol. There are two versions of the protocol: a binary version and a text version.

Azure Caching supports this protocol in addition to its own wire protocol. A Memcache client should expect compatibility with Azure. Azure Caching supports almost every API that other Memcache implementations support.
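The text version of the protocol is simple enough to illustrate directly. The sketch below (Python; the key and value are illustrative, not from this documentation) frames the standard `set` and `get` commands a Memcache client sends over the wire, and parses a `get` response. Any server that speaks the standard text protocol would accept this framing.

```python
def encode_set(key: str, value: bytes, flags: int = 0, exptime: int = 0) -> bytes:
    """Frame a Memcache text-protocol 'set' command: header line, then payload."""
    header = f"set {key} {flags} {exptime} {len(value)}\r\n".encode("ascii")
    return header + value + b"\r\n"

def encode_get(key: str) -> bytes:
    """Frame a 'get' command for a single key."""
    return f"get {key}\r\n".encode("ascii")

def parse_get_response(resp: bytes):
    """Parse a single-key 'get' response; returns the value, or None on a miss."""
    if resp == b"END\r\n":  # a bare END line means cache miss
        return None
    header, rest = resp.split(b"\r\n", 1)
    _verb, _key, _flags, nbytes = header.split(b" ")
    return rest[: int(nbytes)]

# Example framing (key/value are hypothetical):
wire = encode_set("user:42", b"alice")   # b"set user:42 0 0 5\r\nalice\r\n"
value = parse_get_response(b"VALUE user:42 0 5\r\nalice\r\nEND\r\n")  # b"alice"
```

Because the shim and the server gateway both speak this same wire format, a client built on framing like this works against either one.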

As such, if a user brings a Memcache application to Azure and points it at Azure’s implementation of Memcache, it should continue to work as-is, with no additional application modifications.

Memcache supports two distinct developer experiences: the use of a “server gateway,” and the use of a “client shim.”

From the perspectives of implementation, deployment, and conceptual understanding, a server gateway is simple, but it also comes with important caveats discussed further below.

When using the server gateway, the server cache cluster listens on a Memcache socket. In other words, it opens a socket and listens for packets in the Memcache protocol. There is no translation layer (discussed below).

To turn on this feature, in your cache cluster, open an additional internal endpoint, give it a name, and make it your Memcache port. All traffic bound to that port will be received over the Memcache protocol.

However, compared to the client shim, the server gateway degrades performance in highly performance-sensitive scenarios. This is because standard Memcache implementations hash differently from Azure Caching: they defer the hashing scheme to the cache client, whereas in Azure the cache server generates the hash. Letting the server specify the hash behavior allows it to load balance, grow and shrink the cluster, and ensure no data loss.

When Azure caches an item, a hash is generated based on the item’s key. Azure uses the hash to determine which server in the cache cluster will hold the cached item. As a result, the Azure server gateway must re-hash the key and route the item to the destination server in the cache cluster. This operation involves an extra network hop, which degrades performance.
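The difference between the two hashing models can be sketched as follows. A standard Memcache client hashes the key locally and sends the request straight to the owning server; with the server gateway, every request goes to the gateway, which re-hashes and forwards. The Python below is a simplified illustration only: the server list and the MD5-modulo scheme are assumptions, not Azure Caching's actual partitioning.

```python
import hashlib

# Hypothetical cache cluster; real Memcache clients are configured with such a list.
SERVERS = ["cache-0:11211", "cache-1:11211", "cache-2:11211"]

def pick_server(key: str, servers: list[str] = SERVERS) -> str:
    """Standard Memcache client behavior: hash the key locally and send the
    request directly to the owning server -- one network hop, no gateway."""
    digest = hashlib.md5(key.encode("utf-8")).digest()
    return servers[int.from_bytes(digest[:4], "big") % len(servers)]

def gateway_route(key: str, servers: list[str] = SERVERS) -> str:
    """Server-gateway behavior: the client sends every request to the gateway,
    which re-hashes the key and forwards it -- an extra hop, but the hashing
    scheme stays under the server's control."""
    return pick_server(key, servers)  # same placement decision, made server-side
```

Either way the data lands on the same server; the cost of the gateway model is purely the extra forwarding hop, and its benefit is that the server keeps control of placement.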

The Memcache client shim is installed on the client that accesses the cache, generally the Azure role that hosts the application itself. The client shim supports a local cache.

The shim is a translation layer: it translates Memcache client calls to the Azure Caching API. The shim has two parts, a Memcache protocol handler and an Azure Caching client. The shim, as the translation layer, is installed on the client itself, wherever the Get and Put calls to the Azure Caching API are made.

When the Memcache client is pointed to localhost as the Memcache server, Put operations will be initially handled by the local instance of the shim instead of the cache server in Azure. The shim will then determine the correct destination server in the cache cluster and redirect the Put operation to Azure.

This eliminates the extra network hop present in the server gateway scenario. The downside is that the shim must be obtained and installed in the application.
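Conceptually, the shim is a small local component that accepts standard Memcache commands on localhost, extracts the key, and forwards the call to the correct cache server. The Python below is an illustrative sketch of that routing step only, under an assumed hash scheme; the real shim translates the call into the Azure Caching API rather than re-speaking the Memcache protocol.

```python
import hashlib

def owning_server(key: str, servers: list[str]) -> str:
    """Pick the cluster server responsible for a key (illustrative hash scheme)."""
    digest = hashlib.md5(key.encode("utf-8")).digest()
    return servers[int.from_bytes(digest[:4], "big") % len(servers)]

def shim_route(command: bytes, servers: list[str]) -> tuple[str, bytes]:
    """Parse the key out of a Memcache text command received on localhost and
    return (destination server, command to forward). A real shim would instead
    translate the call into an Azure Caching API request at this point."""
    parts = command.split(b" ")
    verb = parts[0]
    if verb not in (b"get", b"set", b"delete"):
        raise ValueError(f"unsupported command: {verb!r}")
    key = parts[1].split(b"\r\n")[0].decode("ascii")
    return owning_server(key, servers), command
```

Because the placement decision happens inside the application's own role, the request leaves the machine exactly once, already bound for the right server.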

There are two topologies of caching: co-located caching and a dedicated cache role.

If the cache cluster is deployed in a dedicated cache worker role, use the Memcache shim from the cache client. This gives better performance and avoids auto-discovery code.

When using co-located caching, where the cache client is hosted in the same role, use the Memcache server gateway. The client shim involves an additional layer of processing and redirection, which is unnecessary when accessing the cache from within the same role; the additional redirection only adds overhead.

There is no separate programming model for the server gateway or the client shim; only configuration changes are needed. The client shim also requires an installation step.

The choice between a server gateway and a client shim is much more a deployment decision than a programming-model one. As a programmer, one still calls the same Get or Put APIs; only the application is wired a little differently. Instead of pointing at the original caching server, it now points at either the server gateway or the client shim.

Finally, both the server gateway and the client shim are agnostic of the Memcache client library being used, because standard Memcache implementations share the same protocol. The cache server cares only about packets of data that adhere to the standard Memcache protocol, not about the Memcache client implementation itself.

Configure the server gateway

  1. On the role that will host the cache server, open the role properties and go to the Caching tab.

  2. Select the checkbox “Enable Caching”. This adds input endpoints to the csdef file, the ImportModule element, and other csdef/cscfg settings. Next, manually add an input endpoint named "memcache_default" on the Endpoints tab.

  3. Now the client must be configured to point to this cluster. If using the server gateway with co-located caching, or the client shim with dedicated caching, simply point the app to “localhost_thisrolename”; no auto-discovery is necessary.
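The endpoint added in step 2 corresponds to a fragment of the role's ServiceDefinition.csdef along the following lines. This is a hedged sketch only: the role name is hypothetical, 11211 is simply the conventional Memcache port, and the exact element names and attributes should be verified against your SDK's csdef schema.

```xml
<!-- Hypothetical fragment of ServiceDefinition.csdef; verify against your SDK schema. -->
<WorkerRole name="CacheWorkerRole">
  <Endpoints>
    <!-- The Memcache gateway endpoint added manually in step 2. -->
    <InputEndpoint name="memcache_default" protocol="tcp" port="11211" />
  </Endpoints>
</WorkerRole>
```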

Install and configure the client shim

  1. On the role that hosts the Memcache client, right-click the role name and select “Add Library Package Reference” to launch the NuGet window.

  2. Search for “Azure Caching Memcache Shim” and install the NuGet package.

  3. The package creates the startup task, adds an internal endpoint named memcache_default mapped to port 11211, and adds the appropriate dataCacheClients sections to App.config and Web.config. The endpoint can be changed on the internal endpoints tab.

  4. Provide the role name in the autoDiscovery element of App.config or Web.config.

  5. The client must now be configured to point to the shim. Edit the Memcache client configuration and set the server to “localhost”. The correct port number(s) must also be set.
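With the shim installed, the client-side configuration change amounts to treating localhost:11211 as the Memcache server. The sketch below (Python, stdlib only; the key and value are illustrative) frames a set command and shows where the connection to the local shim would go. The framing helper can be exercised without a running shim; the `store` call itself requires the shim to be listening locally.

```python
import socket

SHIM_ADDRESS = ("localhost", 11211)  # the shim endpoint configured in the steps above

def encode_set(key: str, value: bytes, flags: int = 0, exptime: int = 0) -> bytes:
    """Frame a Memcache text-protocol 'set' command for the local shim."""
    header = f"set {key} {flags} {exptime} {len(value)}\r\n".encode("ascii")
    return header + value + b"\r\n"

def store(key: str, value: bytes) -> bool:
    """Send a set to the local shim; the shim redirects it to the cache cluster."""
    with socket.create_connection(SHIM_ADDRESS) as conn:
        conn.sendall(encode_set(key, value))
        return conn.recv(64).startswith(b"STORED")

if __name__ == "__main__":
    store("session:abc", b"some-state")  # requires the shim to be running locally
```

From the application's point of view, nothing else changes: the same Memcache calls are made, but they terminate at the shim instead of a remote Memcache server.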