
About the A8, A9, A10, and A11 Compute Intensive Instances

Updated: March 5, 2015

This topic provides background information and considerations for using the Azure A8, A9, A10, and A11 compute intensive instances. Key features of these instances include:

  • Hardware designed and optimized for compute and network intensive applications including high performance computing (HPC) cluster applications, modeling, and simulations.

  • Support for preconfigured or custom Windows Server and Linux operating system images in Azure VMs (IaaS), or standard Azure Guest OS releases in cloud services (PaaS).

  • For A8 and A9 instances, the ability to communicate over a low latency, high throughput network in Azure that is based on remote direct memory access (RDMA) technology, boosting performance for parallel Message Passing Interface (MPI) applications. (RDMA access is currently only supported for cloud services and Windows Server-based VMs.)

  • A10 and A11 instances, which are designed for HPC applications that do not require constant and low-latency communication between nodes, also known as parametric or embarrassingly parallel applications. The A10 and A11 instances have the same performance optimizations and specifications as the A8 and A9 instances. However, they do not include access to the RDMA network in Azure.


The Azure A8, A9, A10, and A11 compute intensive instances feature high speed, multicore CPUs and large amounts of memory, as shown in the following table.


Size         CPU                                         Memory
A8 and A10   Intel® Xeon® E5-2670, 8 cores @ 2.6 GHz     56 GB DDR3-1600 MHz
A9 and A11   Intel® Xeon® E5-2670, 16 cores @ 2.6 GHz    112 GB DDR3-1600 MHz

Additional processor details, including supported instruction set extensions, are available on the processor manufacturer's website.

Network adapters

The A8 and A9 instances have two network adapters, which connect to the following two backend Azure networks.


Network                          Description
10 Gbps Ethernet                 Connects to Azure services (such as storage and virtual network) and to the Internet
32 Gbps backend, RDMA capable    Enables low-latency, high-throughput application communication between instances within a single cloud service

Access to the RDMA network is only enabled through applications that use the Microsoft Network Direct interface. For more information, see Access the RDMA network in this topic.

The A10 and A11 instances have a single, 10 Gbps Ethernet network adapter for connecting to Azure services and the Internet.

For additional configuration information, see Virtual Machine and Cloud Service Sizes for Azure.

Deployment considerations

  • Availability   The compute intensive instances are initially offered in a subset of Azure regions and will be introduced in additional regions over time.

  • Pricing   Compute intensive instances are priced differently than the standard and memory intensive instances. We recommend that you plan to monitor your use of the compute intensive instances and stop (deallocate) the instances when they are not running applications.

For details about availability and pricing, see Cloud Services Pricing Details and Virtual Machines Pricing Details.

  • Azure account   If you don't have an account or access to a subscription, you can create a free trial account in just a couple of minutes. For details, see Azure Free Trial. Because a trial account will limit your use of Azure resources such as compute cores, if you want to deploy more than a small number of compute intensive instances, consider a pay-as-you-go subscription or other purchase options. You can also use your MSDN subscription. See Azure benefit for MSDN subscribers.

  • Cores quota   To use multicore instances such as A8, A9, A10, or A11, you might need to increase the cores quota in your Azure subscription. For example, certain subscriptions provide a default quota of 20 cores, which is not enough for many scenarios with 8-core or 16-core instances. For initial tests of the compute intensive instances, a quota of up to 100 cores might be needed. If you need to increase the quota, see Request a cores quota increase in this topic.

  • Affinity group   An Azure affinity group can help optimize performance by grouping services or virtual machines in the same Azure data center. To group the compute intensive instances, we recommend that you create a new affinity group in a region in which the compute intensive instances are available. As a best practice, only use the affinity group for the compute intensive instances, not instances of other sizes.

  • Virtual network   An Azure virtual network is not required to use the compute intensive instances. However, many IaaS scenarios require at least a cloud-based Azure virtual network, and a site-to-site connection to Azure is needed to access on-premises resources. If you need a virtual network, create a new (regional) virtual network before deploying the instances; adding an A8, A9, A10, or A11 VM to a virtual network that is in an affinity group is not supported. For more information, see Configure a Cloud-Only Virtual Network in the Management Portal and Configure a Site-to-Site VPN in the Management Portal.

HPC Pack, Microsoft’s free HPC cluster and job management solution, is not required to use the A8, A9, A10, and A11 instances, but it is a recommended tool for creating clusters of compute resources and, in the case of A8 and A9, the most efficient way to run MPI applications that access the RDMA network in Azure. HPC Pack also includes a runtime environment for the Microsoft implementation of the Message Passing Interface (MS-MPI) for Windows.

For more information and checklists to use the compute intensive instances with HPC Pack, see A8 and A9 Compute Intensive Instances: Quick Start with HPC Pack.

Access the RDMA network

Within a single cloud service, the A8 and A9 instances can access the RDMA network in Azure when running MPI applications that use the Microsoft Network Direct interface to communicate between instances. At this time Network Direct is only supported by Microsoft’s MS-MPI implementation for Windows.

Like other Azure instances, A8 and A9 instances can run workloads other than MPI applications, by using their available CPU cores, memory, and disk space. However, in these cases, the instances do not connect to the RDMA network.

Following are system prerequisites for MPI applications to access the RDMA network in cloud service (PaaS) or virtual machine (IaaS) deployments of the A8 or A9 instances. See A8 and A9 Compute Intensive Instances: Quick Start with HPC Pack for typical deployment scenarios.


Prerequisite: Operating system
  Cloud services (PaaS): Windows Server 2012 or Windows Server 2008 R2 Guest OS family
  Virtual machines (IaaS): Windows Server 2012 R2 or Windows Server 2012 VMs. The HpcVmDrivers extension must be added to the VMs to install the drivers needed for RDMA connectivity.

Prerequisite: MPI
  Cloud services (PaaS): MS-MPI 2012 R2 or later, installed via HPC Pack 2012 R2 or later
  Virtual machines (IaaS): MS-MPI 2012 R2 or later, either standalone or installed via HPC Pack 2012 R2 or later

Additional considerations

  • The RDMA network in Azure reserves an address space. If you run MPI applications on compute intensive instances deployed in an Azure virtual network, make sure that the virtual network address space does not overlap the RDMA network's address space.

  • For PaaS deployments, RDMA connectivity is not currently supported in Guest OS versions in the Windows Server 2012 R2 family.

  • For IaaS deployments, RDMA connectivity is not currently supported in Linux VMs.

  • You cannot resize an existing Azure VM to the A8, A9, A10, or A11 size.

  • A8, A9, A10, and A11 instances cannot currently be deployed by using a cloud service that is part of an existing affinity group. Likewise, an affinity group with a cloud service containing A8, A9, A10, and A11 instances cannot be used for deployments of other instance sizes. If you attempt these deployments, you will see an error message similar to the following: "Azure deployment failure (Compute.OverconstrainedAllocationRequest): The VM size (or combination of VM sizes) required by this deployment cannot be provisioned due to deployment request constraints."
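When planning a virtual network for MPI workloads, you can verify up front that your chosen address space does not overlap the range reserved for the RDMA network. The following minimal Python sketch uses the standard library's ipaddress module for the check; the "172.16.0.0/16" value is a placeholder for illustration only, since this topic does not state the reserved range (confirm the actual reserved address space before planning your subnets).

```python
import ipaddress

def overlaps(vnet_cidr: str, reserved_cidr: str) -> bool:
    """Return True if the two IPv4 address spaces overlap."""
    vnet = ipaddress.ip_network(vnet_cidr)
    reserved = ipaddress.ip_network(reserved_cidr)
    return vnet.overlaps(reserved)

# "172.16.0.0/16" is a hypothetical reserved range, not the documented one.
print(overlaps("10.0.0.0/16", "172.16.4.0/24"))    # distinct ranges: no overlap
print(overlaps("172.16.4.0/24", "172.16.0.0/16"))  # /24 inside the /16: overlap
```

If the check reports an overlap, pick a different address space for the virtual network before deploying the instances.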

Request a cores quota increase

Use the following procedure if you need to increase the quota of cores in your Azure subscription. If you are considering a large scale deployment, discuss your needs with Microsoft Support or your sales representative or account manager.

  • There is no charge to increase the quota. You only incur charges when you use Azure services.

  • When requesting a quota increase, consider the number of compute intensive instances you plan to deploy and then multiply by the number of cores per instance.

  • A subscription quota is a credit limit, not a capacity guarantee.

  • Generally, Microsoft Support will increase the quota within one business day.
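The sizing guidance above (instances planned multiplied by cores per instance, compared against the current quota) can be sketched as a short Python calculation. The helper names here are illustrative, not part of any Azure tooling; the cores-per-size values come from the table earlier in this topic.

```python
# Cores per instance for the compute intensive sizes (from the table above).
CORES_PER_INSTANCE = {"A8": 8, "A9": 16, "A10": 8, "A11": 16}

def cores_needed(plan: dict) -> int:
    """Total cores required for a planned deployment, e.g. {"A9": 4}."""
    return sum(CORES_PER_INSTANCE[size] * count for size, count in plan.items())

def quota_shortfall(plan: dict, current_quota: int) -> int:
    """Additional cores to request beyond the current quota (0 if none needed)."""
    return max(0, cores_needed(plan) - current_quota)

# Example: four A9 instances (16 cores each) against a default 20-core quota.
plan = {"A9": 4}
print(cores_needed(plan))         # 64
print(quota_shortfall(plan, 20))  # 44 additional cores to request
```

In this example, a subscription with the default 20-core quota would need to request 44 additional cores before deploying four A9 instances.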

  1. To see your current usage of cores and your cores quota, in the Azure Management Portal, click Settings, and then click Usage.

  2. To contact Microsoft Support to request an increase in the cores quota, in the Management Portal, click the name of your account, and then click Contact Microsoft Support.

  3. Under Create Support Ticket, in Support Type, select Billing. Verify or modify the remaining settings, and then click Create Ticket.

  4. On the Contact us page, do the following:

    1. In Problem type, select Quota or Core Increase Requests.

    2. In Category, select Compute.

    3. Click Continue.

  5. Verify or modify your contact information, and then click Continue.

  6. On the Problem Details page, provide the requested information, including the number of cores in the current quota and the number of additional cores you would like to use. Then click Submit.
