
A8 and A9 Compute Intensive Instances: Quick Start with HPC Pack

Updated: July 23, 2015

This topic provides checklists and background information to help you start deploying the Azure A8 and A9 compute intensive instances with Microsoft® HPC Pack. You can use the A8 and A9 instances in Windows Server-based HPC Pack clusters in the cloud to efficiently run compute intensive applications such as parallel MPI applications.

This topic introduces the following two scenarios for deploying A8 and A9 instances with HPC Pack.

Note
Azure also provides A10 and A11 compute intensive instances, with processing capabilities identical to the A8 and A9 instances, but without a connection to an RDMA backend network. To run MPI workloads with HPC Pack in Azure, you will generally get best performance with the A8 and A9 instances.

From an existing HPC Pack cluster, add extra compute resources in the form of Azure worker role instances (Azure nodes) running in a cloud service (PaaS). This feature, also called “burst to Azure” from HPC Pack, supports a range of sizes for the worker role instances. To use the compute intensive instances, simply specify a size of A8 or A9 when adding the Azure nodes.

The following are steps to burst to A8 or A9 Azure instances from an existing (typically on-premises) cluster. You can use similar procedures to add worker role instances to an HPC Pack head node that is deployed in an Azure VM (see Add Azure Nodes to an HPC Pack Head Node VM).

Burst to A8 or A9 worker role instances

Checklist

 


1. Review background information and considerations about the compute intensive instances

See About the A8, A9, A10, and A11 Compute Intensive Instances.

2. Obtain an Azure account

If you don't have an account or access to a subscription, you can create a free trial account in just a couple of minutes. For details, see Azure Free Trial. You can also use your MSDN subscription. See Azure benefits for MSDN subscribers.

3. Ensure that the quota of cores in the subscription is large enough

The default quota of cores in certain Azure subscriptions is 20, which is not enough to deploy A8 and A9 instances with HPC Pack in this scenario. If you need to, increase the quota of cores by contacting Microsoft Support. See Understanding Azure Limits and Increases.

4. Deploy and configure an HPC Pack 2012 R2 head node, and prepare for an Azure burst deployment

Download the latest HPC Pack installation package from the Microsoft Download Center. For requirements and installation instructions, see the HPC Pack installation documentation.

5. Configure a management certificate in the Azure subscription

Configure a certificate to secure the connection between the head node and Azure. For options and procedures, see Scenarios to Configure the Azure Management Certificate for HPC Pack.

6. Create a new cloud service and a storage account

Use the Azure Management Portal to create a cloud service and a storage account for the deployment in a region where the compute intensive instances are available. (Don’t associate the cloud service and storage account with an existing affinity group used for other deployments.)

7. Create an Azure node template

Use the Create Node Template Wizard in HPC Cluster Manager. For steps, see Create an Azure node template in “Steps to Deploy Azure Nodes with Microsoft HPC Pack”.

On the Specify Worker Role Properties page, in OS family, select Windows Server 2012 or Windows Server 2008 R2 as the Guest OS of the worker role instances.

For initial tests, we suggest configuring a manual availability policy in the template.

8. Add nodes to the cluster

Use the Add Node Wizard in HPC Cluster Manager. For more information, see Add Azure Nodes to the Windows HPC Cluster.

When specifying the size of the nodes, select A8 or A9.

9. Start (provision) the nodes and bring them online to run jobs

Select the nodes and use the Start action in HPC Cluster Manager. When provisioning is complete, select the nodes and use the Bring Online action in HPC Cluster Manager. The nodes are ready to run jobs.

10. Submit jobs to the cluster

Use HPC Pack job submission tools to run cluster jobs. See Microsoft HPC Pack: Job Management.

11. Stop (deprovision) the nodes

When you are done running jobs, take the nodes offline and use the Stop action in HPC Cluster Manager.
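Steps 9 through 11 above can also be scripted from an HPC PowerShell prompt on the head node. The following is a minimal sketch, not a definitive procedure: the node group name, node names, and MPI application are placeholders, and you should verify cmdlet parameters against your HPC Pack version.

```shell
# Run in an elevated HPC PowerShell prompt on the head node.
# "AzureNodes" is an assumed node group name; adjust to your deployment.
$names = (Get-HpcNode -GroupName AzureNodes).NetBiosName

# Step 9: provision (start) the Azure worker role instances, then bring them online.
Start-HpcAzureNode -Name $names -Async $false
Set-HpcNodeState -Name $names -State Online

# Step 10: submit a sample MPI job (MyMpiApp.exe is a placeholder application).
$job = New-HpcJob -NumNodes 2
Add-HpcTask -Job $job -NumNodes 2 -CommandLine "mpiexec MyMpiApp.exe"
Submit-HpcJob -Job $job

# Step 11: when the jobs are done, take the nodes offline and deprovision them.
Set-HpcNodeState -Name $names -State Offline
Stop-HpcAzureNode -Name $names -Async $false
```

Stopping the nodes deprovisions the worker role instances so they no longer incur compute charges; the node template and node entries remain in HPC Cluster Manager for the next burst deployment.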

Additional considerations

  • Proxy nodes   In each burst to Azure deployment with the compute intensive instances, HPC Pack automatically deploys a minimum of 2 additional size A8 instances as proxy nodes, in addition to the Azure worker role instances you specify. For more information, see Set the Number of Azure Proxy Nodes. The proxy nodes use cores that are allocated to the subscription and incur charges along with the Azure worker role instances.

  • Virtual network   HPC Pack does not currently support configuration of a point-to-site VPN or a regional virtual network for PaaS deployments.

You can deploy the head node and compute nodes of an HPC Pack cluster in VMs joined to an Active Directory domain in an Azure virtual network. The HPC Pack IaaS Deployment Script automates most of this process, and provides flexible deployment options including the ability to specify the A8 or A9 VM size for the cluster nodes. The following steps guide you through this automated deployment method. Alternatively, you can manually deploy the Active Directory domain, the head node VM, compute node VMs, and other parts of the HPC Pack cluster infrastructure in Azure. See Microsoft HPC Pack in Azure VMs for options.

Compute nodes in A8 or A9 VMs

Checklist

 


1. Review background information and considerations about the compute intensive instances

See About the A8, A9, A10, and A11 Compute Intensive Instances.

2. Obtain an Azure account

If you don't have an account or access to a subscription, you can create a free trial account in just a couple of minutes. For details, see Azure Free Trial. You can also use your MSDN subscription. See Azure benefits for MSDN subscribers.

3. Ensure that the quota of cores in the subscription is large enough

The default quota of cores in certain Azure subscriptions is 20, which allows you to deploy at most 1 A9 or 2 A8 VMs as compute nodes (not including the additional VM for the head node and additional VMs needed in certain cluster scenarios). If you need to, increase the quota of cores by contacting Microsoft Support. See Understanding Azure Limits and Increases.

4. Create a cluster head node and compute node VMs by running the IaaS deployment script on a client computer

Download the HPC Pack IaaS Deployment Script package from the Microsoft Download Center.

To prepare the client computer, create the script configuration file, and run the script, see Create an HPC Cluster with the HPC Pack IaaS Deployment Script. To deploy size A8 and A9 compute nodes, see the additional considerations later in the topic.

5. Bring the compute nodes online to run jobs

Select the nodes and use the Bring Online action in HPC Cluster Manager. The nodes are ready to run jobs.

6. Submit jobs to the cluster

Connect to the head node to submit jobs, or set up an on-premises computer to do this. For information, see Submit Jobs to an HPC Pack Head Node VM.

7. Take the nodes offline and stop (deallocate) them

When you are done running jobs, take the nodes offline in HPC Cluster Manager. Then, use Azure management tools to shut them down.
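For step 7, one way to deallocate the compute node VMs is with the classic (Service Management) Azure PowerShell cmdlets, shown here as a sketch. The cloud service name and node name pattern are placeholders for your deployment; without the -StayProvisioned switch, Stop-AzureVM deallocates the VMs so they stop incurring compute charges.

```shell
# Classic (Service Management) Azure PowerShell on a client computer.
# "hpc-compute-svc" and the "ComputeNode*" name pattern are placeholders.
Get-AzureVM -ServiceName "hpc-compute-svc" |
    Where-Object { $_.Name -like "ComputeNode*" } |
    Stop-AzureVM -Force
```

Take the nodes offline in HPC Cluster Manager first, as described above, so that no jobs are scheduled to them while they shut down.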

Additional considerations for running the cluster deployment script

  • Virtual network   Ensure that you specify a new virtual network in a region in which the A8 and A9 instances are available.

  • Windows Server operating system   To support RDMA connectivity, specify a Windows Server 2012 R2 or Windows Server 2012 operating system for the size A8 or A9 compute node VMs.

  • Cloud services   We recommend deploying your head node in one cloud service and your A8 and A9 compute nodes in a different cloud service.

  • Head node size   When adding compute node VMs in the A8 or A9 size, consider a size of at least A4 (Extra Large) for the head node.

  • HpcVmDrivers extension   The deployment script installs the Azure VM Agent and the HpcVmDrivers extension automatically when you deploy size A8 or A9 compute nodes. The HpcVmDrivers extension installs drivers on the compute node VMs so they can connect to the RDMA network. For details, see HpcVmDrivers Extension.

  • Cluster network configuration   The deployment script automatically configures the HPC Pack cluster in Topology 5 (all nodes on the Enterprise network). This topology is required for all HPC Pack cluster deployments in VMs, including those with size A8 or A9 compute nodes. Do not change the cluster network topology later.
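With the considerations above in mind, invoking the IaaS deployment script from step 4 looks roughly like the following sketch. The configuration file, account name, and paths are placeholders, and the exact parameter names should be checked against the readme included in the script download package.

```shell
# Run from an elevated Azure PowerShell prompt on the client computer.
# MyCluster.xml is your script configuration file describing the head node,
# the virtual network, and the size A8 or A9 compute node VMs.
Add-AzureAccount   # sign in to the Azure subscription first

.\New-HPCIaaSCluster.ps1 -ConfigFile .\MyCluster.xml -AdminUserName hpcadmin
```

The script prompts for the administrator password, then creates the domain controller (if specified), head node VM, and compute node VMs, and installs the HpcVmDrivers extension on the A8 or A9 nodes as described above.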
