
A8 and A9 Compute Intensive Instances: Quick Start with HPC Pack

Updated: June 20, 2014

This topic provides checklists and background information to help you start deploying the Azure A8 and A9 compute intensive instances with Microsoft® HPC Pack. You can use the A8 and A9 instances in Windows Server-based HPC Pack clusters in the cloud to efficiently run compute intensive applications such as parallel MPI applications.

This topic introduces the following two scenarios for deploying the compute intensive instances with HPC Pack 2012 R2. For more information about using HPC Pack and other Big Compute solutions, see Big Compute Solutions in Azure: Overview.

Burst to A8 or A9 worker role instances

From an existing HPC Pack cluster, add extra compute resources in the form of Azure worker role instances (Azure nodes) running in a cloud service. This PaaS capability, also called “burst to Azure” from HPC Pack, supports a range of sizes for the worker role instances. To use the compute intensive instances, specify a size of A8 or A9 when you add the Azure nodes.

The following checklist outlines the steps to burst to Azure from an on-premises cluster. You can use similar procedures to add worker role instances to an HPC Pack head node that is deployed in Azure (see Add Azure Nodes to an HPC Pack Head Node VM).

Checklist

1. Review background information and considerations about the compute intensive instances

See About the A8 and A9 Compute Intensive Instances.

2. Obtain an Azure account

If you don't have an account or access to a subscription, you can create a free trial account in just a couple of minutes. For details, see Azure Free Trial.

3. Ensure that the quota of cores in the subscription is large enough

The default quota of cores in certain Azure subscriptions is 20, which is not enough to deploy A8 and A9 instances with HPC Pack in this scenario. If you need to, increase the quota of cores by contacting Microsoft Support. See Request a cores quota increase in “About the A8 and A9 Instances”.
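
If you have the Azure PowerShell module installed, you can also check the quota from a PowerShell prompt. The following is a minimal sketch; the subscription name is a placeholder.

    # Sign in and select the subscription (subscription name is a placeholder)
    Add-AzureAccount
    Select-AzureSubscription -SubscriptionName "MySubscription"

    # -ExtendedDetails includes the core quota and the current core usage
    Get-AzureSubscription -Current -ExtendedDetails |
        Select-Object SubscriptionName, CurrentCoreCount, MaxCoreCount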

4. Deploy and configure an HPC Pack 2012 R2 head node, and prepare for an Azure burst deployment

Download the HPC Pack 2012 R2 installation package from the Microsoft Download Center. For requirements and installation instructions, see the HPC Pack 2012 R2 installation documentation.

5. Configure a management certificate in the Azure subscription

Configure a certificate to secure the connection between the head node and Azure. For options and procedures, see Scenarios to Configure the Azure Management Certificate for HPC Pack.

6. Create a new cloud service and a storage account

Use the Azure Management Portal to create a cloud service and a storage account for the deployment in a region where the compute intensive instances are available. (Do not associate the cloud service and storage account with an existing affinity group used for other deployments.)
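
If you prefer to script this step instead of using the Management Portal, the following Azure PowerShell sketch is roughly equivalent. The service name, storage account name, and region are placeholders; choose a region in which the A8 and A9 instances are offered, and specify a location rather than an affinity group.

    # Placeholder names; pick a region where A8 and A9 instances are available
    $region = "West US"
    New-AzureService -ServiceName "MyBurstService" -Location $region
    New-AzureStorageAccount -StorageAccountName "myburststorage01" -Location $region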

7. Create an Azure node template

Use the Create Node Template Wizard in HPC Cluster Manager. For steps, see Create an Azure node template in “Steps to Deploy Azure Nodes with Microsoft HPC Pack”.

On the Specify Worker Role Properties page, in OS family, select either Windows Server 2012 or Windows Server 2008 R2 as the Guest OS of the worker role instances.

For initial tests, we suggest configuring a manual availability policy in the template.

8. Add nodes to the cluster

Use the Add Node Wizard in HPC Cluster Manager. For more information, see Add Azure Nodes to the Windows HPC Cluster.

When specifying the size of the nodes, select A8 or A9.
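
HPC Pack also exposes this step in HPC PowerShell. The following sketch assumes an Azure node template named "MyAzureNodeTemplate" (created in step 7) and adds four size A8 worker role instances; the template name, quantity, and size are placeholders.

    # Run from an HPC PowerShell window on the head node
    $template = Get-HpcNodeTemplate -Name "MyAzureNodeTemplate"
    Add-HpcNodeSet -Template $template -Quantity 4 -Size A8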

9. Start (provision) the nodes and bring them online to run jobs

Select the nodes and use the Start action in HPC Cluster Manager. When provisioning is complete, select the nodes and use the Bring Online action in HPC Cluster Manager. The nodes are ready to run jobs.
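
The equivalent HPC PowerShell looks roughly like the following sketch, which assumes the nodes are members of the built-in AzureNodes node group.

    # Provision (start) the Azure nodes, then bring them online so they can run jobs
    $nodes = Get-HpcNode -GroupName AzureNodes
    Start-HpcAzureNode -Node $nodes
    Set-HpcNodeState -Node $nodes -State online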

10. Submit jobs to the cluster

Use HPC Pack job submission tools to run cluster jobs. See Microsoft HPC Pack: Job Management.
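
For example, a simple MPI job can be created and submitted from HPC PowerShell as in the following sketch; the job name, node count, and application are placeholders.

    # Create a job that requests two nodes, add an MPI task, and submit the job
    $job = New-HpcJob -Name "MpiTest" -NumNodes 2
    Add-HpcTask -Job $job -NumNodes 2 -CommandLine "mpiexec MyMpiApp.exe"
    Submit-HpcJob -Job $job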

11. Stop (deprovision) the nodes

When you are done running jobs, take the nodes offline and use the Stop action in HPC Cluster Manager.
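
In HPC PowerShell, the same step looks roughly like this sketch (again assuming the built-in AzureNodes group):

    # Take the Azure nodes offline, then deprovision (stop) them
    $nodes = Get-HpcNode -GroupName AzureNodes
    Set-HpcNodeState -Node $nodes -State offline
    Stop-HpcAzureNode -Node $nodes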

Additional considerations

  • Guest OS   RDMA connectivity is not currently supported in worker role instances running Guest OS versions in the Windows Server 2012 R2 family.

  • Proxy nodes   In each burst to Azure deployment with the compute intensive instances, HPC Pack automatically deploys a minimum of 2 additional size A8 instances as proxy nodes, in addition to the Azure worker role instances you specify. For more information, see Set the Number of Azure Proxy Nodes. The proxy nodes use cores that are allocated to the subscription and incur charges along with the Azure worker role instances.

  • Virtual network   HPC Pack does not currently support configuration of a point-to-site VPN or a regional virtual network for PaaS deployments.

Compute nodes in A8 or A9 VMs

You can deploy the head node and compute nodes of an HPC Pack cluster in VMs joined to an Active Directory domain in an Azure virtual network.

Checklist

1. Review background information and considerations about the compute intensive instances

See About the A8 and A9 Compute Intensive Instances.

2. Obtain an Azure account

If you don't have an account or access to a subscription, you can create a free trial account in just a couple of minutes. For details, see Azure Free Trial.

3. Ensure that the quota of cores in the subscription is large enough

The default quota of cores in certain Azure subscriptions is 20, which allows you to deploy at most 1 A9 or 2 A8 VMs as compute nodes. If you need to, increase the quota of cores by contacting Microsoft Support. See Request a cores quota increase in “About the A8 and A9 Instances”.

4. Deploy and configure an HPC Pack 2012 R2 head node in an Azure VM

Download the HPC Pack 2012 R2 installation package from the Microsoft Download Center. To create an Active Directory domain and deploy the head node in Azure, see Deploy an HPC Pack Head Node in an Azure VM.

Ensure that you create a new (regional) virtual network in a region in which the A8 and A9 instances are available.

To later add compute nodes in the A8 or A9 size, consider selecting a size of at least A4 (Extra Large) for the head node.

5. Create a VM image to deploy cluster compute nodes

Create an Azure VM running Windows Server and HPC Pack, and capture the image. For procedures, see Create and capture a Windows Server VM image that includes HPC Pack.
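
As a rough Azure PowerShell sketch of the capture step: after HPC Pack is installed in the VM and the VM has been generalized with sysprep and shut down, you can capture it as a reusable VM image. The service, VM, and image names are placeholders, and the -OSState parameter assumes a version of the Azure PowerShell module that supports VM images.

    # Capture the generalized, stopped VM as a VM image for deploying compute nodes
    Save-AzureVMImage -ServiceName "MyImageBuildService" -Name "MyImageBuildVM" `
        -ImageName "HpcComputeNodeImage" -OSState Generalized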

6. Deploy compute node VMs by using the custom image

Using the custom Azure VM image, deploy compute node VMs in the Active Directory domain where the head node is deployed. To deploy several compute nodes, you can use an Azure PowerShell script. For procedures, see Deploy compute node VMs.

When specifying the size of the VMs, select A8 or A9.
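
The following Azure PowerShell sketch deploys one A8 compute node VM from the custom image and joins it to the domain. All names, credentials, and the virtual network are placeholders, and the sketch assumes the target cloud service already exists in the virtual network's region; a script would typically loop over this pattern to create several nodes.

    # Placeholder names; the image is the one captured in step 5
    $image   = "HpcComputeNodeImage"
    $service = "MyComputeNodeService"
    $vnet    = "MyHpcVNet"

    # Build the VM configuration and join it to the placeholder domain
    $vm = New-AzureVMConfig -Name "ComputeNode01" -InstanceSize "A8" -ImageName $image |
        Add-AzureProvisioningConfig -WindowsDomain -AdminUsername "hpcadmin" -Password "<password>" `
            -JoinDomain "hpc.local" -Domain "hpc" -DomainUserName "hpcadmin" -DomainPassword "<password>"

    # Create the VM in the cloud service and virtual network
    New-AzureVM -ServiceName $service -VNetName $vnet -VMs $vm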

7. Configure drivers on the compute nodes for connectivity to the RDMA network

Add the HpcVmDrivers extension to the compute node VMs. The extension installs, configures, and manages the necessary drivers. For details, see HpcVmDrivers.
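
A minimal Azure PowerShell sketch for adding the extension to one VM follows; the service and VM names are placeholders, and the version number shown is only illustrative, so check the extension version currently available in your subscription.

    # Add the HpcVmDrivers extension to an existing compute node VM
    Get-AzureVM -ServiceName "MyComputeNodeService" -Name "ComputeNode01" |
        Set-AzureVMExtension -ExtensionName "HpcVmDrivers" -Publisher "Microsoft.HpcCompute" -Version "1.1" |
        Update-AzureVM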

8. Add the compute nodes to the cluster

Create a node template to add the compute nodes, and then add the nodes. See Add Preconfigured Nodes.

When creating the node template, in the Create Node Template Wizard, on the Select Deployment Type page, select Without operating system.

9. Bring the compute nodes online to run jobs

Select the nodes and use the Bring Online action in HPC Cluster Manager. The nodes are ready to run jobs.

10. Submit jobs to the cluster

Connect to the head node to submit jobs, or set up an on-premises computer to do this. For information, see Submit Jobs to an HPC Pack Head Node VM.

11. Take the nodes offline and stop (deallocate) them

When you are done running jobs, take the nodes offline in HPC Cluster Manager. Then, use Azure management tools to shut them down.
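
For example, after taking a node offline, you can deallocate its VM with Azure PowerShell; Stop-AzureVM deallocates the VM by default, so compute charges for it stop. The service and VM names are placeholders.

    # Shut down (deallocate) a compute node VM after it has been taken offline
    Stop-AzureVM -ServiceName "MyComputeNodeService" -Name "ComputeNode01" -Force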

Additional considerations

  • Windows Server VM image   RDMA connectivity is currently supported only in VMs created using a Windows Server 2012 R2 or Windows Server 2012 operating system image.

  • HpcVmDrivers extension   RDMA connectivity requires addition of the HpcVmDrivers extension to the compute node VMs. A prerequisite for adding the extension is that the Azure VM Agent is installed on the VMs.
