Set Up a Development Cluster with Windows HPC Server

To begin developing a LINQ to HPC application, you must set up a small Windows HPC cluster. This topic provides information and resources for setting up and beginning to use a development cluster. If you are setting up a large production cluster, see Deploying a Windows HPC Server Cluster for LINQ to HPC.

In this topic:

  • Development cluster requirements

  • Deployment guides

  • Cluster management basics

Development cluster requirements

Minimum cluster size

The minimum cluster size is determined by the DSC replication factor, which specifies the number of copies that the cluster maintains for each new DSC file that it creates. The default replication factor is three, so the cluster must contain at least three compute nodes. In a small cluster, you can enable the head node to also act as a compute node. This means that you can set up a development cluster with three servers: one server that acts as both head node and compute node, and two servers that act as compute nodes. In this case, all three nodes must be added to the DSC.

Note
You can change the replication factor on your cluster depending on the desired tradeoff between storage overhead and tolerance to node failures. You can change this setting, which is a configuration property of the DSC service, after you deploy your cluster and before adding nodes to the DSC. For more information, see Add Compute Nodes to the DSC.

Basic cluster requirements

Compute nodes that will run LINQ to HPC jobs should have at least 4 GB of RAM (8 GB is recommended) and a minimum of 200 GB of available disk space. The appropriate amount of hard disk drive storage depends on the amount of data that you expect to process.

The computers that you use to make your cluster must have a 64-bit version of the Windows Server 2008 R2 operating system installed. You can download an evaluation version of the operating system with the Windows HPC Server 2008 R2 Evaluation Suite. The evaluation suite also includes Microsoft HPC Pack 2008 R2 with Service Pack 3. You install HPC Pack on each server to create a Windows HPC cluster. Microsoft HPC Pack 2008 R2 with Service Pack 3 includes built-in support for LINQ to HPC. You can download the evaluation suite here:

Download the Windows HPC Server 2008 R2 Suite Evaluation

Note
The nodes in your cluster must be part of an Active Directory domain. For more information about this and other general requirements, see one of the deployment guides referenced in the next section.
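If you are not sure whether a server is already joined to a domain, you can check before you install HPC Pack. For example, the following uses standard Windows command-line tools (not HPC Pack) to display the domain or workgroup that the server belongs to:

    REM Show the domain (or workgroup) membership of this server
    systeminfo | findstr /B /C:"Domain"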

Deployment guides

The following documents walk through the requirements and steps for setting up a Windows HPC Server 2008 R2 cluster:

After your basic cluster is set up, you can Add Compute Nodes to the DSC.

Cluster management basics

The information in this section is intended to help you get started quickly on your cluster.

HPC cluster management tools

The HPC Pack client utilities include several tools that you can use to manage and monitor your cluster. For example, you can use clusrun to run a command on all (or a subset) of your nodes at the same time. You can run any of the client utilities directly on the head node or on a client computer that is on the same network as the head node. The client computer must have the HPC Pack client utilities installed.
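For example, you can use clusrun to run a short command across the cluster to confirm that every node is reachable. The following sketch uses the default ComputeNodes node group and placeholder node names; see the Windows HPC Server 2008 R2 Command Reference for the full clusrun syntax:

    REM Run a command on every node in the ComputeNodes node group
    clusrun /nodegroup:ComputeNodes hostname

    REM Run a command on a subset of nodes (node names are placeholders)
    clusrun /nodes:NODE01,NODE02 ver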

  • HPC Cluster Manager: HPC Pack includes a management console that helps you manage cluster configuration, nodes, and jobs, and run diagnostic tests or usage reports. To get started, see Overview of HPC Cluster Manager.

  • Command prompt window: The HPC commands provide a command-line alternative to most actions that you would otherwise perform with HPC Cluster Manager. To get started, see Windows HPC Server 2008 R2 Command Reference.

  • HPC PowerShell: The HPC cmdlets provide an alternative to most actions that you would otherwise perform with HPC Cluster Manager. To get started, see Windows HPC Server 2008 R2 Cmdlets.
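As a quick check that the tools are installed and can reach the cluster, you can list the nodes and their current state from either interface (the output columns vary by version):

    REM From a command prompt window
    node list

    # From an HPC PowerShell window
    Get-HpcNode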

Giving other people access to the cluster

If you want other people to have access to the cluster, you can add them as users or administrators on the cluster. For information and procedures, see Managing Cluster Users.
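For example, if you use HPC PowerShell, the Add-HpcMember cmdlet adds a domain account to the cluster in either the User or the Administrator role. The domain and account names below are placeholders:

    # Add a cluster user who can submit jobs (placeholder account name)
    Add-HpcMember -Name "CONTOSO\jane.doe" -Role User

    # Add a cluster administrator (placeholder account name)
    Add-HpcMember -Name "CONTOSO\ops.admin" -Role Administrator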

Enabling nodes to accept jobs

The Online and Offline states allow you to control whether a node accepts and runs cluster jobs. When you first add nodes to your cluster, they join the cluster in the Offline node state. Offline compute nodes are not used to run jobs. If you want a compute node to begin accepting jobs, you can change its node state to Online. Note that these states reflect only the intention to run jobs; the node must also be healthy and reachable in order to actually run jobs. For more information, see Understanding Node States, Health, and Operations.

In Microsoft HPC Pack 2008 R2, the head node is automatically configured to act as a compute node. To begin using your head node as a compute node, you must bring the head node to the Online node state. For more information, see Understanding Node Roles in Windows HPC Server 2008 R2.

You can use one of the following methods to bring all of your nodes Online:

  • In HPC Cluster Manager, click Node Management. In the node list, select all the nodes. Right-click your selection, and then click Bring Online.

  • In a command prompt window, use the node online command. For example, type:

    node online /all

  • In an elevated HPC PowerShell window, use the Get-HpcNode cmdlet (to get an object that includes all nodes) and the Set-HpcNodeState cmdlet. For example, type:

    Get-HpcNode | Set-HpcNodeState -State Online
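After the nodes come online, you can confirm that each node reports the expected state and health before you run jobs. For example, in HPC PowerShell (the property names below follow the object that Get-HpcNode returns; run Get-HpcNode | Get-Member if they differ in your version):

    # List each node with its current state and health
    Get-HpcNode | Format-Table NetBiosName, NodeState, NodeHealth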

For more information, see the Windows HPC Server 2008 R2 Technical Library.


