
Planning Your Lab

As you prepare for widespread use of Lab Management in your test lab, several questions are likely to come to mind:

  • How many physical servers do I need?

  • What kinds of servers should I buy?

  • How much storage capacity do I need?

  • Can I use a SAN for storage?

  • Can I set up everything on one powerful machine?

  • How do I set up an isolated lab?

This topic provides general guidelines for estimating the numbers and types of physical servers, virtual machines (VMs), and controllers that you will need to use Visual Studio Lab Management. In general, the number of servers is not as important as the capacity of each server. For example, a server that uses a dual-core or quad-core processor can support more VMs than a server that uses a single-core processor. Similarly, a server that has 32 GB of RAM can host more VMs at the same time than a server that has only 8 GB of RAM.
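To illustrate this guidance, the sketch below estimates how many VMs a host can run concurrently from its core count and RAM. The per-VM costs (one VM per core, 2 GB of RAM per VM, with a small memory reserve for the hypervisor) are illustrative assumptions, not figures from this topic; adjust them to your actual workloads.

```python
# Rough VM-capacity estimate for a virtualization host.
# Per-VM cost figures are illustrative assumptions, not official guidance.

def estimate_vm_capacity(cores, ram_gb, vms_per_core=1, ram_per_vm_gb=2,
                         host_reserve_gb=2):
    """Return the approximate number of VMs the host can run concurrently."""
    by_cpu = cores * vms_per_core
    by_ram = int((ram_gb - host_reserve_gb) // ram_per_vm_gb)
    return max(0, min(by_cpu, by_ram))  # the tighter constraint wins

# A quad-core, 32 GB host supports more VMs than a single-core, 8 GB host:
print(estimate_vm_capacity(cores=4, ram_gb=32))  # 4 (CPU-bound)
print(estimate_vm_capacity(cores=1, ram_gb=8))   # 1 (CPU-bound)
```

With these assumptions both hosts are CPU-bound; raising `ram_per_vm_gb` makes memory the limiting factor instead.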

The following sections contain guidelines to help you provide sufficient capacity for your virtual testing lab. The guidelines are expressed as things to do (Do’s) and not do (Don’ts) when you acquire or configure hardware or when you install and configure the required software.

Don’t

  • Set up everything on a single machine. Only if you will use a single machine just for demonstration or proof of concept purposes should you set up all the components on a single machine.

  • Use the NetworkService account as the service account for Team Foundation Server if your instance of Team Foundation Server uses more than one server to run the logical application tier and you will be using the Lab Management feature of Visual Studio. Using the NetworkService account requires more manual work later to maintain the physical host machines for virtual environments. This extra work is necessary because the NetworkService account for each new application-tier machine has to be added to the local Administrators group on each physical host machine. For example, if you run a virtual lab with 20 physical hosts and add or replace an application-tier machine, you would then have to update each of the 20 host machines with the name of the new application-tier machine and assign permissions. Instead of using the NetworkService account, use a standard domain user account and password as the TFS service account. By doing this, the domain user account is added once at the initial configuration of the physical host, and each subsequent application-tier machine uses the same account. For more information about the limitations of the NetworkService account, see NetworkService Account.

Don’t

  • Install SCVMM on a virtual machine. Installing SCVMM on a virtual machine will make it harder to administer the physical host that virtual machine is running on, and it will slow the performance of the library if you set the library up on the same virtual machine.

  • Use clustering with the SCVMM 2008 R2 library servers. Lab Management only supports clustering in SCVMM environments when using SCVMM 2012, not SCVMM 2008 R2.

  • Set virtual LAN IDs for network adapters. Lab Management does not support setting and using a virtual LAN ID in System Center Virtual Machine Manager. If you manually set the VLAN ID on a network adapter for a virtual machine and then store the virtual machine in the SCVMM library, the VLAN ID will be cleared when the virtual machine is deployed. For more information about how to use VLAN IDs, see Configuring Virtual Networks in VMM.

Do

  • Provide the SCVMM machine enough resources. If you expect to have fewer than 50 VMs in your lab, the machine running SCVMM should have at least:

    • A 64-bit processor

    • 4 GB of memory

    • A 300 GB hard disk drive

    • Windows Server 2008 R2 operating system

    If you expect to have more than 50 VMs, increase these resources. If you plan to install SCVMM along with other software on the same machine, still give the SCVMM server the resources described earlier in this topic, over and above the resource consumption of the other software. For instance, if you want to install SCVMM on the machine that is running Team Foundation Server, add the SCVMM requirements to those of Team Foundation Server, and then ensure that the machine has enough capacity.
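The co-location advice above reduces to simple addition: sum the requirements of all software on the machine and compare the total against its capacity. A minimal sketch follows; the SCVMM figures come from this topic, but the Team Foundation Server figures are placeholders (substitute your real requirements).

```python
# Capacity check when co-locating SCVMM with other software.
# SCVMM figures are from this topic; the TFS figures are hypothetical.

SCVMM_REQ = {"ram_gb": 4, "disk_gb": 300}
TFS_REQ = {"ram_gb": 8, "disk_gb": 200}   # placeholder footprint

def has_capacity(machine, *requirements):
    """True if the machine meets the summed requirements of all software."""
    return all(
        machine[key] >= sum(req[key] for req in requirements)
        for key in ("ram_gb", "disk_gb")
    )

machine = {"ram_gb": 16, "disk_gb": 600}
print(has_capacity(machine, SCVMM_REQ, TFS_REQ))  # True
```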

  • Provide the server that runs the library with at least 200 GB of free space on the hard disk drive. In the default installation, make sure that the drive used by the library share has more than 200 GB of free space.

  • Create the default library share on D: and not C:. By default, SCVMM creates the library share on the machine where it is installed, on the C: drive. Changing the default library share to D: makes it easier to upgrade the machine later.

  • Use a hard disk drive with sufficient speed for the library. If you plan to use the library lightly, a single disk with reasonable speed is sufficient. If you plan to use the library moderately, use a RAID 5 disk configuration with 6 to 12 disks for better throughput. If you plan to use the library heavily, use multiple library servers. You can use direct-attached storage or a SAN. When using a SAN, create a LUN to be used solely by the library machine.

  • Run Team Foundation Server under a regular domain user account instead of the network service account. This is required if you put Team Foundation Server and SCVMM on the same machine.

  • If SCVMM is installed on a Hyper-V host, store the Hyper-V hosted virtual machines on a different hard disk drive than the SCVMM library. For example, use C: from one disk for the library and D: from another disk for the Hyper-V virtual machines. In this case, the SCVMM server runs in the primary OS of the Hyper-V host, so load on the primary OS affects every guest OS (the VMs deployed in Hyper-V). To reduce this impact, configure the host reserves for that machine by adding the Hyper-V host reserves (described below) to the SCVMM machine requirements mentioned earlier. Host reserves can be configured by using the SCVMM Administrator Console.

  • Provide line-of-sight network routing between SCVMM and Team Foundation Server, hosts, and other library servers.

  • Update the SCVMM machine with all the latest Windows updates and ensure these updates get applied automatically. If this is not feasible, you should plan to keep track of Windows and SCVMM updates, and apply them manually as they become available.

Don’t

  • Install any additional software such as Team Foundation Server on the physical host machine. If you have sufficiently powerful hosts (exceeding the aggregate needs of the hypervisor and virtual machines), then you can have SCVMM or library server co-located on the host, as long as you also account for the resource constraints of those servers. For example, if you want to install SCVMM on a Hyper-V host machine, then add the host requirements, virtual machine requirements, and SCVMM requirements, and then ensure that the machine has enough capacity.

  • Use clustering with Hyper-V host servers. Lab Management supports clustering only with SCVMM 2012, not SCVMM 2008 R2.

  • Schedule tens of VM deployments simultaneously. Limit the number of concurrent environment deployments to hosts.

  • Use physical hosts that are in different geographic locations than the library servers. If you must use hosts that are in a different geographic location than the SCVMM library servers, the network speed between SCVMM and the hosts should be at least 100 Mbps and not subject to high latencies.

  • Create multiple network adapters on a virtual machine that connects to a specific network. Lab Management overrides this configuration and creates two adapters. One adapter connects to the lab network and the other adapter handles internal communication between virtual machines.

  • Configure the MAC address on a network adapter used in a network-isolated environment. Lab Management clears the MAC address at the time the network-isolated environment is created.

Do

  • Provide the host machines with enough resources and configure them correctly. The number of Hyper-V hosts and the capacity of each host depend on the number of VMs that you host in your lab. If you decide to set up a relatively small lab, install the Hyper-V role on machines with the following configuration:

    • Two dual-core, 64-bit processors that are Hyper-V capable

    • 16 GB memory

    • 300 GB hard disk space

    • Windows Server 2008 R2 operating system

    • The latest updates of the Windows operating system.

    If you have a relatively large number of virtual machines and decide to set up a few powerful hosts, configure each host as follows:

    • Two quad-core, 64-bit processors that are Hyper-V capable

    • 64 GB memory

    • 1 TB hard disk space

    • Windows Server 2008 R2 operating system

    • The latest updates of the Windows operating system.

  • Reserve enough memory on the host. Out of the host capacity requirements listed above, you must set aside the following resources for the smooth functioning of the hypervisor: for a 16 GB host, set aside 20% of the CPU and 2 GB of memory; for a 64 GB host, set aside 30% of the CPU and 4 GB of memory. These host reserves must be configured in the host properties pane of the SCVMM Administrator Console. Only the resources remaining on the host after deducting the host reserves can be used for virtual machines.
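The reserve figures above can be applied mechanically. The sketch below, using the reserve values stated in this topic for the two sample host sizes, computes what remains for virtual machines after the host reserves are deducted.

```python
# Resources left for VMs after deducting host reserves
# (reserve values as stated in this topic for 16 GB and 64 GB hosts).

def usable_resources(ram_gb):
    """Return (cpu_percent, ram_gb) available to VMs after host reserves."""
    if ram_gb >= 64:
        cpu_reserve_pct, ram_reserve_gb = 30, 4   # large host (64 GB class)
    else:
        cpu_reserve_pct, ram_reserve_gb = 20, 2   # small host (16 GB class)
    return 100 - cpu_reserve_pct, ram_gb - ram_reserve_gb

print(usable_resources(16))  # (80, 14): 80% of CPU and 14 GB for VMs
print(usable_resources(64))  # (70, 60): 70% of CPU and 60 GB for VMs
```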

  • Provide enough storage for virtual machines. Use a different disk partition for virtual machine storage than the primary partition of the Hyper-V server. For example, use D: for virtual machine storage and C: for the hypervisor's primary partition. After you decide on the virtual machine storage location, configure that location in Hyper-V Manager or in the SCVMM Administrator Console. In Hyper-V Manager, change the Virtual Hard Disks folder and the Virtual Machines folder. In the SCVMM Administrator Console, change the Placement Path under the host properties.

  • Provide the hosts with fast hard disk drives and configure the drives correctly. A disk with good speed is necessary, and RAID 5 disk configurations are highly recommended. Host storage can come from direct-attached storage or from a SAN. However, if you decide to draw the hosts' disks from a SAN for space and reliability reasons, you must map a separate LUN to each host. Even if the LUNs are managed by the same controller, Visual Studio Lab Management does not leverage any SAN functionality, so the underlying BITS copy during a virtual machine deployment still travels all the way from the library to the host over your LAN.
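Because the deployment copy travels over the LAN, network bandwidth directly bounds deployment time. The sketch below gives a rough, illustrative estimate; it ignores BITS protocol overhead and competing traffic, and the 20 GB virtual hard disk size is an assumed example.

```python
# Rough estimate of a library-to-host VM deployment copy over the LAN.
# Ignores BITS/protocol overhead and competing traffic -- illustrative only.

def copy_time_minutes(vhd_size_gb, link_mbps):
    """Approximate transfer time in minutes for a VHD over the given link."""
    total_bits = vhd_size_gb * 8 * 1024 ** 3       # GB -> bits
    seconds = total_bits / (link_mbps * 1_000_000)  # Mbps -> bits per second
    return seconds / 60

# An assumed 20 GB virtual hard disk, over 100 Mbps versus gigabit:
print(round(copy_time_minutes(20, 100)))   # ~29 minutes
print(round(copy_time_minutes(20, 1000)))  # ~3 minutes
```

This is one reason the guidelines below recommend a gigabit network between SCVMM, the library servers, and the Hyper-V hosts.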

  • For SCVMM to be installed on a Hyper-V host, it is highly recommended that the hard disk drive used for storing Hyper-V hosted virtual machines is different from the disk used for the library. In this case, the SCVMM server runs in the primary OS of the Hyper-V host, so load on the primary OS affects the performance of every guest OS (the VMs deployed in Hyper-V). To reduce this impact, configure the host reserves for that machine by adding the Hyper-V machine's host reserves to the SCVMM machine requirements mentioned earlier. Host reserves can be configured by using the SCVMM Administrator Console.

  • For a Hyper-V host to be used as a library server as well, you must have multiple disks in the machine. You should use separate hard disks on the host for the virtual machines and for the library storage.

  • Provide the Hyper-V host with line-of-sight networking to Team Foundation Server, SCVMM, and other library servers.

  • If the Hyper-V hosts are in different geographic locations, have a local library server for each location as well.

  • Update the hosts regularly. Hyper-V hosts should be on a network from where operating system updates can be automatically applied. If this is not feasible, you should plan on keeping track of Windows and SCVMM updates, and apply them manually when they become available.

Don’t

  • Install a test controller inside an environment. Only the build, test, and lab agents should be installed on the virtual machines inside an environment.

Do

  • Use more than one build controller when building and deploying an application for testing. The first controller is used by the build process and is not heavily utilized. The second controller is used to deploy the build to virtual machines and run tests; therefore, it can be heavily used if there are a large number of virtual machines in your lab. The second controller is also used to take snapshots of the environment.

  • Use test controllers in the same domain as Team Foundation Server. If Team Foundation Server and a test controller are in a workgroup or untrusted domain, you must create a local user account with the same user name and password on both machines, add this user on Team Foundation Server to the "[Project Collection]\Project Collection Test Service Accounts" security group, and then register the test controller with the team project collection by using this local account.

Do

  • Use a gigabit network to connect the server where SCVMM is installed to the library servers and to the Hyper-V hosts.

  • Establish a full, two-way trust relationship among the domains where Team Foundation Server, the test controller, the build controller, SCVMM, and the physical host of the virtual machines are running.

There are several topologies you can use when setting up Lab Management for testing your application. The simplest topology for using Lab Management requires only two servers: install all Team Foundation Server components on the same server and install all SCVMM 2008 components on an additional server. Alternatively, you might have complex networking topology requirements that restrict the networks in which Team Foundation Server, SCVMM, Hyper-V hosts, and virtual machines running the application-under-test can be located. In another alternative, you might want to configure network load balancing for your Team Foundation Server. The following list suggests several possible dimensions for your topology and the variations within each dimension.

Networking

  • DNS

  • Firewall

  • Threat Management Gateway

Domain

  • One-way trust

  • Two-way trust

  • No-trust

Team Foundation Server logical application tier

  • Single server

  • Multiple servers without network load balancing

  • Multiple servers with network load balancing

Team Foundation Server logical data tier

  • Single server

  • Multiple servers without clustering

  • Multiple servers with clustering

Tests

  • Inside the environment

  • Outside the environment

The following four sample topologies are examples of how you can set up combinations of the above dimensions according to your testing needs.

The Team Foundation Server logical application tier is run on several servers and those servers are controlled by a network load balancer. There is also a separate test network with firewall settings to control the test traffic into and out of the domain network. The following diagram illustrates topology 1.

All machines joined to corporate network

For instructions to set up this topology, see Setting up various topologies to test with Visual Studio Lab Management – Part 1.

The Team Foundation Server logical application tier and data tiers are run on several servers, but those servers are not controlled by a load balancer. There is also a separate test network with a SAN-based library and host. The following diagram illustrates topology 2.

Machines without load balancer but with SAN

For instructions to set up this topology, see Setting up various topologies to test with Visual Studio Lab Management – Part 2.

The Team Foundation Server logical application tier is run on several servers and those servers are controlled by a network load balancer. There is also a separate test network. The applications being tested make calls to a database outside the virtual environment. The following diagram illustrates topology 3.

Machines with database outside the environment

For instructions to set up this topology, see Setting up various topologies to test with Visual Studio Lab Management – Part 3.

The Team Foundation Server logical application tier and data tiers are run on several servers and those servers are controlled by a network load balancer. The test network and environments are in a separate domain. The following diagram illustrates topology 4.

Machines inside two domains

For instructions to set up this topology, see Setting up various topologies to test with Visual Studio Lab Management – Part 4.

© 2014 Microsoft