Application Architecture for .NET: Designing Applications and Services
This content is outdated and is no longer being maintained. It is provided as a courtesy for individuals who are still using these technologies. This page may contain URLs that were valid when originally published, but now link to sites or pages that no longer exist.
The latest version of this guidance is available here: http://www.microsoft.com/architectureguide.
Physical Deployment and Operational Requirements
Summary: This chapter describes the different options available when deploying your application in a physical environment and suggests strategies for meeting the operational (nonfunctional) requirements of your application.
This is Chapter 4 of Application Architecture for .NET: Designing Applications and Services. Start here to get the full picture.
This chapter includes the following sections:
- Deploying Application Components
- Common Deployment Patterns
- Operational Requirements
- Feedback and Support
So far, this guide has described the application architecture in terms of logical layers. It is important to remember that these layers are simply a convenient way to describe the different kinds of functionality in the application. They are conceptual divisions rather than a physical deployment pattern. How you deploy your application layers onto physical tiers is driven by how the layers interact with each other and by their differing security, operational, and communication requirements.
Your application will eventually be deployed into a physical infrastructure. In some cases, the application architect will be able to define the physical infrastructure, but in many other cases, the IT department will determine it. Physical deployment patterns are usually decided through negotiation between the IT department and application developers driven by the solution architect.
In any deployment scenario, you must:
- Know your target physical deployment environment early, from the planning stage of the lifecycle.
- Clearly communicate what environmental constraints drive software design and architecture decisions.
- Clearly communicate what software design decisions require certain infrastructure attributes.
Physical Deployment Environments
Physical deployment environments vary depending on the kind of application being deployed, the user base of the application, scalability, performance requirements, organizational policies, and other factors. A number of infrastructure patterns with similar characteristics can be identified for specific kinds of applications, particularly Internet-based solutions. For example, the Microsoft® Systems Architecture Internet Data Center documentation describes a recommended physical deployment pattern for Web-based applications, as shown in Figure 4.1. For more information, see "Microsoft Systems Architecture: Internet Data Center" on Microsoft TechNet (http://www.microsoft.com/resources/documentation/msa/idc/all/solution/en-us/default.mspx).
Figure 4.1. The Internet Data Center architecture
Just as an application is made up of components and services, the infrastructure that hosts an application can be thought of as consisting of a number of infrastructure building blocks, referred to as physical tiers. These physical tiers represent the physical divisions between the components of your application, and may or may not map directly to the logical tiers or layers used to abstract the different kinds of functionality in the application. The physical tiers may be separated by firewalls or other security boundaries to create different units of trust or security contexts. There are two main families of physical tiers: farms and clusters. Farms consist of identically configured and extensible sets of servers sharing the workload. Clusters are specialized sets of computers controlling a shared resource such as a data store, designed to handle failures of individual nodes gracefully.
A number of common infrastructure building blocks can be found in many application deployment environments.
Web Farms
A Web farm is a load-balanced array of Web servers. A number of technologies can be used to implement the load-balancing mechanism, including hardware solutions such as those offered by Cisco and Nortel switches and routers, and software solutions such as Network Load Balancing. In either case, the principle is the same: A user makes a request for an Internet resource using a URL, and the incoming request is serviced by one of the servers in the farm. Because the requests are load balanced between the servers in the farm, a server failure will not cause the Web site to cease functioning. The requests can be load balanced with no affinity (that is, each request can be serviced by any of the servers in the farm), or with affinity based on the requesting computer's IP address, in which case requests from a particular range of IP addresses are always balanced to the same Web server. In general, you should try to implement load balancing with no affinity to provide the highest level of scalability and availability.
For more information about how Web farms are implemented in Microsoft Systems Architecture Internet Data Center, see the Internet Data Center Reference Architecture Guide on TechNet (http://www.microsoft.com/resources/documentation/msa/idc/all/solution/en-us/rag/ragc02.mspx).
When designing a Web-based user interface that will be deployed in a Web farm, you should consider the following issues:
- Session state. In Active Server Pages (ASP)-based applications, you should avoid depending on the ASP Session object for state data between requests, because each new request may be sent to a different server. ASP holds session data in-process, so the same session data will not exist on all servers in the farm. With Microsoft ASP.NET-based solutions, this limitation is removed. ASP.NET-based applications can be configured to store their session state out of process on a remote Microsoft Internet Information Services (IIS) server, or in a Microsoft SQL Server database. ASP.NET also provides an easy way to configure "cookieless" sessions, so that the Session object can be used even when the user's browser does not support cookies. For more information about using the Session object in ASP.NET-based applications, see "ASP.NET Session State" on MSDN (http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnaspnet/html/asp12282000.asp).
- ViewState. ViewState is used in ASP.NET pages to maintain user interface consistency between post-back requests. For example, a page may contain a drop-down list that automatically posts the page's data back to the Web server for server-side processing. ViewState is used to ensure that the other controls on the page are not reset to their original default values after the post-back. ViewState is implemented as a hidden form field and can be secured using encryption. In a Web farm environment, this requires consistency between settings in the machine.config file on each server in the farm. For more information about using ViewState in a Web farm, see "Taking a Bite Out of ASP.NET ViewState" on MSDN (http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnaspnet/html/asp11222001.asp).
- SSL Communications. If you are using Secure Sockets Layer (SSL) to encrypt traffic between the client and the Web farm, you need to ensure that affinity is maintained between the client and the particular Web server with which it establishes the SSL session key. To maximize scalability and performance, you may choose to use a separate farm for HTTPS connections, allowing you to load balance HTTP requests with no affinity, but maintain "sticky sessions" for HTTPS requests.
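The session-state and ViewState considerations above translate directly into configuration settings. The following sketch shows the relevant fragments for a Web farm; the connection string and key values are placeholders that you would generate for your own environment:

```xml
<!-- web.config: store session state out of process (in SQL Server here)
     so that any server in the farm can service any request.
     The connection string is a placeholder. -->
<configuration>
  <system.web>
    <sessionState mode="SQLServer"
                  sqlConnectionString="data source=SessionDB;user id=appUser;password=..."
                  cookieless="false"
                  timeout="20" />
  </system.web>
</configuration>

<!-- machine.config (identical on each server in the farm): replace the
     AutoGenerate defaults with explicit keys so that ViewState produced
     on one server validates on every other. Key values are placeholders. -->
<machineKey validationKey="...128-character hex key, identical on all servers..."
            decryptionKey="...48-character hex key, identical on all servers..."
            validation="SHA1" />
```

Explicit, identical machineKey values are what allow a post-back to be serviced by a different server than the one that rendered the page.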
Application Farms
Application farms are conceptually similar to Web farms, but they are used to load balance requests for business components across multiple application servers. Application farms are used to host business components, in particular those components that use .NET Enterprise Services (COM+) services such as transaction management, loosely coupled events, and other component services. If the components are designed to be stateless, you can implement the load-balancing mechanism of the application farm using Network Load Balancing, because each request can be serviced by any of the identically configured servers in the farm. Alternatively, you can implement an application farm using Component Load Balancing (CLB), a function provided by Microsoft Application Center 2000. For more information about CLB, see the Application Center home page (http://www.microsoft.com/applicationcenter/).
Database Clusters
Database clusters are used to provide high availability of a database server. Windows Clustering provides the basis for a clustered SQL Server-based solution and supports two-node and four-node clusters. Clusters can be configured in Active/Passive mode (where one member of the cluster acts as a failover node), or Active/Active mode (where each cluster member controls its own databases while acting as a failover node for the other cluster member).
For more information about implementing clustered SQL Server-based solutions, see Chapter 5 of the Internet Data Center Reference Architecture Guide (http://www.microsoft.com/resources/documentation/msa/idc/all/solution/en-us/rag/ragc05.mspx).
When designing a .NET-based application that will connect to a database hosted in a cluster, you should take extra care to open and close connections as you need them, and not hold on to open connection objects. This will ensure that ADO.NET can reconnect to the active database server node in case of a failover in the cluster.
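A minimal sketch of this connection-handling pattern follows. The class, query, and connection string are illustrative, not part of any prescribed API; the point is that the connection is acquired just before use and released in a finally block, so that after a cluster failover the next call simply opens a fresh connection to the surviving node:

```csharp
using System.Data;
using System.Data.SqlClient;

public class CustomerData
{
    public DataSet GetCustomers(string connectionString)
    {
        SqlConnection connection = new SqlConnection(connectionString);
        SqlDataAdapter adapter = new SqlDataAdapter(
            "SELECT CustomerID, CompanyName FROM Customers", connection);
        DataSet customers = new DataSet();
        try
        {
            // Open as late as possible.
            connection.Open();
            adapter.Fill(customers);
        }
        finally
        {
            // Close as early as possible; never cache open connection objects.
            connection.Close();
        }
        return customers;
    }
}
```

Because ADO.NET pools physical connections behind the scenes, opening and closing logical connections in this way is inexpensive and does not sacrifice performance.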
Microsoft BizTalk® Server relies on four SQL Server databases to store its messaging and orchestration data. These databases can benefit from Windows Clustering for high availability. For general information about clustering BizTalk Server, see "High-Availability Solutions Using Microsoft Windows 2000 Cluster Service" in the BizTalk Server 2002 documentation on MSDN (http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnbiz2k2/html/bts_2002clustering.asp). For information about clustering BizTalk Server in the Internet Data Center infrastructure, see the Internet Data Center Reference Architecture Guide.
BizTalk Server Orchestration persists its schedule data in a SQL Server database. Because the enterprise application integration (EAI) tier is a unit of trust, this data store should be considered private, and it should not be directly accessible to any software component outside the tier. You will need to decide whether you want to deploy the integration functionality in a perimeter network (also known as demilitarized zone, or DMZ) that can interact with the Internet, or on the internal network, which provides better connectivity with the organization's services and applications. The Internet Data Center Reference Architecture Guide discusses these issues in detail.
By introducing multiple BizTalk "Receive" and "Worker" servers around a single shared work queue (itself hosted in a clustered SQL Server environment), you can increase the performance and throughput of the BizTalk cluster as needed and achieve high availability.
Your physical environment will probably include some, if not all, of these common infrastructure building blocks, on which your application components will be deployed.
Another possibility is to deploy components to rich clients. It is assumed that rich clients are running the Microsoft Windows® operating system and that they are able to run .NET components. You can also create a rich user interface through integration with applications such as those in the Microsoft Office suite.
In most enterprises, using rich clients implies:
- The ability to authenticate users through Microsoft Active Directory® directory service (thus having access to a WindowsIdentity and WindowsPrincipal object).
- Access to richer state management options, including maintaining session-related state in memory. (In high scalability and availability scenarios, it is not a good idea to keep in-memory state on the server.)
- The ability to work offline.
It is important to thoroughly test rich client applications, because the security context that they run under is typically constrained by the user policy and any code access security policy present on the computer.
Thin clients usually manage HTML or even simpler UI models, so they are not typically considered a deployment target for your components. You can include .NET controls in HTML pages, but in that case you are simply using the browser as a deployment vehicle, and should consider your user interface to be rich.
Planning the Physical Location of Application Components
One of the most important decisions you need to make as an application architect is where you will physically deploy the components in your application. As with all aspects of application architecture, physical deployment decisions involve trade-offs between performance, reusability, security, and other factors. Organizational policies regarding security, communications, and operational management also affect the deployment decisions you make.
It is common to wonder whether different pieces of interacting software should be deployed together, especially if they are part of the same service or application. There is no one correct answer to the question of whether to distribute your components across separate physical tiers. However, there are certain factors to consider that can help you reach a decision about deploying components together or deploying them separately.
When deciding on the physical architecture of your application, you should keep one thing in mind: distributing your components always incurs a performance cost. There are nonetheless a number of good reasons to distribute components; doing so can improve the scalability and manageability of your application, lower financial costs, and so on.
In general, choosing a deployment consists of three main stages involving both infrastructure and application architects:
- Identifying the minimum topologies that work. Early in the design phase, you must determine what conditions your application requires if it is to work at all. For example, your service agents may need to call out to Web services on the Internet. The application will not work if you cannot establish the appropriate outgoing communication. You should make a list of these types of "must have" requirements.
- Applying restrictions and enforcing requirements. A requirement from your application design (for example, the use of Microsoft Distributed Transaction Coordinator [DTC] transactions) translates to a set of requirements for the infrastructure (for example, the DTC uses remote procedure call [RPC] ports to communicate, so those must be open in the internal firewalls).
The infrastructure architect should make a list of "must have" requirements for his or her data center similar to the one you made in the previous stage. Then you should start at the infrastructure and follow the same process of applying restrictions and identifying requirements. A design characteristic of the infrastructure may be considered unchangeable, and it may affect how you design your application. For example, the infrastructure may not provide access to corporate domain users on an external Web farm due to security. This is a design constraint that precludes you from authenticating users of your application with Windows authentication.
As in the previous step, these requirements and constraints should be laid out early in the design cycle before building the application. Sometimes the requirements of the application and those of the infrastructure will conflict. The solution architect should arbitrate the decision.
- Optimizing the infrastructure and application. After you have determined the requirements and constraints for the infrastructure and application and have resolved all conflicts, you may find that many characteristics of both the application and infrastructure design have been left unspecified. Both the application and infrastructure should then be tuned to improve their characteristics in these areas. For example, if the infrastructure architect has provided access through firewall ports for Message Queuing, but your application is not using it, he or she may improve security by closing those ports. On the other hand, the infrastructure may be agnostic to the authentication mechanism you use with your database, so you may choose to use integrated Windows or SQL Server authentication depending on your application security model.
Factors Affecting Component Deployment
A number of quantitative and qualitative factors influence the decision to deploy components together or to distribute them. These factors can be grouped into areas that correspond closely to organizational policies: security, operational management, and communication.
Security
In deciding how to deploy components, you should consider the following security factors:
- Location of sensitive resources and data. Your security policy may determine that certain libraries, encryption keys, or other resources cannot be deployed in particular security contexts (for example, on a Web server or on users' desktop computers).
You may also want to prevent access to sensitive resources from components deployed in less trusted physical zones. For example, you may not want to allow access to your database from a Web farm, but may instead require a separate layer of components behind a firewall to perform database access.
- Increased security boundaries. By physically distributing components over several tiers, you increase the number of obstacles that a potential attacker must overcome to compromise the system.
- Security context of running code. Physically distributing your components may cause them to run in drastically different security contexts. For example, a remote component tier usually runs under a service account, whereas Web tier components may run under the authenticated user account. If you distribute your components, you will have to decide how you will manage identity flow, impersonation, and auditing of actions performed under service accounts.
Management
The management factors affecting component deployment are as follows:
- Management and monitoring. To make it easier to manage and monitor a piece of your application logic that is used by multiple consumers, you may want to deploy it in only one place where everyone can access it. For example, you may decide to deploy a business component that is used by multiple user interfaces in a single central location.
- Backup and restore. Backup and restore capabilities may not be available for all physical tiers of your application, so you should make sure that critical databases and queues are accessible to your backup and restore solution.
- Component location dependencies. Some of your components may rely on existing software or hardware and must be physically located on the same computer. For example, your application may use a connection to a proprietary network that can only be established from a particular computer in the existing physical environment. In this case, some of your application logic needs to be deployed on that particular server.
- Licensing. Some libraries and adapters cannot be deployed freely without incurring extra costs. Also, some products are licensed on a per-CPU basis. CPU-based licensing makes it more efficient to dedicate fewer CPUs to such a product rather than to share many CPUs among many products and tasks.
- Political factors. In some organizations, political factors may influence where you locate certain functionality. For example, a group within an organization may want ownership of a particular piece of a service or application.
Performance, Availability, and Scalability
Your decision to deploy components together or to distribute them should take into account the following factors involving performance, availability, and scalability:
- Complexity of interfaces. It is more efficient to distribute components whenever the interface between them is designed to require fewer information exchanges or calls with more data. Such an interface is usually referred to as "chunky" (as opposed to a "chatty" interface). The granularity of interaction between your components thus dramatically affects performance and how state is managed, with the related impact on scalability and availability.
- Communications. You will need to place the root of any atomic transaction where it can communicate with all participating resource managers. DTC uses RPC to communicate through port 135 and a dynamic range of other ports. You may not want to open these ports on a firewall that separates your Web farm from your business components.
- Availability. You can improve your application's availability by physically separating business-critical activities from other computers and components that could fail. For example, you may choose to implement long-running business processes on a separate tier of clustered servers, so that a failure in your Web farm does not prevent business processes from being completed.
- Performance. As mentioned before, distributing components results in the performance hit of serializing and deserializing data and establishing network connections. However, you may improve the overall scalability of your application by separating units of work that affect each other.
- Hardware capabilities. Specific types of servers are better suited to perform particular tasks and host particular products and technologies. For example, Web servers are typically computers with good memory and processing power, but they do not tend to have robust storage capabilities (such as RAID arrays) or components that can be replaced rapidly in the event of a hardware failure. Because of this, you should not install a database with mission-critical data on a computer that is intended as a Web server.
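The "chunky" versus "chatty" distinction mentioned above can be sketched with two hypothetical interface definitions (the names and members are illustrative only):

```csharp
// Chatty: a remote caller pays a network round trip for each property
// set and each method call. Acceptable for local, in-process use only.
public interface ICustomerChatty
{
    string Name { get; set; }
    string Region { get; set; }
    void Save();
}

// Chunky: a single method carries all the data in one exchange, making
// the component a far better candidate for deployment on a separate
// physical tier.
public interface ICustomerChunky
{
    void UpdateCustomer(string customerId, string name, string region);
}
```

With the chatty design, updating a customer remotely costs three round trips; with the chunky design, it costs one.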
Distribution Boundaries Between Components
If you design your application according to the guidelines in Chapters 2 and 3 of this guide, you will find that certain types of components are more efficient to deploy together, whereas other types interact with their callers in a way better suited to remote access.
Planning User Interface Deployment
Deciding on a deployment location for the user interface components is very straightforward: You deploy Windows-based applications on the clients, and ASP.NET pages on Web servers.
User process components should be deployed together with the user interface components that they orchestrate. In Web environments, this means deploying the user process components on the IIS Web servers, and for Windows clients this means deploying the user process components with the Windows Forms-based application. The user process components should be deployed in a .NET assembly that is separate from the user interface logic to facilitate reuse and easy maintenance.
Planning Business Component Deployment
The question of where to deploy business logic usually provokes strong feelings and debate among application and infrastructure architects. Although there are many possible physical deployment patterns for business components, you should consider the following recommendations:
- Business components that are used synchronously by user interfaces or user process components can be deployed with the user interface to maximize performance and ease operational management. This approach is more appropriate in Web-based applications than in Windows-based applications because you would probably not want to deploy your business components to every desktop. However, even in Web scenarios, if you want to isolate your business logic so it is not in the same trust boundary as the user interface, or if you need to reuse the business logic for multiple user interfaces, you may choose to deploy the business components on a separate tier of application servers and use a communications technology such as .NET remoting, DCOM, or SOAP over HTTP to make them accessible to the user interface logic. In Web scenarios, the inclusion of a firewall between the user interface and the application servers may add configuration and management complexity.
- Business processes that are implemented as a service, and are therefore communicated with asynchronously, should generally be deployed on a separate physical tier. Usually, asynchronous services should have their own application cluster, separate from other synchronous application servers, so that they form their own trust zone. This is true whether you implement the business workflow using custom .NET components or BizTalk Server orchestration. The business components used "internally" by the service should generally be deployed on the same physical tier as the service interface components used to call into the service.
- Service agent components should generally be deployed with the business components or processes that use them. However, you may want to deploy service agents on a separate physical tier if the tier handles communication with an external service over the Internet and you want to isolate the Internet-facing communication in a different security context from your business components.
- Business entity components and strongly typed DataSets should generally be deployed with the code that uses them. Calling business entities remotely is usually not a good design choice from a performance perspective, because they tend to be stateful and expose "chatty" interfaces, which would cause a great deal of network traffic in a remote deployment scenario.
Planning Service Interface and Service Agent Deployment
Service interfaces and service agent components receive calls from, and make calls to, external applications and services. These external applications and services may be located within the organization's network, in a zone that shares security and management policies, or they may be located outside the organization, probably requiring communication over an extranet or the Internet.
Service interfaces can be deployed together with the business components and workflows they expose, or they can be deployed remotely. The criteria for deciding whether to deploy service interfaces together with the business logic are similar to those used when deciding where to deploy the user interface. If the service interface requires a connection to the Internet or a less trusted environment, the extra network hop may provide the extra security required. Having your service interfaces deployed remotely from your business components may allow two Web farms (one for ASP.NET-based UIs, and one for XML Web services) to call into the same application farm that hosts your business components.
Service agents pose a similar set of decisions, except that these components call services instead of receiving calls. Common infrastructure designs may limit the servers from which outgoing HTTP calls are made.
Planning Business Workflow Deployment
It is recommended that you deploy any BizTalk EAI clusters in a set of computers separate from the servers hosting any ASP.NET user interfaces and business components used by the UI. Doing so enables you to optimize processor usage for the typically asynchronous business workflow tasks and provide management processes that are adequate for BizTalk, Message Queuing, and the other specific technologies business workflows rely on.
It is important to decide whether to deploy the business components and data access components used by the business workflow into the same cluster. It is common to do so because the business workflows are usually deployed in a secure environment. However, deploying the same business components in multiple places adds complexity to the management processes, so it is generally recommended that you separate the following into distinct assemblies:
- Business components called by UI components
- Business components used only from business workflows or other business components
Planning Data Access Component Deployment
Application data is nearly always stored on a dedicated database server, which for all but the simplest applications should be clustered to ensure high availability. In Web applications, this database server should be in a VLAN behind the second firewall of the perimeter network to protect your data.
Deploying data access components with the components that use them yields the following advantages:
- Data transfers will be optimized because cross-process marshalling is avoided.
- Transactions involving business processes and data access components do not need to travel through firewalls, which means that extra ports do not need to be opened.
- Additional transaction failure points are avoided; distributing components adds nodes at which a transaction can fail.
- Deploying components together guarantees automatic security context flow, so there is no need to set principal objects or reauthenticate remoting channels. Doing so also enables you to leverage code-access security to restrict which assemblies can call your data access components.
However, you may want to deploy your data access components separately from the components that use them in the following cases:
- You want to prevent direct network access to your data sources from your Web farms for security reasons (this is a common reason to deploy the components separately). In such cases, you should deploy data access components in a physical business tier (and therefore a separate security context) and invoke them remotely from your Web tier.
- You want to use the data access components from both business components and the user interface components, but do not want to deploy duplicate components in two locations.
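As noted above, deploying data access components alongside their callers lets you use code access security to restrict which assemblies may invoke them. A minimal sketch using a strong-name link demand follows; the class name is hypothetical and the public key value is a truncated placeholder for the key that signs your business-tier assemblies:

```csharp
using System.Security.Permissions;

// Only assemblies signed with the specified public key can link to this
// class; other callers fail at JIT time. The key shown is a placeholder.
[StrongNameIdentityPermission(SecurityAction.LinkDemand,
    PublicKey = "00240000048...")]
public class CustomerDataAccess
{
    // Data access methods go here.
}
```

This check is only meaningful when caller and callee share an application domain, which is another reason the technique pairs naturally with co-deployment.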
Partitioning Your Application or Service into Assemblies
.NET assemblies are units of deployment: a .NET assembly is deployed and versioned as a unit. .NET provides rich versioning and deployment capabilities that allow for versioning policy enforcement after an application has been deployed, but you need to carefully plan assembly partitioning to take full advantage of them. The assemblies that you create and the way that you distribute the components among them have a long-term impact on how your application is developed, deployed, managed, updated, and maintained.
Many factors affect how you distribute your components into separate assemblies. The following recommendations will help you make the appropriate choices for your application size, team composition and distribution, and management processes:
- Create a separate assembly for each component type. Using separate assemblies for data access components, business components, service interfaces, business entities, and so on gives you basic flexibility for deployment and maintenance of the application.
- Avoid deploying one assembly into multiple locations. Deploying the same components in multiple places increases the complexity of your deployment and management processes, so carefully consider whether you can consolidate all deployments into one physical tier, or whether you should use more than one assembly for a particular component type.
- Consider having more than one assembly per component type. Not all components of the same type follow the same development and maintenance cycles. For example, you may have multiple service agent components abstracting service calls for multiple business partners. In this case, it may be better to create one assembly per business partner to simplify versioning. Consider the following factors when deciding whether to use more than one assembly per component type:
- What components, services, or data sources the assembly deals with. You may want to have a different assembly for service agent components that deal with different business partners, for components that deal with a specific primary interop assembly, or for business components that will be invoked from the user interface or business workflow exclusively. Separating components based on where they are called from or what they call improves your application management because you won't need to redeploy components; it also prevents you from having unused code deployed in different places.
- Data access components may deal with multiple data sources. Separating data access components that work with different data sources into different assemblies may be beneficial if the implementation accessing a particular data source changes frequently. Otherwise, it is recommended that you use only one data access component assembly to provide abstraction from the fact that you are working with multiple sources.
- Separate shared types into their own assemblies. Many components in your application may rely on the same types to perform their work. It is recommended that you separate the following types into their own assemblies:
- Exceptions. Many application layers may need to deal with the same exception types. If you factor the exceptions that all your application layers rely on into a separate assembly, you will not need to deploy assemblies containing business logic where that logic is not needed.
- Shared interfaces and base classes. Your application may define interfaces for other developers to use, or for easy addition of logic after the application is deployed. Separating interfaces and base classes used by others into assemblies that are separate from your business logic implementation prevents complex versioning dependencies if your implementation changes, and lets you share the interface definitions with external developers without sharing the assemblies that contain your organization's code.
- Utility components. Your application typically relies on a set of utility components or building blocks that encapsulate specific technologies or provide services that may be used by many application layers, such as data access helpers, exception management, and security frameworks. Factoring these into their own assemblies simplifies development, maintenance, and versioning.
- Consider the impact on the development process. Having a large number of assemblies adds flexibility for deployment and maintenance, but it may increase the complexity of the development process because more build references, projects, and versioning issues will need to be taken care of. However, using separate assemblies that deal with a particular technology may help to distribute the workload to the right developers with the right skills, and using multiple Microsoft Visual Studio® .NET projects may facilitate work across development teams. For detailed guidelines on how to partition assemblies with regard to complex development teams or assembly dependencies, see Chapter 3 of "Team Development with Visual Studio .NET and Visual SourceSafe" on MSDN (http://msdn.microsoft.com/library/?url=/library/en-us/dnbda/html/tdlg_rm.asp?frame=true).
- Avoid deploying unused code. If you partition assemblies that may be invoked from multiple components and deploy them in multiple places, you may end up deploying unused code. Some organizations may consider this a security or intellectual property risk, so consider whether you can re-factor your assemblies so that a component is deployed only where it is needed. .NET assemblies have a very small footprint, so disk space is not an important consideration.
- Use a factoring approach to assembly partitioning. You may want to start your project by defining a base set of well-planned assemblies, and then use common re-factoring disciplines to drive the creation of further assemblies by analyzing change frequencies, dependencies, and the other factors outlined earlier in this chapter.
- Enforce assembly partitioning with enterprise templates. Visual Studio .NET Enterprise templates let you define and enforce policies that developers use when creating the application, including assembly structure and dependency. If you will be developing a large application or developing many applications with a similar architecture, consider creating or tailoring an enterprise template to suit your needs.
Packaging and Distributing Application Components
To distribute your application, you will need to choose a way to package it and deploy it. Visual Studio .NET provides multiple options for packaging your applications, including but not limited to Microsoft Windows Installer files and CAB files.
You can also deploy some .NET-based applications with no packaging by copying the right files to the destination, sending them through e-mail, or providing FTP downloads.
There are also other tools and Microsoft services that you can use to distribute your application. These include:
- Microsoft Application Center
- Microsoft Systems Management Server
- Microsoft Active Directory
The deployment pattern a particular application uses is typically determined by the architect in a process that involves parties responsible for operations and development. Different organizations or software vendors will approach the problem differently, so there is no single approach to determining the infrastructure. This section discusses several deployment patterns for your components and considers their pros, cons, and requirements.
Many variations of deployment patterns are possible (for example, you may need to deploy Microsoft Mobile Information Server in your solution), but not all are described in this section. To understand specific deployment characteristics and requirements, see the Internet Data Center guidelines earlier in this chapter and the appropriate product documentation.
You should also note that you can combine deployment patterns. It is advisable to deploy each component of the solution in only one physical tier or farm, but for security reasons you may want to consider deploying the same component in multiple locations at the expense of manageability.
Note In the discussion that follows, the figures reference component types, but not specific assemblies. To determine assembly partitioning, follow the guidelines provided earlier in this chapter.
These figures look slightly different from Figure 4.1, which illustrates the Internet Data Center architecture, in that they show individual firewall instances between farms. The physical firewall devices in Internet Data Center may host multiple firewall instances, which in turn makes the physical network layout look different. All deployment patterns illustrated in the following diagrams can be mapped directly to small variations of the Internet Data Center illustrated in Figure 4.1.
Web-Based User Interface Scenarios
The two deployment scenarios outlined in the following discussion are common variations found when working with Web-based user interfaces.
Web Farm with Local Business Logic
A Web farm with local business logic is a common deployment pattern that places all application components (user interface components such as ASP.NET pages, user process components if used, business components, and data access components) on the Web farm servers. Having the data access components on the Web farm allows you to use data readers for fast data rendering. This pattern provides the highest performance, because all component calls are local and only the databases are accessed remotely, as illustrated in Figure 4.2.
Figure 4.2. Web farm with local business logic
Requirements and considerations for using a Web farm with local business logic include:
- Clients (1) can access the Web farm through a firewall (2) using HTTP and possibly SSL ports.
- The Web farm (3) can host ASP.NET pages and your business components, possibly in Enterprise Services.
- Access to the database is allowed from the Web farm through a firewall (4). The Web farm will need to host client libraries and manage connection strings, which adds important security requirements.
- If the components are using Enterprise Services transactions, RPC ports are open in (4) to allow access to the data sources (5).
Web Farm with Remote Business Logic
Another common deployment pattern is the Web farm with remote business logic. This places all application business components on another farm that is accessed remotely from the ASP.NET pages on the Web farm servers. Performance is slower than in the previous scenario, but this pattern allows multiple clients (for example, desktop clients on an intranet) to share an application farm, which simplifies management. This pattern also provides better separation of the servers managing user interface and the servers managing business transactions, which improves availability by isolating failure points. Scalability may be better in some scenarios where independent resource-intensive operations are needed in both the Web and application farms because these operations will not compete for resources: Your Web servers will serve pages faster and your components will finish sooner.
Figure 4.3 illustrates this deployment pattern.
Figure 4.3. Web farm with remote business logic
Requirements and considerations for using a Web farm with remote business logic include:
- Clients (1) can access the Web farm through a firewall (2) using HTTP and possibly SSL ports.
- The Web farm (3) can host ASP.NET pages and user process components. These pages will not be able to take advantage of DataReaders to render data from data access components unless you deploy data access components on the Web farm and enable the appropriate firewall ports to access the data.
- All business components are hosted in an application farm (5) that other clients can also access. These components are reached through a firewall (4). Depending on the communication channel being used, you may need to open different ports. If your business components are hosted in Enterprise Services, you will need to open RPC ports. For more information about port requirements, see "Designing the Communications Policy" in Chapter 3, "Security, Operational Management, and Communications Policies."
- An infrastructure will typically have either firewall (4) or (6) in place. Internet Data Center provides the capability to have both.
- Access to the database is allowed from the Web farm through the firewall (6). The application farm will need to host client libraries and manage connection strings.
- If the components are using Enterprise Services transactions, RPC ports are open in (6) to allow access to the data sources (7).
Rich Client User Interface Scenarios
The following two scenarios assume a rich client.
Rich Client with Remote Components
A common deployment pattern for rich client applications deployed on an intranet uses remote components. The pattern consists of one server farm that hosts data access components and business components, with all user process and user interface components deployed on the client, as shown in Figure 4.4.
Figure 4.4. Rich client with remote components
Requirements and considerations for using a rich client with remote components include:
- Rich clients (1) have locally deployed user interface components (for example, Windows Forms, user controls, and so on) and user process components (if used). You can deploy these components using SMS, over Active Directory, or download them using HTTP. If your application provides offline functionality, rich clients will also provide the local storage and queuing infrastructure required for offline work.
- Although shown, firewalls (2) and (4) are typically present only in the largest enterprise data centers. Smaller environments will have clients, application servers, and data sources on the intranet with no network separation. Firewall (2) will require ports to be opened for your specific remoting strategy between clients and servers (typically a TCP port if you are using .NET remoting, DCOM ports, and Message Queuing ports if queuing is used). Firewall (4) will require ports open to access the database and allow for transaction coordination with the data sources.
- Having remote business components in the application farm (3) as shown allows other clients (for example, a Web farm facing the Internet or intranet) to share the deployment. Data access components will also be located in this farm and will be accessed remotely from the clients.
Rich Client with Web Service Access
In some cases, you want to provide a rich client experience to your users while accessing data and business logic over the Internet. In these cases, you can expose the business logic and data access logic used by the client through a façade or service interface. The rich clients can then invoke this service interface directly with the Web service proxies that Visual Studio .NET generates. Because the rich functionality needed by the user interface is exposed to a larger audience, you must take extra care in the areas of authentication, authorization, and secure communication between clients and the service interface.
Figure 4.5 illustrates the rich client with Web access pattern.
Figure 4.5. Rich client with Web service access
Requirements and considerations for using a rich client with Web service access include:
- This scenario is similar to using a rich client with remote components, except that in this case an XML Web service (ASP.NET .asmx file) service interface provides access to the appropriate parts of your application's business logic and data access logic. This service can access your application components locally in the application farm (3) as shown, or it can invoke components remotely (not shown).
- Rich clients can access the server functionality using standard protocols and formats. The use of SOAP allows others to build alternative UI layers that meet their needs.
Service Integration Scenarios
The following scenarios show patterns that are commonly used when you need to expose and invoke external services and applications.
Service Agents and Interfaces Deployed with Business Components
Deploying the service interfaces (such as XML Web services) and service agents (components that may call Web Services, or that may connect with other platforms) with the business logic is a scenario very similar to deploying ASP.NET user interfaces and business logic components together. Figure 4.6 shows a physical deployment pattern for a service-based application.
Figure 4.6. A service with local business logic
Requirements and considerations for using service agents and interfaces with local business logic include:
- Clients and services calling into your application (1) can access the Web farm through a firewall (2) using HTTP and possibly SSL ports. The Web farm (3) can host XML Web services, Message Queuing listeners, and other service interface code.
- The service interfaces in the Web farm invoke your business components, which will potentially reside in Enterprise Services. When determining the infrastructure for application tiers that use Message Queuing, you need to consider the scalability and availability of your application: you will need a Web farm to load balance XML Web service calls, but if your components are receiving Message Queuing messages, you will need to build a failover cluster to ensure the availability of the message store. Because components may be farmed, a failover cluster may not be the most economically efficient way to utilize the servers. You may decide to split the infrastructure pattern used for Message Queuing messages and XML Web service calls if a small set of computers cannot meet your scalability and availability requirements.
- Calls to data sources (4) and internal services (5) can be initiated anywhere from the farm. This requires that the firewall at (5) allow outgoing calls (HTTP calls in the case of Web services). In Internet Data Center, outgoing calls to outside services are made through a separate logical firewall (6). Using a different firewall to allow incoming and outgoing HTTP sessions to the Internet can increase security if the computers making the calls and those receiving them are on different VLANs. With the appropriate firewall rules, firewalls (2) and (6) can be merged.
- Access to the data sources is allowed from the Web farm through the firewall at (5). The Web farm will need to host client libraries and manage connection strings, which adds important security requirements.
- If the components are using Enterprise Services transactions, RPC ports are open in (5) to allow access to the data sources. Message Queuing ports may need to be opened on this firewall if queues are used to communicate with the internal services.
Business Components Separated from Service Agents and Interfaces
Another pattern used in service integration scenarios is the separation of business components from the service agents and service interfaces. This infrastructure model is used to separate the tiers that have contact with the Internet (either by receiving calls or by making calls to other servers) from the farms hosting business logic. When using this pattern, you also need to deploy service agent components in a different cluster when using clustered Message Queuing to receive messages, so that you can achieve availability and still have a load-balanced farm hosting your business components. Figure 4.7 shows this approach.
Requirements and considerations for separating business components from service agents and interfaces include:
- Calling services (1) can access the service interfaces in the Web farm (3) hosting XML Web services or Message Queuing HTTP endpoints through a firewall (2) using HTTP and possibly SSL ports.
- The Web farm can host XML Web services and possibly data access logic components as discussed in Chapter 2, "Designing the Components of an Application or Service." You can deploy data access components in this Web farm to take advantage of DataReaders to render data for the results of Web service calls. If you do so, though, you will have to allow database access through a second firewall (4). If this is a security concern, you will have to access the data provided by data access layer components and business components remotely.
- All business components are hosted in an application farm (4) that other clients may also access. These components are reached from the Web farm through the second firewall. Depending on the communication channel being used, you may need to open different ports. If your business components are hosted in Enterprise Services, you will need RPC ports open for DCOM. For more information about port requirements, see "Designing the Communications Policy" in Chapter 3, "Security, Operational Management, and Communications Policies."
- The business components will call data access components (5) and service agents for internal services locally (6). Databases and internal services are accessed through the firewall at (7).
- An infrastructure will typically have either firewall (4) or (7) in place, depending on whether business components can be inside the DMZ or need extra protection. Internet Data Center provides the capability to have both.
- If the components are using Enterprise Services transactions, RPC ports are open in firewall (7) to allow access to the data sources.
- Service agents (8) that need to make calls out to the Internet can be deployed in the Web farm (or another farm) to isolate the tier that has Internet exposure from the business logic that has access to internal databases and services. Note that there are two firewalls separating the application from the Internet: one for incoming calls (2) and one for outgoing calls (9). If you are implementing security by isolation, you should use this deployment pattern to deploy service agents remotely. If you need to consolidate the servers hosting the service interfaces and service agents, you can also merge these two firewalls into one firewall with both outgoing and incoming ports open.
EAI Clusters and Application Components
You should approach Enterprise Application Integration (EAI) infrastructure components separately from the infrastructure that hosts traditional applications.
However, the EAI cluster will probably host business workflows that use business components to implement steps in the business processes. These components may be hosted locally or remotely from the cluster running the business workflow. You have three options in this case:
- You could host the business components locally on the EAI cluster if the EAI cluster can access the database and if the components will only be used in the context of the business workflows that run in this cluster.
- You could call your business components through .NET remoting, DCOM, or XML Web services and access them on the application or Web farm where they are deployed. This implies that your EAI cluster can make calls to the application farm.
- Finally, you could deploy your business component assemblies on both the EAI cluster and the application or Web farm, with the associated management costs of having the same assembly in more than one location.
Figure 4.8. Separating EAI components from business components
Figure 4.8 shows user interface components on a Web farm (1) calling business components on an application farm (2), which in turn work with the application data source (3). The EAI cluster (4) has its own business components needed to perform the steps in its business workflows, and accesses other services (in this example, only internal services) through a firewall (5).
Composing Deployment Scenarios
The deployment patterns in the preceding discussions are commonly found in well-architected applications. Of course, particular scenarios may vary, and these examples may not precisely match your requirements and needs. You can compose almost any infrastructure required for a layered application based on these patterns. The important thing is to follow the conceptual model outlined earlier and to understand the application design, the infrastructure design, and how they affect each other early in the application lifecycle.
Production, Test, and Staging Environments
You may have separate data centers for developing, testing, staging, and stress-testing your application. These data centers will usually vary in design, mainly because it is not cost-effective to have a full production data center just for application staging. If your data centers are different, here are some things you should consider:
- Firewalls: Even if you don't have firewalls deployed in non-production environments, you should plan ahead and test taking into account port restrictions and direction of communication. Software products that emulate firewalls are available and are a good addition to the test platform.
- Network topology: Your staging environment may be smaller than the production environment, but you should strive to keep the network topology consistent. In other words, you want to make sure communication across computers works as expected.
- Processor count: If your target environment has multiple processors, you should test your application on multiple processors to make sure multithreaded code will not behave in unexpected ways.
Operational Requirements
The goal of the following discussion is to provide you with design techniques and practices that will enable you to achieve the operational (nonfunctional) requirements for your application and services. These requirements include the levels of scalability, availability, maintainability, security, and manageability your application must achieve. They may affect the design of the application policies, but they will also affect the way you design your application logic.
In some cases, complying with one operational requirement makes it harder to comply with others. For example, it is common to reduce the manageability of an application in favor of security. It is important to prioritize the application features supporting operational requirements early in the life cycle so that these tradeoffs and decisions can be factored into the application implementation from the start.
The following discussion is by no means complete, but it will help you isolate key issues pertaining to important operational requirements.
Scalability
An application's scalability is its ability to provide an acceptable level of overall performance when one or more load factors increase. Common load factors include the number of users, the amount of data being managed by the application, and the number of transactions.
Overall performance can be measured in terms of throughput and response time. Throughput measures the amount of work that the application can perform in a given time frame, and response time measures the amount of time between a user or a process making a request and seeing the results of the request. A number of factors can affect both throughput and response time, including hardware performance, physical resources such as memory, network latency (the amount of time it takes to transmit data over a network link), and application design. While many performance and scalability issues can be resolved by increasing hardware resources, an application that is not designed to operate efficiently will nearly always perform poorly regardless of how much hardware you throw at the problem.
Consider the following design guidelines for highly scalable applications:
- Use asynchronous operations. Reduce response time and throughput demand by using asynchronous operations.
Synchronous operations require that the user wait until a business operation is complete. By making business operations asynchronous, system control can be returned to the user more quickly and processing requests can be queued, helping to control throughput demand without overwhelming the business components. For example, suppose that a user places an order in an e-commerce site. If the order process is performed synchronously, the user will have to wait until the credit card has been authorized and the goods have been ordered from the supplier before receiving confirmation. If you implement the order process asynchronously, the user can be given a confirmation or failure message by e-mail after the operation is complete. Designing asynchronous applications creates more work for the developer (especially when they require transactional logic) but can greatly improve scalability.
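The order example above can be reduced to a queue plus a background worker. The sketch below uses Python for brevity (the queue, order names, and confirmation text are invented); in a .NET solution the queue would typically be Message Queuing and the worker a serviced component.

```python
import queue
import threading

# Hypothetical order queue: the client call returns as soon as the order
# is accepted; a background worker completes the slow steps afterward.
order_queue = queue.Queue()
confirmations = []

def place_order(order):
    """Synchronous part: validate quickly, enqueue, and return at once."""
    order_queue.put(order)
    return "order accepted"          # the user is not kept waiting

def order_worker():
    """Asynchronous part: authorize payment, order goods, then notify."""
    while True:
        order = order_queue.get()
        if order is None:            # sentinel value stops the worker
            break
        # ...credit card authorization and supplier ordering go here...
        confirmations.append(f"confirmation e-mailed for {order}")
        order_queue.task_done()

worker = threading.Thread(target=order_worker)
worker.start()
ack = place_order("order-1001")      # returns immediately
order_queue.join()                   # wait here only so the demo can verify
order_queue.put(None)
worker.join()
```

Because `place_order` returns before the slow work runs, throughput demand on the business components is smoothed out by the queue rather than by the waiting user.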
- Cache data where it is required. Whenever possible, you should try to cache data at the location where it is required, and therefore minimize the number of remote data requests made to your data store. For example, the e-commerce site described earlier will provide a much higher level of scalability if the product data is cached in the Web site instead of being retrieved from the database each time a user tries to view a list of products.
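A minimal read-through cache for the product list might look like the following Python sketch (the class, loader, and product names are all hypothetical); in ASP.NET the same role is played by the built-in cache.

```python
import time

class ProductCache:
    """Hypothetical read-through cache: go to the database only when the
    cached copy is missing or older than ttl_seconds."""
    def __init__(self, loader, ttl_seconds=300):
        self._loader = loader        # callable that hits the data store
        self._ttl = ttl_seconds
        self._data = None
        self._loaded_at = 0.0
        self.db_hits = 0             # counted for illustration only

    def get_products(self):
        if self._data is None or time.time() - self._loaded_at > self._ttl:
            self._data = self._loader()      # remote database round trip
            self._loaded_at = time.time()
            self.db_hits += 1
        return self._data

cache = ProductCache(loader=lambda: ["widget", "gadget"], ttl_seconds=300)
first = cache.get_products()         # loads from the "database"
second = cache.get_products()        # served from the cache, no db hit
```

The time-to-live keeps the cached copy from going stale indefinitely, which matters because product data, unlike truly static reference data, does change occasionally.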
- Avoid holding state unnecessarily. Where possible, you should design your operations to be stateless. Doing so prevents resource contention, improves data consistency, and allows requests to be load balanced across multiple servers in a farm. On some occasions, state will need to be persisted; for example, a customer's shopping cart must be stored across HTTP requests. In these scenarios, you must plan your state persistence and rehydration logic carefully. You should only rehydrate state when it is actually needed (for example, when a user wants to view their shopping cart or check out).
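The shopping cart example can be sketched as follows, in Python for brevity (the store and user names are invented): each operation rehydrates state from a shared store, so any server in the farm can handle any request.

```python
# Hypothetical state store: the cart lives in shared storage keyed by
# user, standing in for a database or session-state service.
cart_store = {}

def add_to_cart(user_id, item):
    """Stateless operation: load, mutate, save; nothing stays in memory."""
    cart = cart_store.get(user_id, [])   # rehydrate only for this request
    cart.append(item)
    cart_store[user_id] = cart           # persist before returning

def view_cart(user_id):
    """Rehydrate the cart only when the user actually asks to see it."""
    return cart_store.get(user_id, [])

add_to_cart("alice", "book")         # could be handled by server A
add_to_cart("alice", "pen")          # ...and this request by server B
```

Because no server holds the cart between requests, the load balancer is free to route each request anywhere in the farm.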
- Avoid resource contention. Some resources, such as database connections, are limited, and some resources, such as database locks, are exclusive. You should design your application in such a way that resources are held for the shortest possible time. You should use database connection pooling effectively, and you should design operations to open the most contentious resources last (so that they are not held for the entire operation). This is particularly true when using atomic transactions. For example, if the Orders table of a database is used by many parts of the application, you should make the insertion of order data the last step in the ordering process to avoid holding a lock on the table while waiting for credit card authorization.
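The step ordering can be illustrated with this Python sketch (the function names and the log list are invented for illustration): the slow credit card authorization completes before the contended Orders table is ever touched.

```python
# Sketch of sequencing work so the most contentious resource (a lock on
# the Orders table) is held for the shortest possible time.
log = []

def authorize_credit_card(order):
    log.append("authorize")          # slow external call; no db lock held
    return True

def insert_order(order):
    log.append("lock Orders table")  # acquire the exclusive resource last
    log.append("insert row")
    log.append("release lock")       # lock held only for the insert itself

def place_order(order):
    if not authorize_credit_card(order):   # do the slow work first...
        return False
    insert_order(order)                    # ...then touch the hot table
    return True

ok = place_order({"id": 1001})
```

Reversing the two steps would keep the Orders table locked for the entire duration of the authorization call, blocking every other part of the application that uses it.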
- Partition data, resources, and operations. You can spread the load of your application across farms of servers using load balancing technologies such as Network Load Balancing. This allows you to adopt a "scale out" strategy whereby you increase scalability simply by adding more servers to the farm. Scaling out is usually more cost effective than scaling up by adding hardware resources to your servers.
Databases should be scaled up primarily by adding hardware resources, but you can also scale out data by partitioning your database across multiple database servers, with each server assuming responsibility for a subset of the data. Dynamic data routing logic is used in the middle-tier to direct requests to the appropriate database server. For more information about partitioning a SQL Server database, see Chapter 5, "SQL Server Database Design" in the "Internet Data Center Reference Architecture Guide" on TechNet (http://www.microsoft.com/resources/documentation/msa/idc/all/solution/en-us/rag/ragc05.mspx).
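Dynamic data routing can be as simple as a stable hash over the partitioning key, as in this Python sketch (the server names are made up):

```python
import zlib

# Hypothetical dynamic data routing: the middle tier maps each customer
# to the database server that owns that customer's partition.
partition_servers = ["SQL01", "SQL02", "SQL03"]   # invented server names

def route(customer_id):
    """A stable hash of the partitioning key picks the owning server."""
    index = zlib.crc32(customer_id.encode("utf-8")) % len(partition_servers)
    return partition_servers[index]

server = route("cust-42")    # every request for cust-42 hits the same server
```

A real routing layer would usually consult a partition map rather than a bare hash, so that partitions can be moved between servers without rehashing all the data, but the principle is the same.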
Availability
Availability is a measure of the percentage of time your application is able to respond to requests in the way its callers expect. It is generally accepted that even the most robust of applications must occasionally be unavailable, but you should design your application in such a way that the risk of unexpected outages is minimized. For business-critical applications, many organizations aim for "five nines," or 99.999 percent availability, and this level of robustness requires careful planning and design.
Consider the following high availability strategies for application design:
- Avoid single points of failure. In your application design and deployment infrastructure, you should try to avoid having any single component that, if taken offline, would render the application unusable. You can avoid single points of failure in a Web or application farm by using load balancing management software, such as that provided with Microsoft Application Center, which will remove an unresponsive server from a load balanced farm without disrupting the operations of the remaining servers.
You should store business data in data stores (such as databases or queues) that are deployed in failover clusters, so that if a server controlling the data store fails for any reason, the application will "fail over" to the standby server. You should also provide redundant data paths so that there is more than one physical network path to the database server, allowing the application to continue to function in the event of a network cable failure.
To protect the application from hard disk failures, disk redundancy measures such as Redundant Array of Inexpensive Disks (RAID) technologies should be used.
- Use caching and queuing to minimize "same time and place" requirements. Caching read-only reference data where it is needed not only provides improved scalability, but it also reduces reliance on the underlying data store. In the event that the database becomes unavailable, the application can continue to function because the data is still available in the cache.
Similarly, by queuing requests to insert or update data, the application can still service client requests even when the underlying data sources and services are unavailable. This would allow an e-commerce organization to continue taking orders, even though the order data could not be written to the database immediately.
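Store-and-forward can be sketched as follows (Python, with the outage simulated by a flag; all names are invented): orders accepted during the outage are flushed once the database returns.

```python
from collections import deque

# Sketch of store-and-forward: when the database is down, orders wait in
# a queue (a deque here; durably, it would be Message Queuing) instead of
# being rejected.
pending = deque()
database = []
database_available = False           # simulate an outage

def write_order(order):
    if database_available:
        database.append(order)
    else:
        pending.append(order)        # keep taking orders during the outage
    return "order accepted"

def drain_queue():
    """Run when the database comes back: flush queued orders in order."""
    while pending:
        database.append(pending.popleft())

write_order("order-1")               # accepted despite the outage
write_order("order-2")
database_available = True
drain_queue()
```

The caller gets the same "order accepted" response either way; only the durability path differs, which is what keeps the outage invisible to customers.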
- Plan an effective backup strategy. Regardless of the high availability measures in place, you must ensure that you have an effective backup strategy that minimizes the time taken to recover the system to an operable state in the event of a catastrophic failure.
- Rigorously test and debug your code. Of course, you should always test and debug your code, but when high availability is a requirement it is even more important to ensure that you remove any potential infinite loops, memory leaks, or unhandled exceptions that might cause the application to fail or stop responding.
Maintainability
Your application should be designed and deployed in such a way that it can be maintained and repaired easily.
Consider the following recommendations for designing a maintainable application:
- Structure your code in a predictable manner. Keeping your coding techniques consistent throughout the application makes it easier to maintain. You should use a standardized convention for namespace, variable, class, and constant names, consistent array boundaries, and inline comments.
- Isolate frequently changing data and behavior. Encapsulate frequently changing logic and data into separate components that can be updated independently of the rest of the application.
- Use metadata for configuration and program parameters. Storing application configuration data, such as connection strings and environment variables, in external metadata repositories, such as XML configuration files, makes it easy to change these values in the production environment without editing code or recompiling the application. For more information about using metadata, see "Designing the Operational Management Policy" in Chapter 3, "Security, Operational Management, and Communications Policies."
- Use pluggable types. When a certain piece of application logic can be implemented in many ways, it is useful to define an interface and have the application load the correct class that implements the interface at run time. This lets you "plug in" other components that implement the interface after the application has been deployed without having to modify it. You can store fully qualified type names in a configuration store and use them to instantiate objects at run time. When using this approach, you must ensure that your configuration store is adequately secured to prevent an attacker from forcing your application to use a component of his or her own devising.
- Design interfaces around common types. Design your component interfaces so that all public properties and method parameters are of common types. Using common types reduces dependencies between your component and its consumers.
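The metadata and pluggable-type recommendations above combine naturally: store a fully qualified type name in an external configuration file and instantiate it at run time. In .NET this is typically done with reflection (for example, Activator.CreateInstance) on a type name read from a .config file; the Python sketch below shows the same pattern with `importlib` and a JSON file, including an allow-list to address the security caveat noted above (the `ALLOWED_TYPES` set and the `"cache_type"` role name are illustrative assumptions):

```python
import importlib
import json

# Hypothetical allow-list: only types named here may be loaded, so a tampered
# configuration file cannot force the application to run arbitrary code.
ALLOWED_TYPES = {"collections.OrderedDict", "collections.Counter"}

def create_plugin(config_path, role):
    """Instantiate the class whose fully qualified name the config assigns to `role`."""
    with open(config_path) as f:
        config = json.load(f)              # external metadata: change it without recompiling
    type_name = config[role]
    if type_name not in ALLOWED_TYPES:     # secure the pluggable-type mechanism
        raise ValueError(f"type not permitted: {type_name}")
    module_name, _, class_name = type_name.rpartition(".")
    cls = getattr(importlib.import_module(module_name), class_name)
    return cls()                           # "plug in" the configured implementation
```

Swapping the implementation is then a one-line edit to the configuration file in the production environment; no application code changes.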
Security is always a major concern when designing an application, particularly when the application will be exposed to the Web. To a large extent, the decisions you make regarding security will depend on your security policy. Regardless of the specific details of your security policy, you should always consider the following recommendations:
- Evaluate the risks. Take some time during the design of your application to evaluate the risks posed by each implementation or deployment decision. Remember to consider internal risks, as well as those posed by external hackers. For example, you may use secure HTTP connections to prevent a customer's credit card number from being "sniffed" as it is passed to your site over the Internet, but if you then store the credit card number in plain text in your database, you run the risk of an unauthorized employee obtaining it.
- Apply the principle of "least privilege." The principle of least privilege is a standard security design policy that ensures each user account has exactly the level of privilege needed to perform the tasks required of it and no more. For example, if an application only needs to read data from a file, the user account it uses should be assigned Read permission, not Modify or Full Control. No account should have permission to do anything it does not need to do.
- Perform authentication checks at the boundary of each security zone. Authentication should always be performed "at the gate." A user's process should not be allowed to perform any tasks in a given security zone until a valid identity has been established.
- Carefully consider the role of user context in asynchronous business processes. When your application performs business tasks asynchronously, remember that user context is less meaningful than if the task is performed synchronously. You should consider using a "trusted server" model for asynchronous operations, rather than an impersonation/delegation approach.
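The "authenticate at the gate" principle above can be sketched as a guard that every entry point into a security zone invokes before doing any work. The Python sketch below is illustrative only; the token store, the `place_order` operation, and the values in it are hypothetical:

```python
# Hypothetical session store: tokens issued at login, checked "at the gate".
VALID_TOKENS = {"token-42": "alice"}

class AuthenticationError(Exception):
    pass

def require_identity(token):
    """Boundary check: establish a valid identity before any task is performed."""
    if token not in VALID_TOKENS:
        raise AuthenticationError("no valid identity established")
    return VALID_TOKENS[token]

def place_order(token, item):
    """An operation inside the security zone; the gate check comes first."""
    user = require_identity(token)  # reject at the boundary, not deep inside
    return f"order for {item} accepted from {user}"
```

Because every operation in the zone calls the same guard first, an unauthenticated request is rejected at the boundary rather than partway through a business task.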
Your organization's operational management policy will determine the aspects of your application that need to be managed. You should design instrumentation into your application so that it exposes the critical management information needed for health monitoring, service level agreement (SLA) verification, and capacity planning. For a more complete discussion about management of distributed .NET-based applications, see Chapter 3, "Security, Operational Management, and Communications Policies."
Application and service performance is critical to a good user experience and to efficient hardware utilization. Although performance can be improved by tuning the implementation and code of the system after it is built, it is important to give thought to performance at the architecture and design stages. A detailed discussion of profiling is beyond the scope of this guide, but you may want to follow the process below at various stages of application prototyping, development, and testing to make sure that performance goals are being met, or that expectations are reset as early as possible:
1. Define the measurable performance requirements for specific operations (for example, throughput and/or latency under certain utilization, such as "50 requests per second with 70 percent average CPU usage on a specific hardware configuration").
2. Perform performance testing: stress test the system and collect profiling information.
3. Analyze the test results: does the application meet the performance goals?
4. If the application does not meet the performance goals, identify bottlenecks in the application. (For tools that can help you isolate performance bottlenecks, see the articles referenced at the end of this list.)
5. Repeat Step 2 until the performance results meet the goals.
".NET Framework SDK: Enabling Profiling" (http://msdn.microsoft.com/library/default.asp?url=/library/en-us/cpguide/html/cpconenablingprofiling.asp?frame=true)
".NET CLR Profiling Services: Track Your Managed Components to Boost Application Performance," MSDN Magazine, November 2001 (http://msdn.microsoft.com/en-us/magazine/cc301839.aspx)
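The measurable requirements from Step 1 and the measurements from Step 2 can be expressed directly in code. The Python sketch below times a hypothetical operation under repeated calls and checks the measured throughput against a stated goal; the `GOAL_THROUGHPUT` value and the measured operation are assumptions for illustration:

```python
import time

def measure(operation, iterations=1000):
    """Run `operation` repeatedly; return (throughput per second, worst latency in seconds)."""
    latencies = []
    start = time.perf_counter()
    for _ in range(iterations):
        t0 = time.perf_counter()
        operation()
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    return iterations / elapsed, max(latencies)

# Step 1: a measurable requirement, e.g. "at least 50 requests per second".
GOAL_THROUGHPUT = 50.0

def meets_goal(operation):
    """Step 3: compare measured results against the defined requirement."""
    throughput, _ = measure(operation)
    return throughput >= GOAL_THROUGHPUT
```

A real performance test would drive the deployed system under realistic load with a stress-testing tool rather than an in-process loop, but the define-measure-compare cycle is the same.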
Questions? Comments? Suggestions? To give feedback on this guide, please send an e-mail message to email@example.com.