Infrastructure Architectural Capabilities
Senior Architect, Microsoft Corporation
Microsoft Solutions Framework (MSF)
Enterprise Architecture (EA) Frameworks
Summary: This paper attempts to provide a presentation model, called "infrastructure capabilities," and a broad process model to help better apply existing patterns and frameworks to customers' infrastructures. (33 printed pages)
Introduction to Patterns
Building Capabilities Within the Infrastructure
Infrastructure Solution Lifecycle
Documenting Infrastructure Capabilities - Difference Architecture
Aligning to Other Methodologies
A Real World Example
This paper attempts to provide a presentation model, called "infrastructure capabilities," and a broad process model to help better apply existing patterns and frameworks to customers' infrastructures.
What are the capabilities of an organization's infrastructure? This whitepaper explores that question by introducing the concept of the continuing capabilities within any company's infrastructure.
A secondary high level goal of this paper is to explain the communication layer between the published Microsoft Windows Server System Reference Architecture (WSSRA, formerly called Microsoft Solutions Architecture [MSA]) and difference architectures. A difference architecture takes a known reference architecture as its baseline (in this case WSSRA) and documents the variance from the reference architecture. We present this layer in a structured approach using infrastructure capabilities. The layers will include the concept of a difference architecture document that will be based on the various infrastructure patterns and capabilities of the WSSRA.
This document grew out of conversations with several infrastructure architects about what they do and how they interact with their organizations. One example was an Infrastructure Architect (IA) describing his organization's desire to deploy Wi-Fi technologies. A solution architect wished to create a wireless infrastructure to support a customer relationship management (CRM) application. At the same time, the organization was rolling out a massive video-on-demand system to every desktop. The architect considered the Wi-Fi request and evaluated it against the overall capabilities of the infrastructure. He concluded that implementing Wi-Fi at that particular time would be significantly more expensive for the business than waiting and implementing Wi-Fi after building the video-on-demand infrastructure. The IA then went back to the solution architect to determine the incremental revenue the CRM solution would generate. It was quickly determined that the cost of deploying the solution immediately was greater than the incremental revenue to be generated, so the CRM solution that required Wi-Fi was scheduled for after the completion of the video-on-demand system. This incident was the basis for developing the infrastructure capabilities concept.
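The IA's comparison in that anecdote is, at bottom, simple arithmetic. The following sketch illustrates the shape of the decision; the `deployment_decision` helper and all figures are hypothetical, invented for illustration:

```python
def deployment_decision(cost_now, cost_after_dependency, incremental_revenue):
    """Compare the cost of deploying immediately against deferring until a
    larger in-flight project (here, video-on-demand) makes the build cheaper.

    Hypothetical decision rule: deploy now only if the incremental revenue
    exceeds today's cost; defer if it only exceeds the post-dependency cost.
    """
    if incremental_revenue > cost_now:
        return "deploy now"
    if incremental_revenue > cost_after_dependency:
        return "defer until dependency completes"
    return "do not deploy"


# Placeholder figures echoing the Wi-Fi/CRM example: deploying now costs more
# than the revenue it generates, but deploying later does not.
print(deployment_decision(500_000, 150_000, 300_000))
```

This is only a caricature of the real evaluation, which also weighs risk, timing, and softer ROI factors discussed later in this paper.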
As we head down the path of developing infrastructure capabilities, we must examine the broader concept of architectural patterns. Patterns have been slow starters in the infrastructure world to date. This problem exists for a number of reasons, not the least of which is the way infrastructures have grown. Having started many years ago as communications infrastructures supporting mainframe applications, the infrastructure of virtually every company has since been reshaped by the client-server revolution. From messaging to directory services, infrastructure services for business applications and end-users continue to expand. Frameworks, patterns, and reference architectures abound in the infrastructure space. Lean, TOGAF, Zachman, and others all have components of their frameworks that apply directly to infrastructure. Patterns in this space have so far focused on broader initiatives within IT such as information security or application development.
Microsoft Corporation builds and sells world-class software. The product groups build and deliver reference architectures (WSSRA) for customers to leverage in their solutions. Microsoft also has a consulting and support business that interacts with its customers. The dilemma for infrastructure architects working in the Microsoft space is the balancing act in which they live, between the product and support groups, when building a solution.
Services organizations also face this problem. These organizations deliver solutions based on guidance from the product groups (best practice) but also provide additional guidance to customers (supportability). At times, solutions that do not adhere to the product group's guidance may still be supported by the services organizations. That disparity, the separation between "supportability" and "guidance," is a core component in the delivery of the infrastructure capability concept.
The infrastructure capabilities process will benefit infrastructure professionals in several ways:
- Create a common taxonomy for the definition of solutions within an organization:
- Providing a clear understanding of what's required for the solution.
- Making it easy to map existing solutions to newly deployed technology.
- Instantiate a framework based on WSSRA to deliver infrastructure solutions:
- Providing consistent deliverables that map the differences between patterns and the deployed solution.
- Providing frameworks and toolkits to build solutions.
- Deliver cost-effective solutions:
- Presenting a framework (capabilities) that allows for the easy creation of difference architectures mapped to reference architectures.
- Providing a means to decrease technology deployment costs.
An issue in the industry today is the separation of the "infrastructure" from the solution. Organizations have separated the two, sometimes even to the point of infrastructure becoming a capital expenditure and solutions becoming one-off processes. This makes the solution a business benefit and the infrastructure a business cost. This artificial chasm creates more noise in an organization than is required. Frequently, new components of the infrastructure (for example, Wi-Fi) are buried in a solution (sales force automation). Breaking that solution out into its component technologies, we can see a list of core deliverable technologies that looks something like this:
- Standard file server solutions.
- Security services (such as firewalls and intrusion detection systems [IDS]).
- User provisioning services—There is considerable argument in the computer industry today about user provisioning (creation) and the lifecycle management of users (user retirement services). For the purposes of this paper, we deal only with the automated creation and removal of user objects from the various directory structures.
- User retirement services.
- Explicit and tacit knowledge systems.
- Communication and collaboration services.
- Business process.
- Business enablement.
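To make the scope of "automated creation and removal of user objects" from the list above concrete, here is a minimal sketch. `DirectoryStore` is a toy in-memory stand-in invented for illustration, not a real directory (LDAP/Active Directory) API:

```python
class DirectoryStore:
    """Toy in-memory directory illustrating the provisioning lifecycle:
    provision (create), retire (disable), and remove (delete)."""

    def __init__(self):
        self.users = {}

    def provision(self, user_id, attributes):
        # Automated creation of a user object with its initial attributes.
        if user_id in self.users:
            raise ValueError(f"{user_id} already provisioned")
        self.users[user_id] = dict(attributes, status="active")

    def retire(self, user_id):
        # Retirement disables the object before its eventual removal.
        self.users[user_id]["status"] = "disabled"

    def remove(self, user_id):
        # Final removal of the user object from the directory structure.
        del self.users[user_id]
```

A real implementation would, of course, span multiple directory structures and drive workflow approvals; this sketch only fixes the vocabulary used in this paper.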
As organizations consider developing their infrastructure capabilities, they must consider the reuse of components within a structured architecture framework. This is the thrust of Service-Oriented Architecture (SOA). The goal of this paper is to build that reuse into a clear, concise view of both the solution and the required infrastructure from a capabilities perspective. For example, a voicemail solution may require few capabilities from the deployed architecture, but may include hooks for later enhancements that require significant consideration, such as unified mailboxes.
Building the required infrastructure requires a baseline and a reference architecture. Rather than build our own reference architecture, the team decided to use the already published WSSRA for this paper. WSSRA is a published blueprint developed by the Microsoft Windows product group.
This reference architecture includes security, storage, network, application, and management components and provides the baseline for developing the more formalized solutions around communication and collaboration, identity management, and other services. Additional WSSRA solutions (Communication and Collaboration being an example) are in the works and will continue to be released.
Introduction to Infrastructure Capabilities
What are infrastructure capabilities? They are the potential of the infrastructure within an organization: the ability of the infrastructure to provide varied and increased services to the various solutions deployed for the business. They represent a communications layer between a reference architecture (WSSRA) and the resulting difference architecture for an organization. Ultimately, infrastructure patterns are a baseline in the creation of an infrastructure-based SOA. The underlying concept of this whitepaper, infrastructure capabilities, is to provide common ground between SOA and infrastructure projects so that any one of a number of different delivery frameworks can be leveraged. These frameworks include Zachman, MSF, TOGAF, and others.
This difference architecture will:
- Define patterns and their application to infrastructure.
- Demonstrate the value both to infrastructure and application development professionals.
- Create the alignment between the WSSRA design patterns and the customer's deployed scenario:
- Define infrastructure capabilities.
- Map out the discovery process for adding new solutions.
- Define and map the process of applying the infrastructure capabilities to existing organizations.
- Develop an easily-reusable process for developing difference architectures:
- Clearly define a difference architecture.
- Clearly define the components of a difference architecture.
WSSRA represents the framework reference architecture for Microsoft Technology solutions. In building patterns against the reference architecture, the solution is then linked to the verified architecture solution developed by Microsoft.
This document is aimed at the Infrastructure Architect (IA). An IA is a professional who provides the glue between the desired state (to be) and the current state (as is) in the solution process. The industry defines IAs (as compiled from searches of Monster and other job postings with the title "Infrastructure Architect") as: "Responsible for researching, comparing, cost-justifying, recommending, and establishing current and future hardware and software architectures for all aspects of information technology, from networks up through operating systems and shared software services."
In interviewing and speaking with dozens of IAs, you quickly see that no two are alike, and no two have the same job. A definition bandied about, that an IA is "the person who can talk to anyone, about anything," seems the best description of all.
Many organizations do not have an infrastructure architect on their architecture teams. In that case, the role-specific information here is meant for the broader infrastructure team.
Others will leverage this information, including consultants seeking to help customers achieve their overall business goals.
Strategic advisors will also find value in this document.
Patterns are a core component of the capabilities within the infrastructure. They are repeatable processes and procedures that represent the design components of various solutions. To date, patterns have been slow starters in the infrastructure world. They exist today mostly in the form of vendor patterns; examples include the Rational Unified Process (RUP) and Microsoft's Patterns and Practices (formerly PAG). Infrastructure capabilities represent a new way of looking at patterns within the infrastructure and a key component for using them. Capabilities give us a more "services" view of the infrastructure and its patterns. This services view can be mapped to an almost "SOA" view of infrastructure, with reusable components. This section is intended as a high-level overview just to define patterns in our context.
What are patterns? Some would call them the building blocks of a solution. These patterns or building blocks represent the instantiated components of the infrastructure that are built, either for a solution that is to be added to the infrastructure, or by a planned or vendor-based upgrade.
The old consulting adage about infrastructure is that it represents the streets and sewers of a city. Within an organization, the infrastructure is often forgotten simply because it appears to be nothing more than the background noise of the organizational dial tone. This means that at times people do not consider the impact of adding new applications to the infrastructure. It's a set-and-forget component of the organization that is not frequently considered in the process of identifying and deploying new solutions. However, sometimes the net new value provided by an application can be less than the impact of the application on the infrastructure. In this sense, the infrastructure becomes a gate within the organization. Sometimes the gate becomes more like a roadblock, prohibiting any traffic flow and causing the new solution to fail because of the limitations inherent in the infrastructure. Balancing the requirements of the new with the reality of the deployed is the role of patterns within the infrastructure.
There are many SOA and other architectural theories that build off the base of an organization's existing infrastructure. If we build on the metaphors they present in their models, we begin to see that the process of communication requires a messenger and a common language in which to communicate the message. Once an application turns over its "message" to the infrastructure, the subsequent movement of that message and its data becomes the critical component. This message can be as simple as an LDAP bind request, or as complex as using specific attributes of the schema assigned to an application. The goal of this document is to build out the overall communication layer (infrastructure capabilities in Figure 1) to show both how the WSSRA exists as the foundation or baseline of the deployed solution, and also how it is communicated to the solution by the capabilities map.
A capabilities map, while sounding simple, is actually very complex. Capabilities maps represent the current possible and potential possible features within the business. Infrastructure capabilities then represent the potential capabilities or planned capabilities within the infrastructure mapped to the business. This mapping allows solution architects and SOA teams within the organization to see when and how their application can fit into the capabilities of the infrastructure. If a feature is not in the capabilities map, the IA can implement the processes around change management to add the missing capability. Capabilities are simply the things that infrastructure can do for the business. This might include LDAP binds, authentication/authorization, groups, messaging, collaboration components, and data storage. Or it may be as complex as a tight-coupling of the phone system with the messaging and other infrastructure. What an infrastructure can do defines its capabilities.
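One illustrative way to think about a capabilities map is as a lookup of capability states: deployed, planned, or absent. The sketch below uses invented capability names and a hypothetical `solution_fit` helper; a real map would be driven by the organization's inventory and change-management records:

```python
# Hypothetical capability states for an organization. "deployed" means the
# infrastructure offers it today; "planned" means it is on the roadmap.
capability_map = {
    "ldap-bind": "deployed",
    "authentication": "deployed",
    "messaging": "deployed",
    "wifi": "planned",
}

def solution_fit(required):
    """Classify a solution's required capabilities against the map.

    Anything absent from the map is a candidate for the change-management
    process the IA uses to add missing capabilities.
    """
    fit = {"available": [], "planned": [], "missing": []}
    for cap in required:
        state = capability_map.get(cap)
        if state == "deployed":
            fit["available"].append(cap)
        elif state == "planned":
            fit["planned"].append(cap)
        else:
            fit["missing"].append(cap)
    return fit
```

Running `solution_fit(["ldap-bind", "wifi", "unified-mailbox"])` would show the solution architect immediately which requirements the infrastructure meets, which are scheduled, and which must go through change management.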
Figure 1. Infrastructure capabilities within the framework
While Figure 1 lays out a framework, it is not all-inclusive. It's conceivable that a small or medium-sized business will not be interested in building a solution that maps to a significant portion of the WSSRA framework. As such, its difference architecture will be larger than that of an enterprise customer who has adopted the majority of "suggestions" in the WSSRA framework. The difference architecture takes the reference and maps it to the "as is" or current state of the infrastructure.
Patterns represent repeatable processes within a solution, the molds and components of the solution. Defining patterns as components of a solution allows us to provide a better view of how patterns map to solutions and how solutions map into the existing infrastructure. For example, in building construction, windows are frequently the same shape and size. This represents a building pattern. Tile flooring is another excellent example of a pattern, in which the contractor takes two known patterns, the size of the tile and the size of the floor, and maps them together with grout to provide the solution: a new tile floor (Figure 2).
Figure 2. Flooring tiles are a good metaphor for patterns
Within technology there are also many patterns. Web services, collaboration services, and search services all represent patterns. Building a set of guiding principles or anchors for the eventual development of infrastructure patterns, which can then be pushed together into a unified form to present a solution, yields a consistent, repeatable set of patterns and a service structure that can be easily:
- Controlled/Measured (just as with the tile floor, the parts are measured, as is the space they will fill).
- Documented (we only document the variances from the pattern).
- Managed (the patterns provide broad links into management solutions).
Patterns provide us with the basic components of the solution. For example, a customer may ask for a collaboration solution. A considerable difficulty in building out infrastructure patterns is the reality that infrastructure solutions mean different things to different organizations. Collaboration can mean just messaging solutions (Exchange) or can be as complex as non-linear routed workflow with tacit and explicit knowledge systems. The quick answer from a technologist would be "let me help you deploy Microsoft Exchange." The longer answer would be to examine the "requirements" of the business that drove the request for a collaboration solution. To bring the various concepts together, we will begin by building a common taxonomy for infrastructure solutions by providing a more consistent view of the people, processes, and technologies of the infrastructure.
Going back to the collaboration example started earlier, an examination of the business requirements would begin simply with the question "what do you mean by collaboration?" In determining the business requirements, the infrastructure architect can qualify the requirements of the organization against any new solution. This initial line of questioning would allow us to draw out the required patterns for the solution. In our example the customer has very complex collaboration requirements:
- A solution that includes team workspaces allowing for early intellectual capital (IC) creation and development.
- An authoritative system for IC that allows them to publish guidance and standards in their organization.
- A system that allows them to capture and share tacit knowledge.
- The ability to easily manage task assignment, completion, and status.
- Finally, they need to integrate all of this into a routing system that includes workflow, calendaring, and support for formal and informal meetings.
Based on this you can see that the first technologist's answer of "Exchange" would not have met the needs for a collaborative solution. Microsoft Exchange would have met the criteria as a component, but not as the solution. In this case, Exchange might be the first pattern deployed so that the other systems would have the routing system required. Microsoft SharePoint Portal Server, BizTalk, LCS, and Live Meeting would also be necessary to complete the solution.
Patterns represent the common use and application of the components of a solution. They are the architect's palette, used to map the business requirement "collaboration" to broader technology-based solutions or "collaboration technologies." For example, in designing a mail solution that meets the requirements of the customer mentioned previously, there are components that must be in place. These components include:
- Directory Services:
- Lookup Services
- Authentication services
- Authorization services
- Network services:
- Name resolution
- "Dial tone" carrier
- Firewall services
- Communication and Collaboration Infrastructure:
- Other patterns
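The component list above can be sketched as a simple data structure. The grouping and the `required_patterns` helper below are illustrative only; a real mail solution would carry far more detail per component:

```python
# The mail solution's pattern groups, following the list above. Names are
# paraphrased from the text, not formal WSSRA identifiers.
mail_solution_patterns = {
    "directory-services": ["lookup", "authentication", "authorization"],
    "network-services": ["name-resolution", "dial-tone-carrier", "firewall"],
    "communication-collaboration": ["messaging-routing"],
}

def required_patterns(solution):
    """Flatten a solution's pattern groups into the full component list,
    the set the IA must verify against the deployed infrastructure."""
    return [component for group in solution.values() for component in group]
```

Flattening the groups this way gives the IA a single checklist to compare against the capabilities map when qualifying the solution.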
Bridging from Business to Technology
From an infrastructure perspective there are two solution maps that build into the required core capabilities. The first is the business-driven solution, which may not contain technology at all. The second is the technical solution, which does not always align directly with the business. Infrastructure capabilities allow the technology and business solutions to coexist in a planned, managed environment that enables the development and deployment of both.
The business solutions drive the technology solutions. The technology solution comprises something that end-users will leverage. Underneath the technology solution there can be one or many technology patterns. Each of these technology patterns represents a series of pieces within the solution that map to business or technology solutions. This definition of an overall solution fits into the infrastructure architecture project lifecycle very nicely.
When we present the patterns of the solution we see the larger overall pattern that emerges. This larger pattern maps the overall solutions in process within the organization to a larger infrastructure process that we call the infrastructure capabilities. This larger process helps us organize the response to a requested feature that is added to the infrastructure. It also aids in process planning and feature justification, as the entire infrastructure can be presented as the foundation for a new solution.
Figure 3. Solutions applied to the infrastructure
The initial mapping of Figure 3 provides us with a broad structure (controlled/measured) of how the solution fits within the organization. Each component of the broader communication and collaboration solution has a function and interaction with the broader solution of the organization's infrastructure. This interaction is measured and controlled by its interaction with the infrastructure. For example, the project server requires the network, authentication/authorization, and naming processes in order to complete its "higher level function." It's almost the infrastructure version of the Open Systems Interconnection (OSI) layers.
Figure 4. The crossroads of infrastructure
This brings us to the crossroads of infrastructure: required functions and features, meeting planned and deployed features and functions. The resulting traffic jam slows deployments and moves the infrastructure team out of the "enabler" position and into the "delayer" role. Organizations need a process that will help paint the picture of what is needed, while helping the business see clearly what is already deployed. How many solutions can be deployed if we know the common patterns within the organization? We need a single, reusable view of the infrastructure that will allow us to build our solution set against those known resources. This is the realm of the difference architecture and infrastructure capabilities.
It should be noted that infrastructure capabilities in and of themselves are definitions and extensions of other well-known concepts and processes. These include connections to the business modeling processes of Motion, mapping infrastructure to the overall business architecture and requirements. Infrastructure architects, consulting organizations, and vendors often work with customers and their own internal staffs to develop a "to be" state, a vision of what the infrastructure will look like at some point in the future. At times this vision or future state is far too near-term (less than 6 months out). Capabilities mapping provides a longer-term view of what needs to be added to the existing infrastructure capabilities to ensure that the business and IT staff can continue to benefit from a robust and successful computing environment. The other side of this equation is the development of the current capabilities map within the infrastructure (the as is state). How we handle the gulf between to be and as is becomes the difference architecture for planning change within the organization.
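Conceptually, the difference architecture is the variance between the reference ("to be") and deployed ("as is") states, which can be sketched as a set difference. Component names below are invented for illustration and are not actual WSSRA terminology:

```python
def difference_architecture(reference, deployed):
    """Document the variance from a reference architecture: components the
    reference specifies but the site lacks, and components the site runs
    that the reference does not cover."""
    return {
        "missing_from_deployment": sorted(set(reference) - set(deployed)),
        "beyond_reference": sorted(set(deployed) - set(reference)),
    }


# Hypothetical example: a site that has not yet deployed two reference
# components and still runs one legacy system outside the reference.
diff = difference_architecture(
    reference=["dns", "dhcp", "firewall", "management"],
    deployed=["dns", "dhcp", "legacy-fax-gateway"],
)
```

A real difference architecture document is prose and diagrams rather than sets, but the underlying bookkeeping, what varies in each direction from the baseline, is exactly this.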
This conceptual view of infrastructure capabilities includes two components. The first component is the formal process by which components are added to the infrastructure of an organization; its typical steps are walked through in the sections that follow.
Developing a Conceptual Infrastructure Capability Map
The first step in the process is mapping the actual capabilities of the infrastructure. The infrastructure capabilities are stable in the sense that they are a constant, but, as discussed later, they change over time. These two views are the to be and as is architectural states. An initial view of the infrastructure's capabilities would represent a set of core services or components that are easily leveraged within the organization. This linkage to SOA represents the capabilities mapping to the broader SOA architecture initiatives. SOA builds reusable components within an application structure or solution. Infrastructure capabilities represent the components of the infrastructure that also fit an SOA model (reusable components). This can be represented as a visual mapping of the existing WSSRA components deployed. It could also easily be represented as a difference architecture showing the mapping between what is deployed and the reference architecture going forward.
Figure 5. Initial capabilities map of the infrastructure
Our initial build shows the infrastructure as a single segment (Figure 5), with multiple points of connection. These connections can be simplified into a series of services presented along the infrastructure. These services include the creation of IDs, which feeds the authentication/authorization systems for the organization.
The new solution may require leveraging components of the existing infrastructure. A concept presented in this document is the idea that infrastructure has two core buckets: business process and business enablement. These definitions were created for this document to simplify the view of the infrastructure and are not supported by any external definitions. Business process represents the bucket that is transient and may exist for only one solution. It represents the core changing area of the infrastructure where all new solutions are added. Business enablement represents those components that are consistent and are shared by many processes (a data warehouse or mail system, for example).
As we move to the next tier we see that the processes of the initial view are actually separate and distinct. We also add on the broader process area of business process (Figure 6).
Figure 6. Layering process (adding business process)
Figure 7 introduces business enablement. This might include the directory, SQL Server farms, or Web farms. We see the new solution deployed in the infrastructure, which also leverages some of the existing business enablement components.
Figure 7. Adding business enablement
Later in this document we will discuss the issue around moving from being a new solution to becoming part of the business enablement bucket.
In Figure 8 we iterate the capabilities map to reflect the logical view. This creates a link between the capabilities and the customer's existing infrastructure.
Figure 8. Infrastructure capabilities
Now we see a linear flow across the various business components of the infrastructure. This process flow moves us from the conceptual idea of moving data points, users, and applications through the infrastructure to a broader concept of the infrastructure serving as a base component of the user and application process.
In Figure 9 we break out a specific component of the capabilities map, the area of network services. The infrastructure of the organization supports the how and why of users accessing information. Mapping this shows the clear connection between network services such as authentication/authorization and the broader infrastructure capabilities.
The primary purpose of a firewall solution is to prevent external, unauthorized users from accessing the authentication and authorization systems within the organization. As such, it represents the process whereby a user or application accesses the authentication/authorization system. File and print services connect the rest of the solution. File servers sit between applications and users and are accessed from both sides. Applications use file storage for data and printing. Services such as collaboration and communication use file storage and file servers as access and transit points. End users store information and interact with the file servers directly as well. Now it becomes easy to represent the last layer that is not currently detailed: network services fit into the connection bus whereby users and applications request authentication/authorization from the infrastructure.
Figure 9. Network layer
This simplifies the conceptual requirements for authentication/authorization and infrastructure services. Under this layer are the more standardized hardware components that comprise the network (routers, concentrators, switches, and so on), which can be layered onto the name resolution services to provide the first tier of essential services. Regardless of application type and platform, applications will leverage the core services of the network to acquire data. This reflects a layered approach much like OSI, in the sense that each layer depends on the others, provides information to the layers above and below it, and all are required together to deliver the requested solution.
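That OSI-like dependency can be sketched as an ordered list in which serving any layer requires every layer beneath it to function. The layer names here are illustrative, not WSSRA terminology:

```python
# Ordered from the bottom of the stack upward; each layer depends on all
# the layers before it, echoing the OSI-style dependency described above.
layers = [
    "physical-network",   # routers, concentrators, switches
    "name-resolution",
    "authentication",
    "application-data",
]

def service_path(target):
    """Return every layer that must be functional for `target` to be served."""
    index = layers.index(target)
    return layers[: index + 1]
```

For example, serving an authentication request requires the physical network and name resolution to be working first, which is exactly why an outage low in the stack surfaces as failures everywhere above it.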
Figure 10. Management, Operations, and Security
In Figure 10 we layer two additional services that are provided by the infrastructure. The first is security in its two primary forms, physical and application security. The second is enterprise management and monitoring. Both of these are overlay functions in that they touch every component of the infrastructure. Transient (one-time-only) solutions as well as permanent solutions require management and security. These two components provide direct services across the entire infrastructure rather than at any one entry point. For example, a critical piece of the infrastructure cannot be functional without user provisioning and identity management (IDM) services. IDM provides the key framework against which new user accounts can be created or, for that matter, deprovisioned.
Part of the thinking around infrastructure capabilities is to generate technology-specific patterns. For a pattern to be useful it has to map against organizational standards. Management, based on the IT Infrastructure Library (ITIL) or Microsoft Operations Framework (MOF), along with monitoring, provides a framework for service guarantees.
If we then take the concept of the infrastructure capabilities one step further we can consider what this infrastructure would look like outside of the theoretical. The logical and conceptual presentations give us a basis on which to determine how we would work with applications and other solutions within the organization. The first component, before we work with end-users, is to present the infrastructure to application architects. They will have specific requests in the areas of authentication/authorization and interactions with the physical network including bandwidth and routing/name resolution.
Figure 11. Infrastructure architecture process
Similar to Microsoft Solutions Framework (MSF), the process an IA undergoes is one of assessment, evaluation, and then planning/designing the solution. From the initial presentation to the solution developer to the more complex Total Cost of Ownership (TCO) calculations, this linear model helps the IA justify the new components of the solution for the business.
Figure 12. Application developers' view of the infrastructure
It should be noted that in many organizations, version one of an application is built by solutions architects. At some point in the lifecycle of an application, the application reaches a point where it may be absorbed into the infrastructure going forward. Where this cutoff exists (going from solution to a part of the infrastructure) is discussed later in this paper. Hence the models rely heavily on the initial operations and management plan built prior to releasing the required solution. The critical piece of the infrastructure capabilities is the process provided for the infrastructure architect. The process is not new and most infrastructure architects today use these steps, although perhaps not with these names:
Figure 13. Cause and effect of "proposed solution"
Walking a solution through this process allows the architect to gather the required information. There are many processes (such as Lean or the Microsoft-created Motion) that cover this. The TCO models are as varied as the information gathering models and include models from Forrester, Gartner, and other analysts. The TCO process has to consider short-term physical TCO as well as the softer return on investment (ROI) numbers. Any solution deployed in an infrastructure will realize its ROI over some period of time. The issue is mapping the overall TCO to that ROI. As in the example in the Introduction, the IA has to balance what the business will accept against what the solution requires or is proposing.
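To make the TCO/ROI balance concrete, the following sketch computes a payback period for two deployment options, echoing the Wi-Fi-for-CRM scenario from the introduction. All figures, and the simple linear cost model, are hypothetical assumptions for illustration, not output of any analyst's TCO model.

```python
# Hypothetical TCO/ROI balance check. All numbers are illustrative assumptions.
def payback_months(upfront_cost, monthly_operations_cost, monthly_incremental_revenue):
    """Months until cumulative incremental revenue covers cumulative TCO.

    Returns None if the solution never pays back.
    """
    net_monthly = monthly_incremental_revenue - monthly_operations_cost
    if net_monthly <= 0:
        return None  # ongoing costs meet or exceed the revenue: never pays back
    # Smallest whole number of months m such that m * net_monthly >= upfront_cost
    return -(-upfront_cost // net_monthly)

# Wi-Fi built standalone now vs. piggybacked on the video-on-demand cabling later.
deploy_now = payback_months(500_000, 20_000, 30_000)
deploy_later = payback_months(150_000, 20_000, 30_000)
```

With these made-up numbers, deploying now pays back in 50 months versus 15 months for the deferred build, which is the kind of comparison the IA in the introduction put in front of the solution architect.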
We do not have a preferred model for TCO. Microsoft uses the Motion framework for information gathering and process mapping within our strategic consulting offerings. For developing infrastructure capabilities the Motion framework is the preferred information gathering system.
Mapping the Solution to the Infrastructure
Figure 14. Mapping the solution to the infrastructure
Once the data is gathered regarding the specific requirements of the solution, the architect can map this solution to the infrastructure itself. In this scenario the infrastructure capabilities map represents the "offerings" of the infrastructure. This may be a short-term view (90 days) or a long-term view (5 years) of the potential functionality and features that may be available.
Assessment is where the application owner/architect sits down with the infrastructure capabilities owner (typically the IA) to assess what the specific application needs. This process is represented by the cause and effect flow of Figure 13. The infrastructure architect then maps this requirement against the existing requirements (pending) and the existing installed solutions. This maps to the evaluation phase outlined in Figure 14. This allows for the final evaluation process, which would include risk analysis and benefit/risk tradeoff discussions.
If the application requires no additional infrastructure pieces, it would then continue on, leveraging the existing organizational standards. These standards may include how to connect (bind) to an LDAP source, or how to request information from a directory server. The other side of the assessment and evaluation is the development of new solutions. A new solution requires validation against the goals of the business (assessment and evaluation) as well as a carefully planned implementation. For example, an application requiring a simple schema change may not have a significant impact on the business and as such may have a simple Assessment, Evaluation, and Plan/Design process. A more complex application that forces additional workloads onto the directory servers may require a more complex evaluation process as well as a more formalized design component. A sample evaluation follows:
Table 1. Functionality Evaluation
| Functionality Requested | Benefit to Business | Cost to Business | Timeline for Implementation |
| --- | --- | --- | --- |
| Additional utilization requirements for directory servers | Additional directory servers will spread the load over a greater number of domain controllers. | There are hardware, software, and operations costs for additional directory servers that must be considered. | Because additional domain controllers may require additional network and other larger implementation work, this may be placed into a broader infrastructure deployment planning process. |
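The assessment step described above, splitting a solution's requirements into what the infrastructure already provides, what is pending, and what is genuinely new work, can be sketched with simple set operations. The capability names here are hypothetical placeholders, not items from any published capabilities map.

```python
# Sketch of the assessment step: map a solution's required capabilities against
# what is already installed or pending. Capability names are hypothetical.
installed = {"ldap-bind", "dns-resolution", "smtp-relay"}
pending = {"wifi-access"}  # approved for the infrastructure but not yet deployed

def assess(required):
    """Split a solution's requirements into satisfied, scheduled, and new work."""
    satisfied = required & installed            # covered by organizational standards
    scheduled = (required - installed) & pending  # already on the capabilities timeline
    new_work = required - installed - pending   # triggers Assessment/Evaluation/Plan
    return satisfied, scheduled, new_work

satisfied, scheduled, new_work = assess({"ldap-bind", "wifi-access", "schema-change"})
```

Only the `new_work` set would enter the more formalized evaluation and design process sampled in the table above.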
Infrastructure "Capability" Timeline
Upon the completion of the broader process, the application will be mapped against both the organization's change process and its functionality timeline. The change process maps to the business and organization change requirements and would be the initial component of any change to the infrastructure.
Once the change management process has been completed, the solution and its requirements can be mapped against the broader infrastructure requirements. There are three pieces to this overall consideration:
- How long before this new application becomes part of the infrastructure solution within the organization?
- What functionality does this new solution offer and do other applications require this functionality?
- Is the change requirement significant (such as developing a new LDAP directory structure), or minor (such as a schema change)?
Figure 15. Infrastructure Planning Timeline
Depending upon the overall requirements of the solution, moving through the three phases of the infrastructure capabilities analysis may take time. When this planning process is complete, the infrastructure architect has the first pass at the "infrastructure capabilities schedule." The first value proposition is that solution architects within the business can consult the schedule before requesting new functionality. By checking the timeline, they can ascertain whether the functionality they require is already a planned addition.
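A minimal sketch of that schedule lookup follows. The capability names and dates are invented for illustration; the point is only that a solution architect can query the published timeline before raising a new infrastructure request.

```python
from datetime import date

# Hypothetical "infrastructure capabilities schedule": capability -> planned availability.
schedule = {
    "wifi-access": date(2006, 3, 1),    # e.g. after the video-on-demand build-out
    "unified-inbox": date(2006, 9, 1),
}

def check_capability(name, needed_by):
    """Tell a solution architect whether a required capability is planned in time."""
    planned = schedule.get(name)
    if planned is None:
        return "not planned: submit for assessment/evaluation"
    if planned <= needed_by:
        return f"planned for {planned}: no new infrastructure work required"
    return f"planned for {planned}: reschedule the solution or fund early delivery"
```

The third branch is the situation from the introduction: the capability is coming, but not soon enough, so the business must weigh incremental revenue against the cost of delivering it early.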
Once this Evaluation phase is completed we move into the more formal Plan and Design phase. This process includes compiling the evaluation information and applying this across the projected requirements of the infrastructure.
So let's take the scenario of the collaboration solution presented in the earlier section of this document. That solution needs three core infrastructure patterns as well as the baseline authentication/authorization pattern. How does that map to the infrastructure capabilities?
Figure 16. A communications and collaboration solution
With this model we take a single business process, communication and collaboration, and explode its section of the infrastructure capabilities. Communication and collaboration is by default a component of the infrastructure capabilities. In this scenario, however, we build out the dependencies on the various other infrastructure components to show the solution. For example, in a standard user provisioning scenario the system may require IDs in various directories. In a team space or on a project server, the provisioning tool may also pre-provision Web sites and distribution lists.
Figure 17. Enterprise view of the infrastructure
From Figure 17 we now see the process of building and applying the infrastructure from a linear perspective.
Figure 18. Solution Lifecycle
A solution, once deployed, offers new features that can be added into later upgrades of the infrastructure itself. Once these "features" become requirements of other applications, that component of the original solution (or the solution itself) becomes part of the infrastructure. This moment, whether at version 1.1 or version 14 of an application, is when the application moves from the business process bucket into the business enablement bucket.
Figure 19. Solution becoming part of the infrastructure
In Figure 19 we see version one deployed in the business process space. Version 2.0 of the solution has some components that are reused by other applications and some that are not shared resources. Finally, the solution becomes an infrastructure component. A solution can stop at any of these stages and advance no further.
When we look at a reference architecture as a component of defining the capabilities of the infrastructure, what is it that we document? Consulting organizations in the past often provided customers with "thud" architectures: templates that, once the customer's specific data was included, made a loud thud when striking a desk. A thud-factor architecture more often than not was simply too long to read, so it sat in someone's office like a trophy: "There's my Exchange architecture, it's 1,000 pages!" Our goal with mapping infrastructure capabilities is to move from the understood and desired to the known. This process relies heavily on the Motion framework on the front end as the business information gathering process. We then map the requirements of the business against what is deployed in the infrastructure today and what the reference architecture recommends, and compare the two.
For this process, a difference architecture is a document that details the differences between the designed solution and the reference architecture used as its baseline. Essentially, it is the required infrastructure capabilities of a solution mapped against the reference architecture and delivered to the customer. This document will, as an architecture document should, grow over the course of the phases of the delivered solution.
Figure 20. Laying out the initial phase
Difference architectures are an architectural tool built against a reference architecture. While we would like to believe that all customers deploy all technologies in the "actualization" of a reference architecture, that is simply not possible. A new company might in fact build and deploy its infrastructure wholly on a reference architecture; even then, the second iteration of that infrastructure would include variances from it. Companies that have grown their infrastructure organically may be very far from the projected reference architecture. This brings us to the why of a reference architecture and the variance documentation process that we call difference architecture.
The first reason is actually quite simple. If we deploy a reference-based architecture for a customer, we can leverage the reference architecture to provide the customer with a blueprint from which to build many copies of the deployed solution. Over time this will help them achieve greater flexibility in expansion (see the POD references at the end of the document) as well as in planning for new sites and facilities. By using a difference architecture and only documenting the variances from the reference architecture we can also measure the impact of some patches and fixes on the customer environment prior to deployment by simply determining if the fix applies to their specific variance or the reference components of their infrastructure.
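A difference architecture can be pictured as a diff between the deployed configuration and the reference baseline, and the patch-applicability check described above falls out of it directly. The component names and settings below are illustrative assumptions, not WSSRA content.

```python
# Sketch: a difference architecture as a diff against the reference baseline.
# Component names and values are illustrative assumptions.
reference = {
    "exchange-frontend-per-5000-users": 1,
    "smtp-relay-in-perimeter": True,
    "gc-per-site": 2,
}
deployed = {
    "exchange-frontend-per-5000-users": 2,  # variance: doubled for observed load
    "smtp-relay-in-perimeter": True,
    "gc-per-site": 2,
}

def difference_architecture(reference, deployed):
    """Document only the variances from the reference baseline."""
    return {
        key: {"reference": reference.get(key), "deployed": deployed.get(key)}
        for key in reference.keys() | deployed.keys()
        if reference.get(key) != deployed.get(key)
    }

def fix_applies_to_variance(fix_components, diff):
    """Does a patch touch a varied component (needs local impact analysis),
    or only reference components (already covered by validated testing)?"""
    return any(component in diff for component in fix_components)
```

The value is exactly what the text claims: the document stays small, and a fix can be triaged by asking whether it touches the variance or only reference components.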
Our next reason is a feedback loop. Once we build and deploy a series of reference architectures, we can capture the consistent difference components. For example, if the reference architecture called for one Exchange front-end server for every 5,000 end-users, yet customers, MCS, and partners consistently deploy two FE servers in that scenario, we can go back and change the configuration of the reference architecture to reflect these real-world design components. Another benefit of difference architectures is the feedback to the technology creator. In this example, we would be able to provide scalability data back to the MS Exchange product group to say that FE servers do not scale as well as projected, and that we should either change the product or evaluate our testing methodologies.
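That feedback loop amounts to aggregating variances across many deployed difference architectures and flagging the ones that recur. The sketch below uses invented data and an arbitrary two-thirds threshold to show the idea.

```python
from collections import Counter

# Sketch of the feedback loop: count recurring variances across deployments.
# Data and the two-thirds threshold are illustrative assumptions.
deployments = [
    {"exchange-frontend-per-5000-users": 2},
    {"exchange-frontend-per-5000-users": 2, "gc-per-site": 3},
    {"exchange-frontend-per-5000-users": 2},
]

variance_counts = Counter(
    (key, value) for diff in deployments for key, value in diff.items()
)

# A variance seen in most deployments is a candidate change to the reference
# architecture itself (e.g. two FE servers per 5,000 users, not one).
common = [v for v, n in variance_counts.items() if n >= len(deployments) * 2 / 3]
```

Variances that clear the threshold feed the reference architecture revision; one-off variances stay customer-specific guidance.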
Capturing these differences can also help us develop broader guidance around variances that do not work well, or even more prescriptive guidance around which variances should not, and which can, co-exist in the same physical infrastructure. This guidance allows us to shape the early process of infrastructure development.
In Figure 20 we see the initial process once a solution has been determined (or even as part of the solution selection process within the organization). The initial documentation within the difference architecture is then the documentation of the two components that are "changed." This change could represent the enablement of new capabilities or the "alteration or refinement" of existing capabilities such that the change needs to be measured. This new mapping now alters the solution timeline (reflecting the new data). The new capabilities are then published into the infrastructure capabilities map. In the case of a unified messaging solution (fax and voicemail), this initial map might look something like this.
Figure 21. Mapped to a unified inbox solution
The documentation process for this solution would take the reference architecture and map out the differences from the deployed solution. This presentation would then contain only those items that are unique to this solution, rather than documentation guidance, reference architecture components, and solution variances.
We used several methods and frameworks in building out the concept of infrastructure capabilities. The following section compares them briefly. It is by no means an exhaustive comparison and discussion of these other frameworks and methods; this overview merely highlights the points where the various frameworks support and augment the concept of infrastructure capabilities.
The Motion Framework is a business process around structured information gathering. Motion builds a capabilities map for the organization. This map can and often will include the components of the infrastructure capabilities. This alignment is very smooth, and the two work together very well in producing a view of the customer's environment. As with MSF, Motion and the data collection framework/enterprise architecture alignment processes are core components of the process of building infrastructure capabilities.
Microsoft Solutions Framework (MSF)
The infrastructure capabilities leverage many of the concepts of MSF (including the risk model detailed above). The integration points are many including the overall concepts of project lifecycle and implementation within an organization.
Enterprise Architecture (EA) Frameworks
EA frameworks vary, and there are many of them. Points of alignment with infrastructure capabilities include:
- Focused on primary stages (who, what, when, where, why, and how)
- Focused on delivering artifacts that represent the closing point of each phase.
- Infrastructure capabilities work well with the Zachman Framework—the concept of determining what is required and documenting the requirements integrates at every level.
- MSF EA was based on MSF and Zachman. Infrastructure capabilities mapping was also based heavily on MSF so the commonalities are apparent.
Over the course of the development of this whitepaper, the opportunity to build out a difference architecture for a customer and to apply the concept of infrastructure capabilities occurred many times. The following is a concept mapping of the WSSRA to a messaging POD. A POD simply represents a complete stand-alone unit of functionality (like a bean pod). In this case the difference architecture is a one-time description of the messaging components for a large business deploying multiple data center types using Microsoft Exchange Server. This can then be mapped back into the larger concept of messaging capabilities within the infrastructure.
WSSRA and the "POD" Concept
In building difference architectures you start from a known quantity, in this case WSSRA (thanks to Scott Beaudreau for the "POD" concept). Implementing a reference architecture in small components is the process we designate "Pods." A POD represents a unit that can be placed within an organization to handle certain components of functionality. For example, below is a set of Pods based entirely upon a customer's requested messaging solution. Pods can be tailored to any size of customer because their functionality is tied to the overall WSSRA solution. Scaling a POD up or down does not change the overall course of the solution; rather, the scaling process is a component of the reference architecture.
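The claim that a POD scales without changing its shape can be sketched as a sizing function: the roles in the POD are fixed, and only the server counts per role vary with the user population. The per-server capacities below are invented for illustration and are not WSSRA sizing figures.

```python
import math

# Hypothetical POD sizing: scale a messaging POD by user count without changing
# its shape. Capacities are illustrative assumptions, not WSSRA figures.
CAPACITY_PER_SERVER = {
    "exchange-mailbox": 2_000,    # mailboxes per mailbox server
    "exchange-frontend": 5_000,   # users per front-end protocol server
    "domain-controller": 10_000,  # users per DC within the POD
}

def size_pod(users, minimums=None):
    """Servers of each role for a POD serving `users`, honoring per-role minimums."""
    minimums = minimums or {}
    return {
        role: max(math.ceil(users / capacity), minimums.get(role, 1))
        for role, capacity in CAPACITY_PER_SERVER.items()
    }

class_a = size_pod(25_000, minimums={"domain-controller": 2})  # large hub site
class_c = size_pod(800)                                        # small satellite site
```

The same role set appears in both results; only the counts differ, which is what lets the POD plug into the larger infrastructure capabilities map at any site class.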
A secondary consideration of this process is the broader concept of components. Each POD includes components (for example, firewalls) that are not unique to it; the included instance represents the piece of that component (the firewall) that is directly related to the specific POD (such as messaging).
Microsoft Reference Architecture
The conceptual solution described in this proposal response follows the guidelines established by the Windows Server System Reference Architecture (WSSRA). The WSSRA has not released its Microsoft Exchange guidelines, so we have leveraged the MSA for Exchange. WSSRA is a set of guidelines for different technology implementations to assist customers and integrators with design, build, and operation using a standardized approach to IT architecture across the organization. In this way, the solution uses validated architectural guidance to ensure the integrity of the infrastructure.
For the proposed solution, the MSA model in Figure 22 represents a logical model for the primary data centers at CUSTOMER XYZ, for example in City XYZ, City WXY, or other major cities. The MSA model separates the major functional components of the messaging system and shows their relationship to each other. It is a useful tool for discussing the various functional layers of the messaging system, and for deciding where certain functions should be performed. This model can be envisioned as three major layers: internal, perimeter network, and external. The Pods then represent the documented difference architecture in that they are not specific to the MSA/WSSRA example but are extensions of that reference architecture enabling messaging capabilities within the organization.
The internal layer is broken into functional elements: messaging servers and connectivity servers. The future Exchange mailbox servers, Exchange Public Folder servers, and legacy messaging servers are the layer closest to the end-users. This layer contains the systems that users employ for sending and receiving messages. The connectivity servers, which are also part of the internal layer, sit above the Exchange mailbox servers, Exchange Public Folder servers, and legacy messaging servers, and logically represent all paths between internal messaging systems and the connection to the outside world. The perimeter network layer sits above the connectivity servers and below the external layer. Its purpose is to isolate the internal layers from the external layers, while processing and relaying messages passing between layers. The external layer is the final layer and represents the Internet and non-CUSTOMER XYZ messaging users.
Figure 22. A logical architecture model for the primary data centers
The build-out for this infrastructure would leverage the Service Level Objectives (SLOs) and Service Level Agreements (SLAs) established for Customer XYZ and would allow for a global infrastructure. The above solution would represent a series of Pods (security, directory, messaging, and so forth) that could be deployed as needed.
This POD concept allows an organization to map their requirements, both technical and business, to a broader concept and map of their overall infrastructure (see below). This managed process around infrastructure provides a conceptual and logical view of the new infrastructure to enable the deployment of patterns and components.
Figure 23. Infrastructure Bus
Hub Site Interconnection
Based on the architecture model, the five Class A sites (City XYZ, City WXY, and other major cities) would provide geographic hub data centers interconnected via high-speed networks. Each Class A site will function as a geographic hub providing the connectivity between the geographic regions, legacy e-mail servers, the perimeter network, and the external layer of the reference architecture. This approach concentrates the servers into the centers with the appropriate environment and network connections to provide the messaging and collaboration backbone services. The diagram illustrates this connectivity between the Class A hub sites and also illustrates the logical model for Class B, C, D, and E sites.
Figure 24. Site models
Each hub site will host a collection of messaging and collaboration servers. These servers would represent functional components of the messaging and collaboration infrastructure. Microsoft Active Directory directory service servers will provide the directory services for the messaging and collaboration servers. Three types of Exchange servers will provide the messaging services, and two types of Live Communications Server will provide instant messaging services. A representative list of services, which will vary by location, includes:
- Anti-virus and anti-spam
- Internet Security and Acceleration (ISA) servers for proxy services
- Active Directory services Domain Controllers and Global Catalog
- Exchange Bridgehead servers to route SMTP traffic between the geographic data centers
- Exchange Front-end protocol servers to service Outlook Web Access and Outlook Mobile Access clients
- Exchange mailbox servers to host the user mailboxes
- Exchange Public Folder servers
- Live Communications Server Enterprise Edition front-end communications servers
- Live Communications Server 2005 back-end servers
- SAN storage
- Blade server and management server
- Tape backup server
The conceptual models for the solution in the Class A, B, C, D, and E sites are included here to illustrate the differentiation of the infrastructure for different populations of users. Based on the information provided by Customer XYZ, City XYZ will host the largest centralized population of messaging and collaboration users in one location. The infrastructure in City XYZ includes all of the servers and services listed above. Figure 28 provides a high-level view of this type of messaging and collaboration implementation. The City WXY data center will host the second largest hub-based population of users and should be designed using the same component topology scaled for a smaller population. The other major cities' data centers will host the smallest hub-based populations of messaging and collaboration front-end and back-end services for the Asia Pacific region, and therefore the component-level solution could vary from the model used at the larger Americas and European hub sites. The multiple-layer conceptual model for the Class A sites in City XYZ, City WXY, and other major cities is illustrated in Figure 23.
The proposed conceptual solution for the Shared Services data center locations will follow a different model which will be scaled according to the number of users and services that will be hosted at the site. The Class B Sites will primarily host the backend messaging services with local directory services support. Based on the service level parameter requirements for each site, these sites will also be configured for high availability and use clusters or redundant components.
The other advantage of this type of solution is that Customer XYZ can scale it to fit whatever cost and business constraints they are operating under. These Pods can be scaled to fit whatever number of users and configurations the customer requires. By having a larger overall infrastructure plan such as this one, Customer XYZ can plan for growth while maintaining its current infrastructure needs.
Figure 25. Proposed typical class B shared service center conceptual site solution
As defined for Customer XYZ, the Class C, D, and E sites will host the backend messaging services to provide availability and consistent service levels where network connections are neither robust nor reliable. The proposed conceptual solution for the satellite offices will include a range of solutions based on the size of the population and the number of services that are hosted locally. Where possible, the services would be stacked on a single server or a minimum number of servers to reduce the costs of operations.
Figure 26. Typical class C, D, E conceptual site solution
The solution as described in the proposal has been generated using high-level information and experience gained in other customer environments. There are a number of factors that should be considered during the recommended architecture, planning, and design activities before proceeding with any infrastructure implementation.
With the established architecture framework (WSSRA) and the projected hardware configurations for data centers the next tier is to document the potential differences for this solution from the reference architecture. This represents the difference architecture mapping the messaging capabilities for this customer. Simply plugging the "messaging Pods" into the larger infrastructure capabilities map allows us to integrate and expand based on the requirements of the specific implementation. In building this out for other infrastructure services we can easily develop a documented logical infrastructure capabilities map.