
3 – Moving to Microsoft Azure Cloud Services

This chapter walks you through the second step Adatum took in migrating their aExpense application to Microsoft Azure. You'll see an example of how to take an existing business application, developed using ASP.NET, and adapt it to run in the cloud using Azure Cloud Services.

The Premise

At the end of the first migration step, Adatum had a version of the aExpense application that ran in the cloud using the IaaS approach. When the team at Adatum developed this version, they kept as much of the original application as possible, changing just what was necessary to make it work in Azure.

During this step in the migration, Adatum wants to evaluate changing to use a PaaS approach instead of using the IaaS approach with virtual machines. Adatum also wants to address some issues not tackled in the previous deployment. For example, Adatum’s developers are investigating adapting the aExpense application to use claims-based authentication instead of Active Directory, which will remove the requirement to connect back to the corporate domain through Azure Connect or Virtual Networks.

Adatum also needs to consider other issues such as how to handle session state when there may be multiple instances of the application running, and how to deploy and use configuration settings required by the application.

Goals and Requirements

In this phase, Adatum has a number of goals for the migration of the aExpense application to use a PaaS approach. Further optimization of the application for the cloud, and exploiting additional features of Azure, will come later.

Adatum identified some specific goals to focus on in this phase. The aExpense application in the cloud must be able to access all the same data as the IaaS hosted version of the application in the previous migration step. This includes the business expense data that the application processes and the user profile data, such as a user's cost center and manager, which it needs in order to enforce the business rules in the application. However, Adatum would also like to remove any requirement for the aExpense application to call back into its on-premises network from the cloud, such as to access the existing Active Directory service.

Bharath Says:
We want to avoid having to make any calls back into Adatum from the cloud application. This adds significantly to the complexity of the solution.

A second goal is to make sure that operators and administrators have access to the same diagnostic information from the cloud-based version of aExpense as they have from the on-premises and IaaS hosted versions of the application.

The third goal is to continue to automate the deployment process to Azure. As the project moves forward, Adatum wants to be able to deploy versions of aExpense to Azure without needing to manually edit the configuration files, or use the Azure Management Portal. This will make deploying to Azure less error-prone, and easier to perform in an automated build environment.

A significant concern that Adatum has about a cloud-based solution is security, so a fourth goal is to continue to control access to the aExpense application based on identities administered from within Adatum, and to enable users to access the application by using their existing credentials. Adatum does not want the overhead of managing additional security systems for its cloud-based applications.

Adatum also wants to ensure that the aExpense application is scalable so that when demand for the application is high at the end of each month, it can easily scale out the application.

Overall, the goals of this phase are to migrate aExpense to use a PaaS approach while preserving the user experience and the manageability of the application, and to make as few changes as possible to the existing application.

Markus Says:
It would be great if we could continue to use tried and tested code in the cloud version of the application.

Overview of the Solution

This section of the chapter explores the high-level options Adatum had for migrating the aExpense application during this step. It shows how Adatum evaluated moving from an IaaS to a PaaS hosting approach, and how it chose an appropriate mechanism for hosting the application in Azure.

Evaluating the PaaS Approach for Hosting the Application

In the first migration step Adatum’s goal was to avoid having to make any changes to the code so that it could quickly get the aExpense application running in the cloud, evaluate performance and usability, and gauge user acceptance. However, Adatum is also considering whether the IaaS approach they had followed was the best choice for their specific scenario, or whether a PaaS approach would be more cost effective in the long run while still providing all of the capabilities for performance, availability, manageability, and future development.

Jana Says:
For a comparison of the options and features in Azure for IaaS and PaaS, see the Azure Features Overview.

Adatum carried out some preliminary costing analysis to discover the likely differences in runtime costs for the aExpense application deployed using the IaaS model described in Chapter 2, "Getting to the Cloud," and using the PaaS model described in this chapter.

  • In both the IaaS and PaaS deployments, the costs of data storage and data transfer will be the same: both use SQL Server, and both will transfer the same volumes of data into and out of Azure.
  • At the time of writing, the cost of running a medium sized virtual machine in Azure is almost the same as the cost of running a Cloud Services role instance.
  • The IaaS deployment uses a virtual network to provide connectivity with the on-premises Active Directory. At the time of writing, this costs approximately $37.00 per month.

Bharath Says:
The hourly compute costs for Virtual Machines, Cloud Services roles, and Azure Web Sites Reserved instances (when all are generally available at the end of the discounted trial period) are almost the same, and so the decision on which to choose should be based on application requirements rather than focusing on just the compute cost.


Note:
Adatum used the Azure pricing calculator, together with some calculations in an Excel spreadsheet. For more information, see Chapter 6, “Evaluating Cloud Hosting Costs.” The Hands-on Labs associated with this guide include a cost calculation spreadsheet and describe how you can calculate the approximate costs of running your applications in Azure.

Although the PaaS and IaaS deployment models for the aExpense application are likely to incur similar running costs, Adatum also considered the saving it can make in administrative and maintenance cost by adopting the PaaS approach. Using a PaaS hosting model is attractive because it delegates the responsibility for managing both the hardware and the operating system to the partner (Microsoft in this case), reducing the pressure on Adatum’s in-house administration staff and thereby lowering the related costs and enabling them to focus on business critical issues.

Bharath Says:
The aExpense application has no special operating system configuration or platform service requirements. If it did, Adatum might be forced to stay with an IaaS approach where these special requirements could be applied to the operating system or the additional services could be installed.

Options for Hosting the Application

Having decided on a PaaS approach, Adatum must consider the hosting options available. Azure provides the following features for PaaS deployment:

  • Web Sites. This feature provides the simplest and quickest model for deploying websites and web applications to Azure. The cost-effective Shared mode (and the Free mode available at the time of writing) deploys multiple sites and applications for different Azure customers to each instance of IIS, meaning that there can be some throttling of bandwidth and availability. Alternatively, at increased cost, sites and applications can be configured in Reserved mode to avoid sharing the IIS instance with other Azure customers. A wide range of development languages and deployment methods can be used, and sites and applications can be progressively updated rather than requiring a full redeployment every time. Azure Web Sites can also be automatically provisioned with a wide range of ecommerce, CMS, blog, and forum applications preinstalled.
  • Cloud Services. This feature is designed for applications consisting of one or more hosted roles running within the Azure data centers. Typically there will be at least one web role that is exposed for access by users of the application. The application may contain additional roles, including worker roles that are typically used to perform background processing and support tasks for web roles. Cloud Services provides more control and improved access to service instances than the Azure Web Sites feature, with a cost for each role approximately the same as when using Web Sites Reserved mode. Applications can be staged for final testing before release.
  • A set of associated services that provide additional functionality for PaaS applications. These services include access control, Service Bus relay and messaging, database synchronization, caching, and more.
Note:
The MSDN article “Azure Websites, Cloud Services, and VMs: When to use which?” contains information about choosing a hosting option for your applications.

Choosing Between Web Sites and Cloud Services

Adatum considered the two Azure PaaS approaches of using the Web Sites feature and the Cloud Services feature.

The Shared mode for Web Sites offers a hosting model that provides a low cost solution for deploying web applications. However, in enterprise or commercial scenarios the bandwidth limitations due to the shared nature of the deployment may mean that this approach is more suited to proof of concept, development, trials, and testing rather than for business-critical applications. Web Sites can be configured in Reserved mode to remove the bandwidth limitation, although the running cost is then very similar to that of Cloud Services roles. However, different sizes of Reserved mode instance are available and several websites can be deployed to each instance to minimize running costs.

The developers at Adatum realized that Reserved mode Web Sites would provide a useful platform for Adatum’s websites that are less dependent on performing application-related functions. For example, Adatum plans to deploy its corporate identity websites and portals to the cloud in the future, as well as implementing additional gateways to services for mobile devices and a wider spectrum of users. Azure Web Sites will be a good choice for these.

Azure Web Sites can access a database for storing and retrieving data; however, unlike Cloud Services, they do not support the use of dedicated separate background processing role instances. It is possible to simulate this by using separate website instances or asynchronous tasks within a website instance but, as Adatum will require aExpense to carry out quite a lot of background processing tasks, the Cloud Services model that offers individually scalable background roles is a better match to its requirements.

You can also adopt a mixed approach for background processing by deploying the website in Azure Web Sites and one or more separate Cloud Services worker roles. The website and worker roles can communicate using Azure storage queues, or another mechanism such as Service Bus messaging.

Cloud Services makes it easy to deploy applications that run on the .NET Framework, and it is possible (though not straightforward) to use other languages. In contrast, Azure Web Sites directly supports a wide range of development languages such as node.js, PHP, and Python, as well as applications built using the .NET Framework. However, the aExpense application already runs on the .NET Framework and so it will be easy to adapt it to run in an Azure Cloud Services web role. It will also be possible to add background tasks by deploying one or more Cloud Services worker role instances as and when required.

Azure Web Sites allows developers to use any tools to develop applications, and also supports a wide range of simple deployment and continuous automated update options that include using FTP, CodePlex, Git, and Microsoft Team Foundation Server (TFS), as well as deploying directly from Microsoft WebMatrix and Visual Studio. This would be a useful capability, especially for fast initial deployment without requiring any refactoring of the code and for deploying minor code and UI updates.

This wide choice of deployment and update capabilities is not available for Cloud Services where, other than changes to the configuration files, only a full deployment is possible. In addition, deployment to Cloud Services roles means that developers at Adatum will typically need to use Visual Studio, though this is already their default working environment.

The requirement of Cloud Services to deploy complete packages rather than individual files could be a disadvantage. However, Adatum wants to be able to deploy to a dedicated staging instance and complete a full application testing cycle before going live, and control the releases through versioning of each complete deployment rather than modifying it by using incremental updates. This means that the Cloud Services model is better suited to Adatum’s needs. Developers and administrators can use scripts or custom applications that interact with the Azure Management API to deploy and manage the application and individual role instances when using Cloud Services.

Adatum would also like to introduce automatic scaling to the aExpense application running in the cloud, scaling out at the end of the month to handle increased demand and then scaling in again. The developers want to use the Autoscaling Application Block from the patterns & practices group at Microsoft, which can automatically scale role instances based on specified conditions or a predefined schedule; however, it can be used only with Cloud Services roles.

Finally, Cloud Services allows Adatum to configure the firewall and endpoints for each role deployed to Azure, and to configure virtual network connections between roles and on-premises networks. The ability to configure the firewall makes it easier to control public access to the website and other roles by defining endpoints and opening ports. The ability to use virtual networks makes it easy to interact with the roles from on-premises management tools, and use other integration services.

After initial investigation, and after considering the advantages and limitations of each approach, Adatum chose to use Cloud Services roles rather than Web Sites.

Markus Says:
Before making this decision, we did create a spike to test the use of Azure Web Sites, and to evaluate the performance and discover the capabilities. Note that, at the time of writing, Web Sites was still a preview release. The features of the final version may differ from those described in this guide.
The Hands-on Labs available for this guide include an exercise that shows in detail how we deployed the application to Azure Web Sites during this spike so that you can explore the use of Web Sites yourself.

Service Instances, Fault Domains, and Update Domains

Adatum plans to deploy multiple instances of the aExpense application as a way to scale the application out to meet increased demand. It's easy for Adatum to add or remove instances as and when it needs them, either through the Azure Management Portal or by using PowerShell scripts, so it only pays for the number it actually needs at any particular time. Adatum can also use multiple role instances to enable fault tolerant behavior in the aExpense application.

Note:
Use multiple instances to scale out your application, and to add fault tolerance. You can also automate the deployment and removal of additional instances based on demand using a framework such as the Enterprise Library Autoscaling Application Block.

In Azure, a fault domain is a physical unit of failure. If you have two or more instances, Azure will allocate them to multiple fault domains, so that if one fault domain fails there will still be running instances of your application. Azure automatically determines how many fault domains your application uses.

To handle updates to your application if you have two or more instances of a role, Azure organizes them into virtual groupings known as update domains. When you perform an in-place update of your application, Azure updates a single domain at a time; this ensures that the application remains available throughout the process. Azure stops, updates, and restarts all the instances in the update domain before moving on to the next one.

Note:
You can also specify how many update domains your application should have by setting the upgradeDomainCount attribute in the service definition file.

Azure also ensures update domains and fault domains are orthogonal, so that the instances in an update domain are spread across different fault domains. For more information about updating Azure applications and using update domains, see “Overview of Updating an Azure Service.”
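As an illustration, the number of update domains is controlled by the upgradeDomainCount attribute on the root element of ServiceDefinition.csdef. The sketch below shows the general shape; the service and role names are illustrative, not Adatum's actual names:

```xml
<!-- ServiceDefinition.csdef: upgradeDomainCount requests three update
     domains; Azure walks them one at a time during an in-place update. -->
<ServiceDefinition name="aExpense" upgradeDomainCount="3"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="aExpense.Web" vmsize="Small">
    <!-- role contents elided -->
  </WebRole>
</ServiceDefinition>
```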

Options for Authentication and Access Control

Adatum wants to remove the requirement for the application to connect back to Adatum’s on-premises Active Directory for authentication. In the IaaS version of the aExpense application described in the previous chapter, Adatum experimented with Azure Connect and Azure Virtual Network to connect the cloud-hosted virtual machines to its on-premises servers.

While this approach can still be implemented when hosting the application using Cloud Services, Adatum decided to investigate other options. These options include:

  • Run an Active Directory server in the cloud. Adatum could provision and deploy a virtual machine in the cloud that runs Active Directory, and connect this to their on-premises Active Directory. However, this adds hosting cost and complexity, and will require additional management and maintenance. It also means that Adatum must still establish connectivity between the cloud-hosted and on-premises Active Directory servers.
  • Use standard ASP.NET authentication. Adatum could convert the application to use ASP.NET authentication and deploy the ASPNETDB database containing user information to the hosted SQL Server instance that holds the application data. However, this means that users would need to have accounts in the ASP.NET authentication mechanism, and so it would not provide the same seamless sign-in as the existing application does through Windows Authentication against Active Directory.
  • Use a claims-based authentication mechanism. Claims-based authentication is a contemporary solution for applications that must support federated identity and single sign-on. Users are authenticated by an identity provider, which issues encrypted tokens containing claims about the user (such as the user identifier, email address, and perhaps additional information). The application uses these claims to identify each user and allow access to the application. The advantage of this option is that users will continue to enjoy a single sign-on experience using their Active Directory credentials.
Note:
In a future release, Azure Access Control will be renamed to Azure Active Directory (WAAD), and will expose functionality to support Windows Authentication in cloud-hosted applications. This would simplify implementation of authentication for Adatum.

The first alternative Adatum considered was to host Windows Active Directory in the cloud and continue to use the same authentication approach as in the on-premises version of the application. Active Directory can be hosted on a virtual machine and connected to an on-premises Active Directory domain through Azure Connect or by using Azure Virtual Networks.

Adatum will obviously still need to maintain an on-premises Active Directory domain for their internal applications, but the cloud-hosted domain controller will be able to replicate with the on-premises domain controllers. However, this means that Adatum would need to manage and pay for a virtual machine instance running all of the time just for authenticating users of the aExpense application. It would probably only make sense if Adatum planned to deploy a large number of applications to the cloud that use the cloud-hosted Active Directory server.

The second alternative Adatum considered was to use ASP.NET authentication. The developers would need to modify the code and add user information to the authentication database in the cloud-hosted SQL Server. If the application already used this mechanism, then this approach would remove any requirement to adapt the application code other than changing the authentication database connection string. However, as Adatum uses Windows Authentication in the application, this option was not considered to be an ideal solution because users would need a separate set of credentials to access the aExpense application.

The third alternative Adatum considered was to use claims-based authentication, and it has several advantages over the other two approaches. Adatum will configure an on-premises Active Directory Federation Services (ADFS) claims issuer in their data center. When a user tries to access the aExpense application in the cloud, that user will be redirected to this claims issuer. If the user has not already logged on to the Adatum domain, the user will provide his or her Windows credentials and the claims issuer will generate a token that contains a set of claims obtained from Active Directory. These claims will include the user's role membership, cost center, and manager.

This will remove the direct dependency that the current version of the application has on Active Directory because the application will obtain the required user data from the claims issuer (the claims issuer still has to get the data from Active Directory on behalf of the aExpense application). The external claims issuer can integrate with Active Directory, so that application users will continue to have the same single sign-on experience.

Jana Says:
Using claims can simplify the application by delegating responsibilities to the claims issuer.

Changing to use claims-based authentication will mean that the developers must modify the application. However, as they will need to refactor it as a Cloud Services solution, the additional work required was considered to be acceptable in view of the ability to remove the reliance on a direct connection back to their on-premises Active Directory.
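To give a feel for what this change involves, a Windows Identity Foundation (WIF) configuration in Web.config looks broadly like the following sketch. The issuer and realm URLs are placeholder assumptions, not Adatum's real addresses:

```xml
<microsoft.identityModel>
  <service>
    <audienceUris>
      <!-- The URI the issued token must be scoped to (placeholder). -->
      <add value="https://aexpense.cloudapp.net/" />
    </audienceUris>
    <federatedAuthentication>
      <!-- Unauthenticated users are redirected to the ADFS claims issuer,
           which authenticates them against Active Directory. -->
      <wsFederation passiveRedirectEnabled="true"
                    issuer="https://adfs.example.com/FederationPassive/"
                    realm="https://aexpense.cloudapp.net/"
                    requireHttps="true" />
      <cookieHandler requireSsl="true" />
    </federatedAuthentication>
  </service>
</microsoft.identityModel>
```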

Note:
The example application provided with this guide uses a mock issuer so that you can run the example on a single workstation without needing to set up Active Directory and ADFS. The mock issuer is also used by Adatum when testing the application. For more details see the section “Using a Mock Issuer” later in this chapter.

Profile Management

The on-premises aExpense application stores users' preferred reimbursement methods by using the ASP.NET profiles feature. When migrating the application to Azure using virtual machines, Adatum chose to keep this mechanism by creating a suitable ASPNETDB database in the SQL Server hosted in the cloud. This minimized the changes required to the application at that stage.

At a later stage of the migration, the team will use a profile provider implementation that uses Azure table storage. For more details, see Chapter 7, “Moving to Azure Table Storage,” of this guide.

Markus Says:
If we were building a new application that required profile storage, we would consider using the Universal Providers, which are the recommended providers in Visual Studio for many types of data access. The Universal Providers can also be used to store session state.

Session Data Management

The AddExpense.aspx page uses session state to maintain a list of expense items before the user saves the completed business expense submission.

The on-premises aExpense application stores session data in-memory using the standard ASP.NET session mechanism. This works well when there is only a single instance of the application because every request from a user will be handled by this single server. However, if Adatum decides to take advantage of the elasticity feature of Azure to deploy additional instances of the application (a major goal for the migration to Azure), the developers must consider how this will affect session data storage. If Adatum uses more than a single instance of the web application, the session state storage mechanism must be web farm friendly, so that the session state data is accessible from every instance.

There are several options for managing shared session state in Azure.

Storing Session State Data in a Database

The Microsoft ASP.NET Universal Providers enable you to store your session state in either SQL Server or SQL Database. At the time of writing, these providers are available through NuGet. After you have installed the package, you can use the providers by modifying your configuration.

To use this option you must have a SQL Database subscription or have SQL Server installed on an Azure virtual machine. Therefore, it is most cost effective if you are already using either SQL Server or SQL Database.
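As a sketch of the configuration change involved (the provider and connection string names here are assumptions), the Universal Providers session state provider is registered in Web.config like this:

```xml
<sessionState mode="Custom" customProvider="DefaultSessionProvider">
  <providers>
    <!-- Persists session state to the database named by the connection
         string; works with SQL Server or SQL Database. -->
    <add name="DefaultSessionProvider"
         type="System.Web.Providers.DefaultSessionStateProvider, System.Web.Providers"
         connectionStringName="DefaultConnection" />
  </providers>
</sessionState>
```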

Adatum does have a SQL Server database available that could be used to store session state, but considered that in future it may wish to explore how it can move away from using a relational database altogether. For this reason, Adatum chose not to adopt this option for storing session state because it would add an additional dependency on SQL Server.

Storing Session State Data in Azure Storage

The Azure ASP.NET Providers enable you to store membership, profile, role, and session state information in Azure table and blob storage. These providers are published as sample code on the Azure developer samples site.

Adatum will have a suitable storage account available for these providers to use. However, Adatum is concerned that the providers are samples, and are still under development. In addition, Adatum is concerned that stored session state may not be removed automatically from blobs when sessions are abandoned. Therefore Adatum chose not to use these providers for storing session state.

Storing Session State Data in an Azure Cache

The third option is to use the ASP.NET 4 Caching Providers for Azure. This option enables you to use Azure Caching to store your application session state. In most scenarios, this option will provide the best performance, and at the time of writing this is the only officially supported option.

The ASP.NET 4 Caching Providers for Azure work with two different types of cache in Azure:

  • Azure Caching. This is a high-performance, distributed, in-memory cache that uses memory from your Cloud Services roles. You configure this cache as part of the deployment of your application so that this cache is private to the deployment. You can specify whether to use dedicated caching roles, where role instances simply provide memory and resources for the cache, or you can allocate for caching a proportion of the memory of each role instance that also runs application code. There is no separate charge for using this type of cache; you pay the standard rates for running the Cloud Services role instances that host the cache.
  • Azure Shared Caching. This is a high-performance, distributed, in-memory caching service that you provision separately from your other Azure services, and that all of the services in your subscription can use. You pay for this shared caching on a monthly basis, based on the size of the cache (for example, at the time of writing, a 128 MB cache costs $45.00 per month). For current pricing information, see the Azure Pricing Details.
Note:
For more information about caching in Azure and the differences between Azure Caching and Azure Shared Caching, see Caching in Azure on MSDN.

Adatum decided that the ASP.NET 4 Caching Providers for Azure provide the ideal solution for the aExpense application because they are easy to integrate into the application, they are supported in the compute emulator so that development and testing are simpler (developers can run and debug the application entirely within the emulator instead of setting up the cache in the cloud), and they will not hold extraneous session state data after the session expires.

Adatum will modify the aExpense configuration to use these providers for storing session state, and use a co-located cache in the web role instances of the application in order to minimize the running costs for the application. However, Adatum must ensure that it configures a sufficiently large cache so that session data is not evicted as the cache fills up.
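As a sketch of this change (the provider, cache, and client names below are assumptions), the session state section of Web.config is switched to the caching provider:

```xml
<sessionState mode="Custom" customProvider="CacheSessionProvider">
  <providers>
    <!-- Stores session state in the co-located Azure Caching cache
         hosted in the web role instances. -->
    <add name="CacheSessionProvider"
         type="Microsoft.Web.DistributedCache.DistributedCacheSessionStateStoreProvider, Microsoft.Web.DistributedCache"
         cacheName="default"
         dataCacheClientName="default" />
  </providers>
</sessionState>
```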

As an alternative to using session state to store the list of expense items before the user submits them, Adatum considered using ASP.NET view state so that the application maintains its state data on the client. This solution would work when the application has multiple web role instances because the application does not store any state data on the server. Because later versions of the aExpense application store scanned images in the state before the application saves the whole expense submission, this means that the state can be quite large. Using view state would be a poor solution in this case because it would need to move the data in the view state over the network, using up bandwidth and adversely affecting the application's performance.

Markus Says:
Using ASP.NET view state is an excellent solution as long as the amount of data involved is small. This is not the case with the aExpense application where the state data will include images.

Data Storage

Adatum also reviewed the choice made during the initial migration to use a hosted SQL Server running in an Azure virtual machine as the data store for the aExpense application. There is no technical requirement to change this at the moment because code running in Cloud Services roles can connect to the database in exactly the same way as in the virtual machine hosted version of the application.

This does not mean that Adatum will not reconsider the choice of database deployment in the future, but it is not mandatory for this step and so the migration risks are reduced by delaying this decision to a later phase.

Note:
In Chapter 4, “Moving to Microsoft Azure SQL Database,” of this guide you will discover how Adatum revisited the decision to use a hosted SQL Server, and chose to move to the PaaS equivalent, Azure SQL Database.

However, the IaaS solution used Windows Authentication to connect to SQL Server and Adatum plans to remove the requirement for a virtual network, so the application will now need to use SQL authentication. This will require configuration changes in both the application and SQL Server.

Now that the connection string in the configuration file includes credentials in plain text, you should consider encrypting this section of the file. This will add to the complexity of your application, but it will enhance the security of your data. If your application is likely to run on multiple role instances, you must use an encryption mechanism that uses keys shared by all the role instances.

Note:
To encrypt your SQL connection string in the Web.config file, you can use the Pkcs12 Protected Configuration Provider.
For additional background information about using this provider, see the sections “Best Practices on Writing Secure Connection Strings for SQL Database” and “Create and deploy an ASP.NET application with Security in mind” in the post “Azure SQL Database Connection Security.”

Application Configuration

The on-premises version of the aExpense application uses the Web.config file to store configuration settings, such as connection strings and authentication information for the application. When an application is deployed to Azure Cloud Services it’s not easy to edit the Web.config file; you must redeploy the application when values need to be changed. However, it is possible to edit the service configuration file through the portal, or by using a PowerShell script, to make configuration changes on the fly. Therefore, Adatum would like to move some configuration settings from the Web.config file to the service configuration file (ServiceConfiguration.cscfg).

However, some components of an application may not be “cloud aware” in terms of reading configuration settings from the Azure service configuration files. For example, the ASP.NET Profile Provider that Adatum uses to store each user’s preferred reimbursement method will only read the connection string for its database from the Web.config file. The Windows Identity Foundation (WIF) authentication mechanism also depends on settings that are located in the Web.config file.

To resolve this, the developers at Adatum considered implementing code that runs when the application starts to copy values from the ServiceConfiguration.cscfg file to the active configuration loaded from the Web.config file. To assess whether this was a viable option, the developers needed to explore how startup tasks can be executed in an Azure Cloud Services role.

Application Startup Processes

The developers at Adatum considered that they might want to execute some code that runs only as the application starts. This prompted them to investigate how to execute tasks when an Azure application starts up. The processes that occur when an Azure web or worker role is started are:

  1. The Azure fabric controller creates and provisions the virtual machine that will run the application and loads the application code.
  2. The fabric controller looks for startup tasks defined in the ServiceDefinition.csdef file, and starts any it finds in the order they are defined in the file.
  3. The fabric controller fires the OnStart event in the RoleEntryPoint class when the role starts executing.
  4. The fabric controller fires the Run event in the RoleEntryPoint class when the role is ready to accept requests.
  5. Global application events such as Application_Start in Global.asax are fired in ASP.NET web roles.

Startup tasks are command line executable programs that can be executed either asynchronously or synchronously. Asynchronous execution can be Foreground (the role cannot shut down until the task completes) or Background (“fire and forget”). Startup tasks can also be executed with a Limited permission level (the same as the role instance) or Elevated (with administrative permissions). Startup tasks are typically used for executing code that accesses features outside of the role, installing and configuring software and services, and for tasks that require administrative permission.
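
For example, a startup task is declared inside the role element in the service definition file. The following fragment is a sketch; InstallComponents.cmd is a hypothetical script included in the role's deployment package:

```xml
<WebRole name="aExpense" vmsize="Medium">
  <Startup>
    <!-- Runs asynchronously with administrative permissions;
         the role does not wait for it to complete. -->
    <Task commandLine="InstallComponents.cmd"
          executionContext="elevated"
          taskType="background" />
  </Startup>
  <!-- ... remainder of the role definition ... -->
</WebRole>
```

A taskType of "simple" would instead run the task synchronously, blocking role startup until the task completes.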

Markus Says:
Use startup tasks to run executable programs when your role starts, and use the OnStart and Run events to execute .NET code within a role after it loads and when it is ready to accept requests. In a web role, you can also use the Application_Start method in the Global.asax file.

The two events that can be handled in the RoleEntryPoint class of a web or worker role occur when the role has completed loading and is about to start (OnStart), and when startup is complete and the role is ready to accept requests (Run). The OnStart and Run events are typically used to execute .NET code that prepares the application for use; for example, by loading data or preparing storage.

If the code needs additional permissions, the role can be started in Elevated mode. The OnStart method will then run with administrative permissions, but the remainder of the role execution will revert to the more limited permission level.
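
In the service definition file, this is expressed with the Runtime element. The following fragment is a sketch:

```xml
<WebRole name="aExpense" vmsize="Medium">
  <!-- OnStart runs with administrative permissions; the web
       application itself still runs with limited permissions. -->
  <Runtime executionContext="elevated" />
  <!-- ... remainder of the role definition ... -->
</WebRole>
```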

Note:
You must be careful not to access the request context or call methods of the RoleManager class from the Application_Start method. For further details, see the RoleManager class documentation on MSDN.

Keep in mind that a role may be restarted after a failure or during an update process, and so the OnStart and Run events may occur more than once for a role instance. Also remember that asynchronous startup tasks may still be executing when the OnStart and Run events occur, which makes it important to ensure that they cannot cause a race or deadlock condition. For more information about using startup tasks, see “Real World: Startup Lifecycle of an Azure Role” and “Running Startup Tasks in Azure.”

Markus Says:
Azure roles also support the OnStop event that fires when a role is about to stop running. You can handle this event to perform cleanup tasks for the role such as flushing queues, releasing resources, forcing completion of running tasks, or executing other processes. Your handler code, and all the processes it executes, must complete within five minutes.

Copying Configuration Values in a Startup Task

There are several workarounds you might consider implementing if you decide to copy values from the service configuration file to the active Web.config configuration. Adatum’s developers carried out tests to evaluate their capabilities.

One approach is to run code in the OnStart event of a web role when the role is started with elevated permissions. The post “Edit and Apply New WIF’s Config Settings in Your Azure Web Role Without Redeploying” describes this approach.

Another is to execute code in a startup task that uses the AppCmd.exe utility to modify the configuration before the role starts. The page “How to: Use AppCmd.exe to Configure IIS at Startup” describes how this approach can be used to set values in configuration.

There is also an issue in the current release of Azure regarding the availability of the Identity Model assembly. See “Unable to Find Assembly 'Microsoft.IdentityModel' When RoleEnvironmentAPIs are Called” for more information.

After considering the options and the drawbacks, the developers at Adatum decided to postpone implementing any of the workarounds in the current version of the aExpense application. It is likely that future releases of Azure will resolve the issues and provide a recommended approach for handling configuration values that would normally reside in the Web.config file.

Solution Summary

Figure 1 shows the whiteboard drawing that the team used to explain the architecture of aExpense after this step of the migration to Azure.

Figure 1 - aExpense as an application hosted in Azure


Inside the Implementation

Now is a good time to walk through the process of migrating aExpense into a cloud-based application in more detail. As you go through this section, you may want to download the Visual Studio solution from http://wag.codeplex.com/. This solution contains an implementation of the aExpense application (in the Azure-CloudService-SQLServer folder) after the migration step described in this chapter. If you are not interested in the mechanics, you should skip to the next section.

The Hands-on Labs that accompany this guide provide a step-by-step walkthrough of parts of the implementation tasks Adatum carried out on the aExpense application at this stage of the migration process.

Creating a Web Role

The developers at Adatum created the Visual Studio solution for the cloud-based version of aExpense by using the Azure Cloud Service template. This template generates the required service configuration and service definition files, and the files for the web and worker roles that the application will need. For more information on this process, see "Creating an Azure Project with Visual Studio."

Markus Says:
Use the Visual Studio Azure Cloud Service template from the Cloud section in the New Project dialog to get started with your cloud project.

This first cloud-based version of aExpense has a single web role that contains all the code from the original on-premises version of the application.

The service definition file defines the endpoint for the web role. The aExpense application only has a single HTTPS endpoint, which requires a certificate. In this case, the certificate is named “localhost.” When you deploy the application to Azure, you'll also have to upload the certificate. For more information, see “Configuring SSL for an Application in Azure.”

<ServiceDefinition name="aExpense.Azure" xmlns="">
  <WebRole name="aExpense" vmsize="Medium">
    <Sites>
      <Site name="Web">
        <Bindings>
          <Binding name="Endpoint1" endpointName="Endpoint1" />
        </Bindings>
      </Site>
    </Sites>
    <Endpoints>
      <InputEndpoint name="Endpoint1" protocol="https" port="443" 
        certificate="localhost" />
    </Endpoints>
    <Certificates>
      <Certificate name="localhost" storeLocation="LocalMachine" 
        storeName="My" />
    </Certificates>
    <ConfigurationSettings>
      <Setting name="DataConnectionString" />
    </ConfigurationSettings>
    <Imports>
      <Import moduleName="Diagnostics" />
    </Imports>
    <LocalResources>
      <LocalStorage name="DiagnosticStore" 
        cleanOnRoleRecycle="false" sizeInMB="20000" />
    </LocalResources>
  </WebRole>
</ServiceDefinition>
Note:
The “localhost” certificate is only used for testing your application.

The service configuration file configures the aExpense web role. It contains the connection strings that the role will use to access storage, and details of the certificates used by the application. The application uses the DataConnectionString to connect to the Azure storage holding the profile data, and uses the diagnostics connection string (Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString) to connect to the Azure storage for saving logging and performance data. The connection strings will need to change when you deploy the application to the cloud so that the application can use Azure storage.

<ServiceConfiguration serviceName="aExpense.Azure" xmlns="">
  <Role name="aExpense">
    <Instances count="1" />
    <ConfigurationSettings>
      <Setting name=
    "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" 
               value="DefaultEndpointsProtocol=https;
               AccountName={Azure storage account name};
               AccountKey={Azure storage shared key}" />
      <Setting name="DataConnectionString" 
               value="DefaultEndpointsProtocol=https;
               AccountName={Azure storage account name};
               AccountKey={Azure storage shared key}" />
    </ConfigurationSettings>
    <Certificates>
      <Certificate name="localhost" thumbprint="" 
                   thumbprintAlgorithm="sha1" />
    </Certificates>
  </Role>
</ServiceConfiguration>
Note:
The values of "Azure storage account name" and "Azure storage shared key" are specific to your Azure storage account.

Reading Configuration Information

In the original on-premises application, settings such as connection strings are stored in the Web.config file. Configuration settings for Azure Cloud Services (web roles and worker roles) are stored in the ServiceConfiguration.cscfg and ServiceDefinition.csdef files. This allows, amongst other benefits, easy modification of the configuration settings by using the Azure Portal or PowerShell cmdlets, without needing to redeploy the entire application.

To facilitate testing the aExpense application when it runs in the local emulator, the developers at Adatum created the aExpense.Azure project with two service configuration files; one contains a connection string for a local SQL Server instance, and one contains the connection string for the test SQL Server instance hosted in the cloud. This makes it easy to switch between these configurations in Visual Studio without the need to edit the configuration files whenever the deployment target changes.

Note:
To find out more about using multiple service configuration files in an Azure project, see “How to: Manage Multiple Service Configurations for an Azure Application.”

The developers at Adatum wanted to include all of the application configuration settings, such as database connection strings and other values, in the service configuration files and not in the Web.config file. However, due to the issues that arise when attempting to achieve this (see the section “Application Configuration” earlier in this chapter) they decided not to implement it in the current version of the aExpense application. They will revisit this decision as new releases of the Azure Cloud Services platform become available.

This means that the application must read values from both the service configuration file and the Web.config file. By using the Azure CloudConfigurationManager class to read configuration settings, the application will automatically look first in the ServiceConfiguration.cscfg file. This means that the application code will read connection strings that specify the location of the Expenses data, and other settings, from the ServiceConfiguration.cscfg file.

Using the Azure CloudConfigurationManager Class

The CloudConfigurationManager class simplifies the process of reading configuration settings in an Azure application because its methods automatically read settings from the appropriate location. If the application is running as a .NET web application they return the setting value from the Web.config or App.config file. If the application is running as an Azure Cloud Service or as an Azure Web Site, the methods return the setting value from the ServiceConfiguration.cscfg or ServiceDefinition.csdef file. If the specified setting is not found in the service configuration file, the methods look for it in the Web.config or App.config file.

However, the “fall through” process the CloudConfigurationManager class uses when it cannot find a setting in the ServiceConfiguration.cscfg or ServiceDefinition.csdef file only looks in the <appSettings> section of the Web.config or App.config file. Connection strings, and many other settings, are not located in the <appSettings> section. To resolve this, Adatum created a custom class named CloudConfiguration that uses the CloudConfigurationManager class internally. For example, the following code shows the GetConnectionString method in the custom CloudConfiguration class.

public static string GetConnectionString(string settingName)
{
  // Get connection string from the service configuration file.
  var connString 
       = CloudConfigurationManager.GetSetting(settingName);
  if (string.IsNullOrWhiteSpace(connString))
  {
    // Fall back to connectionStrings section in Web.config.
    return ConfigurationManager.ConnectionStrings[
                                settingName].ConnectionString;
  }
  return connString;
}
Note:
The CloudConfiguration class is located in the Shared\aExpense folder of the examples available for this guide.

Implementing Claims-based Authentication

Before this step of Adatum’s migration process, aExpense used Windows Authentication to authenticate users. This is configured in the Web.config file of the application. In this step, the aExpense application delegates the process of validating credentials to an external claims issuer instead of using Windows Authentication. You make this configuration change in the Web.config file.

Note:
To find out more about claims-based identity, the FedUtil tool, and Windows Identity Foundation (WIF), take a look at the book “A Guide to Claims-Based Identity and Access Control.” You can download a .pdf copy of this book.

The first thing that you'll notice in the Web.config file is that the authentication mode is set to None, while the requirement for all users to be authenticated has been left in place.

<authorization>
  <deny users="?" />
</authorization>
<authentication mode="None" />

The WSFederationAuthenticationModule (FAM) and SessionAuthenticationModule (SAM) modules now handle the authentication process. You can see how these modules are loaded in the system.webServer section of the Web.config file.

Markus Says:
You can make these changes to the Web.config file by running the FedUtil tool.

<system.webServer>
  …
  <add name="WSFederationAuthenticationModule" 
       type="Microsoft.IdentityModel.Web.
           WSFederationAuthenticationModule, …" />
  <add name="SessionAuthenticationModule" 
       type="Microsoft.IdentityModel.Web.
           SessionAuthenticationModule, …" />
</system.webServer>

When the modules are loaded, they're inserted into the ASP.NET processing pipeline to redirect unauthenticated requests to the claims issuer, handle the reply posted by the claims issuer, and transform the security token sent by the claims issuer into a ClaimsPrincipal object. The modules also set the value of the HttpContext.User property to the ClaimsPrincipal object so that the application has access to it.

More specifically, the WSFederationAuthenticationModule redirects the user to the issuer's logon page. It also parses and validates the security token that is posted back. This module also writes an encrypted cookie to avoid repeating the logon process. The SessionAuthenticationModule detects the logon cookie, decrypts it, and repopulates the ClaimsPrincipal object. After the claims issuer authenticates the user, the aExpense application can access the authenticated user's name.

The Web.config file contains a new microsoft.identityModel section that initializes the Windows Identity Foundation (WIF) environment.

<microsoft.identityModel>
  <service></service>
</microsoft.identityModel>
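
The empty <service> element is shown here for brevity; in practice it contains the settings that the FedUtil tool generates. The following fragment sketches typical contents, where the addresses and thumbprint are placeholders:

```xml
<microsoft.identityModel>
  <service>
    <audienceUris>
      <add value="https://{application address}/" />
    </audienceUris>
    <federatedAuthentication>
      <!-- Redirect unauthenticated requests to the claims issuer. -->
      <wsFederation passiveRedirectEnabled="true"
                    issuer="https://{issuer address}/"
                    realm="https://{application address}/"
                    requireHttps="true" />
      <cookieHandler requireSsl="true" />
    </federatedAuthentication>
    <!-- Only tokens signed by a trusted issuer certificate are accepted. -->
    <issuerNameRegistry type="Microsoft.IdentityModel.Tokens.ConfigurationBasedIssuerNameRegistry, Microsoft.IdentityModel">
      <trustedIssuers>
        <add thumbprint="{issuer certificate thumbprint}"
             name="{issuer name}" />
      </trustedIssuers>
    </issuerNameRegistry>
  </service>
</microsoft.identityModel>
```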

You can also use a standard control to handle the user logout process from the application. The following code example from the Site.Master file shows a part of the definition of the standard page header.

<div id="toolbar">
    Logged in as:
    <i>
      <%= Microsoft.Security.Application.Encoder.HtmlEncode 
          (this.Context.User.Identity.Name) %>
    </i> |
    <idfx:FederatedPassiveSignInStatus 
          ID="FederatedPassiveSignInStatus1" 
          runat="server" 
          OnSignedOut="FederatedPassiveSignInStatus1SignedOut" 
          SignOutText="Logout" FederatedPassiveSignOut="true" 
          SignOutAction="FederatedPassiveSignOut" />
</div>

You'll also notice a small change in the way that aExpense handles authorization. Because the authentication mode is now set to None in the Web.config file, the authorization rules in the Web.config file now explicitly deny access to all users as well as allowing access for the designated role.

<location path="Approve.aspx">
  <system.web>
    <authorization>
      <allow roles="Manager" />
      <deny users="*"/>
    </authorization>
  </system.web>
</location>

The claims issuer now replaces the ASP.NET role management feature as the provider of role membership information to the application.

There is one further change to the application that potentially affects the authentication process. If you were to run the aExpense application on more than one web role instance in Azure, the default cookie encryption mechanism (which uses DPAPI) is not appropriate because each instance has a different key. This would mean that a cookie created by one web role instance would not be readable by another web role instance. To solve this problem you should use a cookie encryption mechanism that uses a key shared by all the web role instances. The following code from the Global.asax file shows how to replace the default SessionSecurityHandler object and configure it to use the RsaEncryptionCookieTransform class.

Bharath Says:
Although the initial deployment of aExpense to Azure will only use a single web role, we need to make sure that it will continue to work correctly when we scale up the application. That is why we use RSA with a certificate to encrypt the session cookie.

private void OnServiceConfigurationCreated(object sender, 
    ServiceConfigurationCreatedEventArgs e)
{
    // Use the <serviceCertificate> to protect the cookies that 
    // are sent to the client.
    List<CookieTransform> sessionTransforms =
        new List<CookieTransform>(
            new CookieTransform[] 
            {
                new DeflateCookieTransform(), 
                new RsaEncryptionCookieTransform(
                    e.ServiceConfiguration.ServiceCertificate),
                new RsaSignatureCookieTransform(
                    e.ServiceConfiguration.ServiceCertificate)  
            });
   SessionSecurityTokenHandler sessionHandler = 
    new
     SessionSecurityTokenHandler(sessionTransforms.AsReadOnly());

    e.ServiceConfiguration.SecurityTokenHandlers.AddOrReplace(
        sessionHandler);
}

Managing User Data

Before the migration, aExpense used an LDAP query to retrieve Cost Center, Manager, and Display Name information from Active Directory. It used the ASP.NET Role provider to retrieve the role membership of the user, and the ASP.NET Profile Provider to retrieve the application specific data for the application—in this case, the preferred reimbursement method. The following table summarizes how aExpense accesses user data, and where the data is stored before the migration:

User Data                       Access Mechanism              Storage
Role Membership                 ASP.NET Role Provider         SQL Server
Cost Center                     LDAP                          Active Directory
Manager                         LDAP                          Active Directory
Display Name                    LDAP                          Active Directory
User Name                       ASP.NET Membership Provider   SQL Server
Preferred Reimbursement Method  ASP.NET Profile Provider      SQL Server

After the migration, aExpense continues to use the same user data, but it accesses the data differently. The following table summarizes how aExpense accesses user data, and where the data is stored after the migration:

User Data                       Access Mechanism              Storage
Role Membership                 ADFS                          Active Directory
Cost Center                     ADFS                          Active Directory
Manager                         ADFS                          Active Directory
Display Name                    ADFS                          Active Directory
User Name                       ADFS                          Active Directory
Preferred Reimbursement Method  ASP.NET Profile Provider      SQL Server

The external issuer delivers the claim data to the aExpense application after it authenticates the application user. The aExpense application uses the claim data for the duration of the session and does not need to store it.

The application can read the values of individual claims whenever it needs to access claim data. You can see how to do this if you look in the ClaimHelper class.

Managing Session Data

To switch from using the default, in-memory session provider to the web farm friendly ASP.NET 4 Caching Providers for Azure, the developers made two configuration changes.

The first change is in the ServiceConfiguration.cscfg file to specify the size and configuration of the cache. The following snippet shows how Adatum allocates 15% of the memory of each role instance to use as a distributed cache. The value for NamedCaches is the default set by the SDK; it allows you to change the cache settings while the application is running simply by editing the configuration file.

<ConfigurationSettings>
  ...
  <Setting 
    name="Microsoft.WindowsAzure.Plugins.Caching.NamedCaches" 
    value="{&quot;caches&quot;:[{&quot;name&quot;:&quot;default
            &quot;,&quot;policy&quot;:{&quot;eviction&quot;
           :{&quot;type&quot;:0},&quot;expiration&quot;
           :{&quot;defaultTTL&quot;:10,&quot;isExpirable&quot;
           :true,&quot;type&quot;:1},&quot;
            serverNotification&quot;:{&quot;isEnabled&quot;
           :false}},&quot;secondaries&quot;:0}]}" />
  <Setting
    name="Microsoft.WindowsAzure.Plugins.Caching.DiagnosticLevel"
    value="1" />
  <Setting
    name="Microsoft.WindowsAzure.Plugins.Caching.Loglevel" 
    value="" />
  <Setting name=
    "Microsoft.WindowsAzure.Plugins.Caching.CacheSizePercentage" 
    value="15" />
  <Setting name=
    "Microsoft.WindowsAzure.Plugins.Caching
    .ConfigStoreConnectionString"   
    value="UseDevelopmentStorage=true" />
</ConfigurationSettings>

In Visual Studio, you can use the property sheet for the role to set these values using a GUI rather than editing the values directly. When you deploy to Azure you should replace the use of development storage with your Azure storage account. For more information, see Configuration Model on MSDN.

Markus Says:
Remember that you can use the multiple service configuration feature in Visual Studio to maintain multiple versions of your service configuration file. For example, you can have one configuration for local testing and one configuration for testing your application in Azure.

The second configuration change is in the Web.config file to enable the ASP.NET 4 Caching Providers for Azure. The following snippet shows how Adatum configured the session provider for the aExpense application.

<sessionState mode="Custom" customProvider="NamedCacheBProvider">
  <providers>
    <add cacheName="default" name="NamedCacheBProvider" 
      dataCacheClientName="default" applicationName="aExpense" 
      type="Microsoft.Web.DistributedCache
      .DistributedCacheSessionStateStoreProvider, 
      Microsoft.Web.DistributedCache" />
  </providers>
</sessionState>

For more information, see ASP.NET Session State Provider Configuration Settings on MSDN.

Testing, Deployment, Management, and Monitoring

This section contains topics that describe the way that Adatum needed to review its testing, deployment, management, and monitoring techniques when the aExpense application was deployed to Azure using the PaaS approach and Cloud Services.

Bharath Says:
Moving to using Azure Cloud Services meant that we needed to review our existing processes for testing, deploying, managing, and monitoring applications.

The techniques that Adatum needed to review are how pre-deployment testing is carried out, how the application is packaged and deployed, and the ways that Adatum’s administrators can manage and monitor the application when it is hosted in Azure Cloud Services.

Testing Cloud Services Applications

When you're developing a Cloud Services application for Azure, it's best to do as much development and testing as possible by using the local compute emulator and storage emulator. At Adatum, developers run unit tests and ad-hoc tests in the compute emulator and storage emulator running on their local computers. Although the emulators are not identical to the cloud environment, they are suitable for developers to run tests on their own code. The build server also runs a suite of tests as a part of the standard build process. This is no different from the normal development practices for on-premises applications.

Most testing can be performed using the compute emulator and storage emulator.

The testing team performs the majority of its tests using the local compute emulator as well. They only deploy the application to the Azure test environment to check the final version of the application before it is passed to the administration team for deployment to production. This way, they can minimize the costs associated with the test environment by limiting the time that they have a test application deployed in the cloud.

Jana Says:
You can deploy an application to your Azure test environment just while you run the tests, but don't forget that any time you do something in Azure, even if it's only testing, it costs money. You should remove test instances when you are not using them.

However, there are some differences between the compute and storage emulators and the Azure runtime environment, and so local testing can only provide the platform for the first level of testing. The final pre-production testing must be done in a real Azure environment. This is accomplished by using the separate test and production environments in the cloud.

Note:
For details of the differences between the local emulator environment and Azure, see “Differences Between the Compute Emulator and Azure” and “Differences Between the Storage Emulator and Azure Storage Services” on MSDN.

Cloud Services Staging and Production Areas

Adatum wants to be able to deploy an application to either a staging or a production area. Azure Cloud Services provides both: within the same cloud service you can deploy an application to a staging environment and to a production environment. A common scenario is to deploy first to the staging environment and then, at the appropriate time, move the new version to the production environment. The only difference between the two environments is the URL you use to access them.

In the staging environment the URL to access the aExpense web role will be something obscure like http://aExpenseTestsjy6920asgd09.cloudapp.net, while in the production environment you will have a friendly URL such as http://aExpense.cloudapp.net. This allows you to test new and updated applications in a private environment that others don't know about before deploying them to production. You can also swap the contents of the production and staging areas, which makes it easy to deploy or roll back the application to the previous version without needing to redeploy the role.

Poe Says:
You can quickly deploy or roll back a change in production by using the swap operation. The swap is almost instantaneous because it just involves Azure changing the DNS entries for the two areas.

This feature is useful for Adatum to perform no downtime upgrades in the production environment. The operations staff can deploy a new version of the aExpense application to the staging deployment slot, perform some final tests, and then swap the production and staging slots to make the new version of the application available to users.

However, Adatum wants to control access to the live production environment so that only administrators can deploy applications there. To achieve this, Adatum uses separate Azure subscriptions.

Separate Test and Live Subscriptions

To ensure that only administrators have access to the live production environment, Adatum has two Azure subscriptions. One is used for testing, and one is the live production subscription. The two subscriptions are standard Azure subscriptions, and so provide identical environments. Adatum can be confident that the application will run in the same way in both of them. Adatum can also be sure that deployment of the application will work in the same way in both environments because it can use the same package to deploy to both test and production.

Bharath Says:
All Azure environments in the cloud are the same; there's nothing special about the staging area or a separate test subscription. You can be sure that different subscriptions have identical environments, which is something that's very difficult to guarantee in your on-premises environments.

The test application connects to a test database server running in the test subscription so that developers and testers can manage the data used for testing. The live application running in the production environment uses a different server name in the connection string to connect to the live database that also runs in the production environment.

Because each subscription has its own Microsoft account and its own set of API keys, Adatum can limit access to each environment to a particular set of personnel. Members of the testing team and key members of the development team have access to only the testing account. Only two key people in the Adatum operations department have access to the production account.

Poe Says:
Poe Because development and testing staff don't have access to the production environment, there's no risk of accidentally deploying to the live environment.

Microsoft bills Adatum separately for the two environments, which makes it easy for Adatum to separate the running costs for the live environment from the costs associated with development and test. This allows Adatum to manage the product development budget separately from the operational budget, in a manner similar to the way it manages budgets for on-premises applications.

Bharath Says:
Bharath Having separate Azure subscriptions for production and test helps Adatum to track the costs of running the application in production separately from the costs associated with development and test.

Figure 2 summarizes the application life cycle management approach at Adatum.

Figure 2 - Adatum's application life cycle management environment

Figure 2 shows the two separate Azure subscriptions that Adatum uses for test and production, as well as the on-premises environment that consists of development computers, test computers, a build server, and a source code management tool.

Managing Azure Services

There are three ways to access an Azure environment to perform management tasks such as deploying and removing roles and managing other services. The first is through the Azure Management Portal, where a single Microsoft account has access to everything in the portal. The second is the Azure Service Management API, where API certificates are used to access all the functionality exposed by the API. The third is the Azure Management PowerShell cmdlets. In all three cases there is currently no way to restrict a user to managing only a subset of the available functionality. Within Adatum, almost all operations that affect the test or production environment are performed using scripts based on the PowerShell cmdlets instead of the management portal.
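
For example, an operations script based on the cmdlets might list each cloud service in the current subscription together with the status of its production deployment. The following is a sketch; it assumes the subscription has already been selected with Set-AzureSubscription:

```powershell
# List every cloud service in the current subscription and report
# the status of its production deployment, if one exists.
Get-AzureService | ForEach-Object {
  $deployment = $_ | Get-AzureDeployment -Slot Production `
                       -ErrorAction SilentlyContinue
  if ($deployment -ne $null) {
    "{0}: {1}" -f $_.ServiceName, $deployment.Status
  } else {
    "{0}: no production deployment" -f $_.ServiceName
  }
}
```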

Poe Says:
Poe We use scripts because it means that we can reliably repeat operations. We're also planning to enhance some of our scripts to record when they were run, by whom, and what parameters were used. See the section “Using Deployment Scripts” later in this chapter for more details.

Setup and Deployment

This section describes how Adatum configures, packages, and deploys the aExpense application. To deploy an application to Azure Cloud Services you must upload two files: the service package file that contains all your application’s files, and the service configuration file (.cscfg) that contains your application’s configuration data. You can generate the service package file either by using the Cspack.exe command-line utility or by using Visual Studio if you have installed the Azure SDK.

Poe Says:
Poe After you have deployed your application to Azure, it is possible to edit the settings in the service configuration file while the application is running to make changes on the fly. You should think carefully about what configuration settings you put in the .cscfg file for your application.

Managing Different Local, Test, and Live Configurations

Adatum uses different configuration files when deploying to the local Compute Emulator, test, and live (production) environments. For the aExpense application, the key difference between the contents of these configuration files is the storage connection strings. In Azure storage, this information is unique to each Azure subscription and uses randomly generated access keys; for SQL Server or SQL Database the connection string includes the user name and password to connect to the database.

The developers and testers at Adatum make use of the feature in Visual Studio that allows you to maintain multiple service configuration files. Adatum includes one service configuration for testing on the local Compute and Storage Emulators, and another configuration for deploying to the test subscription in Azure, as shown in Figure 3.

Figure 3 - Using separate configuration files for local and cloud configurations

This is what the relevant section of the “Local” service configuration file looks like. It is used when the application is running in the local Compute Emulator and using the local Storage Emulator.

<ConfigurationSettings>
  <Setting name="DataConnectionString"
    value="UseDevelopmentStorage=true" />
  <Setting name="Microsoft.WindowsAzure.Plugins
                 .Diagnostics.ConnectionString" 
    value="UseDevelopmentStorage=true" />
  <Setting name="Microsoft.WindowsAzure.Plugins
                 .Caching.NamedCaches" 
    value="" />
  <Setting name="Microsoft.WindowsAzure.Plugins
                 .Caching.Loglevel" 
    value="" />
  <Setting name="Microsoft.WindowsAzure.Plugins
                 .Caching.CacheSizePercentage" 
    value="15" />
  <Setting name="Microsoft.WindowsAzure.Plugins
                 .Caching.ConfigStoreConnectionString"
    value="UseDevelopmentStorage=true" />
  <Setting name="aExpense" 
    value="Data Source=LocalTestSQLServer;
           Initial Catalog=aExpense;
           Integrated Security=True" />
</ConfigurationSettings>

This is what the same section of the “CloudTest” service configuration file looks like. It specifies the resources within Adatum’s Azure test subscription that the application will use when deployed there.

<ConfigurationSettings>
  <Setting name="DataConnectionString"
     value="DefaultEndpointsProtocol=https;
            AccountName={StorageAccountName};
            AccountKey={StorageAccountKey}" />
  <Setting name="Microsoft.WindowsAzure.Plugins
                 .Diagnostics.ConnectionString" 
     value="DefaultEndpointsProtocol=https;
            AccountName={StorageAccountName};
            AccountKey={StorageAccountKey}" />
  <Setting name="Microsoft.WindowsAzure.Plugins
                 .Caching.NamedCaches" 
    value="" />
  <Setting name="Microsoft.WindowsAzure.Plugins
                 .Caching.Loglevel" 
    value="" />
  <Setting name="Microsoft.WindowsAzure.Plugins
                 .Caching.CacheSizePercentage" 
    value="15" />
  <Setting name="Microsoft.WindowsAzure.Plugins
                 .Caching.ConfigStoreConnectionString"
     value="DefaultEndpointsProtocol=https;
            AccountName={StorageAccountName};
            AccountKey={StorageAccountKey}" />
  <Setting name="aExpense" 
     value="Data Source=CloudTestSQLServer;
            Initial Catalog=aExpenseTest;
            UId={UserName};Pwd={Password};" />
</ConfigurationSettings>

Notice that the project does not contain a configuration for the live (production) environment. Only two key people in the operations department have access to the storage access keys and SQL credentials for the production environment, which prevents anyone else from accidentally using production storage during testing.

Figure 4 summarizes the approach Adatum uses for managing configuration and deployment.

Figure 4 - Overview of the configuration and deployment approach Adatum uses

Developers and testers can run the application in the local compute and storage emulators, and publish it directly to the test environment using the Publish wizard in Visual Studio. They can also deploy it by using a script that updates the test configuration file, creates the package, and deploys the application to the test environment.

Poe Says:
Poe We use the command-line utility on the build server to automate the generation of our deployment package and configuration file. For testing on the compute emulator or in the test environment we use the Publish option in Visual Studio that packages and deploys the application in a single step.

When testing is complete and the application is ready to go live, the administrators who have permission to deploy to the live environment run a series of scripts that automate updating the configuration, package creation, and deployment. The scripts create the deployment package (which is the same for both the test and live environments) and then update the configuration so that it includes the storage account and SQL database settings that allow the application to use the live storage and database. The scripts then deploy the package and the updated configuration to the live production environment.

Preparing for Deployment to Azure

There are a number of steps that Adatum performed to prepare the environments in both the test and production subscriptions before deploying the aExpense application for the first time. Initially, Adatum performed these steps using the Azure Management Portal. However, it plans to automate many of these steps, along with the deployment itself, using the Azure PowerShell cmdlets.

Before deploying the aExpense application to a subscription, Adatum created a Cloud Service to host the application. The Cloud Service determines the Azure datacenter where Adatum will deploy the application and the URL where users will access the application.

The aExpense application uses an HTTPS endpoint and therefore requires a certificate to use for SSL encryption. Adatum uploaded a certificate to the Cloud Service in the portal. For more information about how to use HTTPS in Cloud Services, see Configuring SSL for an Application in Azure.

Developers, testers, and operations staff can use the Azure Management Portal to configure and deploy the aExpense application; they sign in to the portal with Adatum’s Microsoft account credentials. However, Adatum scripts many of these operations using the Azure PowerShell cmdlets. For information about how to install and configure the Azure PowerShell cmdlets to use with your subscription, see Getting Started with Azure PowerShell.

Note:
The Azure PowerShell cmdlets use the Azure Service Management REST API to communicate with Azure. The communication is secured with a management certificate, which is downloaded and installed on the client machine as part of the Azure PowerShell cmdlets installation. This means you are not prompted for credentials when you use these cmdlets.
If you use the Publish Azure Application wizard in Visual Studio to deploy directly to Azure, it also uses a management certificate from Azure. For more information, see Publishing Azure Applications to Azure from Visual Studio.

Deploying to Cloud Services in Azure

When you deploy a Cloud Services application to Azure, you upload the service package and configuration files to a Cloud Service in your Azure subscription, specifying whether you are deploying to the production or staging environment within that Cloud Service.

Poe Says:
Poe To help troubleshoot deployments, you can enable Remote Desktop Access to the role when you perform the deployment.

You can deploy an application by uploading the files using the Azure Management Portal, by using the Publish Azure Application wizard in Visual Studio, or by using the Azure PowerShell cmdlets. Both the Visual Studio wizard and the PowerShell cmdlets authenticate with your subscription by using a management certificate instead of a Microsoft account.

For deploying the aExpense application to the test subscription, Adatum sometimes uses the wizard in Visual Studio, but typically uses a PowerShell script. Adatum always deploys to the production subscription using a PowerShell script. Adatum also needs to update the Web.config file for each deployment because some values cannot be placed in the appropriate service configuration file (for more details see the section “Application Configuration” earlier in this chapter).

Using Deployment Scripts

Manually modifying your application's configuration files before you deploy the application to Azure is an error-prone exercise that introduces unnecessary risk in a production environment. The developers at Adatum have developed a set of deployment scripts that automatically update the configuration files, package the application files, and upload the application to Azure.

The automated deployment of the aExpense application in production is handled in two stages. The first stage uses an MSBuild script to compile and package the application for deployment to Azure. This build script uses a custom MSBuild task to edit the configuration files for a cloud deployment, adding the production storage connection details. The second stage uses a Windows PowerShell script with some custom cmdlets to perform the deployment to Azure.

Poe Says:
Poe Automating the deployment of applications to Azure using scripts will make it much easier to manage applications running in the cloud.

The MSBuild script for the aExpense application uses a custom build task named RegexReplace to make the changes during the build. The example shown here replaces the development storage connection strings with the Azure storage connection strings.

Markus Says:
Markus You should also have a target that resets the development connection strings for local testing.

<Target Name="SetConnectionStrings">
  <RegexReplace
    Pattern='Setting name="Microsoft.WindowsAzure.Plugins
                           .Diagnostics.ConnectionString"
    value="UseDevelopmentStorage=true"'
    Replacement='Setting name="Microsoft.WindowsAzure.Plugins
                               .Diagnostics.ConnectionString"
    value="DefaultEndpointsProtocol=https;
           AccountName=$(StorageAccountName);
           AccountKey=$(StorageAccountKey)"'
    Files='$(AzureProjectPath)\$(ServiceConfigName)'/>
  <RegexReplace
    Pattern='Setting name="DataConnectionString"
    value="UseDevelopmentStorage=true"'
    Replacement='Setting name="DataConnectionString"
    value="DefaultEndpointsProtocol=https;
           AccountName=$(StorageAccountName);
           AccountKey=$(StorageAccountKey)"'
    Files='$(AzureProjectPath)\$(ServiceConfigName)'/>
  <RegexReplace
    Pattern='Setting name="Microsoft.WindowsAzure.Plugins
                       .Caching.ConfigStoreConnectionString"
    value="UseDevelopmentStorage=true"'
    Replacement='Setting name="Microsoft.WindowsAzure.Plugins
                       .Caching.ConfigStoreConnectionString"
    value="DefaultEndpointsProtocol=https;
           AccountName=$(StorageAccountName);
           AccountKey=$(StorageAccountKey)"'
    Files='$(AzureProjectPath)\$(ServiceConfigName)'/>
  <RegexReplace
    Pattern='connectionString="Data Source=LocalTestSQLServer;
           Initial Catalog=aExpense;Integrated Security=True"'
    Replacement='connectionString="Data Source=
                           $(DatabaseServer);
                           Initial Catalog=$(DatabaseName);
                           UId=$(UserName);
                           Pwd=$(Password);"'
    Files='$(WebProjectConfig)'/>    
</Target>
Note:
The source code for the RegexReplace custom build task is available in the download for this phase. Note that the values used in the example scripts do not exactly match those shown above.
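
A custom MSBuild task of this kind needs only a few lines of code. The following C# sketch shows one way a RegexReplace task could be implemented; the property names are inferred from how the task is invoked above, and this is not the actual source from the download:

```csharp
using System.IO;
using System.Text.RegularExpressions;
using Microsoft.Build.Framework;
using Microsoft.Build.Utilities;

// Sketch of a RegexReplace build task: for each file listed in
// Files, replace every match of Pattern with Replacement.
public class RegexReplace : Task
{
    [Required]
    public string Pattern { get; set; }

    [Required]
    public string Replacement { get; set; }

    [Required]
    public ITaskItem[] Files { get; set; }

    public override bool Execute()
    {
        foreach (ITaskItem item in Files)
        {
            string path = item.ItemSpec;
            string text = File.ReadAllText(path);
            File.WriteAllText(path,
                Regex.Replace(text, Pattern, Replacement));
        }
        return true;
    }
}
```

Because the task rewrites files in place, it should only run against copies of the configuration files produced by the build, not against the files under source control.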

The team at Adatum then developed a Windows PowerShell script (deploy.ps1) that will deploy the packaged application to Azure, and can invoke this script from an MSBuild task. The script needs the following pieces of information to connect to Azure. You must replace the values for thumbprint and subscription ID with the values for your own Azure account:

  • Build path. This parameter identifies the folder where you build your deployment package. For example: C:\AExpenseBuildPath.
  • Package name. This parameter identifies the package to upload to Azure. For example: aExpense.Azure.cspkg.
  • Service config. This parameter identifies the service configuration file for your project. For example: ServiceConfiguration.cscfg.
  • Service name. This parameter identifies the name of your Azure hosted service. For example: aExpense.
  • Thumbprint. This is the thumbprint of the service management certificate.
  • Subscription ID (sub). This parameter must be set to the ID of your Azure subscription. You can find your Subscription ID in the properties pane of the Management Certificates page in the original Azure Management Portal.
  • Slot. This parameter identifies the environment where you will deploy (Production or Staging).
  • Storage account key (storage). This parameter is the key for your storage account, which can be obtained from the Azure Management Portal.
Note:
The scripts described in this section use an Azure management certificate to authenticate with the Azure subscription. This certificate was installed from the publishing settings downloaded when the operator at Adatum installed the Azure PowerShell cmdlets. By default, this certificate is stored in the personal certificate store of the person who installed the cmdlets. You should ensure that only authorized users have access to this certificate because it grants full access to the Azure subscription it came from. This is not the same certificate as the SSL certificate used by the HTTPS endpoint in the aExpense application.

$buildPath = $args[0]
$packagename = $args[1]
$serviceconfig = $args[2]
$servicename = $args[3]
$mgmtcertthumbprint = $args[4]
$cert = Get-Item cert:\CurrentUser\My\$mgmtcertthumbprint
$sub = $args[5]
$slot = $args[6]
$storage = $args[7]
$package = join-path $buildPath $packageName
$config = join-path $buildPath $serviceconfig
$a = Get-Date
$buildLabel = $a.ToShortDateString() + "-" + $a.ToShortTimeString()

#Important!  When using file based packages (non-http paths),
#the cmdlets will attempt to upload the package to blob storage
#for you automatically.  If you do not specify a
#-StorageServiceName option, they will attempt to use a
#storage account with the same name as $servicename.  If that
#account does not exist, the deployment will fail.  This only
#applies to file-based package paths.

#Check for 32-bit or 64-bit operating system
$env = Get-Item Env:\ProgramFiles*x86*  
if ($env -ne $null) {
  $PFilesPath = $env.value
} else {
  $env = Get-Item Env:\ProgramFiles
  $PFilesPath = $env.value
}

$ImportModulePath = Join-Path $PFilesPath "Microsoft SDKs\Windows Azure\PowerShell\Microsoft.WindowsAzure.Management.psd1"
Import-Module $ImportModulePath

Set-AzureSubscription -SubscriptionName Adatum -Certificate $cert `
           -SubscriptionId $sub -CurrentStorageAccount $storage

$hostedService = Get-AzureService $servicename | 
                 Get-AzureDeployment -Slot $slot

if ($hostedService.Status -ne $null)
{
  $hostedService |
    Set-AzureDeployment -Status -NewStatus "Suspended"
  $hostedService | 
    Remove-AzureDeployment -Force
}
Get-AzureService -ServiceName $servicename |
  New-AzureDeployment -Slot $slot -Package $package `
        -Configuration $config -Label $buildLabel

Get-AzureService -ServiceName $servicename | 
    Get-AzureDeployment -Slot $slot | 
    Set-AzureDeployment -Status -NewStatus "Running"

The script first locates and imports the Azure PowerShell module. Then, if there is an existing deployment in the target slot, the script suspends and removes it. Finally, the script deploys and starts the new version of the service.

Poe Says:
Poe The examples here deploy aExpense to the staging environment. You can easily modify the scripts to deploy to production. You can also script in-place upgrades when you have multiple role instances.

MSBuild can invoke the Windows PowerShell script in a task and pass all the necessary parameter values:

<Target Name="Deploy" 
  DependsOnTargets="SetConnectionStrings;Build;DeployCert">
  <MSBuild
    Projects="$(AzureProjectPath)\$(AzureProjectName)"
    Targets="CorePublish"
    Properties="Configuration=$(BuildType)"/>
  <Exec WorkingDirectory="$(MSBuildProjectDirectory)"
    Command=
    "$(windir)\system32\WindowsPowerShell\v1.0\powershell.exe
    -NoProfile -f deploy.ps1 $(PackageLocation) $(PackageName)
    $(ServiceConfigName) $(HostedServiceName) 
    $(ApiCertThumbprint) $(SubscriptionKey) $(HostSlot)
    $(StorageAccountName)" />
</Target>
Note:
See the release notes provided with the examples for information on using the Windows PowerShell scripts.

The aExpense application uses an HTTPS endpoint, so as part of the automatic deployment, Adatum needed to upload the necessary certificate. The developers created a Windows PowerShell script named deploycert.ps1 that performs this operation.

This script needs the following pieces of information to connect to Azure. You must replace the values for the thumbprint and subscription ID with the values for your own Azure account:

  • Service name. This parameter identifies the name of your Azure hosted service. For example: aExpense.
  • Certificate to deploy. This parameter specifies the certificate to deploy; it’s the one the application is using. Specify the full path to the .pfx file that contains the certificate.
  • Certificate password. This parameter specifies the password of the certificate you will deploy.
  • Thumbprint. This is the thumbprint of the service management certificate.
  • Subscription ID (sub). This parameter must be set to the ID of your Azure subscription. You can find your Subscription ID in the properties pane of the Management Certificates page in the original Azure Management Portal.
  • Algorithm. This parameter specifies the algorithm that was used to create the certificate thumbprint.
  • Certificate thumbprint. This parameter must be set to the value of the thumbprint of the certificate in the .pfx file that you will deploy.

$servicename = $args[0]
$certToDeploy = $args[1]
$certPassword = $args[2]
$mgmtcertthumbprint = $args[3]
$cert = Get-Item cert:\CurrentUser\My\$mgmtcertthumbprint
$sub = $args[4]
$algo = $args[5]
$certThumb = $args[6]

$env = Get-Item Env:\ProgramFiles*x86*
if ($env -ne $null)
{
  $PFilesPath = $env.value
}
else
{
  $env = Get-Item Env:\ProgramFiles
  $PFilesPath = $env.value
}

$ImportModulePath = Join-Path $PFilesPath "Microsoft SDKs\Windows Azure\PowerShell\Microsoft.WindowsAzure.Management.psd1"
Import-Module $ImportModulePath

Set-AzureSubscription -SubscriptionName Adatum -Certificate $cert `
                      -SubscriptionId $sub

try
{
  Remove-AzureCertificate -ServiceName $servicename `
          -ThumbprintAlgorithm $algo -Thumbprint $certThumb
}
catch {}

Add-AzureCertificate -ServiceName $servicename -CertToDeploy $certToDeploy -Password $certPassword

An MSBuild file can invoke this script and pass the necessary parameters. The following code is an example target from an MSBuild file.

<Target Name="DeployCert">
  <Exec WorkingDirectory="$(MSBuildProjectDirectory)" 
    Command=
      "$(windir)\system32\WindowsPowerShell\v1.0\powershell.exe 
      -f deploycert.ps1 $(HostedServiceName) $(CertLocation) 
      $(CertPassword) $(ApiCertThumbprint) $(SubscriptionKey) 
      $(DeployCertAlgorithm) $(DeployCertThumbprint)" />
</Target>

Continuous Delivery

Adatum’s development team uses Microsoft Team Foundation Server (TFS) to manage its development projects. Azure can integrate with both an on-premises TFS installation and with the Azure-hosted Team Foundation Service to automate deployment each time a check-in occurs. Adatum is considering using this feature to automate deployment to the test environment.

The integration provides a package build process that is equivalent to the Package command in Visual Studio, and a publishing process that is equivalent to the Publish command in Visual Studio. This would allow developers to automatically create packages and deploy them to Azure after every code check-in.

Note:
For more information about implementing Continuous Delivery using TFS, see “Continuous Delivery for Cloud Applications in Azure” and “Announcing Continuous Deployment to Azure with Team Foundation Service.”

Using a Mock Issuer

By default, the downloadable version of aExpense is set up to run on a standalone development workstation. This is similar to the way you might develop your own applications. It's generally easier to start with a single development computer.

Poe Says:
Poe Using a simple, developer-created claims issuer is good practice during development and unit testing.

To make this work, the developers of aExpense wrote a small stub implementation of an issuer. You can find this code in the downloadable Visual Studio solution. The project is in the Dependencies folder and is named Adatum.SimulatedIssuer.

When you first run the aExpense application, you'll find that it communicates with the stand-in issuer. The issuer issues predetermined claims. It's not very difficult to write this type of component, and you can reuse the downloadable sample, or you can start with the template included in the Windows Identity Foundation (WIF) SDK.

Note:
You can download the WIF SDK from the Microsoft Download Center. The guide “A Guide to Claims-Based Identity and Access Control” describes several ways you can create a claims issuer.

Converting to a Production Issuer

When you are ready to deploy to a production environment, you'll need to migrate from the simulated issuer that runs on your development workstation to a component such as ADFS 2.0.

Making this change requires two steps. First, you need to configure the issuer so that it recognizes requests from your web application and provides the appropriate claims. You need to do this only once unless you change the claims required by the application.

Then, each time you deploy the solution from test to production, you need to modify the Web application's Web.config file using the FedUtil utility such that it points to the production issuer. You may be able to automate this change by using deployment scripts, or by adding code that copies values from your service configuration files to the live web.config file at runtime as described in the section “Application Configuration” earlier in this chapter.
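
For example, with a WIF-based application the issuer is referenced in the <microsoft.identityModel> section of Web.config that FedUtil generates. Switching from the simulated issuer to the production issuer means updating values such as these (a sketch; the URLs are placeholders, not Adatum’s real addresses):

```xml
<microsoft.identityModel>
  <service>
    <federatedAuthentication>
      <!-- Redirect unauthenticated requests to the production
           issuer instead of the simulated issuer. -->
      <wsFederation passiveRedirectEnabled="true"
                    issuer="https://{production-issuer}/FederationPassive/"
                    realm="https://{application-url}/"
                    requireHttps="true" />
    </federatedAuthentication>
  </service>
</microsoft.identityModel>
```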

Note:
To learn more about FedUtil and configuring applications to issue claims, take a look at the guide “A Guide to Claims-Based Identity and Access Control.”

You can refer to documentation provided by your production issuer for instructions about how to add a relying party and how to add claims rules.

When you forward a request to a claims issuer, you must include a wreply parameter that tells the claims issuer where to return the claims. If you are testing your application both locally and in the cloud, you don't want to hard-code this URL because it must reflect the real address of the application. The following code shows how the aExpense application generates the wreply value dynamically in the Global.asax.cs file.

Building the wreply parameter dynamically simplifies testing the application in different environments.

private void
  WSFederationAuthenticationModule_RedirectingToIdentityProvider
  (object sender, RedirectingToIdentityProviderEventArgs e)
{
    // In the Azure environment, build a wreply parameter
    // for the SignIn request that reflects the real
    // address of the application.
    HttpRequest request = HttpContext.Current.Request;
    Uri requestUrl = request.Url;
    StringBuilder wreply = new StringBuilder();

    wreply.Append(requestUrl.Scheme); // e.g. "http" or "https"
    wreply.Append("://");
    wreply.Append(request.Headers["Host"] ??
        requestUrl.Authority);
    wreply.Append(request.ApplicationPath);

    if (!request.ApplicationPath.EndsWith("/"))
    {
        wreply.Append("/");
    }

    e.SignInRequestMessage.Reply = wreply.ToString();
}

Accessing Diagnostics Log Files

The aExpense application uses the Logging Application Block and the Exception Handling Application Block to capture information from the application and write it to the Windows event log. The Cloud Services version of the application continues to use the same application blocks, and through a configuration change, it is able to write log data to the Azure logging system.

Jana Says:
Jana The Logging Application Block and the Exception Handling Application Block are part of the Enterprise Library. We use them in a number of applications within Adatum.

For aExpense to write logging information to Azure logs, Adatum made a change to the Web.config file to make the Logging Application Block use the Azure trace listener.

Poe Says:
Poe We want to have access to the same diagnostic data when we move to the cloud.

<listeners>
<add listenerDataType="Microsoft.Practices.EnterpriseLibrary.
     Logging.Configuration.SystemDiagnosticsTraceListenerData, 
     Microsoft.Practices.EnterpriseLibrary.Logging,
     Version=5.0.414.0, Culture=neutral,
     PublicKeyToken=31bf3856ad364e35"
  type="Microsoft.WindowsAzure.Diagnostics
        .DiagnosticMonitorTraceListener, 
        Microsoft.WindowsAzure.Diagnostics, Version=1.0.0.0,
        Culture=neutral,PublicKeyToken=31bf3856ad364e35"
  traceOutputOptions="Timestamp, ProcessId" 
  name="System Diagnostics Trace Listener" />
</listeners>

If you create a new Azure Project in Visual Studio, the Web.config file will contain the configuration for the Azure trace listener. The following code example from the Web.config file shows the trace listener configuration you must add if you are migrating an existing ASP.NET web application.

<system.diagnostics>
    <trace>
      <listeners>
        <add type="Microsoft.WindowsAzure.Diagnostics
            .DiagnosticMonitorTraceListener,
            Microsoft.WindowsAzure.Diagnostics, Version=1.0.0.0,
            Culture=neutral, PublicKeyToken=31bf3856ad364e35"
            name="AzureDiagnostics">
          <filter type="" />
        </add>
      </listeners>
    </trace>
  </system.diagnostics>

By default in Azure, diagnostic data is not automatically persisted to storage; instead, it is held in a memory buffer. In order to access the diagnostic data, you must either add some code to your application that transfers the data to Azure storage, or add a diagnostics configuration file to your project. You can either schedule Azure to transfer log data to storage at timed intervals, or perform this task on-demand.
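
The code-based alternative can be sketched as follows, using the Azure SDK 1.x diagnostics API from the role's OnStart method (the role class name is illustrative, and the transfer period shown is an example, not a requirement):

```csharp
using System;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

// Sketch: configure a scheduled transfer of the trace logs to
// Azure storage in code instead of using a diagnostics.wadcfg
// file.
public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        DiagnosticMonitorConfiguration config =
            DiagnosticMonitor.GetDefaultInitialConfiguration();

        // Transfer buffered trace logs to storage every minute.
        config.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
        config.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;

        // Start the diagnostic monitor using the connection
        // string defined in the service configuration file.
        DiagnosticMonitor.Start(
            "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString",
            config);

        return base.OnStart();
    }
}
```

The drawback of this approach for Adatum is that the monitor does not start until OnStart runs, which is why Adatum chose the configuration file instead.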

Adatum decided to use a diagnostics configuration file to control how the log data is transferred to persistent storage; the advantage of using a configuration file is that it enables Adatum to collect trace data from the Application_Start method where the aExpense application performs its initialization routines. The following snippet shows the sample diagnostics.wadcfg file from the solution.

<?xml version="1.0" encoding="utf-8" ?>
<DiagnosticMonitorConfiguration
    xmlns="http://schemas.microsoft.com/ServiceHosting/
           2010/10/DiagnosticsConfiguration"
      configurationChangePollInterval="PT1M"
      overallQuotaInMB="5120">
  <DiagnosticInfrastructureLogs bufferQuotaInMB="1024"
     scheduledTransferLogLevelFilter="Verbose"
     scheduledTransferPeriod="PT1M" />
  <Logs bufferQuotaInMB="1024"
     scheduledTransferLogLevelFilter="Verbose"
     scheduledTransferPeriod="PT1M" />
  <Directories bufferQuotaInMB="1024"
     scheduledTransferPeriod="PT1M">

    <!-- These three elements specify the special directories 
           that are set up for the log types -->
    <CrashDumps container="wad-crash-dumps"
                directoryQuotaInMB="256" />
    <FailedRequestLogs container="wad-frq"
                directoryQuotaInMB="256" />
    <IISLogs container="wad-iis" directoryQuotaInMB="256" />

  </Directories>
  <PerformanceCounters bufferQuotaInMB="512"
                       scheduledTransferPeriod="PT1M">
    <!-- The counter specifier is in the same format as the 
         imperative diagnostics configuration API -->
    <PerformanceCounterConfiguration
       counterSpecifier="\Processor(_Total)\% Processor Time"
       sampleRate="PT5S" />
  </PerformanceCounters>
  <WindowsEventLog bufferQuotaInMB="512"
     scheduledTransferLogLevelFilter="Verbose"
     scheduledTransferPeriod="PT1M">
    <!-- The event log name is in the same format as the 
         imperative diagnostics configuration API -->
    <DataSource name="System!*" />
  </WindowsEventLog>
</DiagnosticMonitorConfiguration>

The value of overallQuotaInMB must be greater than the sum of the bufferQuotaInMB values in the diagnostics.wadcfg file. You must also configure a local storage resource named "DiagnosticsStore" in the Web role that is at least as large as the overallQuotaInMB value in the diagnostics configuration file. In this example, the log files are transferred to storage every minute; you can then view them with any storage browsing tool, such as the Server Explorer window in Visual Studio.
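As an illustration, the local storage resource can be declared in the ServiceDefinition.csdef file. The following sketch shows one way to do this; the service and role names are hypothetical, and the size shown simply satisfies the requirement that the resource be at least as large as the 5120 MB overallQuotaInMB value above.

```xml
<ServiceDefinition name="aExpense"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="aExpense.Web">
    <LocalResources>
      <!-- Must be at least as large as overallQuotaInMB
           (5120 MB in this example). -->
      <LocalStorage name="DiagnosticsStore"
                    sizeInMB="8192"
                    cleanOnRoleRecycle="false" />
    </LocalResources>
  </WebRole>
</ServiceDefinition>
```

Setting cleanOnRoleRecycle to false preserves any buffered diagnostic data that has not yet been transferred when the role instance is recycled.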

Note:
You must also configure the trace listener for the Azure IISHost process separately in an App.config file. For more detailed information about using Azure diagnostics, see Collecting Logging Data by Using Azure Diagnostics on MSDN. The blog post "Configuring WAD via the diagnostics.wadcfg Config File" also contains some useful tips.

As an alternative to using the diagnostics.wadcfg file, Adatum could have configured diagnostics in code in the OnStart method of the role. However, this approach means that any change to the configuration requires the application to be redeployed.
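A minimal sketch of this code-based alternative is shown below. It assumes the Azure SDK 1.x diagnostics API (the DiagnosticMonitor class) and simply mirrors some of the transfer settings from the diagnostics.wadcfg file above; it is not the approach Adatum adopted.

```csharp
using System;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Start from the default configuration and mirror the
        // settings in diagnostics.wadcfg.
        var config = DiagnosticMonitor.GetDefaultInitialConfiguration();

        config.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;
        config.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);

        config.PerformanceCounters.DataSources.Add(
            new PerformanceCounterConfiguration
            {
                CounterSpecifier = @"\Processor(_Total)\% Processor Time",
                SampleRate = TimeSpan.FromSeconds(5)
            });
        config.PerformanceCounters.ScheduledTransferPeriod =
            TimeSpan.FromMinutes(1);

        // The connection string setting defined by the diagnostics
        // plug-in in ServiceConfiguration.cscfg.
        DiagnosticMonitor.Start(
            "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString",
            config);

        return base.OnStart();
    }
}
```

Because these values are compiled into the role, changing them requires redeploying the application, which is why the configuration file approach is preferable here.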

Bharath Says:
Because persisting diagnostic data to Azure storage costs money, we will need to plan how long to keep the diagnostic data in Azure and how we are going to download it for offline analysis.

More Information

The Azure Developer Center contains links to plenty of resources to help you learn about developing applications for Azure.

MSDN is a good starting point for learning more about Azure and Azure SQL Database.

You can download the latest versions of Azure tools for developing applications using .NET and other languages from the Azure Developer Center Downloads page.

“Managing Azure SQL Database using SQL Server Management Studio” contains steps for connecting to and managing Azure SQL Database by using an on-premises instance of SQL Server Management Studio.

“About the Service Management API” contains an overview of managing Azure services using scripts and code.

You can download the Azure PowerShell cmdlets and other management tools from the developer tools page.

For more information about caching and the ASP.NET 4 Caching Providers for Azure, see "Caching in Azure."

For more information about the Universal providers for profile data, see “Microsoft ASP.NET Universal Providers” and the Hands-on Labs that accompany this guide.



© 2014 Microsoft