Everywhere you turn nowadays, you hear about the cloud—that it’s a major step in the evolution of the Web and will change the way you develop, deploy and manage applications. But not everyone has figured out how the cloud really applies to them. This is especially true for those with medium-to-large infrastructures and relatively flat usage consumption—where the capitalized cost is beneficial compared to the operational cost of the cloud. However, if your infrastructure is on the small side or you have a dynamic consumption model, the cloud—Azure—is a no-brainer. Moreover, for shops heavy in process, where standing up a development environment is like sending a bill to Capitol Hill, Azure can provide a great platform for rapid prototyping.
It’s with those thoughts in mind that I want to point out some things about Azure that I hope might spur you into putting the magazine down and putting some Azure up.
For Azure development, the tooling and the integration with Visual Studio have been pretty good—and are quickly evolving into great. You can find the latest set of tools in the Azure Developer Center at bit.ly/xh1CAE.
As Figure 1 shows, you can select the type of roles and language you want for a new project. No matter what you choose, you can immediately take advantage of the tools integration. In my experience, the three features you’ll find most helpful are the development emulators, runtime debugging and integrated deployment.
Figure 1 Creating a New Project
The Azure development environment consists of two emulators that allow you to easily run and debug your applications on your development machine before deployment (see Figure 2). The Azure Compute Emulator is what lets you run your service locally for testing and debugging. With the Storage Emulator, you can test the storage locally.
Figure 2 The Azure Compute Emulator and Storage Emulator Running
When the time is right, deployment to the Staging or Production environment is just a right-click away. The tools take care of packaging, moving and deploying the roles within your solution, and progress is reported back via Visual Studio, as Figure 3 shows.
Figure 3 Deployment Progress for Azure as Reported Back Through Visual Studio
Early on, a big problem with Azure was that you could’ve developed some code that worked perfectly locally, but failed or had terrible performance once deployed. Luckily, the introduction of IntelliTrace and profiling helped alleviate these issues. You can enable these features when you publish your solution, as shown in Figure 4.
Figure 4 IntelliTrace and Profiling Settings in Azure
For debugging hard-to-reproduce errors, especially those that seem to show up only in the production environment, there’s nothing quite as good as IntelliTrace. IntelliTrace essentially records the execution of your application, which you can then play back. For example, once you deploy the roles with IntelliTrace enabled, you can view the IntelliTrace logs and step through exactly what happened at what time (see Figure 5).
Figure 5 Debugging Azure with IntelliTrace in Visual Studio
Once you’ve stepped into the thread, you can walk through any existing code to see what was changing during execution. When your site is bug-free (or as bug-free as it’s going to get) and you’re ready to identify performance issues, you can turn off IntelliTrace and turn on profiling. As you saw in Figure 4, you can select the type of profiling to do. For example, if you’re wondering what the call time is on individual methods, you might select Instrumentation. This method collects detailed timing data for each function call, which is useful for focused analysis of a section of your code and for understanding the impact of input and output operations on application performance. You can then walk through the site to exercise the code base until you’re satisfied, at which point you choose View Profiling Report on the instance in Server Explorer. Visual Studio will fetch the information and put together a report like the one depicted in Figure 6.
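To make the instrumentation approach concrete, here’s a minimal sketch (in Python, purely to illustrate the idea—the Visual Studio profiler injects probes into your compiled code and you never write this yourself) of the kind of data an instrumentation profiler collects: per-function call counts and elapsed time.

```python
import time
import functools
from collections import defaultdict

# Accumulated call counts and elapsed time per function --
# the same kind of data an instrumentation profiler gathers.
stats = defaultdict(lambda: {"calls": 0, "total_s": 0.0})

def instrument(func):
    """Wrap a function so each call is timed and counted."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            elapsed = time.perf_counter() - start
            entry = stats[func.__name__]
            entry["calls"] += 1
            entry["total_s"] += elapsed
    return wrapper

@instrument
def render_page(n):
    # Stand-in for a method whose call time you want to measure.
    return "x" * n

for _ in range(3):
    render_page(10)

print(stats["render_page"]["calls"])  # 3
```

The report in Figure 6 is essentially an aggregation of exactly this sort of data, collected across every instrumented method.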
Figure 6 A Profiling Report
The report shows CPU usage over time, as well as the “Hot Path,” which alone might help you focus your efforts. If you’d like to dig a little further, however, a direct link in the Hot Path section lets you see the individual timing for each function. The main page also displays a nice graph indicating the functions that do the most individual work. Clearly, having IntelliTrace and profiling available directly from Visual Studio is a huge benefit, not only for productivity but also for product quality.
If you’ve been paying even marginal attention over the past few years, you know that one of the key promises of the cloud is the ability to scale on demand. For compute virtual machines (VMs), you can often just pay more for a larger role and get more resources. For Microsoft SQL Azure, though, the optimizations are a little more … well … manual.
It’s great to know that deploying to the cloud gives you the ability to scale the farm, but a more immediate question is often, “What size role do I need?” The answer is that it depends on traffic and what you’re doing. You can take an educated guess based on your past experience and on the specifications of the role sizes, as shown in Figure 7.
Figure 7 Virtual Machine Size Specifications (6,144MB is reserved for system files)
One of these configurations is likely to meet your needs, especially in combination with the rest of the role instances in the farm. Take note that all attributes increase, including the available network bandwidth, which is often a secondary consideration for folks. Note also that you don’t really have to guess. Instead you can turn on profiling as discussed previously and collect actual metrics across the instances to assess performance. Based on profiling results, you can adjust the VM size and collect profiling information again until the sweet spot is reached. For edge conditions, you make a best-fit choice or find an alternative solution. For example, if your site serves a lot of content and isn’t very dynamic, you might choose one of the higher role specs or move to the Azure Content Delivery Network.
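The “profile, resize, repeat” loop amounts to picking the smallest role whose specs cover your measured peaks. A sketch of that decision (the size names follow Figure 7, but the core and memory numbers here are illustrative placeholders—check the current specifications rather than these values):

```python
# Illustrative role sizes: (name, CPU cores, memory in GB).
# These numbers are placeholders, not the real Figure 7 specs.
ROLE_SIZES = [
    ("Small", 1, 1.75),
    ("Medium", 2, 3.5),
    ("Large", 4, 7.0),
    ("ExtraLarge", 8, 14.0),
]

def pick_role(peak_cores, peak_memory_gb):
    """Return the smallest role whose specs cover the peaks measured
    by profiling; None means no single size fits, so consider scaling
    out with more instances instead of scaling up."""
    for name, cores, mem in ROLE_SIZES:
        if cores >= peak_cores and mem >= peak_memory_gb:
            return name
    return None

print(pick_role(1.5, 3.0))  # Medium
```

In practice you’d feed real profiled peaks into a judgment like this, then re-profile on the new size to confirm the sweet spot.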
Now for some mixed news: SQL Azure does not always give the performance you might get with your own private instance. You will, however, get consistent performance. There are a few things you can do to get the best possible performance and runtime behavior:
Over the years, one of the biggest mistakes I’ve seen people make when optimizing a site is to just increase the size of the hardware without doing anything else. Sometimes this helped a little, but as soon as load really spiked, the problem would come back with symptoms worse than ever, because the additional horsepower had the effect of making more things conflict more quickly and not actually resolving or mitigating the real issue. So, when I suggest repeating step 2, I’m not kidding. You can’t just throw more hardware at the problem and hope it isn’t a deadlocking issue. The SQL Azure Profiler tool can help you with this effort. I suggest you start with the optimization on your local instance prior to deploying to the cloud, and then use SQL Azure Profiler to help identify and make any adjustments needed once in the cloud.
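One mitigation that does belong in your code—as opposed to more hardware—is handling transient faults. SQL Azure connections can be dropped and a query can be chosen as a deadlock victim, and the standard guidance is to retry with backoff. Here’s a hedged sketch of the pattern (the `TransientError` type and `flaky_query` function are hypothetical stand-ins; the real exception types depend on your data-access library):

```python
import random
import time

class TransientError(Exception):
    """Stand-in for a transient fault (a dropped connection or a
    deadlock victim); real error types come from your data library."""

def execute_with_retry(operation, max_attempts=4, base_delay_s=0.05):
    """Retry an operation with exponential backoff and jitter.
    Note: this mitigates *transient* faults only -- a deadlock caused
    by query design will keep recurring, which is why the profiling
    step bears repeating rather than just adding hardware."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except TransientError:
            if attempt == max_attempts:
                raise
            # Back off exponentially; jitter avoids synchronized retries.
            time.sleep(base_delay_s * (2 ** (attempt - 1)) * random.random())

# Demo: a hypothetical operation that fails twice, then succeeds.
calls = {"n": 0}
def flaky_query():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientError()
    return "rows"

print(execute_with_retry(flaky_query))  # rows
```

Retry logic keeps the app resilient, but it’s a complement to—not a substitute for—finding the contention in the first place.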
As a final note, one strategy for increasing the scale or size of a SQL Azure database is federation, commonly referred to as “data sharding,” which is a technique of horizontally partitioning data across multiple physical servers to provide application scale-out. This reduces individual query times, but adds the complexity of scattering the queries to target instances and gathering the results together once they’re complete. For example, you get the benefit of running Create, Read, Update, Delete (CRUD) operations against smaller datasets, and in parallel. The tax you pay is having to broker the access across the shards. That being said, some of the largest sites employ sharding, prefetching and caching to manage queries, and you use those sites every day without much complaint about performance.
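The scatter/gather trade-off can be sketched in a few lines. This toy example (each “shard” is just an in-memory dict, and the hash routing stands in for a real federation key range) shows both the cheap single-shard CRUD path and the broker-everything query path:

```python
# A toy federation: each shard is just a dict keyed by customer ID.
SHARD_COUNT = 4
shards = [dict() for _ in range(SHARD_COUNT)]

def shard_for(customer_id):
    """Route a key to one shard. A real federation splits on a key
    range; a simple hash serves the same purpose in this sketch."""
    return shards[hash(customer_id) % SHARD_COUNT]

def insert(customer_id, record):
    # CRUD touches a single shard -- a smaller dataset, and multiple
    # inserts to different shards can proceed in parallel.
    shard_for(customer_id)[customer_id] = record

def query_all(predicate):
    """Scatter the query to every shard and gather the results --
    the brokerage 'tax' the sharding approach imposes."""
    results = []
    for shard in shards:
        results.extend(r for r in shard.values() if predicate(r))
    return results

insert("alice", {"name": "alice", "region": "US"})
insert("bob", {"name": "bob", "region": "EU"})
print(len(query_all(lambda r: r["region"] == "US")))  # 1
```

Key-based lookups stay fast because they hit one shard; cross-shard queries pay the gather cost, which is where prefetching and caching earn their keep.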
Early on it was not always easy to know what was going on in an Azure deployment, but those days are long gone. Not only does Microsoft provide an ever-evolving management portal, but a management pack for System Center Operations Manager (SCOM) brings the management of your entire infrastructure into one place.
Azure writes all of its diagnostics data out to a storage container. You can consume the logs directly and generate reports or take custom actions. However, you can also use SCOM to monitor Azure applications. Those responsible for managing the infrastructure of an enterprise are inclined to be conservative and want full-featured tools for monitoring. Using a familiar solution like SCOM will help address the reservations that the infrastructure management team might have about deploying a cloud solution. SCOM lets you monitor the health of all Azure deployments and enables you to drill down into Hosted Services, Roles and Role Instances. Built into the pack are alerts for services and performance, but a key benefit is that you can create your own rules and alerts relating to your deployments and the data being collected. An additional nicety is that rules for grooming the logs are built-in. As usual, if the logs aren’t pruned along the way, they can grow to be unmanageable. To help with that, the management pack comes with predefined rules:
• .NET Trace Grooming
• Performance Counter Grooming
• Event Log Grooming
These can be enabled to make sure your space usage doesn’t get out of hand, but you’ll need to balance that against the storage transactions the grooming tasks themselves consume. You can download the System Center Monitoring Pack for Azure Applications at bit.ly/o5MW4a.
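Conceptually, each grooming rule is just a retention-window sweep over the diagnostics data. A minimal sketch of the idea (the in-memory rows here stand in for the tables Azure diagnostics writes to storage; against real table storage each deletion would itself be a billable transaction, which is the balance mentioned above):

```python
from datetime import datetime, timedelta, timezone

# Toy diagnostics data: (timestamp, message) rows standing in for
# the trace/performance-counter/event-log tables in storage.
now = datetime(2012, 6, 1, tzinfo=timezone.utc)
log_rows = [
    (now - timedelta(days=40), "old trace"),
    (now - timedelta(days=2), "recent trace"),
]

def groom(rows, retention, current_time):
    """Drop rows older than the retention window -- the same idea as
    the management pack's predefined grooming rules."""
    cutoff = current_time - retention
    return [row for row in rows if row[0] >= cutoff]

log_rows = groom(log_rows, timedelta(days=30), now)
print(len(log_rows))  # 1
```

The management pack gives you this behavior as configuration rather than code; the sketch just shows what the rules are doing on your behalf.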
Very often when a new technology comes along, you have to go through a fair amount of education and experience to become proficient: think about moving to Windows Presentation Foundation/Silverlight from Windows Forms, choosing whether to use ASP.NET or SharePoint, or even something more foundational such as deciding between procedural and object-oriented development. This is the beauty of the cloud, especially with the tooling available: If you’re already writing sites and services, you can continue to make the most of your .NET skills and investments and move straight to the cloud.
This doesn’t mean there aren’t some best practices to learn, but not much more than you’d already be doing to be thorough in your design and development. And, when you’re ready, the platform provides many additional features that can be learned and leveraged to make your solution secure and robust, and have the best performance without having to write the features or frameworks yourself.
Start now. That’s my advice. Go to azure.com, get the tools and get started. Use Azure in your projects to prototype. Use it in your projects to provide otherwise hard-to-requisition resources. Use it for whatever you want, but use it. The cloud is the future we will all live in, and it will be as ubiquitous as running water and electricity. Cloud technologies are evolving to expand computing beyond the conventional to a model that will deliver computing power where it’s needed, when it’s needed and the way it’s needed. That’s something you want to be a part of.
Joseph Fultz is a software architect at Hewlett-Packard Co., working as part of the HP.com Global IT group. Previously he was a software architect for Microsoft, working with its top-tier enterprise and ISV customers defining architecture and designing solutions.
Thanks to the following technical expert for reviewing this article: Bruno Terkaly