Foreword by Mark Aggar
This content and the technology described are outdated and are no longer being maintained. For more information, see Transient Fault Handling.
Energy use in the IT sector is growing faster than in any other industry as society becomes ever more dependent on the computational and storage capabilities provided by data centers. Unfortunately, a combination of inefficient equipment, outdated operating practices, and lack of incentives means that much of the energy used in traditional data centers is wasted.
Most IT energy efficiency efforts have focused on physical infrastructure—deploying more energy-efficient computer hardware and cooling systems, using operating system power management features, and reducing the number of servers in data centers through hardware virtualization.
But a significant amount of this wasted energy stems from how applications are designed and operated. Most applications are provisioned with far more IT resources than they need, as a buffer to ensure acceptable performance and to protect against hardware failure. Most often, the actual needs of the application are simply never measured, analyzed, or reviewed.
Once the application is deployed with more resources than it typically needs, there is very little incentive for the application developers to instrument their application to make capacity planning easier. And when users start complaining that the application is performing slowly, it's often easier (and cheaper) to simply assign more resources to the application. Very rarely are these resources ever removed, even after demand for the application subsides.
Cloud computing has the potential to break this dynamic of over-provisioning applications. Because cloud platforms like Microsoft Azure charge for resource use in small increments (compute-hours) on a pay-as-you-go basis, developers can now have a direct and controllable impact on IT costs and associated resource use.
Applications that are designed to dynamically grow and shrink their resource use in response to actual and anticipated demand are not only less expensive to operate, but are also significantly more efficient in their use of IT resources than traditional applications. Developers can further reduce hosting costs by scheduling background tasks to run during less busy periods, when the minimum amount of resources is assigned to the application.
While the cloud provides great opportunities for saving money on hosting costs, developing a cloud application that relies on other cloud services is not without its challenges. One particular problem that developers have to deal with is "transient faults." Although such faults are infrequent, applications must tolerate intermittent connectivity and responsiveness problems in order to be considered reliable and to provide a good user experience.
Until now, developers on Azure had to build these capabilities on their own. With the release of the Enterprise Library Integration Pack for Microsoft Azure, developers can now easily build robust, resource-efficient applications that can be intelligently scaled and throttled, and that can handle transient faults.
The first major component contained within the Integration Pack is the Autoscaling Application Block, otherwise known as "WASABi." This application block helps developers improve responsiveness and control Azure costs by automatically scaling the number of web and worker roles in Azure through dynamic provisioning and decommissioning of role instances across multiple hosted services. WASABi also provides mechanisms to help control resource use without scaling role instances through application throttling. Developers can use this application block to intelligently schedule or defer background processing to keep the number of role instances within certain boundaries and take advantage of idle periods.
One of the major advantages of WASABi is its extensibility, which makes your solutions much more flexible. Staying true to the design principles of other application blocks, WASABi provides a mechanism for plugging in your own custom metrics and calling custom actions. With these, you can design a rule set that takes into account your business scenarios, not just the standard performance counters available through Azure Diagnostics.
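To make the idea of a rule set concrete: WASABi rules combine constraint rules (which keep instance counts within boundaries) with reactive rules (which scale in response to observed metrics). The following XML is only a rough sketch of what such a rule set looks like; the element and attribute names here are approximations for illustration, not the application block's exact schema:

```xml
<!-- Illustrative sketch of a WASABi-style rule set (names are approximate). -->
<rules>
  <constraintRules>
    <!-- Keep the worker role between 2 and 5 instances at all times. -->
    <rule name="DefaultBoundaries" enabled="true" rank="1">
      <actions>
        <range target="Tailspin.Workers.Surveys" min="2" max="5" />
      </actions>
    </rule>
  </constraintRules>
  <reactiveRules>
    <!-- Add an instance when average CPU use exceeds 80 percent. -->
    <rule name="ScaleUpOnHighCpu" enabled="true">
      <when>
        <greater operand="CpuAverage" than="80" />
      </when>
      <actions>
        <scale target="Tailspin.Workers.Surveys" by="1" />
      </actions>
    </rule>
  </reactiveRules>
</rules>
```

A custom metric or action would plug into the same structure: an operand you define (for example, the length of a business-specific work queue) can appear in a `when` clause, and a custom action can run alongside or instead of a scale operation.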
The optimizing stabilizer ensures that you do not scale too quickly, and it can also align scaling actions with the most favorable compute-hour pricing boundaries. For applications whose expected usage requires more than a few instances, this application block will help developers save money on hosting costs while improving the "green credentials" of their application. It will also help your application meet target SLAs.
The other major component is the Transient Fault Handling Application Block (also known as "Topaz") that helps developers make their applications more robust by providing the logic for detecting and handling transient fault conditions for a number of common cloud-based services.
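The core idea behind Topaz is simple: detect whether a failure is transient and, if so, retry the operation after a delay rather than failing outright. The sketch below illustrates that general pattern in Python; the names, signature, and backoff strategy are illustrative only, not Topaz's actual .NET API:

```python
import random
import time


class TransientError(Exception):
    """Stand-in for an intermittent service failure (hypothetical)."""


def execute_with_retry(action, max_retries=3, base_delay=1.0,
                       is_transient=lambda e: isinstance(e, TransientError)):
    """Run `action`, retrying transient failures with exponential backoff."""
    for attempt in range(max_retries + 1):
        try:
            return action()
        except Exception as error:
            # Give up immediately on non-transient errors,
            # or once the retry budget is exhausted.
            if not is_transient(error) or attempt == max_retries:
                raise
            # Exponential backoff with a little jitter so many clients
            # do not all retry at the same instant.
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)
```

In Topaz's terms, `is_transient` plays the role of a detection strategy for a particular service, and the `max_retries`/`base_delay` pair corresponds to a retry strategy; the application block supplies ready-made versions of both for common cloud services.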
More than ever before, developers have an important role to play in controlling IT costs and improving IT energy efficiency, without sacrificing reliability. The Enterprise Library Integration Pack for Microsoft Azure can assist them in rapidly building Azure-based applications that are reliable, resource efficient, and cost effective.
The Developer's Guide you are holding in your hands was written by the engineering team that designed and produced this integration pack. It is full of useful guidance and tips to help you learn quickly. Importantly, the coverage includes not only conceptual topics, but also the concrete steps taken to make the accompanying reference implementation (Tailspin Surveys) more elastic, robust, and resilient.
Moreover, the guidance from the Microsoft patterns & practices team is not only encapsulated in the Developer's Guide and the reference implementation. Since the pack ships with its source code and all of its unit tests, a great deal can be learned by examining those artifacts.
I highly recommend both the Enterprise Library Integration Pack for Microsoft Azure and this Developer's Guide to architects, software developers, administrators, and product owners who are designing new applications for Azure or migrating existing ones to it. The practical advice contained in this book will help make your applications highly scalable and robust.
Mark Aggar, Senior Director
Last built: June 7, 2012