{ End Bracket }

Rich and Reach Applications

Terry Crowley

The difference between rich and reach applications used to be clear. Rich applications ran natively on a PC, leveraging local graphics, CPU, memory, and storage capabilities. Reach applications were written in HTML and deployed in the browser, minimizing system dependencies. Now we're seeing a blurring of the boundaries. AJAX applications download significant amounts of code and can deliver polished, responsive applications. Runtimes such as Flash, Silverlight™, and Google Gears let you create applications that leverage local CPU and graphics capabilities, use local storage, or run offline.

On the rich applications side, features such as ClickOnce deployment for managed applications, and application virtualization and streaming tools such as Microsoft® SoftGrid®, greatly reduce the deployment hassles of native rich applications. Even more important are the increasing capabilities and sophistication of rich clients in interacting with remote data. Microsoft OneNote® is a great example of this. While often thought of as a personal note-taking application, OneNote provides great "it just works" sharing of notebooks stored on a server or service. Transparent caching of remote data in the background allows the app to stay responsive even in the face of varying network connectivity. Local caching supports offline access. Seamless merging of changes provides wiki-like collaboration without any awkward manual overhead.
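The basic pattern behind that kind of responsiveness can be sketched in a few lines. The sketch below is purely illustrative (the class and field names are mine, not OneNote's design): a read is answered from a local cache immediately, while a background fetch quietly refreshes the copy and degrades gracefully when the network is unavailable.

```typescript
// Minimal sketch (not OneNote's actual implementation) of transparent
// background caching: reads are answered from the local copy immediately,
// and a refresh from the server happens off the critical path.
type Page = { id: string; content: string; version: number };

class CachedNotebook {
  private local = new Map<string, Page>();

  constructor(private fetchFromServer: (id: string) => Promise<Page>) {}

  // Return whatever is cached right away (possibly stale), then refresh.
  read(id: string, onUpdate: (page: Page) => void): Page | undefined {
    const cached = this.local.get(id);
    this.fetchFromServer(id)
      .then(fresh => {
        // Only surface the server copy if it is newer than what we hold.
        if (!cached || fresh.version > cached.version) {
          this.local.set(id, fresh);
          onUpdate(fresh);
        }
      })
      .catch(() => {
        // Offline or unreachable: keep serving the cached copy.
      });
    return cached;
  }
}
```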

HTML reach applications initially realized benefits in speed of development by "walking away" from complexity and focusing on key business problems. As you add back features, you add complexity. The asynchronous nature of AJAX applications is critical for a responsive user experience, but managing the state of multiple outstanding asynchronous interactions and their impact on the user experience can be complex.
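As a small, hypothetical illustration of that complexity (the names are mine, not from any particular framework): a single view may have several requests outstanding at once, and responses can return out of order, so the client has to track which response is still relevant.

```typescript
// Sketch of one recurring AJAX headache: several requests for the same view
// can be in flight at once, and responses may arrive out of order. Tagging
// each request with a sequence number lets stale replies be discarded.
class SearchBox {
  private latestRequest = 0;

  constructor(private search: (query: string) => Promise<string[]>) {}

  async onInput(query: string, render: (results: string[]) => void) {
    const requestId = ++this.latestRequest;   // this call is now the newest
    const results = await this.search(query);
    if (requestId === this.latestRequest) {   // ignore replies to older requests
      render(results);
    }
  }
}
```

Multiply that bookkeeping across every panel of an application that talks to the server, and the state-management burden the paragraph above describes becomes clear.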

Check it Out

See Terry Crowley's "Behind the Code" interview at channel9.msdn.com/Shows/Behind_The_Code.

The Model View Controller design pattern is great for providing a robust programming model, but the boundary between model and view gets stretched when it is distributed between server and browser. Providing local responsiveness and offline capabilities argues for moving more of the model into the local client, but that complicates the client/server boundary and how state and the underlying application model are managed across it. There is inherent tension between minimizing the amount of data that needs to be downloaded to the client and managing how that partial model is then manipulated locally.
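A rough sketch of that tension, with hypothetical names: the client holds only a window onto the server's model, so local operations must cope with data that may never have been downloaded, and reads may have to fault missing ranges in from the server.

```typescript
// Illustrative partial model: only some rows live on the client.
interface Row { id: number; value: string }

class PartialModel {
  private rows = new Map<number, Row>();

  constructor(private fetchRange: (from: number, to: number) => Promise<Row[]>) {}

  // Local edits apply instantly to whatever is cached ...
  update(id: number, value: string): boolean {
    const row = this.rows.get(id);
    if (!row) return false;   // ... but may target data never downloaded.
    row.value = value;
    return true;
  }

  // ... while reads may have to fault a missing range in from the server.
  async ensureLoaded(from: number, to: number): Promise<void> {
    const fetched = await this.fetchRange(from, to);
    for (const row of fetched) this.rows.set(row.id, row);
  }
}
```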

While scripting languages like JavaScript can be incredibly productive, as the code running in these reach applications becomes larger and more complex, languages like ECMAScript 4 or C# in Silverlight that support "programming in the large" become critical for managing this complexity. I have great respect for teams that have built applications with tens of thousands of lines of JavaScript, but JavaScript simply does not have the language features for structuring large applications that we have relied on for decades.

As we consider providing offline capabilities to reach applications, I am struck by how trivial most commentators assume this is. Give me access to a local key-value store or a simple database and I'm done. Not likely. Most interesting applications have a rich semantic model. The richer the model, the more complicated the local state, and the more complicated the process (including the user model) of merging that local state back to the server.
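To make that concrete with a deliberately simplified example (the data shape and merge policy are assumptions of mine, not any product's): even for something as plain as a list of items edited both offline and on the server, blindly overwriting one copy with the other loses somebody's changes, and a three-way merge needs domain rules and still has to hand genuine conflicts back to the user.

```typescript
// Three-way merge of a list edited offline (local) and on the server,
// relative to a shared base snapshot. Purely illustrative: real models
// also need ordering, deletion, and richer structure handled per domain.
type Item = { id: string; text: string };

function merge(base: Item[], local: Item[], server: Item[]):
    { merged: Item[]; conflicts: string[] } {
  const baseText = new Map(base.map(i => [i.id, i.text] as const));
  const localText = new Map(local.map(i => [i.id, i.text] as const));
  const merged: Item[] = [];
  const conflicts: string[] = [];

  for (const s of server) {
    const b = baseText.get(s.id);
    const l = localText.get(s.id);
    if (l === undefined || l === b || l === s.text) {
      merged.push(s);                        // no local edit, or both sides agree
    } else if (s.text === b) {
      merged.push({ id: s.id, text: l });    // only edited locally: keep the local edit
    } else {
      conflicts.push(s.id);                  // edited on both sides: ask the user
      merged.push(s);
    }
  }
  // Items created only while offline are appended.
  for (const l of local) {
    if (!baseText.has(l.id) && !merged.some(m => m.id === l.id)) merged.push(l);
  }
  return { merged, conflicts };
}
```

Even this toy version has to decide what "conflict" means and what the user sees when one occurs; a rich semantic model multiplies those decisions.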

For rich applications interacting with remote data, I always keep in mind a quote from Pat Helland of Microsoft: "We live in an Einsteinian universe; there is no such thing as simultaneity." Database techniques like two-phase commit essentially try to provide the illusion of simultaneity, but at the cost of responsiveness and the ability to scale out gracefully.

A better approach is to accept that distributed components in a system are always going to have different views of the "world" and to shoot for a looser target of eventual consistency. This means that instead of struggling to keep the underlying model of distributed components of the system consistent at all times (and building fragile, unresponsive systems as a result), you should accept the inconsistency and focus your effort on how to robustly resolve those inevitable inconsistencies. This model of loose coupling ends up being a good architectural strategy in many places; it's useful to recognize that it has a core basis in the physics of our universe.
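A minimal sketch of what that stance looks like in code (the shapes and the last-writer-wins policy are illustrative assumptions, not a recipe): each replica applies writes locally without waiting on anyone, queues them, and reconciles with its peers whenever connectivity allows, rather than blocking on a distributed commit.

```typescript
// Each replica accepts writes immediately and reconciles later.
type Edit = { key: string; value: string; at: number };

class Replica {
  private state = new Map<string, Edit>();
  private pending: Edit[] = [];

  // Writes never wait on the network: responsiveness comes first.
  write(key: string, value: string): void {
    const edit = { key, value, at: Date.now() };
    this.state.set(key, edit);
    this.pending.push(edit);
  }

  // Hand queued edits to whichever peer is syncing with us.
  flush(): Edit[] {
    const out = this.pending;
    this.pending = [];
    return out;
  }

  // Apply a peer's edits. Last-writer-wins is the simplest resolution
  // policy; a real application would substitute domain-specific rules.
  receive(remote: Edit[]): void {
    for (const edit of remote) {
      const current = this.state.get(edit.key);
      if (!current || edit.at > current.at) this.state.set(edit.key, edit);
    }
  }
}
```

Two replicas can accept writes concurrently, exchange their queued edits in either order, and converge on the same state; the effort goes into the resolution policy rather than into preventing divergence in the first place.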

Ultimately, as the boundaries and capabilities of reach and rich applications converge, the problems that developers building these apps face will converge as well. If you're solving a difficult problem, you need a great design strategy and great tools as well.

Terry Crowley is a Microsoft Technical Fellow and Director of Development for the Office Division. He has been building rich Internet-enabled applications for the past 25 years at Microsoft, Vermeer, Beyond, and BBN.