February 2014

Volume 29 Number 2

.NET Framework: Explore the Microsoft .NET Framework 4.5.1

Gaye Oncul | February 2014

The Microsoft .NET Framework 4.5.1 release, along with Visual Studio 2013, introduces innovative features to increase developer productivity and application performance. Additionally, it provides new features for improving the UX of consuming .NET NuGet packages, which is important because NuGet is a primary delivery vehicle for .NET Framework libraries.

The previous product, the .NET Framework 4.5, was a big release with many new features, and it has been installed on more than 200 million machines. The .NET Framework 4.5.1 was released about 14 months later, in October 2013, and despite the short time frame, it comes packed with many features requested by customers. In this article, I’ll review the new features in the .NET Framework 4.5.1. For more details, you can refer to the .NET Framework 4.5.1 RTM (bit.ly/1bBlEPN) and .NET Framework 4.5.1 Preview (bit.ly/10Vr2ft) posts on the .NET Framework Blog.

The .NET Framework 4.5.1 is only a part of what the .NET team (of which I’m a member) has been working on over the past year. We also shipped several libraries on NuGet to fill platform gaps and to enable new scenarios. I’ll provide an overview of our .NET NuGet libraries and also highlight one of our deep investments, the new .NET just-in-time (JIT) compiler, which shipped as a Community Technology Preview (CTP) release around the same time as the .NET Framework 4.5.1.

More Productive Development

I’ll start with new debugging features delivered with the .NET Framework 4.5.1 to improve developer productivity.

Async Debugging Improvements After setting up a solid and easy-to-use base for the asynchronous programming model in the previous Framework releases, we wanted to smooth out some remaining aspects for the overall developer experience with the .NET Framework 4.5.1. Two questions are essential for debugging asynchronous code: “How did I get into this async method?” and “What is the state of all the tasks in my application?” Visual Studio 2013 introduces enhancements to the Call Stack and Tasks windows to help you find answers to these questions in a much more intuitive way. These improvements are supported for desktop, Web and Windows Store apps on Windows 8.1 and are available for C++ and JavaScript as well.

It’s common to have nested async method calls within an app or library, which rely on the await keyword to manage the flow of execution. Previously, Visual Studio didn’t show the chain of async calls when stopped at a breakpoint within a Task. Visual Studio 2013 provides a logical and sequential view of methods in a nested chain of calls for both asynchronous and synchronous methods. This makes it easier to understand how the program reached a location inside an asynchronous call.

Figure 1 shows an asynchronous code sample. Figure 2 and Figure 3 demonstrate the difference between the call stack views of Visual Studio 2012 and Visual Studio 2013 for that code. More details of this feature can be found in the “Debugging Asynchronous Code in Visual Studio 2013—Call Stack enhancements” blog post at bit.ly/19NTNez.

Figure 1 Asynchronous Code Sample

private async void ShowSampleImg_Click(object sender, 
    RoutedEventArgs e)
{
  string imgUri = "https://example.com/sample.jpg";
  BitmapImage bitmap = new BitmapImage();
  bitmap.BeginInit();
  bitmap.StreamSource = await GetSampleImgMemStream(imgUri);
  bitmap.EndInit();
  sampleImg.Source = bitmap;
}
private async Task<MemoryStream> GetSampleImgMemStream(string srcUri)
{
  Stream stream = await GetSampleImage(srcUri);
  var memStream = new MemoryStream();
  await stream.CopyToAsync(memStream);
  memStream.Position = 0;
  return memStream;
}
private async Task<Stream> GetSampleImage(string srcUri)
{
  HttpClient client = new HttpClient();
  Stream stream = await client.GetStreamAsync(srcUri);
  return stream;
}

Figure 2 Visual Studio 2012 Call Stack Window

Figure 3 Visual Studio 2013 Call Stack Window

The Tasks window in Visual Studio 2013 is designed to help you understand the state of async tasks in your apps by displaying all the currently running and scheduled tasks. It’s a replacement for the Parallel Tasks window that was available in previous Visual Studio versions. Figure 4 shows a snapshot of a Visual Studio 2013 Tasks window for the sample code given in Figure 1.

Figure 4 Visual Studio 2013 Tasks Window

x64 Edit and Continue This was a popular debugger feature request, with more than 2,600 votes on the Visual Studio UserVoice site where users can request new features (bit.ly/14YIM8X). Developers have loved using the Edit and Continue feature since it was introduced, for x86 projects, with Visual Studio 2005 and the .NET Framework 2.0 release. Edit and Continue makes it easier to write correct code by letting you change the source code during a debugging session, while app state is available. You can even move the instruction pointer so you can replay code after making a change. It provides a more productive development experience because you don’t have to stop and restart the session to validate your changes.

x64 support for Edit and Continue is now enabled with Visual Studio 2013 and the .NET Framework 4.5.1 release. You can use this feature for debugging desktop applications (Windows Presentation Foundation, Windows Forms and so on), Windows Store apps, ASP.NET Web applications and Windows Azure Cloud Services projects targeting x64, AnyCPU or x86 architectures.

Managed Return Value Inspection Debugger support for managed return values is another popular request with more than 1,000 votes on the UserVoice site. The Visual C++ debugger has an existing feature that allows you to observe the return values of methods, and we wanted the same capability for .NET as well. This feature is useful for many code patterns. However, you can really see its value with nested methods, as demonstrated in Figure 5. With this feature, you no longer have to worry about storing the results of your methods in locals solely to make debugging easier. When you step over a method call, both direct return values and the return values of the embedded methods will be displayed in the Autos window along with the parameter values passed to the functions. You can also use the Immediate window to access the last return value through the use of the new $ReturnValue pseudo-variable.

Figure 5 Visual Studio 2013 Autos and Immediate Windows
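To make the workflow concrete, here’s a minimal, self-contained sketch; the methods are hypothetical and exist only to show where the debugger surfaces return values:

using System;

class ReturnValueDemo
{
  static int Square(int x) { return x * x; }
  static int AddOne(int x) { return x + 1; }

  static void Main()
  {
    // Stepping over the next line in the Visual Studio 2013 debugger shows
    // the return values of both Square and AddOne in the Autos window, with
    // no temporary locals needed. The most recent return value can also be
    // queried in the Immediate window as $ReturnValue.
    int result = AddOne(Square(5));
    Console.WriteLine(result); // 26
  }
}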

Windows Store Development Enhancements We responded to feedback and provided .NET support for new Windows Runtime (WinRT) features to improve the .NET Windows Store app development experience.

One of the pain points was converting a .NET Stream to a WinRT IRandomAccessStream. In the .NET Framework 4.5.1, we added a new extension method, AsRandomAccessStream, for System.IO.Stream to solve this problem. You can now write the following code, which allows you to easily provide an IRandomAccessStream:

// EXAMPLE: Get image from URL via networking I/O
var client = new HttpClient();
Stream stream = await client.GetStreamAsync(imageUrl);
var memStream = new MemoryStream();
await stream.CopyToAsync(memStream);
memStream.Position = 0;
var bitmap = new BitmapImage();
bitmap.SetSource(memStream.AsRandomAccessStream());
image.Source = bitmap;

This example code reads an image from the Web and displays it in a XAML Image control (represented by the “image” variable).

Another improvement is error propagation across the Windows Runtime. In Windows 8.1, the Windows Runtime enables exceptions to pass between WinRT components, so an exception can be thrown from a C++ WinRT component and be caught in C# (or vice versa). Additional information about the exception is now available via the Message and StackTrace properties on System.Exception.
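As a rough sketch of what this enables (assuming a hypothetical C++ WinRT component named ImageFilters that exposes a Sharpener class), the managed caller can now see the original error details:

try
{
  // ImageFilters.Sharpener is a hypothetical C++ WinRT component.
  var sharpener = new ImageFilters.Sharpener();
  sharpener.Sharpen(null);
}
catch (ArgumentException ex)
{
  // On Windows 8.1 with the .NET Framework 4.5.1, the message and stack
  // trace set on the C++ side flow across the WinRT boundary and surface
  // through the standard System.Exception properties.
  System.Diagnostics.Debug.WriteLine(ex.Message);
  System.Diagnostics.Debug.WriteLine(ex.StackTrace);
}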

The Windows Runtime also added support for nullable value types in structures. You can build managed WinRT components that expose structs with this new feature, such as in this sample code:

public struct PatientRecord
{
  public string Name;
  public int Age;
  public string HomeAddress;
  // InsuranceId is nullable
  public int? InsuranceId;
}

Better Application Performance

Application performance is a constant focus area for the .NET Framework team. In this release, we responded to feedback on the garbage collector and significantly improved ASP.NET app startup.

ASP.NET App Suspension This feature is one of the top highlights of the .NET Framework 4.5.1 due to the significant performance gain it provides, particularly for shared hosting scenarios where site density and startup latency are critical. ASP.NET App Suspension will enable shared hosters—either commercial Web hosting companies or enterprise IT systems—to host many more ASP.NET Web sites on a server with faster app startup time.

ASP.NET App Suspension depends on IIS Idle Worker Process Page-Out, a new IIS feature in Windows Server 2012 R2. IIS Idle Worker Process Page-Out introduces a new “suspended” state in addition to the existing “inactive” and “active” states for Web sites. When a site is suspended, it releases the CPU and memory it was using, making those resources available to other sites, while still enabling the site to be resumed quickly.

Figure 6 shows the state transitions of ASP.NET sites using App Suspension. A Web site starts in the inactive state. It’s loaded into memory and transitions to active with the first page request. After a period of idle time, the site will be suspended, per application pool configuration (bit.ly/1aajEeL). Upon subsequent requests to the site, it can quickly return to the active state. This cycle can happen many times. Up until now, sites would get terminated and become inactive after a certain amount of idle time.

Figure 6 The State Transitions of ASP.NET Web Sites

No code change is required to use this new feature. ASP.NET App Suspension is enabled automatically by configuring an IIS application pool for “Suspend” on Windows Server 2012 R2.

Earlier I touted a “significant performance gain” achieved with this feature, and I’d like to back this up with some numbers coming from our performance labs. We conducted extensive performance experiments to measure the startup time gain for “resume from suspend” compared to “start after terminate.” We did these experiments on a machine under significant request load, accessing a large number of application pools, with the intent of recreating a “shared hosting” environment. The results showed a 90 percent reduction in the startup time for sites that were accessed after suspension. We also measured the improvement to site density. We were able to host about seven times more ASP.NET sites on Windows Server 2012 R2 when ASP.NET App Suspension was enabled. Figure 7 shows the results of these experiments. More insights into these experiments can be found in the “ASP.NET App Suspend – responsive shared .NET Web hosting” blog post at bit.ly/17fI6dM.

Figure 7 ASP.NET App Suspension Performance Numbers Seen in the .NET Lab

Multi-Core JIT Compilation Enhancements Multi-core JIT compilation is now enabled by default for ASP.NET apps. Performance measurements show up to 40 percent reductions in cold startup time with multi-core JIT enabled. It provides startup benefits by performing JIT compilation on multiple cores, in parallel to code execution. Under the covers, multi-core JIT was extended to support dynamically loaded assemblies, which are common in ASP.NET apps. The additional support also benefits client apps, where multi-core JIT remains an opt-in feature. More details about the multi-core JIT feature can be found in the related .NET Framework Blog post, “An easy solution for improving app launch performance,” at bit.ly/RDZ4eE.
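For client apps, the opt-in is done through the ProfileOptimization API in System.Runtime (available since the .NET Framework 4.5). Here’s a minimal sketch; the profile folder and file name are illustrative:

using System.Runtime;

static class Program
{
  static void Main()
  {
    // Opt a desktop app into multi-core JIT compilation. The profile records
    // which methods were JIT-compiled during startup so that later launches
    // can compile them in the background on other cores.
    ProfileOptimization.SetProfileRoot(@"C:\MyAppProfileFolder");
    ProfileOptimization.StartProfile("Startup.profile");

    // ... the rest of the app's startup code ...
  }
}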

On-Demand Large Object Heap (LOH) Compaction LOH compaction is an important requirement for some scenarios, and it’s now available in this release. First, a little background, as the LOH might not be familiar to you. The garbage collector stores objects larger than 85,000 bytes in the LOH. The LOH can get fragmented, and in some cases this might lead to relatively large heap sizes or even an OutOfMemoryException. These situations, although rare, occur because there aren’t enough contiguous memory blocks available in the LOH to satisfy an allocation request, even though there might be enough space in total.

With LOH compaction, you can reclaim and merge smaller unused memory blocks, making them available for larger allocations, which makes better overall use of machine memory. Although this idea sounds appealing, the feature isn’t intended for common use. Compacting LOH is an expensive process and can cause long pauses in an application, so it should only be deployed into production after analysis and testing.
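When analysis does point to LOH fragmentation as the problem, compaction is requested through the GCSettings class; a minimal sketch:

using System;
using System.Runtime;

class LohCompactionSample
{
  static void Main()
  {
    // Ask the garbage collector to compact the large object heap during
    // the next full, blocking collection. The setting reverts to Default
    // once that collection has run.
    GCSettings.LargeObjectHeapCompactionMode =
      GCLargeObjectHeapCompactionMode.CompactOnce;

    // Trigger the compaction now rather than waiting for the next naturally
    // occurring full collection (expensive; use sparingly).
    GC.Collect();
  }
}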

Easier Use of .NET Framework NuGet Libraries

We intend to deliver .NET Framework versions more frequently to make new features and fixes available sooner. In fact, that’s already started with the .NET Framework 4.5.1. Additionally, we use NuGet as a release vehicle to deliver our library features and fixes faster in response to customer feedback.

NuGet is a relatively new package format for the .NET Framework. It provides a standard format for packaging libraries that target one or more .NET profiles and can be consistently consumed by developer tools such as Visual Studio. NuGet.org is the primary NuGet repository and the only one the .NET team uses. Visual Studio comes with an integrated NuGet client for referencing and using NuGet packages in your projects.

We’ve been shipping .NET libraries on NuGet for the past few years. We’ve found NuGet is a great way to deliver libraries to a large number of developers and to multiple .NET platforms at the same time. We’ve improved the NuGet UX in Visual Studio 2013 based on broad feedback, particularly for enterprise scenarios.

Better Discoverability and Official Support The Microsoft and .NET NuGet feed was created to improve the discoverability of Microsoft packages. NuGet.org hosts thousands of packages, which could make it challenging to discover the new .NET packages among all the others. This new curated feed provides you with a scoped view of the official Microsoft and .NET packages on NuGet.org. We intend to only add packages to this feed that meet the same quality and support requirements as the .NET Framework. Therefore, you can use these packages in all the same places you use .NET APIs. We’ve also created a Web view of this feed on the “Microsoft .NET Framework NuGet Packages” page (bit.ly/19D5QLE), hosted on the .NET Framework Blog.

The NuGet team helped us enable this experience by updating their client in Visual Studio to include filtering by curated feeds. Figure 8 shows the NuGet client in Visual Studio 2013.

Figure 8 The NuGet Client in Visual Studio 2013

Serviceability Some enterprise customers told us they were waiting to adopt our NuGet packages until central servicing was offered for these libraries through Microsoft Update. We’ve added this update capability in the .NET Framework 4.5.1, enabling apps to take advantage of the new feature. Microsoft Update will be an additional release vehicle for .NET NuGet libraries in the unlikely case that we need to quickly and broadly update a library for a critical security issue. Even with this new option in place, we’ll continue to use NuGet as a primary vehicle for library updates and fixes.

Automatic Resolution of Version Conflicts Apps can end up referencing more than one version of a given NuGet package. For desktop and Web apps, you previously had to resolve such version conflicts manually to ensure that a consistent set of libraries was loaded at run time, which could be challenging and inconvenient. To address that, Visual Studio 2013 automatically configures apps to use the highest referenced version of each library, which solves the issue through a straightforward policy. It also matches the policy already used for Windows Phone and Windows Store apps.

Visual Studio 2013 will automatically generate binding redirects in app.config at build time if version conflicts are found within the app. These binding redirects map each of the versions found for a given library to the highest version found. At run time, your app will use a single version—the highest one referenced—of each library. The main motivation behind this feature was to provide a better experience for consuming NuGet libraries; however, it works for any library. The “How to: Enable and Disable Automatic Binding Redirection” topic in the MSDN Library (bit.ly/1eOi3zW) provides more details about this feature.

And Much More ...

Up to this point, I’ve summarized what was delivered in the .NET Framework 4.5.1 release. In the same time frame, we delivered some important new components and features through other release vehicles as well.

HTTP Client Libraries NuGet Package The HTTP client library provides a consistent and modern networking .NET API. It lets you write intuitive and asynchronous code (using the await keyword) to access services exposed through HTTP with method names that directly correspond to the HTTP primitives, such as GET, PUT, POST and DELETE. It also provides direct access to HTTP headers and the response body as any of the String, Stream or Byte[] types.
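For example, a simple GET that reads the response body as a string might look like this (the URL is a placeholder):

using System.Net.Http;
using System.Threading.Tasks;

class HttpClientSample
{
  static async Task<string> GetItemsAsync()
  {
    // Method names map directly to HTTP verbs: GetAsync, PostAsync,
    // PutAsync, DeleteAsync, plus shortcuts such as GetStringAsync.
    using (var client = new HttpClient())
    {
      HttpResponseMessage response =
        await client.GetAsync("https://example.com/api/items");
      response.EnsureSuccessStatusCode();
      // The body can be read as a String, Stream or Byte[].
      return await response.Content.ReadAsStringAsync();
    }
  }
}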

At first, HttpClient was only available for the .NET Framework 4.5 desktop and Windows Store apps. Portable library and Windows Phone app developers had to use HttpWebRequest and HttpWebResponse, which don’t offer a Task-based Asynchronous Pattern (TAP) API. Based on popular demand for portable library and Windows Phone support, we shipped the portable version of the HttpClient library on NuGet to fill the platform gap. As a result, all .NET developers have access to HttpClient, with its TAP-async API.

After the first few versions of the HttpClient NuGet package were released, we added automatic decompression functionality (bit.ly/13xWATe) in response to feedback. Automatic decompression of HTTP responses helps minimize data requirements, which is useful not only on mobile devices, but also helps with the perception of performance on the desktop.
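Automatic decompression is opt-in and is configured on the handler; a minimal sketch:

// Configure an HttpClient (System.Net.Http) that transparently decompresses
// gzip/deflate responses when the underlying platform supports it.
var handler = new HttpClientHandler();
if (handler.SupportsAutomaticDecompression)
{
  handler.AutomaticDecompression =
    System.Net.DecompressionMethods.GZip |
    System.Net.DecompressionMethods.Deflate;
}
var client = new HttpClient(handler);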

Microsoft HTTP Client Libraries on NuGet (bit.ly/1a2DPNY) has had great adoption with more than 1.3 million downloads. You can use this package in apps targeting Windows Phone 7.5 and higher, Silverlight 4 and higher, .NET Framework 4 and higher, Windows Store, and Portable Class Libraries (PCL).

Microsoft Immutable Collections NuGet Package This is another popular .NET package, which provides easy-to-use, high-performance immutable collections, such as ImmutableList<T> and ImmutableDictionary<TKey, TValue>. Immutable collections, once constructed, don’t allow modification. This enables passing immutable types across threads or async contexts without concern about concurrent operations. Even the original creator of the collection can’t add or remove items.

The .NET Framework has read-only collection types, such as ReadOnlyCollection<T> and IReadOnlyList<T>. These types guarantee the consumer can’t change the data. However, there’s no similar guarantee for the provider. This might cause data corruption if the provider and consumer are operating concurrently on different threads. With immutable collection types, you’re guaranteed a given instance never changes.
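A minimal sketch of the difference, using System.Collections.Immutable: every “mutating” call returns a new collection, so the instance you hand out can never change underneath its consumers.

using System;
using System.Collections.Immutable;

class ImmutableSample
{
  static void Main()
  {
    // Start from the empty singleton; each Add returns a new list and
    // leaves the original untouched.
    ImmutableList<string> original = ImmutableList<string>.Empty;
    ImmutableList<string> updated = original.Add("first").Add("second");

    Console.WriteLine(original.Count); // 0
    Console.WriteLine(updated.Count);  // 2
  }
}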

The Microsoft Immutable Collections NuGet package (bit.ly/18xhE5W) is available as a portable library and can be used in desktop and Windows Store apps targeting the .NET Framework 4.5 and higher, PCL, and Windows Phone 8 apps. For more insights and details, I encourage you to start with the “Immutable collections ready for prime time” post (bit.ly/18Y3xp8) on the .NET Framework Blog and the MSDN documentation at bit.ly/189XR9U.

The New .NET JIT Compiler, RyuJIT The JIT compiler is one of our key investment areas to improve app performance. The .NET team recently announced the CTP release of the next-generation x64 JIT compiler, code-named “RyuJIT.” RyuJIT is twice as fast in compiling code relative to the existing x64 JIT compiler, meaning apps using RyuJIT start up to 30 percent faster depending on the percentage of startup time that’s spent in JIT compilation. (Note that time spent in the JIT compiler is only one component of startup time among others, thus the app doesn’t start twice as fast because the JIT is twice as fast.) At the same time, RyuJIT doesn’t compromise on code quality, and the modern JIT compiler opens up more avenues for future code quality optimizations.

Beyond the performance gains, RyuJIT highlights the .NET team’s commitment to customer engagement. Less than a month after the CTP was released, we released an updated version incorporating customer feedback. We’ll continue the deep customer engagement and quick cadence of improvements.

We started RyuJIT with a focus on x64 as part of building a first-class cloud platform. As the team moves forward, we’ll build support for other architectures. You can get more details about the RyuJIT project and how to download and use the CTP in the “RyuJIT: The next-generation JIT compiler for .NET” post at bit.ly/19RvBHf. I encourage you to try it out and send us feedback.

Looking for Feedback

In this article, I provided an overview of the new features in the .NET Framework 4.5.1 release. The .NET team delivered many important customer-requested features along with some innovative surprises such as ASP.NET App Suspension and async-aware debugging.

We’re shaping the future of .NET with projects that often span multiple .NET releases, in key areas such as the JIT, garbage collection and libraries. In this article, I also provided insights into one of these deep investments, the new .NET JIT compiler, RyuJIT, which was recently shipped as a CTP release.

Note that the .NET team is actively listening for feedback. You can follow .NET news and give the team feedback through the .NET Framework Blog and the team’s other community channels.


Gaye Oncul Kok is a program manager for the CLR and the .NET Framework at Microsoft, where she works on the .NET Ecosystem team.

Thanks to the following Microsoft technical experts for reviewing this article: Habib Heydarian, Richard Lander, Immo Landwerth, Andrew Pardoe, Subramanian Ramaswamy and Alok Shriram

Richard Lander has worked as a program manager on the .NET team since .NET 2. His favorite .NET features are generics and lambdas.

Immo Landwerth is a program manager on the CLR team at Microsoft, where he works on the Microsoft .NET Framework base class library (BCL), API design and Portable Class Libraries.

Andrew Pardoe is a program manager on the .NET Runtime team. His team is responsible for all aspects of the .NET Framework’s virtual execution environment.

Subramanian Ramaswamy is a senior program manager on the .NET CLR team. He joined Microsoft in 2008 and currently works on code execution strategies in the runtime. He holds a Ph.D. in Electrical and Computer Engineering from the Georgia Institute of Technology and has authored several conference papers and MSDN Magazine articles.

Alok Shriram is a program manager on the .NET Framework team at Microsoft, before which he worked as a developer on the Office 365 team. He works on the Managed Extensibility Framework (MEF), the .NET Framework, NuGet packages and other developer goodness.