May 2016

Volume 31 Number 5


Nurturing Lean UX Practices

By Karl Melder | May 2016

In Visual Studio 2015, Microsoft delivered several new debugging and diagnostic features that are detailed by Andrew Hall in the March 2016 MSDN Magazine article, “Debugging Improvements in Visual Studio 2015” (msdn.com/magazine/mt683794). For the features that involve substantial changes to the UX, Microsoft adopted a “Lean UX” approach, using iterative experiments with direct user feedback to inform the design.

I want to share with you the process that was used to design one of those features, PerfTips, along with the best practices, tips and tricks picked up along the way. The goal is to inspire and enable you and your teams to effectively fold customer feedback directly into your development processes.

Lean UX

Lean UX is a complement to the lean development practices that are trending in our industry. Eric Ries defined lean as the practice of “Build, Measure and Learn” in his 2011 book, “The Lean Startup” (Crown Business), where he describes a “business-hypothesis-driven experimentation” approach. Similarly, Lean UX is a set of principles and processes that focuses on very early and ongoing customer validation, where you conduct experiments to validate your user and product design hypotheses in extremely short cycles. Design iterations are done quickly with a focus on solving real user problems. A good reference is Jeff Gothelf’s 2013 book, “Lean UX” (O’Reilly Media), where he provides guidance and worksheets to help teams bring clarity to what they believe or hope to achieve.

For the team delivering the debugging experience in Visual Studio, Lean UX was a highly collaborative approach: the entire team, including program managers, UX researchers, developers and UX designers, was involved in generating ideas and hypotheses and in interpreting what was heard and seen from customers.

This article is about fully embracing customer feedback in the product development process. It’s about failing forward faster. It’s about getting feedback on your ideas without any working bits. It’s about not just one team in the developer tools division doing this, but lots of teams fundamentally changing how features are designed in a lean development process.

The Design Challenge

Microsoft technology has a rich source of data that can give developers an expedient way to diagnose issues. In the UX labs, however, users would repeatedly fall back to manually walking the execution of their code despite the advantages that tools such as the Profiler provide. Instrumentation data bore out the low usage of the Visual Studio Profiler, despite the team’s conviction that it can make finding performance issues a far more efficient process. For a tool such as Visual Studio, where a user spends eight or more hours a day working, asking a user to change his workstyle can be a tricky business. So, the team wanted to leverage the user’s natural workstyle when debugging performance issues and deliver an ambient experience.

Had a more traditional waterfall approach been taken, focus groups might have been conducted to get some early feedback, a detailed spec written, and usability studies scheduled once coding was nearly complete. Users would have been given tasks that exercised the new features, and the issues found would have been triaged like bugs. For Visual Studio 2015, a very different approach was taken.

The Research Process

Instead of scheduling usability studies when working bits were available, two users were prescheduled every Friday for the majority of the product cycle. These days were informally referred to as “Quick Pulse Fridays.” Users came in for about two hours, and their time was typically split across two to four experiments. For each experiment, a best guess was made as to how much time should be dedicated. Each experiment was about either helping Microsoft learn more about its users and how they work, or about trying out an idea. Design ideas had to survive at least three weeks of positive results in order to move forward. A positive result meant users felt strongly the idea had value for them, or that the design increased discoverability, made a task easier, or demonstrably improved key scenarios.

UX research is often categorized into quantitative and qualitative, where a combination of instrumentation/analytics and customer feedback guides business and product development. In the early qualitative research, feedback meant getting the users’ reaction to ideas. The team took into account not only what they said, but their physical reaction, facial expressions and tone of voice. Users were also observed working through a real task, such as fixing a performance bug in an application, without any assistance from the research team, as shown in the photo in Figure 1. That meant letting the users struggle. The team recorded video for later review and took notes on both what was heard and what was seen. Watching the users helped the team understand their workstyle and identify unarticulated needs: things a user might not know to ask for, but that could provide a dramatic improvement to the product.

Figure 1 A Research Session with a User

What was critical to the team’s success was not spending any time trying to convince customers to like an idea. The users were simply shown what it would be like to use the idea. Then the team stepped back and just listened, watched and asked questions that helped it understand the users’ viewpoints. The key to the team’s success was its ability to detach itself from an idea or design it might have felt strongly about.

Every week different participants were recruited for a steady flow of new perspectives. Both an internal team and a vendor recruited, screened and scheduled users. The team did not look for users with specific expertise in diagnostics; rather, the recruiting profile was simply active users of Visual Studio. This meant that each week there were users with different skills, experiences and work contexts, which gave the team the opportunity to learn something new every week and to identify the consistencies. The team could also evolve its ideas to succeed with a wider audience.

Equally important was balancing how the team interacted with the users. How a question was asked could dramatically affect the outcome and bias the conversation. The team developed the habit of always asking open-ended questions—where the probing questions were derived from what the user said or did. For example, if a user told the team they didn’t like something, they were simply asked, “Tell us more about that.” The team tried not to assume anything and challenged its assumptions and hypotheses at every opportunity. These skills are basic to the UX field and were adopted by everyone on the team. If you want to learn more about these interviewing techniques, I recommend Cindy Alvarez’s 2014 book, “Lean Customer Development” (O’Reilly Media).

Early Quick-Pulse Sessions and the Unshakeable Workstyle

Early in the product cycle, the team started with an idea for helping users monitor the performance of their code. The team created a mockup and got it in front of the Quick Pulse Friday users. What was consistently heard, even after three weeks of design alterations, was that users weren’t sure what it was for and that they “would probably turn it off!” It wasn’t what the team wanted to hear, but it was what it needed to hear.

However, watching users diagnose application issues made it clear that the team needed a UX that was more directly part of the code navigation experience. Even though several debugger windows provided additional information, it was difficult for users to pay attention to more than one window at a time. The team observed many users keeping their focus in the code, often mentally “code walking” the execution. This may seem obvious to any developer reading this article, but what was fascinating was how unshakable that workstyle was, despite the availability of additional tools meant to make the task more efficient.

The team started out envisioning ideas in Photoshop, where it would take an extremely experienced designer upward of a day to generate a mockup that could be used for feedback. Photoshop lends itself to creating high-fidelity UI. Instead, the team started using Microsoft PowerPoint and a storyboard add-in (aka.ms/jz35cp) that let everyone on the team quickly create medium-fidelity representations of their ideas. These storyboards gave users a sense of what a feature might look like, but were rough enough that users could tell the design was in progress and that their input had direct impact. The net effect was that it was much easier to throw away a 30-minute investment when an experiment failed. Ideas the team knew wouldn’t work in practice could also be tested, because the feedback from users would help generate new ideas.

To get feedback on the user interaction model, each slide in the PowerPoint decks represented either a user action or a system response to that action. When drafting the interaction, the team included a cursor icon to show where the user would click, which was useful when sharing ideas and working out the details. The cursor icon was removed before showing the deck to users, however, which let the team ask users what they would do next and provided a low-cost way of identifying possible discoverability issues. For each system-response slide, the team would also ask whether the users felt they were making progress, which indicated whether the design was providing adequate feedback. This technique is called a “cognitive walkthrough” in UX research; it can help you identify some issues at the very earliest stages of designing your interaction, while giving you an early sense of areas of concern that will require further iteration and experimentation to get right.

To gauge the potential impact of an idea, the team relied on the user’s ability to articulate specifically how he might use the idea in his day-to-day work environment and what he perceived might be the direct benefits and drawbacks. The user had to provide detailed and plausible examples for the team to become confident the idea was worth pursuing. The team also looked to see if the user started to pay extra attention, get more animated and express excitement. The team was looking for ideas that would excite users and potentially have a very positive impact on their diagnostic experience.

“Wow, This Is Amazing!”

The team needed a way to show performance information in the code that would not affect code readability and would give users an ambient in-code debugging experience. CodeLens, a Visual Studio feature that lets you see information about edit history, bugs, unit testing and references, provided inspiration for a potential interaction model and visual design. The team experimented with mockups of several ideas, showing customers how performance numbers in milliseconds would appear as a developer steps through code (see Figure 2).

Figure 2 An Early Mockup Showing Performance Data in a Debugging Session
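
To make the mockup in Figure 2 concrete, here is a toy C++ sketch (a hypothetical example, not code from the study). The deliberately slow BuildList call is the kind of line a participant would step over in the debugger; the elapsed milliseconds a PerfTip surfaces next to that line correspond roughly to the manual std::chrono measurement shown, just without the developer having to instrument anything:

```cpp
#include <chrono>
#include <iostream>
#include <thread>
#include <vector>

// Hypothetical "slow" routine of the kind a participant might step over;
// the per-item sleep simulates work so the cost is visible in milliseconds.
std::vector<int> BuildList(int count)
{
    std::vector<int> values;
    for (int i = 0; i < count; ++i)
    {
        values.push_back(i);
        std::this_thread::sleep_for(std::chrono::microseconds(50)); // simulate per-item work
    }
    return values;
}

int main()
{
    // Stepping over the BuildList call is where a PerfTip would appear;
    // the manual measurement below reports roughly the same elapsed time.
    auto start = std::chrono::steady_clock::now();
    auto data = BuildList(2000);
    auto end = std::chrono::steady_clock::now();

    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count();
    std::cout << "BuildList took ~" << ms << " ms for " << data.size() << " items\n";
    return 0;
}
```

In the feature itself, of course, no measurement code is needed; the value simply appears in the editor as the developer steps, which is exactly the ambient, in-code experience the team was after.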

The earliest indication that the team was on to something came when a participant, a development manager, got very excited as he was walked through the experience. I should emphasize that he was simply shown the proposed experience, without any background information. As he realized what he was seeing, he started asking detailed questions and got quite animated as he spoke. He said it would be a solution to a problem he was having with his novice developers making poor coding decisions that resulted in poor application performance. In his current process, performance issues were resolved through a labor-intensive code review process, which was a heavy tax on him and his team. He felt this idea could help his novice developers learn how to write performant code while they were first crafting it. He made comments such as, “Can this [PerfTip] be policy [in Visual Studio]?” Another user, after recognizing its value, remarked, “What makes Visual Studio remarkable is the capabilities when you’re on a line of code!”

This early feedback also got the team excited about this potential feature being an entry point for the diagnostic tools, solving some discoverability issues. The team hypothesized that these PerfTips could be a trigger for the users to venture into our richer tool set.

Designing the Details

Everything done up to this point involved only mockups, with no investment in coding. If ideas got traction, the team added greater levels of detail to the PowerPoint “click-thru,” along with lots of design alternatives to experiment with each week. However, the team reached the limit of what could be done with mockups while several research issues remained:

  • Validating that PerfTips weren’t a distraction when debugging common logic issues, yet remained discoverable when dealing with performance issues.
  • The team wanted users to correctly interpret the performance numbers, which were timed from the last break in execution (see the sketch after this list).
  • Users had suggested showing the values only when performance was worrisome, but no one could confidently suggest a default threshold.
  • There was concern that debugger overhead, which could add several milliseconds, might diminish the feature’s value to customers.
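
To illustrate the timing semantics in the second bullet, the hypothetical C++ walkthrough below (not the team’s test application) marks where the “elapsed since the last break” interval starts and ends as a user breaks and then steps:

```cpp
#include <chrono>
#include <thread>

// Hypothetical stand-ins for application work; the sleeps simulate cost.
void Validate()       { std::this_thread::sleep_for(std::chrono::milliseconds(2)); }
void ApplyDiscounts() { std::this_thread::sleep_for(std::chrono::milliseconds(3)); }
void SaveToDatabase() { std::this_thread::sleep_for(std::chrono::milliseconds(180)); }

int main()
{
    Validate();        // breakpoint hit here; the clock starts when execution resumes
    ApplyDiscounts();  // step over: the PerfTip reports the few ms elapsed since that break
    SaveToDatabase();  // step over: a much larger elapsed value (plus a few ms of debugger
                       // overhead) is what points the user at the real cost
    return 0;
}
```

Making that interval obvious is what the phrasing experiments (such as adding the word “elapsed”) and the debugger-overhead concern were really about.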

The team implemented a very minimal version of the feature that worked under specific conditions, then created an application with performance and logic issues for users to diagnose. Users were asked to identify the specific cause of each problem. If they weren’t successful, the team could determine why from what was heard and seen, alter the design and try again the following week. During this time, the team also delivered an instrumented external CTP in which the PerfTip was linked to the properties window so users could easily change the threshold if they wanted. The team concluded:

  • PerfTips were not a distraction when users were fixing logic issues. In fact, PerfTips needed to be tweaked with subtle animation to make them more noticeable when users were dealing with performance issues.
  • Some simple phrasing changes, like adding the word “elapsed,” cleared up any confusion users had about interpreting the timing data.
  • Thresholds only confused users when PerfTips didn’t show up consistently, and no single value that would work in most circumstances could be identified. Some users said that because they knew their code best, they would be the best judge of what counted as reasonable performance times.
  • Users recognized that the values would not be exact because of debugger overhead, but they said repeatedly they were fine with it as they would be looking at gross differences.

Over the several weeks of iterations, the team got consistently positive results when tasking users with identifying the source of performance issues. Without any prompting, users also gave enthusiastic feedback with comments such as, “Fantastic,” and, “Wow, this is amazing!”

Taking Notes

When taking notes, the team learned to avoid drawing any conclusions until after the session, when there was time to sit down together and discuss what happened. It was more useful to take very raw notes in real time, trying to write down verbatim everything users said and did. Grammar and spelling were of no concern. These notes became the team’s reference for revisiting what happened and let it draw insights from the patterns seen over several weeks.

Microsoft OneNote became a very handy tool to track what the team was planning to test, capture raw notes and draft quick summaries. There was always a designated note-taker who captured what was heard and seen, which gave the other team members breathing room to focus completely on the user. For those who could not attend, the live sessions were shared with the team over Skype; everyone on the team was invited to watch and learn. The sessions were also recorded for team members who had meeting conflicts and wanted to watch later, and the recordings let the team review areas that needed extra attention. The team’s discussion of the results each week informed what would be done the following week; writing a formal report was unnecessary and would have just slowed everything down.

Wrapping Up

The design and development of PerfTips was only a slice of what was done in the weekly experiments. Many ideas were explored, with as many as four experiments per user each week. The Breakpoint Settings redesign is another example of the experiments that were run week to week to iterate toward providing a more useful and usable experience. By applying Lean UX the team was able to mitigate risk, while finding inspiration from what was heard and seen during the experiments. These experiments took the guesswork out of the equation when designing the features. Ideas came from many sources and were inspired by watching how developers naturally worked.

If users couldn’t see the value in an idea, the low cost to create a mockup made it easy to start over. Also, failures sparked new ideas. I hope the examples and tips for Lean UX will inspire you to give it a try. The “Lean” series of books referenced in this article will serve you well as a guide and a framework for adopting this approach.

Participate in the Program

The Microsoft UX research team is looking for all types of developers to give direct feedback, as well as participate in this ongoing experiment. To sign up at aka.ms/VSUxResearch, include a few things about your technical background and the best way to contact you.

I wish to give special thanks to all the folks who were involved in one way or another with this project. You can only describe the Quick Pulse Fridays as “crowded,” with the team watching, learning and thinking very hard about delivering a well-thought-out and purposeful addition to Visual Studio. Special thanks go to Dan Taylor, who had to stay ahead of the development team and navigated the technological challenges with aplomb. Andrew Hall kept the team moving forward with his deep technical knowledge and pragmatic approach. Frank Wu kept the design ideas coming and had an uncanny ability to boil down an idea and find a way to keep it simple.


Karl Melder is a senior UX researcher who has been steadily applying his education and experience in UX research, computer science, UI and human factors to design UXes. For the past 15-plus years he’s been working to enhance the development experience in Visual Studio for a wide variety of customers.

Thanks to the following Microsoft technical experts for reviewing this article: Andrew Hall, Dan Taylor and Frank Wu
Andrew Hall is a senior program manager on the Visual Studio Debugger team. After graduating from college, he wrote line-of-business applications before returning to school for his Master’s degree in computer science. After completing his Master’s degree, he joined the diagnostic tools team in Visual Studio. In his time at Microsoft he has worked on the debugger, profiler and code analysis tools in Visual Studio.

Dan Taylor is a program manager on the Visual Studio Diagnostics team and has been working on profiling and diagnostic tools for the past two years. Before joining the Visual Studio Diagnostics team, Taylor was a program manager on the .NET Framework team and contributed to numerous performance improvements to the .NET Framework and the CLR.

Frank Wu is a senior user experience designer currently focused on designing and delivering the best editing and diagnostics experience for all developers. He has worked on security software, Windows Server products and developer tools for the past 10 years, after getting his Master’s degree in HCI.

