November 2016

Volume 31 Number 11

[Editor's Note]

We Come from the Future

By Michael Desmond | November 2016

The future ain’t what it used to be. The dreams we had of flying cars, moon bases and robot servants may not have panned out (at least, not yet), but we’ve seen our share of miracles. Like a world-spanning network that delivers instant knowledge, communication and entertainment, and ubiquitous handheld computers that let us reach anything digital—anywhere—with the tap of a finger.

In this issue of MSDN Magazine, we explore the tools, technologies and techniques that enable developers to build our next future. From augmented reality environments that mingle the real and the digital, to intelligent services that replace hundreds of brittle apps with contextualized data flows embedded within familiar UIs, to advanced software that interprets facial expressions, drives natural-language interfaces and analyzes brain activity. The sky’s the limit, and this month’s issue provides a timely glimpse of how to get there.

Things kick off with a feature article that dives into Microsoft Cognitive Services, the family of intelligence and knowledge APIs that, to quote Microsoft, “allow systems to see, hear, speak, understand and interpret needs using natural methods of communication.” Frequent MSDN Magazine author Alessandro Del Sole shows how these services can be used to recognize facial characteristics and expressions, either from a digital photo or a smartphone camera input, within cross-platform apps based on Xamarin.Forms.

Del Sole says Cognitive Services opens the door to an incredible range of development scenarios. He notes as an example the Seeing AI project (bit.ly/2ddzhgQ), which provides machine-generated verbal guidance to visually impaired users, based on visual inputs from wearable cameras, smartphones and other devices. The system can describe surroundings, read out documents, and even interpret and describe the facial expressions of those nearby.

“Cognitive Services are exciting because they offer an opportunity to describe the world in an auto-generated, natural-language description that’s available on any platform, on any device, and basically to any development environment supporting REST,” Del Sole says. “This opens the field to building apps that offer experiences tailored for the feelings and needs of customers at specific moments in their lives.”
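Del Sole’s point about REST availability is worth underscoring: any environment that can issue an HTTP request and parse JSON can consume these services. As a rough sketch (not code from the article), the snippet below parses a representative response from the Face API v1.0 detect endpoint, called with `returnFaceAttributes=emotion`, and picks out the dominant expression. The sample payload and endpoint details are assumptions based on the v1.0 API and should be verified against the current documentation.

```python
import json

# Representative (hypothetical) response from the Face API v1.0 "detect"
# endpoint when called with returnFaceAttributes=emotion. In a real app you
# would POST image bytes to
#   https://<region>.api.cognitive.microsoft.com/face/v1.0/detect
# with an Ocp-Apim-Subscription-Key header and parse the JSON reply.
SAMPLE_RESPONSE = """
[
  {
    "faceId": "c5c24a82-6845-4031-9d5d-978df9175426",
    "faceAttributes": {
      "emotion": {
        "anger": 0.01, "contempt": 0.0, "disgust": 0.0,
        "fear": 0.0, "happiness": 0.93, "neutral": 0.05,
        "sadness": 0.01, "surprise": 0.0
      }
    }
  }
]
"""

def dominant_emotion(response_json: str) -> str:
    """Return the highest-scoring emotion for the first detected face."""
    faces = json.loads(response_json)
    if not faces:
        return "no face detected"
    scores = faces[0]["faceAttributes"]["emotion"]
    return max(scores, key=scores.get)

print(dominant_emotion(SAMPLE_RESPONSE))  # happiness
```

Because the heavy lifting happens server-side, the same few lines of parsing logic work unchanged in a Xamarin.Forms app, a web service, or a command-line tool.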

Also on display are three intriguing features. Benjamin Perkins shows how the Emotiv brain-computer interface can capture brain waves and present them for analysis on the Azure IoT Hub via Stream Analytics. Microsoft HoloLens takes center stage as Adam Tuliper explores the three primary ways developers can set up interaction in augmented reality applications—gaze, gesture and voice. And don’t miss the first installment of Srikantan Sankaran’s two-part exploration of development for the new Microsoft Bot Framework. The framework promises to free information and services locked up within piecemeal mobile apps to enable data access and interaction wherever users are—whether it’s SMS, Skype, Slack, Facebook Messenger, Office 365 mail or other popular services.

Emerging technologies like the Bot Framework, HoloLens and Cognitive Services are heralding a sea change in software development, as intelligent services and adaptable, transformative UIs begin to redefine the way people engage and interact with software. Today, the fascinating work of enabling those interactions is just beginning.


Michael Desmond is the Editor-in-Chief of MSDN Magazine.

