Vision Based Tracking Sample

The Follower sample service shows how to write an orchestration service for a mobile robot that follows a user wearing a colored shirt. The robot is equipped with a webcam and a laser range finder. This sample demonstrates how to use multiple partner services, such as SimpleVision, TextToSpeech, and SpeechRecognizer, to implement human-robot interaction.

This sample is provided in the C# language. You can find the project files for this sample at the following location under the Microsoft Robotics Developer Studio installation folder:

 Samples\Misc\Follower

Contents

  • Running the Follower Service
  • Service Orchestration
  • Follower Behavior

Prerequisites

Hardware

The Follower sample is designed for use with a mobile robot that has a two-wheel differential/skid drive, front and rear contact sensors, a forward-facing 180-degree SICK Laser Range Finder, and a webcam. This sample service can be used with a real robot or a simulated robot. The necessary manifest files (.manifest.xml) are provided with this sample. If the robot platform used in a manifest is not available to you, you can change the manifest to adapt it to your platform. You can also run the sample in the simulation environment using the provided manifest.

Software

This sample is designed for use with Microsoft Visual C#. You can use:

  • Microsoft Visual C# Express Edition
  • Microsoft Visual Studio Standard, Professional, or Team Edition.

You will also need Microsoft Internet Explorer (or a web browser of your choice) and the .NET Framework version 3.0 or later for speech recognition.

1 Running the Follower Service

Start the DSS Command Prompt from the Start -> All Programs menu.

To run Follower with a real robot, start a DssHost node and create an instance of the Follower service by typing the following command (you may have to change the manifest to adapt it to your particular robot):

 dsshost /port:50000 /t:50001 /manifest:"samples\config\Follower.manifest.xml"

This starts the service and you should see an output like the following:

 *   Service uri:  [05/31/2007 20:08:26][https://p3dx:50000/directory]
 *   Service uri:  [05/31/2007 20:08:28][https://p3dx:50000/constructor/dfa1ddbf-b7c-4bb4-b43c-b592003152ff]
 *   Starting manifest load: file:///c:/msrs/Follower.manifest.xml [05/31/2007 20:08:30]
       [https://p3dx:50000/manifestloaderclient]
 *   Manifest load complete [05/31/2007 20:08:39][https://p3dx:50000/manifestloaderclient]
 *   Service uri:  [05/31/2007 20:08:45][https://p3dx:50000/commandspeechrecognizer]
 *   Service uri:  [05/31/2007 20:08:46][https://p3dx:50000/simplevision]
 *   SaveState [05/31/2007 20:08:50][https://p3dx:50000/speechrecognizer]
 *   Service uri:  [05/31/2007 20:08:51][https://p3dx:50000/follower]
 *   Subscribe request from: dssp.tcp://p3dx:50001/follower/NotificationTarget/00003fb-0000-0000-0000-000000000000
       [05/31/2007 20:08:51][https://p3dx:50000/simplevision]

Depending on the capabilities of your platform, you may want to adapt the resolution of the webcam. You can do this by navigating to https://localhost:50000/webcam and changing the resolution (for example, to 160x120) using the dropdown list. This change saves the webcam state to WebCam.xml and reduces CPU usage.

The windows of the Follower and SimpleVision user interfaces will appear (optionally, you can also manage the speech recognizer visually by starting the SpeechRecognizerGui service through the node's web control panel and then accessing its web interface):

Figure 1

Figure 1 - Graphical User Interface of Follower and Vision services

To run Follower with a simulated robot, start a DssHost node and create an instance of the Follower service by typing the following command:

 dsshost /port:50000 /t:50001 /manifest:"samples\config\FollowerSim.manifest.xml"

The Microsoft Visual Simulation Environment and the windows of the Follower and SimpleVision user interfaces will appear:

Figure 2

Figure 2 - A Simulated robot in the Microsoft Visual Simulation Environment

Figure 3

Figure 3 - Graphical User Interface (GUI) of Follower and Vision services (simulation)

2 Service Orchestration

The Follower service sample provides user tracking and human-robot interaction. The service implements basic skills for a robot such as obstacle avoidance, object tracking, and speech and gesture based human robot interaction.

The Follower service orchestrates several partner services:

  • Drive (used for robot movements)
  • Contact Sensors (obstacle avoidance)
  • Sick-LRF (obstacle avoidance, simple navigation)
  • SimpleVision (object tracking, gesture recognition)
  • TextToSpeech (TTS) (speech output)
  • SpeechRecognizer (speech recognition)


The service uses the SpeechRecognizer service to recognize speech commands and the TTS service to speak sentences. The SimpleVision service can detect a colored object, a face region, and hand gestures. In this sample, the SimpleVision service uses color tracking to track the user's shirt. The Follower service uses the Drive service to control the movement of the robot.
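In a DSS service, each of these partners is typically declared as a proxy port decorated with a Partner attribute. The fragment below is a hypothetical sketch of that pattern for the Drive partner; the namespace alias, field name, and creation policy are assumptions for illustration and compile only inside an MSRS service project, not standalone:

```csharp
// Illustrative sketch only: the common MSRS pattern for declaring a
// partner service port inside a DSS service class. Names are assumptions,
// not copied from the Follower sample source.
using drive = Microsoft.Robotics.Services.Drive.Proxy;

class FollowerService /* : DsspServiceBase */
{
    [Partner("Drive",
        Contract = drive.Contract.Identifier,
        CreationPolicy = PartnerCreationPolicy.UseExistingOrCreate)]
    drive.DriveOperations _drivePort = new drive.DriveOperations();

    // Similar declarations would exist for the SickLRF, SimpleVision,
    // TextToSpeech, and SpeechRecognizer partners.
}
```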

3 Follower Behavior

The service maintains two types of robot state. The first is the logical state, which governs basic robot movement. The second is the behavioral state: the robot's high-level behavior, triggered by speech commands. A behavioral state is composed of logical states and can also include automatic obstacle avoidance, object tracking, and gesture-driven movements.

The logical states are:

  • Stop – stop moving,
  • Move – move forward,
  • Turn – rotate by the specified number of degrees,
  • Translate – move forward by the specified distance,
  • Adjusted Move – move forward in the specified direction.
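The logical states above can be modeled with a plain enumeration; the following is a hypothetical sketch (the enum, method, and power values are illustrative, not the sample's actual code) of how each state might map onto wheel power for a two-wheel differential drive:

```csharp
// Hypothetical model of the Follower's logical states.
public enum LogicalState
{
    Stop,          // stop moving
    Move,          // move forward
    Turn,          // rotate by a specified number of degrees
    Translate,     // move forward by a specified distance
    AdjustedMove   // move forward in a specified direction
}

public static class DriveSketch
{
    // Sketch: convert a logical state into a (left, right) wheel power
    // pair; 'heading' in [-1, 1] biases AdjustedMove left or right.
    public static (double Left, double Right) ToWheelPower(
        LogicalState state, double heading = 0.0)
    {
        switch (state)
        {
            case LogicalState.Stop:
                return (0.0, 0.0);
            case LogicalState.Turn:
                return (-0.3, 0.3); // rotate in place
            case LogicalState.AdjustedMove:
                return (0.5 + 0.2 * heading, 0.5 - 0.2 * heading);
            default:
                // Move and Translate both drive straight ahead; the
                // distance limit for Translate would be enforced elsewhere.
                return (0.5, 0.5);
        }
    }
}
```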


The behavioral state of the robot can be controlled with spoken commands:

  • Robot - get attention, start listening,
  • TurnLeft - rotate left 45 degrees,
  • TurnRight - rotate right 45 degrees,
  • TurnAround - rotate 180 degrees,
  • GoForward - begin moving forward while avoiding obstacles,
  • Backup - translate backward,
  • Stop - stop current behavior.
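One straightforward way to connect recognized speech to these behaviors is a dispatch table from command text to a behavioral state. This is only an illustrative sketch; the enum, phrases, and class are assumptions, not the sample's actual types:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical behavioral states mirroring the spoken commands above.
public enum BehavioralState
{
    Idle, TurnLeft, TurnRight, TurnAround, GoForward, Backup, Stop
}

public static class SpeechDispatch
{
    // Case-insensitive lookup from a recognized phrase to a behavior.
    static readonly Dictionary<string, BehavioralState> CommandMap =
        new Dictionary<string, BehavioralState>(StringComparer.OrdinalIgnoreCase)
        {
            { "turn left",   BehavioralState.TurnLeft },
            { "turn right",  BehavioralState.TurnRight },
            { "turn around", BehavioralState.TurnAround },
            { "go forward",  BehavioralState.GoForward },
            { "backup",      BehavioralState.Backup },
            { "stop",        BehavioralState.Stop },
        };

    public static bool TryGetBehavior(string phrase, out BehavioralState state)
        => CommandMap.TryGetValue(phrase.Trim(), out state);
}
```

A SpeechRecognizer notification handler could then look the recognized phrase up in this table and, on a hit, request the corresponding behavior transition.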

For example, "Robot, go forward!" causes the robot to move forward.

The behavior can also be controlled using a combination of spoken commands and vision based object tracking and gesture recognition:

  • GoThere - rotate after recognizing a user's pointing gesture
  • FollowMe - start looking for and tracking the specified color (for example, the user's shirt)


Some behaviors are triggered autonomously, without user interaction:

  • Avoid - avoid obstacle using LRF and return to the previous behavioral state

Some behavioral states have higher priority than others. For example, the robot will not execute spoken commands while in the Avoid state.
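A minimal way to model this arbitration, assuming only that Avoid outranks speech-driven behaviors (the enum and numeric priorities below are invented for illustration):

```csharp
// Hypothetical priority model: Avoid suppresses speech-driven behaviors.
public enum Behavior { Stop, GoForward, FollowMe, GoThere, Avoid }

public static class BehaviorArbiter
{
    static int Priority(Behavior b) => b == Behavior.Avoid ? 2 : 1;

    // Accept a requested behavior only if it is at least as important
    // as the one currently running.
    public static bool TryTransition(ref Behavior current, Behavior requested)
    {
        if (Priority(requested) < Priority(current))
            return false; // e.g. ignore "go forward" while avoiding an obstacle
        current = requested;
        return true;
    }
}
```

Under this sketch, a spoken GoForward request issued while the current behavior is Avoid is rejected, and becomes acceptable again once the Avoid behavior completes and restores the previous behavioral state.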

Summary

This sample explains:

  • Running the Follower Service
  • Service Orchestration
  • Follower Behavior


© 2010 Microsoft Corporation. All Rights Reserved.