VPL Lab 6 - Sensing and Simple Behaviors


In this lab you will create a robot behavior that heads towards a goal while avoiding obstacles. In doing so, you will learn some of the basics of behavior architectures. You will also learn more about how to work with the iRobot Create and VPL.

This lab is provided in the Microsoft Visual Programming Language (VPL). You can find the project files for this lab at the following location under the Microsoft Robotics Developer Studio installation folder:

Sample location

This lab teaches you how to:



This lab can be completed both in simulation and on board an iRobot Create. The Create needs to move around, so the computing device running your code must be able to move with the Create. Alternatively, you can communicate with the Create over Bluetooth.


This lab requires Microsoft Robotics Developer Studio, in particular it uses Microsoft Visual Programming Language (VPL).


A robot typically has multiple sensors that it can use to collect information about its environment. The robot can use this information to decide how to behave. The iRobot Create has many sensors, including:

  • multiple cliff sensors, which it can use to detect drops;
  • a wall sensor at the front right, useful for detecting walls and obstacles;
  • contact sensors on the front bumper.

Many robots also have cameras and laser range finders. Depending on the sensor and the domain, the robot can manage its sensors in different ways. For instance, it may want to receive notifications from a contact sensor, but poll a camera at regular intervals.

Reactive Behaviors

There are three main approaches to robot behavior:

  • reactive - where the robot tries to respond directly to the changing environment;
  • deliberative - where the robot thinks ahead and reasons about possible actions before deciding which one to execute;
  • hybrid - which is a combination of reactive and deliberative.

In this lab we will look at two methods of combining reactive behaviors. Reactive behaviors do not involve the use of memory, they map input from the robot's sensors directly to actions. This means reactive behaviors can be very responsive to changes in the robot's environment. However, because reactive behaviors do not use any memory, a reactive behavior can't perform different actions from the same state. In order to perform complex tasks using reactive behaviors, atomic reactive behaviors need to be combined into a larger behavior system. There are a number of ways in which reactive behaviors can be combined including blending and competition (which we discuss in this lab) and sequencing where reactive behaviors are scheduled by a higher-level controller.

Blending and Competition

In order to blend reactive behaviors we can represent each behavior as a mapping from some sensor input to a force on the robot. Recall that a force has both a magnitude and a direction. The direction represents the robot's next motion and the magnitude represents the weight. To blend two reactive behaviors for the robot's drive system we can simply perform vector addition. In competition reactive behaviors compete to control the robot. This is an approach where the winning behavior decides the robot's actions.

In this lab we consider the task where the robot has a goal point it wishes to reach, but there are obstacles it must also avoid. The robot has two reactive behaviors. The go-to-goal behavior simply directs the robot towards the goal, and the avoid-obstacle behavior directs the robot away from obstacles when they are detected. The go-to-goal behavior can be represented as a force vector that points in the direction of the goal. When the robot is near an obstacle, the avoid-obstacle behavior maps the sensor input to a force vector pointing away from the obstacle. The magnitude of this force depends on how close the robot currently is to hitting the object according to the sensor reading. If these two behaviors are combined through competition, the behavior with the force of largest magnitude will win, so the robot will switch between the two behaviors according to its sensor readings until it reaches the goal. If these two behaviors are combined through blending, the vectors will be added, which will hopefully move the robot gradually towards the goal while avoiding obstacles.
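The two combination schemes can be sketched in a few lines of Python (an illustration only, not part of the lab's VPL code; the force values here are made up for the example):

```python
import math

def blend(forces):
    """Blending: vector-sum every behavior's force into one command."""
    return (sum(f[0] for f in forces), sum(f[1] for f in forces))

def compete(forces):
    """Competition: the behavior with the largest-magnitude force wins outright."""
    return max(forces, key=lambda f: math.hypot(f[0], f[1]))

# go-to-goal pulls along the x-axis; avoid-obstacle pushes up and back
goal_force = (1.0, 0.0)
avoid_force = (-0.5, 0.5)

blend([goal_force, avoid_force])    # (0.5, 0.5): a compromise heading
compete([goal_force, avoid_force])  # (1.0, 0.0): go-to-goal wins this round
```

Note how blending produces a compromise direction, while competition commits entirely to one behavior on each decision.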


BlendingCompetitionPicExample - using blending and competition to avoid obstacles while approaching a goal.

In this lab we will go through the details of implementing competition for obstacle avoidance using the Create's wall sensor. We will leave the details of implementing the blending approach as an exercise.

Problems with Blending and Competition

Both approaches to controlling behavior have some drawbacks. Firstly it may not be easy to describe the behaviors as forces with a direction and magnitude. A particular drawback of blending is that forces with equal magnitude but opposing direction can cancel each other out. Suppose for instance, there is an obstacle directly in the robot's path to the goal. This would produce forces in opposite directions (one towards the goal and one away from the obstacle). If these forces had equal magnitudes and were added, the resultant force would be zero and the robot would not move unless we did something to handle the problem. Competition does not have this particular drawback; however oscillations between behaviors can occur when using competition if the forces of the behaviors have similar magnitudes. This is a problem because it can be very inefficient for the robot to oscillate between two behaviors. If the robot switches constantly between avoiding an obstacle and moving towards the goal it could make very slow progress. Without additional logic, it is also possible for a robot using competition to get stuck oscillating between behaviors.

Sequencing is an approach which only runs a single reactive behavior at a time and avoids some of the drawbacks we have discussed. A finite state machine is often used to switch between behaviors based on changes in the robot's state or the environment. We will learn more about this approach in later labs.

Experiment with the Create's Wall Sensor

The Create has an IR (infrared) "Wall Sensor" on its front right (at about 2 o'clock). This sensor emits an infrared signal and looks at how it bounces back to detect how close the robot is to a wall. We are going to try to use this sensor to avoid obstacles. Given the position and range of the sensor this is not always going to be possible - it is designed for following a wall on the robot's right, not for avoiding obstacles. We will start out by experimenting with the sensor to get an idea of its range and the values it returns. The iRobot Create Open Interface Guide lists two ways to get information from the Wall Sensor: the Wall packet and the Wall Signal packet. It is the Wall Signal packet we are interested in. The guide lists the range of the Wall Signal as 0-4095. In practice, however, we expect you will not see numbers close to the upper limit.

We are going to start by inspecting some of the Wall Signal values returned by the Create when we place it in different positions. We will then write a simple program that drives forward, but stops if the Wall Signal is greater than some value.

To get started, open the Visual Programming Language application.

Step 1: Add an IRobot Create

Search for IRobot in the Services panel and add an IRobot Create / Roomba to your diagram. In order to use the Wall Sensor of the iRobot you need either an IRobot Create / Roomba or an IRobot Create Lite in your diagram.

Step 2: Get the Wall Signal

Add a Calculate box from the Basic Activities panel and connect the round output of the IRobotCreateRoomba to the incoming connection of the Calculate. Recall that a round output in VPL is used for notifications. By connecting to the round output of the IRobotCreateRoomba, you can select a notification to receive from the service. When you make the connection, a dialog will open. Inspect the dialog to see the options that are available. We want to get an update for the Wall Signal. One of the options is UpdateBumpsCliffsAndWalls. Select this option. Now place the cursor in the Calculate box. A drop-down menu of the available values should appear. As you can see, the Wall update is available. This is shown in the following figure.


UpdateBumpsCliffsAndWalls - Information available from UpdateBumpsCliffsAndWalls

If you hover your mouse over the Wall variable in the drop-down menu, you will see that this variable has a Boolean type. This is the Create's bool Wall sensor, not the integer-valued Wall Signal we are looking for. We can get the Wall Signal from UpdateCliffDetails.

Right click on the connection between the IRobotCreateRoomba and the Calculate. Select Connections from the menu that appears. You can now select UpdateCliffDetails instead of UpdateBumpsCliffsAndWalls. Now when you put your cursor in the Calculate box you will see WallSignal of type int in the drop-down menu. Select this variable, or enter the text WallSignal. Your diagram should now look like this:


StepWallSignal - Diagram with WallSignal update

Step 3: Set a Manifest for the Create and Connect your Robot

Double click on the IRobotCreateRoomba. In the Set Configuration drop-down select Use a manifest. More options will appear; click on Import Manifest and then select the IRobot.Manifest.xml. When the diagram is run the IRobot service will now be started. If you have not already done so, now is the time to connect your robot to the computer. The earlier labs contain details on setting up your robot.

Step 4: Inspect the Wall Signal through the Web Interface

Start by running the diagram. The web interface for connecting your robot will pop up. Make sure the ports, connection method, etc. are correct and then hit the Connect button. Now you can experiment with the WallSignal. Start with the robot well away from the wall. Hit your browser's refresh button. Now scroll down to look at all the sensor readings. Note the value of the Wall Signal - it should be zero. Now pick your robot up and place its sensor about an inch away from a wall. Hit refresh and take another look at the Wall Signal. The value should be higher. Experiment like this to get an idea of the range of values.

Some things you may want to check:

  • How close does the sensor have to be to the wall to get a non-zero reading?
  • If the robot is directly facing a wall (front on) what is the value of the Wall Signal?
  • If the sensor is right up against the wall, what is the value of the Wall Signal?
  • Does the color of the wall have a large effect?
  • What sort of Wall Signal readings does the robot get from objects with rounded edges?

Note that if you do not have a wall handy you can place an object in front of the sensor instead.

Step 5: Add Logic To Drive Forwards Unless a Wall is Detected

From the Basic Activities panel add an If statement to the diagram. Connect the output of the Calculate to the input of the If. Make the If condition value > 5 or some other integer of your choosing. This will cause the condition to evaluate to true if WallSignal is greater than 5.

From the Services panel add a Generic Differential Drive. Connect the If branch to the drive and choose SetDrivePower. In Data Connections, select the checkbox Edit values directly at the bottom left (see below), and enter the power for the left and right wheels as -0.1. This will cause the robot to back up.


DrivePowerBack - Directly edit the Drive Power in Data Connections .

Our diagram will now tell the robot to drive backwards if the WallSignal is greater than 5. We now want to tell the robot to drive forward if this is not the case. We want to use the same differential drive as before (not a new, independent service). One way to do this is to select the GenericDifferentialDrive already in our diagram, hit Ctrl-C, and then Ctrl-V to paste a copy. Notice that if we select the copied drive and go to its properties in the bottom-right pane, it has a name field. This name is listed as GenericDifferentialDrive. If you change this name to something else, say MyDrive, both of the GenericDifferentialDrive boxes will be updated.

Connect the second GenericDifferentialDrive box to the Else branch and select SetDrivePower. In Data Connections, again check Edit values directly, and set the power to the left and right wheels to be 0.1. This will drive the robot forwards. Your diagram should now look like the one below.


SenseDriveNoTimer - This is what your current diagram should look like.
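In a textual language, the logic the diagram now encodes would look roughly like this (a sketch only; the Drive class is a hypothetical stand-in for the GenericDifferentialDrive service):

```python
class Drive:
    """Hypothetical stand-in for the GenericDifferentialDrive service."""
    def __init__(self):
        self.left = self.right = 0.0

    def set_drive_power(self, left, right):
        self.left, self.right = left, right

WALL_THRESHOLD = 5  # the constant used in the If condition

def on_cliff_details(wall_signal, drive):
    """Back up when a wall is sensed, otherwise drive forward."""
    if wall_signal > WALL_THRESHOLD:
        drive.set_drive_power(-0.1, -0.1)   # reverse both wheels
    else:
        drive.set_drive_power(0.1, 0.1)     # drive forward
```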

We can simplify this diagram by removing the Calculate box. To achieve this, while making the fewest changes to the current diagram, start by clicking on the connection between the Calculate and the If statement, then hit delete. We will now disconnect the Calculate from the IRobotCreateRoomba and move that end of the connection to the If. To do this, select the connection between the two blocks. Then drag the end connected to the Calculate over to the If statement. This allows us to connect the IRobotCreateRoomba to the If without having to reselect the notification UpdateCliffDetails. Now change value in the If to WallSignal and you are done.


SenseDriveNoTimerSimplified - Your simplified diagram should now look like this.

Now we need to set the manifest for the GenericDifferentialDrive. Double click on one of the boxes (it does not matter which - remember, they refer to the same drive). Select Use a Manifest and then import the IRobot.Drive.Manifest.xml.

Step 6: Running your program with sensor polling

It is now time to run your program! When the web interface loads, notice that the polling interval has been automatically set to 201 milliseconds. For now, you can leave it at this value. The polling interval specifies the frequency with which the Create examines its sensors and sends notifications. In some cases it is sufficient to poll sensors; in other cases we want to receive an update whenever the sensor reading changes. For instance, if we poll a bumper sensor we might miss it being pressed, and similarly with a button.

When you run your robot, remember that the sensor is on the right side of the Create, so position your Create with its right side angled towards the wall before you start your program to give it a better chance of detecting the wall. Hopefully you will see the Create approach the wall, back up, and approach again, continuously.

Step 7: Use the Manifest Editor to configure the Create to receive Sensor Notifications

We are now going to learn how to use the Manifest Editor to request specific notifications from the Create instead of polling its sensors.

In VPL before you ran your program you selected a manifest for GenericDifferentialDrive . This manifest is an XML file that provides information about the services that are relevant for running the drive. The Manifest editor can be used to configure these services, specify relationships between the services, add new service relationships and so forth. In this step you will see how it can be used to configure a service.

From the Start Menu , open the Microsoft Dss Manifest Editor . From the File menu, select Open and navigate to the Samples\Config directory and select iRobot.Drive.manifest . Click on the irobot service so that its properties display in the right pane.


ManifestEditorIRobot - This is what the Manifest Editor should currently look like.

We are now going to configure the IRobot service. Start by changing the name property to IRobotWallNotify. Next, change the name of the robot (we called ours MyRobot), make sure the SerialPort is appropriately set for your system, that the BaudRate is 57600, and that the IRobotModel is set to Create. Make sure WaitForConnect is not checked. If this box is checked, then whenever your program starts it must load the web interface and wait for the user to hit the Connect button. When this option is not checked, the robot will be connected automatically if all the settings are correct, and the web interface will only load if there is a problem.

To turn off sensor polling, set the PollingInterval to -1. We now need to specify which sensor updates we want to receive (or we won't receive any). Click on the plus sign beside CreateNotifications. This will add a drop-down menu. Since we are interested in the WallSignal, select AllCliffDetail from the drop-down menu. We are now finished configuring the service!


Configuration - Your configuration should look like this.

We want to save our new manifest setup, but we don't want to overwrite the existing iRobot drive manifest, so go to the File menu, select Save As, and save your new manifest as iRobot.Drive.WallNotify.manifest.xml in a new directory under samples\config. This will create a new directory containing all the files you need. In Windows, navigate to this directory. You will see a few files. The irobot.config.xml file contains the configuration information for the IRobot which you just set. The file iRobot.Drive.WallNotify.manifest.xml is the new manifest file you just created. You will now be able to select this manifest in VPL. Now go up one directory to samples\config. There is already an irobot.config.xml file in this directory. If you open it, you will notice it is the original configuration, with a PollingInterval of 201, etc.

Step 8: Try running your diagram with the sensor notifications

First, set the GenericDifferentialDrive to use the iRobot.Drive.WallNotify.manifest we just made.

When you edited the manifest, you will recall that you configured the iRobot service through the drive manifest. The drive manifest we are using in fact starts an iRobot service. Currently we have the IRobotCreateRoomba starting a separate iRobot service. To fix this, click on the IRobotCreateRoomba, and in the Properties pane select IRobotWallNotify in iRobot.Drive.WallNotify.manifest from the Manifest drop-down menu. Now the IRobotCreateRoomba is set to use the iRobot service started by the drive.

Run your program - let it go for a while. The first thing you will notice is that the robot starts up without you having to hit connect! Now observe what is different about the robot's behavior from when it was polling.

You will likely observe some problems with the robot's behavior. You may notice that the robot oscillates very quickly between the two behaviors. This happens because we are now getting sensor update notifications much more frequently than every 201 milliseconds (the default polling interval). In fact, it is likely the sensors will be sending notifications faster than your program can handle them. If you notice the Create run into the wall, it is probably because the sensor notifications have become backed up. Every time we receive a sensor notification we send a command to the robot's differential drive. If this process is slower than the rate at which new sensor notifications arrive, the notifications can become backed up, causing the Create to act on old information. This makes it very easy for the Create to run into the wall!

In the next step we will see how to ignore some of the sensor notifications so we do not get a back-log.

Step 9: Using a timer to ignore some sensor notifications

Our aim is to create a timer that fires periodically and increases a counter. Whenever there is a sensor update the counter is checked. If the counter is at some threshold we will process the sensor update and reset the counter. Otherwise we will ignore the sensor update. You can imagine variations on this outline; for instance, you may also want to do a fast evaluation of each sensor update in case it indicates something you would never want to ignore.
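The counter-and-timer outline above can be sketched as follows (a hedged illustration; the method names are made up, not part of any RDS API):

```python
class NotificationThrottle:
    """Drop sensor notifications until a timer-driven counter reaches a threshold."""
    def __init__(self, threshold=4):
        self.counter = 0
        self.threshold = threshold

    def on_timer_complete(self):
        # the timer fires periodically (e.g. every 50 ms) and bumps the counter
        self.counter += 1

    def on_sensor_update(self, process):
        # process the update only once enough timer ticks have elapsed
        if self.counter >= self.threshold:
            self.counter = 0
            process()
        # otherwise the notification is simply ignored
```

With a 50 ms timer and a threshold of 4, at most one notification is processed roughly every 200 ms, no matter how fast they arrive.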

First we need to create a counter variable and initialize its value to zero.


Counter - initialize the counter variable.

Secondly, we need to initialize the timer. From the Services pane drag in a Timer. We want to set the timer to fire after a 50 ms interval (you can also experiment with other intervals). To do this, add a Data box with the int 50. Connect it to the Timer using SetTimer, making sure to select value in Data Connections.


Timer - initialize the timer.


DataConnectionsForTimer - select value for the Interval

When the timer completes, we need to increment the Counter variable. To do this, copy-and-paste the timer and connect it to a Calculate , selecting TimerComplete . Inside the Calculate access the value of the Counter variable and add 1 to it, by typing state.Counter + 1 . Use the result of the Calculate to set the Counter as in the following figure.


IncrementCount - when the timer fires the counter is increased by 1

When the timer completes, we also need to set it to fire again. Use a Data block and pass its value to the Timer using SetTimer (to get a box representing the timer you can use copy and paste, or drag in a Timer, but be sure to select the existing Timer option from the dialog that pops up). Now connect this Data block to the TimerComplete event as you did before.


ResetTimer - when the timer fires the timer is reset

We now need to use the variable Counter to decide whether we should ignore the sensor update, or evaluate the WallSignal and send the appropriate command to the GenericDifferentialDrive. Add an If statement to your diagram. The condition we want to evaluate is state.Counter >= 4 (you can experiment with numbers other than 4). You want to evaluate this condition when the UpdateCliffDetails notification is received. Note that we use the condition >= instead of > in case it takes some time before the robot receives its first sensor notification. If the condition evaluates to true, one thing we need to do is reset the counter. Add logic to your diagram to do this. One branch of the control is now complete.


CounterCondition - when the counter is >= 4 one thing we need to do is reset the counter

If the counter condition evaluates to true, we also need to act on the sensor notification. We want the control flow to pass from this condition into the If statement for the WallSignal. If we connect the If statement for the WallSignal to the output of the If statement for the Counter, the Counter If statement will forward the UpdateCliffDetails notification to the second If. Thus, we will still be able to access the WallSignal as required. Make this change so that your diagram looks like the one that follows.


ManagingSensorNotificationsWithTimers - your diagram should now look like this.

Step 10: Run your diagram and experiment

Run your diagram and observe the robot. The robot should cope with the sensor notifications better than it did before. However, you will probably need to experiment with the timer periods and the condition on the Counter.

Implementing Competition

We are now going to start working on a diagram to drive the robot to a goal point while avoiding obstacles. In order to implement this behavior we will need to start the robot from a given position and orientation. We will need to track the robot's position as it drives towards the goal and moves away from obstacles. We will consider an orientation/rotation of zero degrees to be the orientation where the front of the Create is pointing along the x-axis. The coordinate system will be measured in meters.


RobotAxis - the robot is rotated zero degrees when it is pointing down the x-axis.

Since the competition diagram is going to be large it is worthwhile planning the overall control flow first. Each time a Timer fires we will check if the robot has reached the goal. If it has we will stop the robot's drive. If not, we will compare the magnitude of the go-to-goal behavior's conceptual force vector with that of the avoid-obstacle behavior. The winning behavior will decide how to drive the robot - either towards the goal, or away from the detected obstacle. During this process we need to keep track of the robot's current position. Note that it is possible to work out the new position of the robot, from the old position and the current rotation using basic trigonometry.
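The per-tick control flow just described can be sketched as follows (illustrative names only; in the real diagram the avoid force would come from the wall-sensor mapping and the drive commands go to the GenericDifferentialDrive):

```python
import math

def competition_tick(x, y, goal_x, goal_y, avoid_force, tolerance=0.1):
    """Decide what the robot should do on one tick of the timer."""
    # first check whether the robot has reached the goal
    if abs(goal_x - x) < tolerance and abs(goal_y - y) < tolerance:
        return "stop"                      # goal reached: stop the drive
    goal_force = (goal_x - x, goal_y - y)  # points from the robot to the goal
    # competition: the behavior with the larger force magnitude wins
    if math.hypot(*avoid_force) > math.hypot(*goal_force):
        return "avoid-obstacle"
    return "go-to-goal"
```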

Step 1: Start a new VPL diagram and make an Activity

The competition behavior logically breaks down into a number of parts. We need code to keep track of our current position and decide if we have reached the goal, code to move the robot towards the goal, and code to move the robot away from obstacles. We can break our code up in VPL by making our own Activities.

Start a new VPL diagram so we have a clean slate to work with. To make your own Activity, drag an Activity block into the empty main diagram from the Basic Activities pane. In the Properties pane on the right-hand side, change the name of your activity in both fields to CompeteHelper.

By default VPL creates an Activity with a Start page and one Action. Any code you put on the start page will be run the first time some action in the Activity is called. This is a good place to initialize the variables the actions in your Activity will need. When you connect to an Activity in VPL you select what action you want called. You have seen this before! When you connected to the GenericDifferentialDrive for instance you selected the action - e.g. SetDrivePower, and then you set the input. When you make your own activities you specify the actions and their inputs and outputs.

Double click on the CompeteHelper box to open the activity. By default VPL opens the activity to the Action page. You can select the Start page from the drop down menu at the top. To edit the actions and their inputs and outputs, click on the button beside the drop-down menu (see the following screenshot).


ActivityOptions - you can edit the actions and their inputs and outputs here.

Step 2: Initialize variables on the Start page

Go to the Start page of your CompeteHelper activity and initialize the variables shown in the following diagram. These are all the variables we will need for tracking the robot's current position. Tracking the robot's current position using information from the robot about how far its wheels or other actuators have moved is called dead-reckoning. Dead-reckoning is not, in general, one-hundred percent accurate. In applications where it is very important to know the exact location of the robot, dead-reckoning is generally combined with other methods of locating the robot, such as landmark-based localization.


StartPage - initialize the variables shown here.

Step 3: Add an action to update the robot's current position and evaluate if it has reached the goal

We are going to create a number of actions in CompeteHelper. The aim of our first action is to update the robot's current position and evaluate if it has reached the goal.

Click on the icon beside the drop-down menu to open the Actions and Notifications dialog again. Rename the action called Action to UpdatePosEvalDone. Next, add an input variable of type int. The input will be the distance returned by the Create's Pose sensor (this sensor reports how far the Create has travelled), so we will call the int PoseDistance. Finally, add a bool output variable Done, since we want to return whether or not the robot has reached the goal.


ActionsNotificationsUpdatePos - the dialog should now look like this.

Step 4: Create a separate activity to calculate the robot's current position

We can encapsulate the part of our action that will calculate the robot's current position in another activity. The input of this activity will be the previously stored (x,y) position for the robot, its current heading in degrees and its displacement in meters.

Add an Activity box to your action and call it CalcCurrentPosition. Double click on the activity to edit it. We need to add the following input variables of type double: XPrev, YPrev, CurrentHeading and DistanceMoved, and output variables of type double: XNew and YNew.


ActionsNotificationsCalcCurr - the dialog should now look like this.

To calculate the robot's current position we just need to use simple trigonometry. There is a slightly different case for each of the different quadrants. The following diagram shows an example of the calculation in the second quadrant, i.e., when the robot's current heading is greater than or equal to 90 degrees, but less than 180. The distance h in the diagram corresponds to the input DistanceMoved in our Activity. You should take a moment to work out how to compute XNew and YNew in each of the cases.


ExamplePositionCalculation - you can work out the calculations for the other quadrants on paper.

We will now help you get started writing the VPL code to do the calculations. You will need to use the input values multiple times, so it is useful to store them in variables. The inputs are value-label pairs and are accessed by connecting to the square on the left-hand side of the action. We want to assign each of the input values to a variable. To extract the value of one of the inputs, connect the square to a Calculate box and then type the label/name of the desired input. Do this for each input, assigning the value calculated to a variable, until your diagram looks like the one following.

Assign input values to variables - use calculate blocks so you can access the values.

Once all of these assignments have been made, you can use an If block to evaluate which quadrant the Heading is in so you can make your calculation. To do this add a join combining the data flow from each of the variable assignments and make this the input to your If block.


CalcCurrPosStart - your current diagram should now look like this.

Now, for each branch of the If statement, try to write VPL code to calculate XNew and YNew and set these variables. Then use a merge to combine the control from all the branches of the If statement. Once you have done that, connect the output of the merge to the result square of the action. You will need to edit the values passed as the result. You should pass state.XNew and state.YNew. To calculate sine and cosine you can use MathFunctions. Note that Sine and Cosine expect their input to be in radians, so you will also need to use the function to convert degrees to radians. Try to write your VPL code before looking at the next screenshot. It is the best way to learn!

The final diagram for CalcCurrentPosition is given below. If you need to see exactly what values are passed on each of the links, you can open the .mvpl file for this lab and examine the code. This diagram is somewhat large. If you choose, you could try to break part of each If condition into a separate activity.


CalculateCurrentPosition - one way to write this code in VPL.
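If you want to cross-check your per-quadrant arithmetic, note that with signed sine and cosine the four quadrant cases collapse into a single pair of formulas. A Python sketch (not part of the lab's VPL code):

```python
import math

def calc_current_position(x_prev, y_prev, current_heading, distance_moved):
    """Dead-reckoning update: heading in degrees, distances in meters."""
    theta = math.radians(current_heading)  # sine/cosine expect radians
    x_new = x_prev + distance_moved * math.cos(theta)
    y_new = y_prev + distance_moved * math.sin(theta)
    return x_new, y_new
```

For example, a heading of 135 degrees and a displacement of sqrt(2) from the origin yields approximately (-1, 1), matching the second-quadrant case in the diagram.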

Step 5: Add a separate activity to evaluate if the robot has reached the goal

Back in the UpdatePosEvalDone action of the CompeteHelper activity we will now make another helper activity. We want this activity to evaluate whether the robot has reached the goal (within some error bound). The input to this activity will be the current x and y position of the robot and the goal x and y position. The output will be a Boolean indicating whether or not the goal has been reached.

Add a new activity to your diagram and call it EvalAtGoal. Double-click on the activity box to edit it. You will need to add the inputs and the output, similar to the screenshot below.


InputOutput - variables for the action in EvalAtGoal.

Now it is time to write the VPL code for your activity. Be sure to attempt this on your own before looking at our screenshot below.


EvalAtGoal - VPL diagram.

The proceeding diagram works as follows:

  • the distance between the current position and the goal in the x-dimension and the y-dimension is calculated.
  • if the distance in either dimension is greater than 0.1 the merge will be reach and the line will carry the value false to the merge.
  • if the distance in both dimensions is less than 0.1 the join will be reached and the value true will be passed to the merge.
  • the value incoming to the merge is passed on by the merge, and the Result Done is set to this value.
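
In plain code, the same logic reduces to the following sketch (the names are ours; 0.1 is the error bound used in the diagram):

```python
def eval_at_goal(x_curr, y_curr, x_goal, y_goal, bound=0.1):
    # True only when the distance in both the x- and y-dimensions
    # is within the error bound of the goal.
    return abs(x_goal - x_curr) < bound and abs(y_goal - y_curr) < bound
```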

When writing an activity action it is important to ensure that all logical paths return. Check your action carefully, to make sure this is the case.

Step 6: Finish the CompeteHelper action - UpdatePositionEvalDone

Now that you have written the two helper activities you are ready to finish the UpdatePositionEvalDone action in the CompeteHelper activity.

Recall that the input to this activity was the Distance value from the Create's Pose sensor. The iRobot service provided by RDS uses the information returned by the Create to track how far the Create has travelled in millimeters. If you read the Create manual, you may think that every time you use the GetSensors action this value will be reset. However, the iRobot service does not reset the value every time you make a call, instead it allows the value to accumulate. Thus, the input value PoseDistance actually represents how far the robot has travelled since the program started.

The first thing we want to do is convert PoseDistance to meters, since we are using meters as coordinates. We then need to compute how far the robot has driven since we last recorded its position. We can do this using the variable we initialized earlier - OldDistanceTravelled.


DisplacementCalculation - convert to meters and calculate the displacement

We do the conversion to meters simply by dividing by 1000. When the control flow reaches the join we have all the information required to compute the current position of the robot.
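
The conversion and displacement step amounts to the following sketch (hypothetical names; OldDistanceTravelled is the state variable initialized earlier):

```python
def displacement_since_last_update(pose_distance_mm, old_distance_travelled_m):
    # PoseDistance accumulates in millimeters; our coordinates use meters.
    new_distance_m = pose_distance_mm / 1000.0
    # How far the robot has driven since the position was last recorded.
    displacement = new_distance_m - old_distance_travelled_m
    return displacement, new_distance_m
```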

Connect the output of the join to the input of CalcCurrentPosition and select Edit values directly in the properties of the Data Connection. You want to pass in what was previously recorded in the state as the current x and y position, the heading of the robot which is also recorded in the state (every time we change the robot's heading we will record its current heading), and the displacement that you just computed. The Data Connection properties should look like the screenshot below.


DataConnectionCalcCurr - input for CalcCurrentPosition

Once you have set the input, it is time to deal with the output. You want to extract the value with the label XNew and the value with the label YNew, which you can do using Calculate blocks, and assign these values to the state variables XCurr and YCurr. These two control flows should then be joined.


SetXCurrAndYCurr - your diagram should now look like this.

Next set OldDistanceTravelled to NewDistanceTravelled and call EvalAtGoal. The input to EvalAtGoal is just the state variables state.XCurr, state.YCurr, state.XGoal, state.YGoal. The output Done of EvalAtGoal should be set to the output, Done, of the action.


FinalUpdatePosEvalGoal - your diagram should now look like this.

The action UpdatePositionEvalDone in CompeteHelper is now complete!

Step 7: Avoid obstacle action

We will now add a new action to our CompeteHelper activity - an avoid obstacle action. This action should be initiated when the avoid-obstacle behavior "wins" against the go-to-goal behavior. The aim of this action is to cause the robot to drive away from an obstacle.

To add a new action to the CompeteHelper activity open the Actions and Notifications dialog and on the Actions side of the Actions pane, click the add button. Call your action AvoidObstacle. This action requires no inputs and no outputs. It simply needs to calculate the direction to rotate the robot, so that it is pointing away from the obstacle, and start the robot driving in that direction.

The avoid-obstacle behavior should move the robot in the opposite direction of any obstacles. To calculate what direction this should be we need to consider the location of WallSignal on the robot and the robot's current rotation. Inspect the robot and try to estimate the number of degrees from the center line to the wall sensor. It looks to be about 50 degrees - but if you can measure it accurately, that is great! We will work with our estimate of 50 degrees.


AvoidObstacle - illustration of the calculation for when the robot's current heading is zero degrees.

When the robot's current heading is zero degrees, we just need to make the direction calculation shown in the diagram. If the robot is already rotated in some direction however, we need to add 130 degrees to the robot's current rotation and take the result modulo 360 degrees. This new rotation should be set to the variable CurrentRotation and the old value of CurrentRotation should be recorded in the variable OldRotation. Write the VPL code to make these calculations and set the variable.


AvoidObsCalc - to calculate the direction to rotate away from the obstacle and set the CurrentRotation.
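
As a sketch (hypothetical names; the 130 degrees comes from pointing 180 degrees away from the obstacle, minus the estimated 50-degree sensor offset):

```python
def avoid_obstacle_rotation(current_rotation_deg):
    # Record the old rotation, then rotate 130 degrees away from the
    # obstacle, wrapping the result modulo 360 degrees.
    old_rotation = current_rotation_deg
    new_rotation = (current_rotation_deg + 130) % 360
    return new_rotation, old_rotation
```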

Now that we have calculated the direction we want to drive in, we want to instruct the robot to drive in that direction. We will make a new activity to do this, so that we can re-use the code when we write our go-to-goal code.

Step 8: Make an activity to drive on a specified direction

Add a new Activity box to the diagram and call it DriveOnDirection. The activity only requires one action, and we need to give it as input both the new desired rotation of the robot and the current rotation of the robot. It does not require any outputs. Add the two inputs shown to your activity's action.


ActionsNotificationsDriveOn - add these two inputs.

The action is very simple. It needs to call RotateDegrees on the GenericDifferentialDrive passing in DesiredHeading-OldHeading as the Degrees and a low number such as 0.1 as the Power. It can then simply call SetDrivePower in the GenericDifferentialDrive and set both wheels to the same Power value. Again, it is good to use a low number such as 0.1 since we are trying to move carefully and detect obstacles.
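
The order and arguments of the two calls can be mimicked in Python. DriveStub here is a hypothetical stand-in for the GenericDifferentialDrive service, purely for illustration:

```python
class DriveStub:
    """Hypothetical stand-in for the GenericDifferentialDrive service."""
    def __init__(self):
        self.calls = []

    def rotate_degrees(self, degrees, power):
        self.calls.append(("RotateDegrees", degrees, power))

    def set_drive_power(self, left, right):
        self.calls.append(("SetDrivePower", left, right))

def drive_on_direction(drive, desired_heading, old_heading, power=0.1):
    # Turn through the difference between the headings, then drive
    # both wheels forward at the same low power.
    drive.rotate_degrees(desired_heading - old_heading, power)
    drive.set_drive_power(power, power)
```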


DriveOnDirection - your current diagram should look like this.

Step 9: Finish AvoidObstacle action

We can now finish coding the AvoidObstacle action by connecting the existing control flow to our new DriveOnDirection block and connecting its output to the result square.

The Data Connections properties for the input to DriveOnDirection must specify state.CurrentRotation as the DesiredHeading and state.OldRotation as the OldHeading. Your final diagram should look similar to the one following.


AvoidObsFinal - your current diagram should look like this.

Now when the avoid-obstacle behavior wins we have the code to drive the robot.

Step 10: Start GoToGoal action

The GoToGoal action will be called on to drive the robot when the go-to-goal behavior wins. It is the final action we need in the CompeteHelper activity. GoToGoal requires no inputs or outputs. Add this action in VPL as you did the previous two.

The first thing we need to do is write an activity that will calculate the angle the robot needs to drive on to reach the goal from its current position. We will call this activity CalculateGoalRotation. Add an Activity block to the GoToGoal action, and give it this name.

Step 11: Calculate the rotation to the goal

This activity's action has four inputs - the current X and Y positions and the goal X and Y positions - and one output - the rotation. Create these inputs and output as you have done previously.


ActionsNotificationsCalcGoalRotate - create these inputs and output.

We can use basic trigonometry to calculate the desired heading of the robot (measured as a rotation from the x-axis). The following diagram shows an example of the calculation in the second quadrant.


SecondQuadGoalRotate - desired heading to reach the goal.

Write the VPL code to compute theta' from the diagram using the ArcTangent function in MathFunctions, and then convert the result to degrees. We can perform the same calculation for theta' in each quadrant, provided we take the absolute value of the difference in y-coordinates and the difference in x-coordinates.


CalculateThetaDash - to calculate theta'.

We then need an If block to determine which quadrant we are working in, so we can calculate theta. We can also use the If block to handle the boundary cases. Add an If to your diagram and a condition for each case. Connect a Calculate block to each case to calculate the correct value of theta. Finally, merge the outputs of all the calculate blocks and use the value on the output link of the merge to set the Rotation variable in the result.
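
For comparison, the whole calculation can be sketched in a few lines of Python (hypothetical names), because math.atan2 already handles the quadrant and boundary cases that the VPL diagram enumerates with an If block:

```python
import math

def calculate_goal_rotation(x_curr, y_curr, x_goal, y_goal):
    # Desired heading toward the goal, in degrees from the x-axis.
    # atan2 takes (dy, dx) and returns the angle in the correct quadrant.
    theta = math.degrees(math.atan2(y_goal - y_curr, x_goal - x_curr))
    # Normalize to [0, 360) to match the rotation convention used above.
    return theta % 360
```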

Our finished VPL diagram is shown below.


CalculateGoalRotation - for the action in CalculateGoalRotation.

Step 12: Finish GoToGoal action

Now that we have made the CalculateGoalRotation activity, coding the GoToGoal action becomes very simple. We simply need to record state.CurrentRotation in the state variable OldRotation, use CalculateGoalRotation to calculate the rotation to drive on, store this as our new CurrentRotation, and then use the DriveOnDirection activity.


GoToGoalDiagram - for the GoToGoal action in CompeteHelper.

Notice that we were able to keep the diagram for GoToGoal very clean and simple by using activities.

You have now written three actions for the CompeteHelper activity: UpdatePositionEvalDone, AvoidObstacle, and GoToGoal. That completes the CompeteHelper activity. In fact, you have now written enough helper code to go back to the main diagram and put it all together!

Step 13: Add code to check if the Create is ready

In the main diagram we will now add some code to check if the Create is ready to be driven. The Create has a number of modes, which you can read about in its documentation. We generally instruct the iRobot service to keep the Create in Full mode, where it can be best controlled. However, the Create will sometimes automatically switch out of this mode, for instance if it detects a cliff. When this happens, the iRobot service attempts to switch the mode back. When we configured the iRobot service in the manifest editor, we selected the mode to be Full. The robot is ready to be driven, and the sensors ready to receive queries, when the iRobot service has successfully switched the mode to this state. Add the following VPL code to your diagram to test if the Create is ready.


TestCreateReady - to test if the Create is ready.

Step 14: Plan the control flow

We now need to plan the control flow for the main part of our diagram. Here is one possible outline.

  • When a timer fires: update the robot's position using the sensor information and evaluate if at the goal.
  • If at the goal: stop driving
    • Else - determine which behavior gets to drive the robot:
      • if WallSignal >= threshold then we will say the magnitude of the avoid-obstacle force exceeds that of the go-to-goal force. Hence, call AvoidObstacle to drive the robot in the correct direction, then set the timer to fire again.
      • else, the go-to-goal behavior wins.
        • if the robot is not currently going towards the goal, then call GoToGoal to drive the robot in the correct direction, then set the timer to fire again.
        • else, the robot is already going towards the goal, so just set the timer to fire again.
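
The outline above can be sketched as a single arbitration function (the names and threshold value are ours; in the real diagram every branch also resets the timer):

```python
def select_behavior(at_goal, wall_signal, curr_behavior_goal, threshold):
    """Decide what the main diagram should do on this timer tick."""
    if at_goal:
        return "stop"            # goal reached: stop driving
    if wall_signal >= threshold:
        return "avoid_obstacle"  # avoid-obstacle force wins
    if not curr_behavior_goal:
        return "go_to_goal"      # switch to the go-to-goal behavior
    return "continue"            # already heading to the goal
```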

Step 15: Make the main diagram

The first thing we will do is add some more initialization. In our outline of the control flow we saw that it would be useful to record what behavior the robot was currently executing. Add a bool variable CurrBehaviorGoal to your diagram and initialize it to false (since the robot starts out executing no behavior).

We also need to set the timer to fire initially. We should do this after RobotReady is set to true. Add this code to your diagram also.

You are now ready to start working on the main control flow.

When the Timer fires we want to call GetSensors on the iRobotCreateRoomba. Make a copy of the Timer you just set and connect its notifications port (round connector) to a copy of the iRobotCreateRoomba block. The first sensor reading we want is the distance the Create has moved. We get this from AllPose, so the input to GetSensors should be the enumeration type CreateSensorPacket.AllPose. This tells GetSensors which sensor reading we want. We also want the WallSignal, so make another copy of the iRobotCreateRoomba block and connect it to the notifications port of the timer. This time you want the input to be CreateSensorPacket.AllCliffDetail. Now when you connect, say, a Calculate block to the output of the iRobotCreateRoomba, the information you want will be available.

We can connect the result of the AllPoseGetSensors request to the CompeteHelper action UpdatePositionEvalDone. To do this, connect the appropriate iRobotCreateRoomba to a CompeteHelper block and select the action. In Connections, select the Get Sensors option that returns the Pose information, and in Data Connections pass in value.Distance. The CompeteHelper activity will then take care of updating the current position and return whether or not the goal has been reached.

Now connect the output of the CompeteHelper block to an If statement. If Done, call SetDrivePower in the GenericDifferentialDrive. Finally, from the Services panel add a SimpleDialog and connect it to the success output of the GenericDifferentialDrive. In Data Connections, edit the value directly so that you can display the string "Done!" in the dialog.


EvalDoneControl - to stop the robot if the goal is reached.

Next, add a join to combine the control flow from the else condition of the If statement you just made, and the result of the GetSensors request for the WallSignal.


NotDoneJoin - this part of your diagram should now look like this.

From the output of the join connect to a new If statement that evaluates whether to call AvoidObstacle in CompeteHelper, GoToGoal in CompeteHelper, or to continue as is. Update the variable CurrBehaviorGoal appropriately and use a merge to combine the control flow from each branch.


CompeteDecision - this part of your diagram should now look like this.

The final step is to reset the timer after the merge. Once you have done this your program is complete!


ControlFlowMainDiagram - this part of your diagram should now look like this.

Step 16: Set the manifests and test your program

You can now set the manifest for the GenericDifferentialDrive and the iRobotCreateRoomba. We will not use the notifications manifest we made earlier; instead we will use iRobot.Drive.Manifest.xml. When notifications are turned on, the Create returns much smaller Pose values than it should. It helps to turn on polling, but your dead-reckoning will be much more accurate if you turn off all streaming notifications. In the web interface that pops up to let you connect to the robot, you can decrease the polling interval to 50 ms. Note that this is the minimum value you should use if you are connecting to the robot wirelessly.

Once you have set the manifests, you can test out your program! You should experiment with different thresholds for the WallSignal and possibly different values for the Timer.

Since the Create's wall sensor is only on the right-hand side, the Create can still bump into things! You may want to add some code that also responds to the bumper sensor.

Debugging your program

There are plenty of mistakes you can make in a diagram this large. If your code does not work as desired the first time you run it, there are some obvious things you can check.

One of the first things to check is that you are not accidentally passing the wrong value, or in particular a default value, on any of the links. It can be easy to forget to set the value in Data Connections. To check for this, just click on all your links, particularly the links to the output squares of activities.

Something else to check is that your code is not getting stuck anywhere. The best way to do this is to use the debugger feature. The debugger also allows you to see the values of the variables, which is useful for finding other errors as well. To run the debugger, click on the modified arrow beside the run arrow. This will load a web browser containing a graphical representation of your code running. The debugger is intuitive; if you experiment with it, you will quickly learn how to use it.

Note that if your code gets stuck, it may be because the next control flow it wants to execute is exclusive with the current control flow. We will discuss how concurrency and exclusivity are handled in VPL in more detail in lab 7. Your code may also get stuck if some path in an activity fails to return.


VPL Hands On Labs: VPL Lab 7 - Task Learning via Speech Recognition

VPL User Guide: Getting Started



© 2012 Microsoft Corporation. All Rights Reserved.