Microsoft Robotics Developer Studio (RDS) targets a wide audience in an attempt to accelerate robotics development and adoption. An important part of this effort is the Visual Simulation Environment (VSE). PC and console gaming clearly paved the way for affordable, widely usable robotics simulation: games rely on photo-realistic visualization combined with advanced physics simulation, all running under real-time constraints. This is a perfect starting point for our effort.
The VSE is designed to be used in a variety of advanced scenarios with high demands for fidelity, visualization, and scaling. At the same time, a novice user with little or no coding experience can use the simulation to develop interesting applications in a game-like environment. The integration of NVIDIA™ PhysX™ Technology lets the VSE leverage a mature, very strong physics simulation product that is constantly evolving toward features that will be invaluable to robotics. The rendering engine is based on the Microsoft XNA Framework.
The following are discussed in this document:
- Challenges Posed by Robotics Development
- Benefits of Simulation
- Simulation Drawbacks and Limitations
- Overview of Visual Simulation Environment
- Simulation Programming
- Simulation Screen Shots
For more details on how to use the Visual Simulation Environment please refer to the Simulation Tutorials Overview.
Robotics Hardware Can be Expensive and Hard to Find
Modular robotics platforms, like the LEGO® MINDSTORMS™ and fischertechnik®, have made robotics affordable to a wide consumer audience. These platforms are an excellent starting point for the educational and hobby markets. But if the programmer wants to scale up in terms of complexity of the robot or the number of individual robots, cost prohibits most from going further.
Difficulty of Hardware Troubleshooting
Troubleshooting hardware, even widespread consumer hardware like a DVD player or a TV, is difficult. Consumer electronics just happen to be extremely reliable, so most consumers don't have to worry about things going wrong. However, when putting together a robot, especially a custom robot based on a modular platform with off-the-shelf parts, significant skill, time, and effort are expended debugging the physical setup.
Difficulty of Concurrent Use
Developing an advanced robot, like the vehicles that competed in the Defense Advanced Research Projects Agency (DARPA) competitions, with a team of people is becoming a common occurrence. One of the challenges is that the robot being developed is often expensive and there is only one of it. Together, these two issues make it difficult to try things concurrently with other team members without risking damage to the robot. This forces developers to build components in isolation, making integration harder and introducing hard-to-find bugs.
Benefits of Simulation
Low Barrier to Entry
Simulation enables individuals using a personal computer to develop very interesting robots or robot swarms, with the primary limiting factors being time and imagination. At the same time, it constrains them in ways similar to physical robots, so they can focus their efforts on something that can be realized.
RDS approaches simulation in stages, allowing developers to deal with complexity at the right time. The programmer can debug the simulated robot starting with basic primitives, requiring only basic knowledge. Adding such a virtual robot to an environment, with some simple shapes to interact with, takes very little code. This makes debugging, even in the simulation, simpler.
Physical models for a robot, and the simulation services that use them, can be developed concurrently by many individuals. Just like many software development communities, they can create a platform that many can use and modify without worrying about breaking expensive, unique robots.
Simulation can be an extremely useful instructional aid. The programmer can choose what to focus on, build up complexity, and control the environment. The programmer can also introduce purely virtual components, concepts that cannot be easily realized physically but are still useful for learning.
Another interesting aspect of simulation is that it can be used while the robot is running, as a predictive tool or as a supervised learning module. For quite some time, developers have run simulations concurrently with an active robot, trying things out in a simulated world that is updated in real time with sensory data. The simulation can then tell them, probabilistically, whether something is a good idea; in effect, virtually looking ahead through the various possibilities.
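This "look ahead" idea can be sketched in a few lines. The sketch below is a hypothetical illustration, not RDS code: a toy one-dimensional robot, a made-up noisy dynamics function, and a rollout scorer that ranks candidate action plans by their expected final distance to a goal.

```python
import random

def simulate_step(state, action, noise=0.05):
    """Advance a toy 1-D robot state (position) by one step.

    Hypothetical dynamics: the action is a velocity command,
    corrupted by multiplicative noise to mimic real actuators.
    """
    x, = state
    return (x + action * (1.0 + random.uniform(-noise, noise)),)

def rollout_score(state, actions, goal, trials=100):
    """Estimate, over many noisy rollouts, the mean final
    distance to the goal after executing the action sequence."""
    total = 0.0
    for _ in range(trials):
        s = state
        for a in actions:
            s = simulate_step(s, a)
        total += abs(s[0] - goal)
    return total / trials

# Compare two candidate plans before committing the real robot.
plan_a = [1.0, 1.0, 1.0]   # drive straight at full speed
plan_b = [0.5, 0.5, 0.5]   # drive at half speed
best = min((plan_a, plan_b),
           key=lambda p: rollout_score((0.0,), p, goal=3.0))
```

A real predictive module would replace `simulate_step` with the physics engine stepping the simulated world from the latest sensor-derived state, but the structure, many noisy rollouts scored probabilistically, is the same.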
Simulation Drawbacks and Limitations
Essentially, simulation tries to turn a hardware problem into a software one. However, developing software and a physics model brings its own set of challenges and limitations. Usually this means there is a sweet spot: a range of applications where simulation is very appropriate, and a range of applications, or stages in development, where using the real robot is essential or easier. As the simulation environment improves, the range where simulation is appropriate expands. The increase in processing power, plus the concurrent and distributed nature of RDS, should help address some of the issues.
Lack of Noisy Data
People involved in the large robotics challenges will tell programmers that they must spend serious time with the real robot, no matter how good the simulation is. This is partially because there is a lot of work left in making simulation more usable and more realistic. But it is also because the real world is unpredictable and complex, with lots of noise being picked up by sensors.
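One common way to narrow this gap is to deliberately corrupt ideal simulated readings. The sketch below is a hypothetical noise model (the function name and parameters are made up for illustration): additive Gaussian noise on a range reading plus occasional dropouts, as a missed sonar or laser echo might produce.

```python
import random

def noisy_range_reading(true_distance, max_range=8.0,
                        stddev=0.02, dropout_prob=0.01):
    """Corrupt an ideal simulated range reading.

    Hypothetical noise model: Gaussian noise on the distance,
    plus rare dropouts where the sensor returns max range
    (e.g. a missed echo). Result is clamped to [0, max_range].
    """
    if random.random() < dropout_prob:
        return max_range
    reading = random.gauss(true_distance, stddev)
    return min(max(reading, 0.0), max_range)

# A clean ray-cast distance of 2.5 m becomes a realistic reading.
sample = noisy_range_reading(2.5)
```

Code tuned against such corrupted readings is less likely to break when moved to the real sensor, though no simple model captures everything a real environment produces.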
Incomplete and Inaccurate Models
A large number of effects in the real world are still unexplained or very hard to model. This means the programmer may not be able to model everything accurately, especially in real time. In certain domains, like wheeled vehicles, motion at low speeds is still a big challenge for simulation engines. Modeling sonar is another.
Lots of Time for Tuning
In the simulation environment, it's actually very easy to get a robot in the virtual world running around and interacting with other objects. However, it still requires significant effort to tune the simulated hardware, called entities, to behave like their real-world counterparts. With NVIDIA™ PhysX™ Technology, the programmer already has a very good starting point. However, more effort is required to develop automated tools for tuning simulation parameters.
Overview of Simulation Environment
The Simulation Environment is composed of the following components:
- The Simulation Engine Service - renders entities and advances simulation time for the physics engine. It tracks the entire simulation world state and provides the service/distributed front end to the simulation.
- The Managed Physics Engine Wrapper - abstracts the user from the low-level physics engine API and provides a more concise, managed interface to the physics simulation.
- The Native Physics Engine Library - provides physics simulation through NVIDIA™ PhysX™ Technology and supports hardware acceleration via the PhysX™ processor, available in PhysX™ Accelerator add-in cards for PCs.
- Entities - represent hardware and physical objects in the simulation world. A number of entities come predefined with the RDS and enable users to quickly assemble them and build rich simulated robot platforms in various virtual environments.
The programmer can choose to interact only with the managed physics engine API if no visualization is needed. However, it is strongly recommended to always use the simulation engine service and define custom entities that disable rendering. This greatly simplifies persistence of state, inspection, and debugging of simulation code.
The rendering engine uses the programmable pipeline of graphics accelerator cards conforming to the DirectX 9 pixel/vertex shader standards. The simulation tutorials show how the simulation environment makes it easy to supply individually chosen effects and have the engine manage loading, rendering, updates to effect state, and so on.
The Simulation Tutorials Overview shows the programmer how to simulate a wheeled robot with a couple of onboard sensor devices. Two software components are usually involved in simulating a physical component and its service:
- An Entity - is the software component that interfaces with the physics engine and the rendering engine. It exposes the appropriate high level interfaces to emulate hardware and hide the specific use of physics APIs.
- A Service - that uses the same types and operations as the service it is simulating and provides the distributed front end to the entity, just like robotics services provide the front end to robot hardware.
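The entity/service split can be illustrated with a minimal sketch. RDS services are written in C# against the DSS runtime; the Python below is only a structural analogy with hypothetical names (`SimulatedBumperEntity`, `BumperService`), showing how the entity absorbs physics-engine callbacks while the service exposes the same operations a real-hardware service would.

```python
class SimulatedBumperEntity:
    """Hypothetical entity: receives low-level physics engine
    callbacks and exposes a hardware-like high-level interface."""
    def __init__(self):
        self._pressed = False

    def on_physics_contact(self, contact_force):
        # Called by the (stub) physics engine when the bumper
        # shape collides with another shape in the world.
        self._pressed = contact_force > 0.0

    @property
    def pressed(self):
        return self._pressed


class BumperService:
    """Hypothetical service front end: offers the same operations
    as the real bumper service, but backed by the entity."""
    def __init__(self, entity):
        self._entity = entity

    def get(self):
        # Clients cannot tell whether this state comes from real
        # hardware or from the simulation.
        return {"pressed": self._entity.pressed}


entity = SimulatedBumperEntity()
service = BumperService(entity)
entity.on_physics_contact(3.2)   # physics engine reports a collision
state = service.get()
```

Because the service contract is identical to the real one, client code written against the simulated bumper runs unchanged against the physical robot.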
For a walk-through of interacting with the simulation environment, please refer to the simulation tutorials. Samples of simulation services that emulate sensors, actuators and higher level services can be found in the Samples folder of the RDS installation directory.
Five sample environments are provided with VSE:
- Apartment Model
- Factory Model
- Modern House Model
- Outdoor Model
- Urban Model
These models illustrate the level of sophistication that is possible using the Simulator. They are shown below.
Simulation Screen Shots
Modular robot base with differential drive, laser range finder and bumper array. The second image is of the physics primitive view, which shows how the table and robot are approximated by solid shapes.
Close-up of a multi shape environment entity and its physics model.
Simple Dashboard monitoring simulated laser in physics view.
Complex file based mesh entity, with run time generated simplified convex mesh.
Friends - Three MobileRobots Pioneer 3DX robots with lasers, plus the LEGO® MINDSTORMS™ NXT. There is something wrong here... see if you can spot it. The mesh for the LEGO robot has 172,000 triangles...
Uneven ground surface. Example of using a height field entity for the ground, with random height samples every one meter. A different material can be supplied per sample, allowing for advanced outdoor simulations. The red dots in the first image are the visualization of the laser impact points from the laser range finder entities.
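The height field described above, one height sample and one material per one-meter grid cell, can be sketched as plain data. This is a hypothetical illustration (the function, field names, and material list are invented), not the actual RDS height field entity API.

```python
import random

def make_height_field(rows, cols, spacing=1.0, max_height=0.5,
                      materials=("grass", "gravel", "mud")):
    """Build a hypothetical height-field description: one height
    sample and one material per grid cell, spaced `spacing`
    meters apart, mirroring the uneven-ground example above."""
    samples = []
    for r in range(rows):
        for c in range(cols):
            samples.append({
                "x": c * spacing,          # meters
                "z": r * spacing,          # meters
                "height": random.uniform(0.0, max_height),
                "material": random.choice(materials),
            })
    return samples

# A 16 m x 16 m patch of uneven ground, sampled every meter.
field = make_height_field(rows=16, cols=16)
```

Per-sample materials are what let, say, a wheeled robot experience different friction on gravel versus mud within the same terrain entity.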
© 2012 Microsoft Corporation. All Rights Reserved.