SSC Pacific
 
Man-Portable Robotic Systems (MPRS) 


CURRENT WORK

Many advances have been made in autonomy for unmanned ground vehicles (UGVs), but most have applied only to large UGVs, because the sensors required for autonomy are typically large, heavy, and power-hungry. Because of the size, weight, and power restrictions imposed by a man-portable UGV, advances in autonomy for this class of vehicle have been very limited.

Convoy of MPRS Vehicles

The SPAWAR Systems Center San Diego (SSC San Diego) has previously developed a waypoint navigation capability for small robots. That system required an operator to monitor a live video feed from the vehicle to ensure it did not strike any obstacles in its path. Now SSC San Diego, in cooperation with the NASA Jet Propulsion Laboratory (JPL), has developed a miniature obstacle detection sensor suitable for small robots. SSC San Diego has also developed the obstacle-avoidance algorithms needed to navigate autonomously around obstacles.

In 1999, under the Joint Robotics Program (JRP) Man-Portable Robotic Systems (MPRS) project, SSC San Diego developed a small UGV intended for use by Army engineers for tunnel, sewer, cave, and urban structure reconnaissance. As originally developed, the UGV (called the URBOT, for Urban Robot, Fig. 1) was strictly tele-operated from a wearable operator control unit (OCU) (Fig. 2). This allowed the soldier to manually drive the vehicle into high-risk areas and receive video feedback to assess the situation before entering. The system was used in several experiments at Fort Leonard Wood, MO, Fort Drum, NY, and Fort Polk, LA [1].

MPRS URBOT
Figure 1. MPRS URBOT.

MPRS URBOT OCU
Figure 2. MPRS URBOT OCU.

In 2002, SSC San Diego was tasked with developing a GPS-waypoint-navigation capability for the URBOT. The requirement was for a small robot to semi-autonomously navigate undetected several miles into a combat area. The user was concerned about operator fatigue if the system were tele-operated the entire distance. The concept of operations was to pre-plan the mission using high-resolution imagery, and then to select a path that appeared to be obstacle-free. The operator would monitor the video feed from the vehicle and watch for obstructions, taking manual control of the vehicle only when necessary to drive around an obstacle. Under this task, SSC San Diego developed one of the first waypoint navigation systems for UGVs of this size [2], as well as the Multi-robot Operator Control Unit (MOCU), which continues to evolve and is now the core command and control station for the Spartan ACTD. Figure 3 shows a screen shot of an early version of MOCU. The planned route is in blue, the raw GPS data (with large spikes) is in red, and the actual URBOT path (determined by the Kalman filter) is in magenta.
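The Kalman filtering mentioned above, which suppresses the large spikes in the raw GPS track, can be illustrated with a minimal sketch. This is not the URBOT's actual implementation; it assumes a simple constant-position model with hypothetical process and measurement noise values, and filters each axis independently.

```python
def kalman_smooth_gps(fixes, q=0.5, r=25.0):
    """Smooth noisy 2-D GPS fixes with a constant-position Kalman filter.

    fixes: list of (x, y) positions in meters
    q: process noise variance (how far the robot may move per step) -- assumed
    r: measurement noise variance (GPS error, m^2) -- assumed
    """
    x, y = fixes[0]
    px = py = r                # initial estimate variance per axis
    path = [(x, y)]
    for zx, zy in fixes[1:]:
        # Predict: position assumed unchanged, uncertainty grows.
        px += q
        py += q
        # Update: blend the prediction with the new GPS fix.
        kx = px / (px + r)     # Kalman gain per axis
        ky = py / (py + r)
        x += kx * (zx - x)
        y += ky * (zy - y)
        px *= (1.0 - kx)
        py *= (1.0 - ky)
        path.append((x, y))
    return path
```

With these noise settings, a single 30-m GPS spike is attenuated to a few meters in the filtered path, which is the behavior visible in Figure 3.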

Screen Shot of MOCU
Figure 3. Screen shot of an early version of MOCU.

Autonomous obstacle avoidance (OA) was not attempted for this task, primarily because of the lack of adequate sensors that met the size, weight, power, and fidelity requirements of a rugged, man-portable system. A great deal of successful work has been done on OA for larger UGVs, but those systems rely on sensors that are large, power-hungry, and computationally intensive, and are therefore inappropriate for small robots.

In recent years it has become increasingly clear that small robots provide a needed capability to the warfighter. It is also clear that as small robots become more prevalent, the requirement for higher levels of autonomy will increase [3]. In Iraq, soldiers are using and depending on small robots daily, but only in controlled situations that pose a serious risk of injury or death. One reason for this is that all of the small robots currently fielded are strictly tele-operated. These tele-operated systems require the operator's full attention and prevent him from maintaining his own personal security. The primary example of this is Improvised Explosive Device (IED) disposal. In this application, the risk to a soldier manually disposing of the IED is huge. In most cases, IED disposal is also done in a relatively controlled environment where the robot operator is in an area secured by a force protection team.

In order for small robots to become more useful in more tactical scenarios, they must have autonomous navigation capabilities. This is also the case for applications that would make use of teams of small robots, such as land mine clearance, communications relaying, etc. One of the most basic building blocks for any autonomous behavior is robust obstacle avoidance.

The requirement for developing sensors and autonomous capabilities for small platforms is also influenced by the platforms themselves. The primary small robot acquisition programs currently underway within the DoD are the Future Combat Systems (FCS) Soldier Unmanned Ground Vehicle (SUGV) and the NAVEOD Man-Transportable Robotic System (MTRS). The FCS SUGV platform will be approximately 25% smaller than the iRobot Packbot. In order for that vehicle to maintain its mobility (including self-righting) and survivability, any sensor system must be integrated into a very small payload area between the tracks. This will require the sensors to be extremely small by today’s standards and virtually eliminates most of the sensors currently being used in academic and scientific research.

PREVIOUS WORK

A great deal of research on robotic autonomy and intelligence is being conducted throughout the academic and scientific community using small robot platforms for development and testing. The vast majority of these efforts use relatively inexpensive research platforms such as the ActivMedia Pioneer or iRobot ATRV-mini, which are not designed for outdoor military-type applications. In fact, much of the research conducted using small robots has focused primarily on indoor applications. Furthermore, the majority of this work has addressed algorithms and behaviors and has made use of sensors that provide the most accurate and complete data. As a result, most of these projects have developed highly sophisticated behaviors that require sensors and processors generally not suitable for integration on a small, field-ready robot that must be environmentally sealed. The following discussion addresses past and present research that provides the best opportunity for transition to a small, rugged, outdoor robot.

Much of the early relevant work in autonomy for small robots was done under the DARPA Tactical Mobile Robot (TMR) program from 1998 to 2002. The TMR program had several goals, one of which was autonomy in urban environments, where the objective was to demonstrate robust traversal of complex terrain with minimal human intervention (less than 1 command per 50 meters). Under the TMR program, NASA's Jet Propulsion Laboratory (JPL) demonstrated OA using stereo vision [4]. During development and testing, JPL found that noise in the infrared (IR) and sonar sensor data precluded their use as part of the OA system. This matches what SSC San Diego has found through various attempts at using small IR and sonar sensors for OA in outdoor environments.

The JPL stereo cameras used for the TMR program had a field of view (FOV) of 97x47 degrees. The stereo imagery was processed into 80x60-pixel disparity maps using a 233-MHz Pentium II PC/104 processor stack. An obstacle-detection algorithm [5] was directly applied to the disparity map, which resulted in three distinct classifications: no obstacle, traversable obstacle, or nontraversable obstacle. This data was projected onto a 2-D occupancy-grid map that was 2.5x2.5 m with 10-cm cells. The OA algorithm used by JPL was an adaptation of the Carnegie Mellon University (CMU) Morphin algorithm [6]. This algorithm evaluates a predetermined set of steering arcs that pass through the occupancy grid, penalizing arcs that are blocked by nontraversable obstacles or that pass through cells with traversable obstacles. The votes generated by this algorithm for each arc are then passed to the arbiter for combination with arc votes from other behaviors, such as waypoint following. This is the basis for work that SSC San Diego is conducting for the JRP MPRS project.
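The arc-evaluation scheme described above can be sketched as follows. This is an illustrative reconstruction, not JPL's or CMU's actual code: the grid layout (25x25 cells of 10 cm, robot at the bottom-center facing forward), the set of arc radii, and the scoring weights are all assumptions chosen for the sketch.

```python
import math

def vote_arcs(grid, cell_size=0.10,
              radii=(0.5, 1.0, 2.0, float("inf"), -2.0, -1.0, -0.5),
              arc_len=2.0, step=0.05):
    """Score a fixed set of steering arcs against an occupancy grid.

    grid[i][j]: 0 = free, 1 = traversable obstacle, 2 = nontraversable.
    Each arc (signed turning radius in meters; inf = straight) is traced
    arc_len meters forward. Traversable-obstacle cells accumulate a
    penalty; a nontraversable cell vetoes the arc (vote of -1).
    """
    rows, cols = len(grid), len(grid[0])
    votes = []
    for radius in radii:
        penalty, s, blocked = 0.0, 0.0, False
        while s < arc_len:
            if math.isinf(radius):            # straight ahead
                x, y = 0.0, s
            else:                             # circular arc, signed radius
                theta = s / abs(radius)
                x = math.copysign(abs(radius) * (1 - math.cos(theta)), radius)
                y = abs(radius) * math.sin(theta)
            j = int(x / cell_size) + cols // 2
            i = rows - 1 - int(y / cell_size)
            if 0 <= i < rows and 0 <= j < cols:
                if grid[i][j] == 2:
                    blocked = True
                    break
                penalty += grid[i][j]
            s += step
        votes.append(-1.0 if blocked else 1.0 / (1.0 + penalty))
    return votes
```

An arbiter would then combine these votes with arc votes from other behaviors (e.g., waypoint following), for instance by weighted sum, and steer along the best-scoring unblocked arc.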

Currently, iRobot Corporation is investigating autonomy on their Packbot man-portable robot. This work is in support of the Tank-Automotive Research, Development and Engineering Center (TARDEC) Wayfarer and Sentinel programs. The Wayfarer program is focused on autonomous urban reconnaissance using small UGVs. Objective capabilities include waypoint navigation, obstacle avoidance, autonomous street following, building mapping, video and data recording, and more. For sensors, iRobot is primarily using a SICK LDA 360-degree laser rangefinder, a Point Grey Bumblebee stereo vision system, an Indigo Omega FLIR, and the organic Packbot GPS receiver. This is a very ambitious program that has shown promising results, but it still does not address the problem of sensor size, weight, and power relative to the robot. These current sensors (especially the laser) will not fit on a SUGV-sized robot.

CONCLUSIONS

The results of this effort to date have yielded several conclusions. The first is that the miniature stereo vision system using the JPL Smart Cameras is suitable for robust integration into a small UGV. The system is very small and lightweight and requires relatively little power. It should be feasible to integrate these sensors into an FCS-SUGV-size vehicle without degrading the vehicle's mobility or survivability, or significantly limiting the available payload space. The initial sensor data collected at SSC San Diego is very promising. The sensors are able to function effectively outdoors in varying lighting conditions, and the output data appears to be sufficiently noise-free, with few false obstacles. The ability to accurately detect a wide range of obstacles of different sizes and textures is also promising.

The current drawbacks of the Smart Camera sensors include the relatively slow update rate and the low-grade optics. The slow update rate is a function of processing power. Currently, most of the stereo image processing is done on the DSP. It may be possible to make more use of the FPGA to share the processing load with the DSP. It is also likely that the software could be further optimized. The optics were chosen based on the size requirements for the Smart Camera. The resolution, at 640x480, is adequate for this application, but the poor quality of the lenses and the difficulty of aligning the lens housing with the imager during fabrication result in less-than-optimal performance.

Varying versions of the OA algorithm employed here have been used on many successful robotic vehicles including the NASA Mars Rovers, the CMU NavLab and others. It has proven to be a very flexible architecture that is easily adapted to different vehicles for different requirements. The algorithm used on the SSC San Diego URBOT is a fairly crude instantiation but also allows real-time performance on relatively low-power processors. The simulation is working well but the real-world effectiveness of this work will not be known until it can be tested on the vehicle.

Updated: 10/20/2011 1:40 AM EST