STEREO VISION SENSOR
Over a period of two years, SSC San Diego has worked with JPL to develop a miniaturized stereo vision system that could be used for obstacle avoidance on a small robot. The effort leveraged JPL's experience with stereo vision systems from the DARPA TMR program and the Mars Rover programs, as well as resources from the DARPA Micro Air Vehicle program. The goal of this collaboration was to develop a small, low-power obstacle-detection sensor for small robots.
The result of this collaboration and combined funding was a small camera board called the JPL SmartCam (Figure 4). This board includes both the processing and imaging elements, so no external hardware (imagers or processors) is required. The board measures 2 inches square and holds a Kodak 640x480 imager, a Xilinx Virtex-II FPGA, a Motorola StarCore DSP, 32MB of SDRAM, and 4MB of flash memory. Inter-board communications (for stereo vision processing) are currently done via a high-speed serial bus between the DSP cores. There are also options for communicating directly between the FPGAs, but these are not being used at this time. The FPGA interfaces directly to the Kodak imager to collect imagery and performs some of the image preprocessing functions. The image data is then transferred over the memory bus to the DSP, where the final stereo vision processing is performed. The FPGA is also used to output an NTSC signal that displays raw image data as well as various types of processed data.
Figure 4. JPL Smart Camera, front and back.
The DSP runs a scaled-down version of the JPL stereo vision software, which generates 160x120-pixel disparity maps. Figure 5 shows an example image and the corresponding disparity map, shown in grayscale, where the brighter parts of the image are closer to the camera. From the disparity map, an obstacle-detection algorithm generates a two-dimensional, occupancy-grid-style obstacle map. This obstacle map is output over a TTL-level serial port for use by the obstacle-avoidance system. The obstacle map is a 7x7m grid with 41 cells along each side, resulting in 1681 cells that are approximately 17cm square. With the current software/firmware implementation, the obstacle map is output at a rate slightly under 2Hz.
Figure 5. Left image is rectified image from left stereo camera; right image is the corresponding disparity map.
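To make the map geometry concrete, the sketch below shows the standard stereo relation from disparity to depth and how a ground point is binned into the 41x41-cell, 7x7m grid. This is an illustrative Python sketch only; the focal length, baseline, and coordinate conventions are assumptions, not the SmartCam's calibrated values.

```python
# Illustrative parameters (assumed; the SmartCam's calibration values differ).
FOCAL_PX = 300.0    # focal length in pixels for the 160x120 disparity map
BASELINE_M = 0.08   # stereo baseline in meters

GRID_CELLS = 41                     # 41 x 41 cells
GRID_SIZE_M = 7.0                   # 7m x 7m map
CELL_M = GRID_SIZE_M / GRID_CELLS   # ~0.17m per cell

def disparity_to_depth(disparity_px):
    """Standard stereo relation: depth = f * B / d (larger disparity = closer,
    which is why brighter regions of the disparity map are nearer)."""
    return FOCAL_PX * BASELINE_M / disparity_px

def world_to_cell(x_m, z_m):
    """Bin a ground-plane point (x lateral, z forward; camera at the near edge,
    centered laterally) into a grid cell, or None if outside the 7x7m map."""
    col = int((x_m + GRID_SIZE_M / 2.0) / CELL_M)
    row = int(z_m / CELL_M)
    if 0 <= row < GRID_CELLS and 0 <= col < GRID_CELLS:
        return row, col
    return None
```

A disparity of 24 pixels under these assumed parameters corresponds to a depth of 1 m, and points more than 7 m ahead fall off the map.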
The obstacle-detection algorithm effectively looks at the slope of the terrain to determine where obstacles lie. Consider Figure 6, which shows an example terrain profile with points X and Xa and the corresponding image pixels P and Pa. The algorithm scans through each pixel P in the image and the associated three-dimensional point X in space. It then calculates a ray that passes through the focal point F and a point Y located above X at the minimum obstacle height. The corresponding pixel for that point (Pa) is found and used to look up its associated 3D point Xa. The pixel P is then considered to be an obstacle if the distance from the camera to Xa is shorter than the distance to X, or if the slope from X to Xa is greater than a predetermined threshold. The cell in the 2D obstacle map that corresponds to X is marked as an obstacle if the threshold value of pixels-per-obstacle is met or exceeded for that cell. Requiring obstacles to exceed the pixels-per-obstacle threshold helps filter out noisy data.
Figure 6. Depiction of obstacle detection.
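The per-pixel test described above can be sketched as follows. This is a hedged illustration of the distance/slope logic and the pixels-per-obstacle filter, not JPL's implementation; the threshold values, coordinate frame (y up, z forward), and function names are assumptions.

```python
import math

MIN_OBSTACLE_H = 0.20   # assumed minimum obstacle height in meters
SLOPE_THRESH = 1.0      # assumed slope threshold (rise over run)

def is_obstacle_pixel(X, Xa, camera_pos=(0.0, 0.0, 0.0)):
    """Pixel P (3D point X) is an obstacle if the point Xa, seen along the ray
    through a point MIN_OBSTACLE_H above X, is closer to the camera than X,
    or if the slope from X to Xa exceeds the threshold."""
    if math.dist(camera_pos, Xa) < math.dist(camera_pos, X):
        return True
    run = math.hypot(Xa[0] - X[0], Xa[2] - X[2])   # horizontal distance
    rise = abs(Xa[1] - X[1])                        # height change
    return run > 0 and rise / run > SLOPE_THRESH

def mark_obstacle_cells(obstacle_pixel_counts, pixels_per_obstacle=3):
    """Mark a cell only if enough of its pixels tested as obstacles, which
    filters noisy data (the pixels_per_obstacle value is an assumption)."""
    return {cell for cell, n in obstacle_pixel_counts.items()
            if n >= pixels_per_obstacle}
```

For example, a point Xa that lies nearer the camera than X (an overhanging face) triggers the distance test, while a gently rising terrain point does not.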
SSC San Diego has conducted limited testing with the SmartCam stereo vision sensors prior to installation in the robot, with the sensors at approximately the same height above the ground as they will be on the robot (22cm). The tests show that in relatively flat, open terrain the cameras produce good results, successfully identifying obstacles ranging from large curbs to tumbleweeds to steep slopes. Figure 7 shows an example image and the corresponding obstacle map.
Figure 7. Image from stereo camera and corresponding obstacle map.
More thorough tests are planned for the SmartCam stereo vision system when installation on the URBOT is complete. To date, the mechanical integration is complete, and calibration to compensate for the camera alignment in the new housing is in progress.
CHEMICAL AND RADIATION SENSORS
Detection and identification of chemical agents or radiation with sufficient rapidity or from an adequate distance is critical for personnel to successfully utilize protective gear or take evasive action. Robots equipped with chemical and radiation detectors can provide that critical early warning.
SPAWAR Systems Center San Diego developed a prototype Nuclear and Chemical (NC) Sensor Module using portable off-the-shelf chemical, gas, and radiation detection sensors and integrated it on the URBOT.
The NC Sensor Module includes three sensors: the BAE Systems Joint Chemical Agent Detector (JCAD) chemical warfare agent sensor, the RAE Systems MultiRae hazardous gas detector, and the Canberra AN/UDR-13 Radiac radiation sensor.
The JCAD chemical detector simultaneously senses blood, blister, and nerve agents at concentration levels that allow the user to successfully employ protective procedures. The lightweight unit is designed to be handheld, worn by the user, or mounted on a vehicle, building, or site of interest. The JCAD utilizes a Surface Acoustic Wave (SAW) sensor for chemical vapor detection and classification.
The MultiRAE Plus can monitor gas levels both in the immediate area of the unit and up to 100 ft. away using a built-in pump. It houses up to five sensors: a photo-ionization detector (PID) for detecting volatile organic compounds (VOCs), with a 10.6 eV (electronvolt) lamp standard; a protected catalytic bead for combustible gases; an electrochemical sensor for detecting oxygen (O2); and two interchangeable electrochemical sensors for detecting toxic gases (CO, H2S, SO2, NO, NO2, Cl2, HCN, NH3, PH3).
The AN/UDR-13 Radioactivity Detection, Indication, and Computation (Radiac) Set is a lightweight, handheld or pocket-carried tactical radiation meter. It can also be used in a vehicle or helicopter and, if necessary, the detection probes can be mounted on the vehicle exterior. The unit detects and measures total dose and dose rates for both initial radiation, also referred to as prompt radiation (emitted at the time of a nuclear detonation), and residual radiation (emitted after detonation, predominantly from the decay of radioisotopes). It reports neutron and gamma dose from both initial and residual radiation.
An enclosure was prototyped to house the three sensors and an IP Engine single-board computer. The IP Engine provides the communication interface between the individual sensors and the robot via RS-232 or UDP/Ethernet, automatically detecting which link is present. All sensors are self-powered; however, the module needs 12 VDC (volts direct current) for the processor. The sensor modules are plug-and-play: a unit can be easily swapped out, and the system automatically detects which sensors are present. All sensor functions and events are supported by the Operator Control Unit (OCU); in addition, sensor readouts are visible through windows in the enclosure. The module is water resistant, measures 13.15 inches in length, 13.5 inches in width, and 3.2 inches in height, and weighs 9.75 pounds, including all three sensors.
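As a rough illustration of the kind of link auto-detection described above, the sketch below probes for a UDP/Ethernet peer and falls back to serial if none answers. The address, port, probe message, and overall approach are hypothetical assumptions; the actual IP Engine firmware may work quite differently.

```python
import socket

def detect_interface(udp_addr=("192.168.0.10", 5555), timeout_s=1.0):
    """Hypothetical sketch: prefer the Ethernet/UDP link if a peer responds
    to a probe within the timeout, otherwise fall back to RS-232."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout_s)
    try:
        sock.sendto(b"PING", udp_addr)      # probe message is an assumption
        sock.recvfrom(64)                   # any reply means Ethernet is up
        return "udp"
    except (socket.timeout, OSError):
        return "rs232"                      # no reply: use the serial link
    finally:
        sock.close()
```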
In February 2003, the Chemical Biological Radiological Nuclear (CBRN) Sensor Module with Robotic Platform Limited Objective Experiment (LOE) was carried out inside Battery Woodward, a WWII-era coastal defense bunker complex at SSC San Diego. The LOE was conducted by the Chemical Directorate of Combat Developments (DCD) in conjunction with the Robotic Systems Joint Project Office (RS JPO) and the Engineer DCD. For the experiment, the URBOT was equipped with the NC Sensor Module and was successfully tested during the LOE.
The objective of the LOE was to investigate the integration, application, and capability of a CBRN sensor payload on a robotic platform. The mission for the Unmanned Ground Vehicle (UGV), equipped with a CBRN Sensor Module, was to serve as a force protection/survivability tool for dismounted forces by providing soldiers with standoff detection in contaminated areas. The LOE addressed four issues and determined that: (1) soldiers can effectively control the functions of the robot and CBRN Sensor Module payload; (2) a UGV equipped with a CBRN Sensor Module is a viable concept and has potential to provide effective force protection/survivability measures to enhance current and future operations; (3) best tactics, techniques, and procedures (TTPs) were defined and refined for employing the CBRN Sensor Module on a UGV; and (4) the CBRN Sensor Module and robotic platforms used for the LOE were operationally safe and easy to operate and maintain. The experiment to determine mission effectiveness (issue 2) involved forty iterations (twenty with each robot). Chemical simulants, provided by the U.S. Army Chemical School DCD, were randomly placed in the experiment area. The robotic system was able to detect and alarm on the simulants. The experiment officer recommended that follow-on testing be done with simulants that more closely emulate live agents and vapors and that testing be conducted with a radiological source. In the final report for the LOE, the experiment officer concluded, "The UGV's used during the experiment proved to be very reliable and experienced no maintenance down time. The integration of plug-and-play CBRN Sensor Modules on a UGV proved to be very reliable. The problems that did arise with the integration were identified at the beginning of the experiment and easily fixed."