Project P10:
Terrain and Object Matching in Off-road Scenes

Project Goal

As part of the DARPA Learning Applied to Ground Robots (LAGR) Program, the Stanford AI Lab is developing software that allows an autonomous robot to navigate off-road terrain quickly using computer vision. Our approach pairs a reverse optical flow algorithm with short-range sensor readings to classify objects or terrain patches, at distances where the information is still useful for planning, as either obstacles to be avoided or terrain types that allow high speed. A key component of this approach is the ability to uniquely identify objects or terrain patches elsewhere in the image that have the same visual properties as the patch or object in question. For instance, given a sensor reading indicating that one particular tree in an image is a hazard to be avoided, the algorithm would then label all other trees in the image as hazards.

These unique representations must be chosen such that the processing time taken to check for multiple terrain types or obstacles is not prohibitive. Special care must be taken to avoid both false positives (which complicate path planning) and false negatives (which can lead to collisions).
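
As a rough sketch of this matching step (not the project's actual implementation), the Python fragment below assumes the unique identifier is a normalized per-channel color histogram of a small window around the labeled pixel; the window size, stride, distance measure, threshold, and function names are all placeholder assumptions.

    import numpy as np

    def patch_histogram(image, row, col, window=32, bins=8):
        # Normalized per-channel color histogram of a square window around (row, col).
        half = window // 2
        patch = image[max(row - half, 0):row + half, max(col - half, 0):col + half]
        hists = [np.histogram(patch[:, :, c], bins=bins, range=(0, 256))[0]
                 for c in range(image.shape[2])]
        hist = np.concatenate(hists).astype(float)
        return hist / hist.sum()

    def label_similar_patches(image, hazard_rc, window=32, stride=16, threshold=0.25):
        # Return centers of windows whose histogram is close to the hazard patch's.
        reference = patch_histogram(image, hazard_rc[0], hazard_rc[1], window)
        matches = []
        for r in range(window // 2, image.shape[0] - window // 2, stride):
            for c in range(window // 2, image.shape[1] - window // 2, stride):
                candidate = patch_histogram(image, r, c, window)
                # L1 distance; the threshold trades false positives for false negatives.
                if np.abs(reference - candidate).sum() < threshold:
                    matches.append((r, c))
        return matches

In a sketch like this, lowering the threshold reduces false positives at the cost of more false negatives, which is exactly the trade-off noted above.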

Project Scope

This project will involve image processing techniques applied to a single image. These techniques may include, but are not limited to, template matching, color histograms, color space conversions, statistical moments of pixel distributions, and image segmentation. It is assumed that the software running on the LAGR robot will provide an image along with the specific point, in pixel coordinates, from which to derive the unique identifiers.
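
One candidate identifier from the list above, sketched here only as an illustration: statistical moments of the pixel distribution computed per channel, assuming the patch has already been converted to the desired color space (e.g. HSV). The function name and the choice of three moments are assumptions, not a specification.

    import numpy as np

    def moment_descriptor(patch):
        # Per-channel mean, standard deviation, and skewness of the pixel
        # distribution; the patch is assumed to already be in the chosen
        # color space (e.g. converted from RGB to HSV beforehand).
        pixels = patch.reshape(-1, patch.shape[-1]).astype(float)
        mean = pixels.mean(axis=0)
        std = pixels.std(axis=0) + 1e-9   # avoid division by zero on flat patches
        skew = (((pixels - mean) / std) ** 3).mean(axis=0)
        return np.concatenate([mean, std, skew])

Descriptors of this kind can be compared with a simple Euclidean distance, and several of them can be concatenated into a richer identifier if any single one proves too ambiguous.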

Tasks

  • Perform a literature review of known methods for unique identification of image regions.
  • Develop a data structure that holds unique identifiers for the provided image point that has been labeled as a hazard (one possible layout is sketched after this list).
  • Search the rest of the image for regions that exhibit unique identifiers similar to those stored in the data structure.
  • Test promising algorithms with real data taken from the LAGR robot.
  • Integrate the final solution into the LAGR robot software.
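
A possible layout for such a data structure, offered only as a starting point; the field names, the single distance threshold, and the matches helper are illustrative assumptions rather than the project's design.

    from dataclasses import dataclass, field
    import numpy as np

    @dataclass
    class HazardSignature:
        # Unique identifiers computed at the pixel labeled as a hazard.
        pixel: tuple                 # (row, col) provided by the LAGR software
        label: str                   # e.g. "obstacle" or "high-speed terrain"
        descriptors: dict = field(default_factory=dict)   # name -> feature vector
        threshold: float = 0.25      # match distance below which a region is labeled

        def matches(self, candidate):
            # True if every stored descriptor is within the threshold of the
            # corresponding descriptor computed for a candidate image region.
            return all(
                np.linalg.norm(vec - candidate[name]) < self.threshold
                for name, vec in self.descriptors.items()
            )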

Project Status

Philip Engstrom (qrz at stanford),
Paul Fontes (pfontes at stanford),
Jim Curry (curryj at stanford)
1 open space

Point of Contact

David Lieb, Andrew Lookingbill, and Hendrik Dahlkamp

Midterm Report

not yet submitted

Final Report

not yet submitted
