== oval-depth_overlay.avi ==
* We've got the Velodyne and Ladybug aligned. All our object recognition currently uses the Velodyne only, but the Ladybug is nice for visualization.
* Points are colored by distance to the sensor.

== track_vis-classifications.avi ==
* We currently attempt to recognize pedestrians, bicyclists, and cars.
* All objects are segmented and tracked, then the tracks are classified. Appearance and behavioral information is taken into account using a boosting classifier and a discrete Bayes filter.

== run6-10-31-2010_17-52-08-detailed.avi ==
* Example video of the classifier at work. To be fair, this looks at the entire track first and then makes a prediction; this isn't necessarily what it would look like if you were to run the thing online.
* Gray tracks are background tracks, i.e. neither pedestrian nor bicyclist nor car. I know you dislike this, but it shows that the performance bottleneck is segmentation and tracking, not track classification. This is an important point for anyone there who works on object recognition. For example, we can tell that the car we fail to pick up at 0:15 was missed because the tracker didn't see it, probably because it gets segmented in with the overhanging tree and becomes too large to be considered by our tracker.

== projection_example-hoover.avi ==
* We can also align external cameras with the Velodyne data.
* Points are colored by distance to the car.

== cropped_objects-hoover_part ==
* ... and then we can extract automatically-labeled images of things from above; these are suitable for training computer vision detection systems. This isn't really driving-related, but if you have some burning need to recognize bicyclists from above, this is interesting to you.

--
* Please feel free to email Alex if you find these things interesting.
* teichman@stanford.edu (not my cs address.)
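As an aside on the track-classification step above: the discrete Bayes filter essentially fuses the per-frame boosting outputs into one track-level posterior. A minimal sketch of that fusion, assuming the per-frame classifier emits log-odds and that frames are treated as independent evidence (the function name and example values here are hypothetical, not from our code):

```python
import math

def fuse_track(log_odds_per_frame, prior=0.5):
    """Fuse per-frame classifier outputs into a track-level posterior.

    log_odds_per_frame: hypothetical per-frame log-odds that the track
    belongs to a given class (e.g. pedestrian), as a boosting classifier
    might emit.
    Returns P(class | all frames) under a naive independence assumption
    across frames, which is what a discrete Bayes filter reduces to here.
    """
    # Start from the class prior, expressed as log-odds.
    total = math.log(prior / (1.0 - prior))
    for lo in log_odds_per_frame:
        # Each frame's evidence adds to the accumulated log-odds.
        total += lo
    # Convert accumulated log-odds back to a probability.
    return 1.0 / (1.0 + math.exp(-total))

# A track whose frames mostly look pedestrian-like:
posterior = fuse_track([1.2, 0.8, -0.3, 1.5])
```

The point of filtering over the whole track is visible in the example: one weak or contradictory frame (the -0.3) barely moves a posterior built from many consistent frames.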
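For anyone curious what "align external cameras to the Velodyne data" amounts to, it is the usual pinhole projection: transform each LIDAR point into the camera frame with the calibrated extrinsics, apply the intrinsics, and do the perspective divide; the distance used for coloring is just the point's range from the sensor. A minimal sketch (the function name and calibration values are placeholders, not our actual calibration):

```python
import numpy as np

def project_points(points_xyz, K, R, t):
    """Project 3-D LIDAR points into a pinhole camera image.

    points_xyz: (N, 3) points in the LIDAR frame.
    K: (3, 3) camera intrinsics.
    R, t: LIDAR-to-camera rotation and translation (hypothetical here;
    in practice these come from the Velodyne/camera alignment).
    Returns (N, 2) pixel coordinates and (N,) sensor distances,
    the latter being what the distance-based point coloring uses.
    """
    cam = points_xyz @ R.T + t                  # LIDAR frame -> camera frame
    uvw = cam @ K.T                             # apply intrinsics
    pix = uvw[:, :2] / uvw[:, 2:3]              # perspective divide
    dist = np.linalg.norm(points_xyz, axis=1)   # range from the sensor
    return pix, dist
```

With the extrinsics set to identity, a point 10 m straight ahead lands at the principal point, which is a quick sanity check on any calibration.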