A Robot That Improves Its Ability To Learn
Joseph O'Sullivan and Sebastian Thrun
The use of machine learning is attractive in the design of autonomous
robots, since it enables robots to adapt to the unforeseen.
However, a key bottleneck of current machine learning
algorithms is their enormous sample complexity, which appears to
prohibit their use in all but the simplest robotic domains.
To make machine learning more practical in such domains, more
powerful algorithms are needed that can generalize more accurately
from less training data.
This paper investigates the feasibility of learning algorithms that
gradually improve over the lifetime of the robot. When faced with a
novel learning task, knowledge acquired in previous learning tasks
improves the robot's ability to generalize, hence reducing the
sample complexity. We report results of applying an algorithm
originally proposed by Suddarth [Sud90] to mobile robot perception
problems. The learning tasks considered involve the recognition of
persons, objects, and locations. We illustrate that having previously
learned related tasks allows a robot to learn a novel task from
significantly fewer training examples than when learning from
scratch. Based on
these results, we argue that self-adjusting learning strategies are
superior to conventional learning algorithms in many robotic
domains.
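As a rough illustration of the idea (not the algorithm studied in the paper), the following sketch shows how features learned on several previous tasks can be reused so that a novel task is learned from only a handful of examples. It assumes PyTorch; the task generator make_task and all data are synthetic placeholders.

    # Minimal sketch of the lifelong-learning idea: a representation learned
    # on previous ("support") tasks is reused so a novel task needs fewer
    # labelled examples. Hypothetical names and synthetic data throughout.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    def make_task(n, in_dim=16):
        """Synthetic stand-in for one perception task (e.g. recognizing one person)."""
        w = torch.randn(in_dim)
        x = torch.randn(n, in_dim)
        y = (x @ w > 0).float().unsqueeze(1)
        return x, y

    # Representation shared across all tasks.
    shared = nn.Sequential(nn.Linear(16, 32), nn.ReLU())

    def train(model, x, y, steps=200):
        # Optimize only the parameters that are still trainable.
        params = [p for p in model.parameters() if p.requires_grad]
        opt = torch.optim.Adam(params, lr=1e-2)
        loss_fn = nn.BCEWithLogitsLoss()
        for _ in range(steps):
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()

    # 1. Learn the shared representation from several related, previously
    #    encountered tasks, each with plenty of training data.
    for _ in range(5):
        x, y = make_task(n=500)
        head = nn.Linear(32, 1)
        train(nn.Sequential(shared, head), x, y)

    # 2. A novel task with only a few labelled examples: freeze the shared
    #    features and fit just a small task-specific head.
    x_few, y_few = make_task(n=20)
    for p in shared.parameters():
        p.requires_grad_(False)
    new_head = nn.Linear(32, 1)
    train(nn.Sequential(shared, new_head), x_few, y_few)

The point of the sketch is only that the novel task's trainable part is small once a useful representation exists, which is one way the sample complexity of later tasks can drop.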