Recent Advances in Robot Learning
Judy A. Franklin, Tom M. Mitchell, and Sebastian Thrun
In "Real-World Robotics: Learning To
Plan for Robust Execution," Bennett and DeJong introduce an
approach called permissive planning, where the permissiveness of a
plan is a measure of how closely the plan's preconditions must match
the real world for the plan to succeed. The authors describe a
combination of explanation-based learning to acquire plan schemata and
a new approach to plan refinement, along with an implementation on a
hardware robot arm.
"Robot Programming by Demonstration
(RPD): Supporting the Induction by Human Interaction," by
Friedrich et al. combines analytical and inductive learning to
generalize the notion of teaching a robot by example, using only a few
examples of the proper sequence of motions. Layered on top of this is
dialog-based learning, a series of questions and answers that occurs while
the human is demonstrating. This additional level helps the system
determine the intent of the human and narrows the hypothesis space.
Implementation involves a physical robot arm.
Chen et al. present "Performance
Improvement of Robot Continuous-Path Operation through Iterative
Learning Using Neural Networks." Performance improvement of
continuous-path operation is the control engineer's approach to the
problem of explicitly teaching a robot every move. The paper has a
tutorial flavor in that concepts from engineering robotics, such as
closed-loop stability and PID control, are clearly described.
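To make the classical control baseline concrete, here is a minimal sketch of a discrete-time PID controller driving a toy first-order plant; the gains, time step, and plant model are illustrative assumptions, not values from the paper:

```python
# Minimal discrete-time PID sketch (illustrative parameters only).

def make_pid(kp, ki, kd, dt):
    """Return a stateful PID step function mapping error -> control output."""
    state = {"integral": 0.0, "prev_error": 0.0}

    def step(error):
        state["integral"] += error * dt
        derivative = (error - state["prev_error"]) / dt
        state["prev_error"] = error
        return kp * error + ki * state["integral"] + kd * derivative

    return step

# Drive a crude plant (velocity proportional to control) toward setpoint 1.0.
pid = make_pid(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
position = 0.0
for _ in range(2000):
    control = pid(1.0 - position)
    position += control * 0.01
print(round(position, 3))
```

The point of iterative learning control, by contrast, is to refine such a feedback loop over repeated executions of the same trajectory rather than tuning gains once by hand.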
"Learning Controllers for Industrial
Robots," by Baroglio et al. summarizes and compares a number of
machine learning techniques designed for nonlinear systems, including
Multilayer Perceptrons, Radial Basis Functions,
and Fuzzy Controllers. The comparison takes the form of algorithmic
and empirical analysis. This sets the stage for the description of two
original integrated learning algorithms.
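As an illustration of one technique in that comparison, the following is a minimal radial-basis-function approximator; the centers, width, training scheme (plain stochastic gradient descent), and target function are illustrative assumptions, not taken from the paper:

```python
import math

# Tiny RBF network sketch: Gaussian bumps with linearly combined outputs.

def rbf_features(x, centers, width):
    """Gaussian basis activations for a scalar input x."""
    return [math.exp(-((x - c) ** 2) / (2 * width ** 2)) for c in centers]

def fit_rbf(xs, ys, centers, width, lr=0.1, epochs=2000):
    """Fit output weights by stochastic gradient descent on squared error."""
    weights = [0.0] * len(centers)
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            phi = rbf_features(x, centers, width)
            err = sum(w * p for w, p in zip(weights, phi)) - y
            for i, p in enumerate(phi):
                weights[i] -= lr * err * p
    return weights

# Approximate sin(x) on [0, pi] with five Gaussian bumps.
centers = [0.0, 0.79, 1.57, 2.36, 3.14]
xs = [i * 3.14 / 20 for i in range(21)]
ys = [math.sin(x) for x in xs]
w = fit_rbf(xs, ys, centers, width=0.5)
pred = sum(wi * p for wi, p in zip(w, rbf_features(1.57, centers, 0.5)))
print(round(pred, 2))
```

Because only the output weights are trained, the learning problem is linear in the parameters, which is one reason RBF networks are attractive for control applications.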
In "Active Learning for
Vision-Based Robot Grasping," Salganicoff et al. employ a new
integrated learning algorithm, IE-ID3, to give active-learning ability
to a robot arm equipped with a vision system. The task is to choose
appropriate grasping approaches in order to pick up various
objects. Two important objectives are for the algorithm to produce
real-valued actions and for learning to occur quickly.
"Purposive Behavior Acquisition for a
Real Robot by Vision-Based Reinforcement Learning," by Asada et
al. describes a mobile robot that learns to shoot a ball into a goal,
using Q-learning. Environmental information is given only by the
visual image. In addition to Q-learning, the authors employ a learning
schedule called Learning from Easy Missions. This paradigm includes an
algorithmic decision maker for shifting to more difficult tasks.
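For readers unfamiliar with Q-learning, a minimal tabular sketch follows; the toy chain world, reward, and parameters are illustrative stand-ins for the authors' vision-based state space, not their actual setup:

```python
import random

# Tabular Q-learning on a 5-state chain: reaching state 4 ("goal") pays 1.

random.seed(0)

N = 5
Q = [[0.0, 0.0] for _ in range(N)]   # actions: 0 = left, 1 = right
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(500):
    s = 0
    while s != N - 1:
        # Epsilon-greedy action selection.
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = 0 if Q[s][0] > Q[s][1] else 1
        s2 = max(0, s - 1) if a == 0 else min(N - 1, s + 1)
        r = 1.0 if s2 == N - 1 else 0.0
        # Standard Q-learning update toward r + gamma * max_a' Q(s', a').
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# After learning, the greedy policy moves right in every non-goal state.
policy = [0 if q[0] > q[1] else 1 for q in Q[:-1]]
print(policy)
```

The Learning from Easy Missions schedule addresses a weakness visible even in this toy: when the goal is far from the start, early episodes wander long before any reward arrives, so starting from states near the goal and moving the start outward speeds learning.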
The final paper, "Learning
Concepts from Sensor Data of a Mobile Robot" by Klingspor et al.,
also uses a mobile robot as a platform to explore the use of machine
learning to bridge low-level representations of sensing and action
with the high-level representations used for planning. The applications
addressed involve robots that are not completely autonomous, but
interact with human users. The machine learning technique employed is
inductive logic programming, with modifications such as bounding the
number of rules and predicates and splitting the overall task into
several learning steps.