MAKING A LOW-DIMENSIONAL REPRESENTATION SUITABLE FOR DIVERSE TASKS
by Nathan Intrator and Shimon Edelman
We introduce a new approach to training classifiers for
performance on multiple tasks. The proposed hybrid training method
leads to improved generalization via a better low-dimensional
representation of the problem space. The quality of the representation
is assessed by embedding it in a 2D space via multidimensional
scaling, allowing direct visualization of the results. The
performance of the approach is demonstrated on a highly nonlinear
image classification task.
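As one illustration of the embedding step mentioned above, classical (metric) multidimensional scaling can be sketched as follows. This is a generic sketch, not the authors' implementation; the pairwise distance matrix is hypothetical data for four points on a unit square.

```python
import numpy as np

def classical_mds(D, dims=2):
    """Embed points into `dims` dimensions from a pairwise distance
    matrix D via double centering and eigendecomposition."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    eigvals, eigvecs = np.linalg.eigh(B)
    order = np.argsort(eigvals)[::-1][:dims] # keep the top eigenpairs
    scale = np.sqrt(np.maximum(eigvals[order], 0.0))
    return eigvecs[:, order] * scale         # n x dims coordinates

# Hypothetical distances: four corners of a unit square.
r2 = np.sqrt(2.0)
D = np.array([[0, 1, 1, r2],
              [1, 0, r2, 1],
              [1, r2, 0, 1],
              [r2, 1, 1, 0]], dtype=float)

X = classical_mds(D)                          # 4 x 2 embedding
# Distances among the embedded points should reproduce the input matrix.
D_hat = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
print(np.allclose(D_hat, D))
```

Because the four points genuinely live in two dimensions, the 2D embedding reproduces the input distances exactly; for higher-dimensional representation spaces, as in the paper, the 2D MDS configuration is only an approximation whose fidelity indicates how well the representation separates the classes.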