The goal of ALOOF is to enable robots to tap into the ever-growing body of knowledge available on the Web, learning from it about the meaning of previously unseen objects in a form that can be applied when acting in situated environments. By searching the Web, robots will be able to learn about new objects: their specific properties, where they are typically stored, and so forth. To achieve this, robots need a mechanism for translating between the representations used in their real-world experience and those found on the Web.

We propose a meta-modal representation composed of meta-modal entities and the relations between them. A single entity combines modal features extracted from sensors or from the Web; amodal completion supports perception when the set of features is incomplete. The combined features are linked to the semantic properties associated with each entity, and all entities are organized into a structured ontology that supports formal reasoning. This representation is complemented with methods for detecting gaps in the robot's knowledge, for planning where that knowledge can most effectively be obtained, and for extracting the relevant knowledge from Web resources. By grounding meta-modal representations in the perception and action capabilities of robots, we will achieve a powerful mix of Web-supported and physical-interaction-based open-ended learning.

Our scenario is a home setting in which robots must find and retrieve objects while understanding their meaning and relevance to the assigned task. Our measure of progress is how many knowledge gaps, i.e. items of incomplete information about objects, can be resolved autonomously given specific prior knowledge. We will integrate the results on a range of mobile robot platforms, from smaller mobile platforms through the MetraLabs Scitos to the HOBBIT home service robot.
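To make the idea of a meta-modal entity concrete, the following is a minimal sketch of how such a representation could be structured. All class and field names here (`ModalFeature`, `MetaModalEntity`, `missing_modalities`, and the example values) are illustrative assumptions, not part of the project's actual implementation; the sketch only shows how modal features, semantic properties, ontology relations, and gap detection could fit together.

```python
from dataclasses import dataclass, field

@dataclass
class ModalFeature:
    """One feature from a single modality (e.g. vision, depth, or Web text)."""
    modality: str  # hypothetical modality tag, e.g. "vision" or "web_text"
    name: str      # feature name, e.g. "colour_histogram"
    value: object  # the extracted feature value

@dataclass
class MetaModalEntity:
    """An entity combining modal features, semantic properties, and relations."""
    label: str
    features: list = field(default_factory=list)
    properties: dict = field(default_factory=dict)  # e.g. {"storage": "cupboard"}
    relations: dict = field(default_factory=dict)   # ontology links, e.g. {"is_a": "container"}

    def missing_modalities(self, expected):
        """Detect knowledge gaps: expected modalities with no feature yet."""
        present = {f.modality for f in self.features}
        return sorted(set(expected) - present)

# Example: a mug the robot has seen but not yet looked up on the Web
mug = MetaModalEntity(label="mug", relations={"is_a": "container"})
mug.features.append(ModalFeature("vision", "colour_histogram", [0.2, 0.5, 0.3]))
print(mug.missing_modalities({"vision", "depth", "web_text"}))  # -> ['depth', 'web_text']
```

In this reading, a gap reported by `missing_modalities` is what would trigger the planning step of deciding where (sensing or Web search) to obtain the missing knowledge.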
Duration: 36 months
Funding support: 1 300 000 €
Partners:
- University of Rome La Sapienza - Italy
- University of Birmingham - United Kingdom
- Technische Universität Wien - Austria
- INRIA Sophia Antipolis - France