MILES: Models of Interaction centred on Language, spacE and computational Semantics


The main goal of this project is to develop an architecture for interactive systems that combines a dialogue engine, a natural language generator, and a semantic representation based on ontologies covering both the real (or virtual) physical space and the user located within it.
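The combination of a dialogue engine, a natural language generator, and an ontology-based semantic representation can be sketched as three cooperating components. This is a minimal illustrative sketch only; all class and method names are hypothetical and are not the project's actual API.

```python
# Hypothetical sketch of the three components the architecture combines,
# wired together in a minimal interaction loop.

class SpatialOntology:
    """Semantic representation of the space and of the user located in it."""
    def __init__(self):
        # Toy spatial model: places and their adjacency (illustrative data).
        self.locations = {
            "entrance": {"adjacent": ["hall"]},
            "hall": {"adjacent": ["entrance", "ward"]},
            "ward": {"adjacent": ["hall"]},
        }
        self.user_location = "entrance"

    def adjacent(self, place):
        return self.locations[place]["adjacent"]


class Generator:
    """Natural language generator: realizes a dialogue move as text."""
    def realize(self, move, target):
        if move == "instruct":
            return f"Go to the {target}."
        return f"You are at the {target}."


class DialogueEngine:
    """Selects the next dialogue move from the semantic state."""
    def __init__(self, ontology, generator):
        self.ontology, self.generator = ontology, generator

    def next_utterance(self, goal):
        here = self.ontology.user_location
        if here == goal:
            return self.generator.realize("inform", here)
        # Naive strategy: instruct the user toward an adjacent location.
        step = self.ontology.adjacent(here)[0]
        return self.generator.realize("instruct", step)
```

In a real system each component would be far richer, but the division of labour is the same: the ontology holds the shared semantic state, the dialogue engine decides *what* to say, and the generator decides *how* to say it.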

In this project we intend to apply interaction technology to help resolve the difficulties in perception or localization that a real user may have within a particular spatial environment. Special attention will be paid to users with different types of functional diversity (different degrees of ability to perceive the environment or move around it). To develop the necessary technology, we will work both with real environments and with virtual simulations of them.


Over the last few years there have been significant advances in the areas of dialogue systems, natural language generation, and virtual environments. However, although these areas all deal with human-computer interaction in which natural language components play an important role, there has been very little work centred on the combination of the three.

This coordinated project involves both the development of an architecture for managing interactions and its validation over a series of specific case studies. The three participating research groups take part in the initial specification of the architecture and in the development of a baseline for it, built by integrating solutions to specific partial problems previously developed by the different participants. Once this baseline architecture is established, each group will work on applying it to a different context.

[Figure: Augmented Reality in FRAGUEL]

The idea is to test how easily this solution can be adapted to specific tasks and how well different settings can be represented, both in terms of restrictions on the space and on the interaction. Each of the case studies considered will be the responsibility of one of the participating research groups.

The NOVA (Navigating with Ontology-based Verbal Assistance) subproject will focus on the task of guiding a user, by means of verbal instructions and descriptions, around a physical space unknown to them. Special attention will be paid to users with different types of functional diversity (different degrees of ability to perceive or move about). An application to assist patients moving between different locations of a large public hospital will be used as a test case.

The SEPIA (Semantic models of Environments and Persons for Interaction Adapted to their needs in virtual environments) subproject will concern itself with the use of ontologies for representing and reasoning about spatial concepts, and with their interaction with path finding and a representation of the user's perception. As a case study, it will use a speech-based guide application that simulates, in a virtual environment, the evacuation protocols of a nuclear power station.

The DIMMO (DIalogue MultiModality based on Ontologies) subproject will address the development of advanced dialogue management solutions in which the selection of dialogue moves and of multimodal presentations is guided by ontologies. As a case study, it will focus on assisting navigation of a complex web site by means of 2D avatars acting as virtual guides.
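Ontology-guided selection of a presentation can be reduced, in its simplest form, to intersecting the modalities an ontology licenses for a move type with those currently available. This is a minimal hypothetical sketch; the move types and modality names are invented for illustration.

```python
# Toy ontology of dialogue move types and the output modalities each licenses
# (illustrative data, not DIMMO's actual ontology).
MOVE_ONTOLOGY = {
    "greet":    {"modalities": ["speech", "avatar_gesture"]},
    "navigate": {"modalities": ["speech", "highlight_link"]},
    "clarify":  {"modalities": ["speech"]},
}

def select_presentation(move, available):
    """Keep only the modalities both licensed by the ontology and available
    on the current device, preserving the ontology's preference order."""
    return [m for m in MOVE_ONTOLOGY[move]["modalities"] if m in available]
```

For example, on a device without an avatar renderer, a `navigate` move would fall back to speech plus link highlighting, while a `greet` move would be realized as speech alone.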

This project is being developed in close collaboration with the LDC research group at the Technical University of Madrid and the JULIETTA research group at the University of Seville.

As a result of this project, the following applications have been developed: