IVERNAO (Ontology-based Verbal Instructions for Navigation Assistance) is a one-year project (Jan-Dec 2009) that aims to generate descriptions and instructions that help users move through an unfamiliar building, using NLG techniques and Virtual Environments.
The initial objective is to provide instructions that use references which may not be explicitly represented in the virtual environment, or which may not be visible to the user at a given moment. In addition, we aim to provide assistance to users with disabilities, who may need special instructions to reach their goals.
With these objectives in mind, the project builds an ontological representation of the virtual environment. A reasoner uses this representation to generate basic commands, which a natural language generator then turns into an elaborated set of instructions, similar to those a person would give.
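As a rough illustration of this pipeline, the following minimal sketch uses a plain dictionary as a stand-in for the ontology, a breadth-first search as a stand-in for the reasoner, and simple templates as a stand-in for the NLG component. All names and structures here are illustrative assumptions, not the project's actual representation or API.

```python
# Illustrative ontology fragment: rooms and the connections between them.
# In the project this knowledge would live in an ontology queried by a reasoner;
# here a plain dictionary stands in for it.
CONNECTIONS = {
    "entrance": ["corridor"],
    "corridor": ["entrance", "office_12", "stairs"],
    "stairs": ["corridor", "library"],
    "library": ["stairs"],
}

def plan_route(start: str, goal: str) -> list[str]:
    """Breadth-first search over the connection graph (stand-in for the reasoner)."""
    frontier = [[start]]
    visited = {start}
    while frontier:
        path = frontier.pop(0)
        if path[-1] == goal:
            return path
        for nxt in CONNECTIONS.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return []

def to_basic_commands(path: list[str]) -> list[tuple[str, str]]:
    """Turn a route into basic (action, landmark) commands for the generator."""
    return [("go_to", room) for room in path[1:]]

def realize(commands: list[tuple[str, str]]) -> str:
    """Template-based surface realization (stand-in for the NLG component)."""
    steps = [f"go to the {landmark.replace('_', ' ')}" for _, landmark in commands]
    return "First, " + "; then, ".join(steps) + "."

if __name__ == "__main__":
    route = plan_route("entrance", "library")
    print(realize(to_basic_commands(route)))
    # -> "First, go to the corridor; then, go to the stairs; then, go to the library."
```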
Virtual Environments are used to simulate different degrees of user disability (e.g. motor, visual). In this way, alternative configurations of the instruction generator can be tested and optimized according to the user's needs.
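One hypothetical way such a configuration could be expressed is as a small user profile that the generator consults when choosing routes and references; the fields and functions below are assumptions made for illustration only.

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    """Illustrative profile; these fields are assumptions, not the project's actual model."""
    motor_impairment: bool = False   # e.g. avoid stairs, prefer ramps and elevators
    visual_impairment: bool = False  # avoid purely visual references such as colours or signs

def choose_reference(landmark: dict, profile: UserProfile) -> str:
    """Pick a description of a landmark that this user can actually perceive."""
    if profile.visual_impairment and "non_visual" in landmark:
        return landmark["non_visual"]
    return landmark["visual"]

if __name__ == "__main__":
    door = {"visual": "the red door on your left",
            "non_visual": "the second door along the handrail"}
    print(choose_reference(door, UserProfile(visual_impairment=True)))
    # -> "the second door along the handrail"
```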
In addition, we plan to use different combinations of input devices (keyboard, mouse, voice) to test how they affect interaction.
The ultimate objective of the project is to obtain results that let us assess the feasibility of generating instructions in real buildings on mobile devices, where users with disabilities could benefit greatly from in-situ directions.
This project is being developed in close collaboration with the LDC research group at the Technical University of Madrid.