Vision-based interaction within a multimodal framework

Vítor J. Sá, Cornelius Malerczyk, Michael Schnaider

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Our contribution is to the field of video-based interaction techniques and is integrated into the home environment of the EMBASSI project. This project addresses innovative methods of man-machine interaction achieved through the development of intelligent assistance and anthropomorphic user interfaces. Within this project, multimodal techniques are a basic requirement, especially those concerned with the integration of modalities. We use a stereoscopic approach to allow the natural selection of devices via pointing gestures: the pointing hand is segmented from the video images, and the 3D position and orientation of the forefinger are calculated. This modality is subsequently integrated with speech within a multimodal interaction infrastructure. In a first phase, we use semantic fusion with amodal input, treating the modalities in a so-called late-fusion state.
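The pointing-plus-speech pipeline described above can be illustrated with a minimal sketch. It assumes the forefinger's base and tip have already been triangulated from the stereo image pair; the device names, positions, and the angular threshold are illustrative, not taken from the paper.

```python
# Hedged sketch: selecting a device from a 3D pointing ray, then fusing the
# selection with a parsed speech command ("late" semantic fusion).
# Assumes 3D points are already triangulated from the stereo camera pair.

def pointing_ray(base, tip):
    """Return the origin and unit direction of the forefinger's pointing ray."""
    d = [t - b for b, t in zip(base, tip)]
    n = sum(c * c for c in d) ** 0.5
    return tip, [c / n for c in d]

def select_device(base, tip, devices, min_cos=0.95):
    """Pick the device whose direction best aligns with the pointing ray.

    devices: mapping of name -> 3D position. min_cos bounds the accepted
    angular deviation (illustrative threshold).
    """
    origin, direction = pointing_ray(base, tip)
    best, best_cos = None, min_cos
    for name, pos in devices.items():
        v = [p - o for o, p in zip(origin, pos)]
        n = sum(c * c for c in v) ** 0.5
        if n == 0:
            continue
        cos = sum(a * b for a, b in zip(direction, v)) / n
        if cos > best_cos:
            best, best_cos = name, cos
    return best

def fuse(device, speech_command):
    """Merge the visually selected device with the speech command
    into one amodal instruction (late semantic fusion)."""
    return {"target": device, "action": speech_command} if device else None
```

A pointing gesture toward a television, combined with the spoken command "switch on", would then yield a single amodal instruction such as `{"target": "tv", "action": "switch_on"}`.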
Original language: English
Title of host publication: Atas do 10º Encontro Português de Computação Gráfica
Publication status: Published - 2001
Externally published: Yes
Event: 10º Encontro Português de Computação Gráfica - Lisboa, Portugal
Duration: 1 Oct 2001 → 3 Oct 2001
Conference number: 10

Publication series

Name
ISSN (Electronic): 0873-1837

Conference

Conference: 10º Encontro Português de Computação Gráfica
Country/Territory: Portugal
City: Lisboa
Period: 1/10/01 → 3/10/01

Keywords

  • EMBASSI project
  • 3D deictic gestures
  • Multimodal man-machine interaction
  • Agent-based systems

