
IM2.MPR (Multimodal processing and recognition)


IM2.MPR
IP Head: Aude Billard (EPFL)
Partners: IDIAP, ITS/EPFL, LIDIAP/EPFL, CVML/UniGE.
Given the proliferation of electronic recording devices (cameras, microphones, EEGs, etc.), which are becoming ever cheaper while offering ever-increasing processing speed, storage, and bandwidth, together with advances in automatically extracting and managing the information they record (such as speech recognition and face tracking), it is becoming increasingly feasible to capture the same sequence of events (such as a meeting) simultaneously with several devices, generating richer and more robust sets of feature streams. Efficiently modelling such multi-channel data, which results in multiple observation streams, and using the underlying models in real applications are the goals of IM2.MPR.

The main objectives of this IP are thus three-fold:
  • investigate fundamental aspects of multi-channel/multi-stream processing
  • continue more applied research on several tasks for multi-stream/multi-channel processing, including tracking, audio-visual speech recognition, person identification, segmentation, 3D scene reconstruction, and activity recognition
  • identify possible additional modalities (such as infra-red, laser, and various other sensors)
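As a rough illustration of what modelling multiple observation streams can mean in practice, the sketch below combines a toy audio stream and a toy video stream by weighted score-level (late) fusion: each stream is scored independently and the scores are combined with stream weights, one common way to account for streams of unequal reliability. The Gaussian scorer, stream weights, class names, and feature values are illustrative assumptions and are not taken from IM2.MPR.

```python
# Minimal sketch of score-level (late) fusion of two observation streams,
# e.g. an audio stream and a video stream describing the same event.
# All models, weights, and feature values are illustrative, not from IM2.MPR.
import math

def gaussian_log_likelihood(x, mean, var):
    """Log-likelihood of a scalar observation under a 1-D Gaussian."""
    return -0.5 * (math.log(2.0 * math.pi * var) + (x - mean) ** 2 / var)

def fused_score(audio_obs, video_obs, model, audio_weight=0.6):
    """Weighted sum of per-stream log-likelihoods for one class model."""
    audio_score = sum(gaussian_log_likelihood(x, *model["audio"]) for x in audio_obs)
    video_score = sum(gaussian_log_likelihood(x, *model["video"]) for x in video_obs)
    return audio_weight * audio_score + (1.0 - audio_weight) * video_score

# Hypothetical class models: (mean, variance) per stream.
models = {
    "speaker_A": {"audio": (0.0, 1.0), "video": (1.0, 0.5)},
    "speaker_B": {"audio": (2.0, 1.0), "video": (-1.0, 0.5)},
}

audio_obs = [0.1, -0.2, 0.3]   # synchronized audio features (toy values)
video_obs = [0.9, 1.1]         # video features for the same time span

best = max(models, key=lambda name: fused_score(audio_obs, video_obs, models[name]))
print("Most likely class:", best)
```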
