
The Catchment Feature Model for Computational Multimodal Language

Speaker
Francis Quek
Associate Professor, Department of Computer Science and Engineering, Wright State University

When
-

Where
Newell-Simon Hall 1305 (Michael Mauldin Auditorium)

Description

The major challenge to gesture research is relevance. After one has developed the first five stylized semaphore languages, what else is there? The Catchment Feature Model (CFM) addresses two questions in multimodal interaction: how to bridge video and audio processing with the realities of human multimodal communication, and how to fuse information from the different modes. We motivate the CFM from psycholinguistic research and present the model. In contrast to “whole gesture” recognition, the CFM applies a feature decomposition approach that facilitates cross-modal fusion at the level of discourse planning and conceptualization. We shall discuss the CFM-based experimental framework and cite concrete examples of Catchment Features (CFs).

Speaker's Bio

Francis Quek is currently an Associate Professor in the Department of Computer Science and Engineering at Wright State University. He was formerly affiliated with the University of Illinois at Chicago, the University of Michigan Artificial Intelligence Laboratory, the Environmental Research Institute of Michigan (ERIM), and the Hewlett-Packard Human Input Division. Francis received both his B.S.E. summa cum laude (1984) and M.S.E. (1984) in electrical engineering from the University of Michigan in two years. He completed his Ph.D. in computer science and engineering at the same university in 1990. He also holds a Technician’s Diploma in Electronics and Communications Engineering from the Singapore Polytechnic (1978), and briefly attended Oregon State University in 1982. Francis is a member of the IEEE and ACM. He is director of the Vision Interfaces and Systems Laboratory (VISLab), which he established for research in computer vision, medical imaging, vision-based interaction, and human-computer interaction. He performs research in multimodal verbal/non-verbal interaction, vision-based interaction, multimedia databases, medical imaging, collaboration technology, computer vision, human-computer interaction, and computer graphics. He leads several multidisciplinary research efforts to understand the communicative realities of multimodal interaction. Besides the basic science of multimodal human interaction and language, he studies the implications of multimodal communication behavior for neurological motor-speech disorders (Parkinson’s disease), distance tutoring, and spatial planning analysis.

Host
Jie Yang