Mobile Sign Language Recognition
We are building on a constrained, lab-based sign language recognition system with the goal of turning it into a mobile assistive technology. We examine the use of multiple sensors to disambiguate noisy data and improve recognition accuracy. Our goal is to offer a sign recognition system as another option for augmenting communication between Deaf and hard-of-hearing people and the hearing community. We seek to implement a self-contained system that a Deaf user could use as a limited interpreter. This wearable system would capture and recognize the Deaf user's signing; the user could then cue the system to generate text or speech.
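The papers below give the details of our sensing and recognition approach. As a rough, hypothetical sketch of the multi-sensor idea, the following Python fragment fuses frame-aligned camera features with accelerometer readings into a single observation sequence that a sequence classifier (for example, an HMM) could consume; the function and feature names here are illustrative assumptions, not the project's actual code.

```python
import numpy as np

def normalize(features: np.ndarray) -> np.ndarray:
    # Z-score each feature dimension so that no single sensor's
    # scale dominates the fused observation vector.
    mean = features.mean(axis=0)
    std = features.std(axis=0) + 1e-8  # guard against zero variance
    return (features - mean) / std

def fuse_sensor_streams(vision: np.ndarray, accel: np.ndarray) -> np.ndarray:
    # vision: (T, D_v) per-frame camera features (e.g., hand position/shape)
    # accel:  (T, D_a) per-frame accelerometer readings
    # Assumes both streams have been resampled to a common frame rate.
    # Returns a (T, D_v + D_a) fused observation sequence.
    assert vision.shape[0] == accel.shape[0], "streams must be frame-aligned"
    return np.hstack([normalize(vision), normalize(accel)])

# Hypothetical example: 100 frames of 8-D vision features and 3-axis accelerometer data.
vision = np.random.randn(100, 8)
accel = np.random.randn(100, 3)
observations = fuse_sensor_streams(vision, accel)
print(observations.shape)  # (100, 11)
```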
Papers
Georgia Tech Gesture Toolkit: Supporting Experiments in Gesture Recognition
Tracy Westeyn, Helene Brashear, Amin Atrash, and Thad Starner.
To be published in the International Conference on Perceptive and Multimodal User Interfaces, 2003.
Using Multiple Sensors for Mobile Sign Language Recognition
Helene Brashear, Thad Starner, Paul Lukowicz, and Holger Junker.
To be published in the 7th IEEE International Symposium on Wearable Computers, October 2003.
Research Group
Dr. Thad Starner
Helene Brashear
Valerie Henderson
Arya Irani
Christopher Skeels
Research Organizations
GVU (Graphics, Visualization, and Usability) Center
CATEA (Center for Assistive Technology and Environmental Access)
Project Links
Previous Work by Thad Starner