Using Multiple Sensors for Mobile Sign Language Recognition
Abstract
We build upon a constrained, lab-based sign language recognition system with
the goal of making it a mobile assistive technology. We examine the use of multiple
sensors to disambiguate noisy data and improve recognition accuracy. Our
experiment compares the results of training a small gesture vocabulary on
noisy vision data, on accelerometer data, and on both data sets combined.
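The combination of the two data sets can be illustrated with a minimal sketch. The feature names, dimensions, and values below are assumptions for illustration, not taken from the paper; the idea is simply early fusion, concatenating per-frame vision and accelerometer features so one recognizer can be trained on the combined data.

```python
# Hypothetical per-frame feature vectors (layout and values are assumptions):
vision_features = [0.42, 0.10, 0.77]   # e.g. hand position and orientation from camera
accel_features = [0.01, -0.98, 0.12]   # e.g. accelerometer x, y, z readings

# Early fusion: concatenate the two modalities into a single feature vector,
# so the same training procedure can be run on vision-only, accel-only,
# or combined inputs.
combined = vision_features + accel_features

print(len(combined))  # 6 features per frame in this sketch
```

A recognizer trained on `combined` sees both modalities at once, which is one simple way to let one sensor disambiguate noise in the other.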