Learning visual models of social engagement

Bradley A. Singletary and Thad Starner

Proceedings of Human Computer Interaction International Workshop on Wearable Computing (HCII2001). New Orleans, LA. August 2001.

Abstract: We introduce a face detector for wearable computers that exploits the constraints on face scale and orientation imposed by the proximity of participants in close social interactions. Using this method, we describe a wearable system that perceives "social engagement," i.e., when the wearer begins to interact with other individuals.
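The proximity constraint can be illustrated with a simple pinhole-camera sketch: a face at conversational distance projects to a predictable pixel size, so candidate detections outside that scale range can be discarded. This is a hypothetical illustration, not the authors' implementation; the face height, focal length, and distance bounds below are assumed values for demonstration only.

```python
# Sketch of scale-constrained face detection (illustrative assumptions,
# not values or code from the paper).

FACE_HEIGHT_M = 0.24      # assumed physical height of a human face (meters)
FOCAL_LENGTH_PX = 500.0   # assumed camera focal length (pixels)

def expected_face_height_px(distance_m):
    """Pinhole-camera projection: face height in pixels at a given distance."""
    return FOCAL_LENGTH_PX * FACE_HEIGHT_M / distance_m

def scale_filter(detections, min_dist_m=0.5, max_dist_m=2.0):
    """Keep only candidate boxes whose pixel height is plausible for a
    face at conversational distance (between min_dist_m and max_dist_m)."""
    lo = expected_face_height_px(max_dist_m)   # farthest -> smallest face
    hi = expected_face_height_px(min_dist_m)   # nearest -> largest face
    return [d for d in detections if lo <= d["h"] <= hi]

# Example: only the mid-sized candidate survives the proximity constraint.
candidates = [{"h": 30}, {"h": 100}, {"h": 300}]
print(scale_filter(candidates))  # → [{'h': 100}]
```

With the assumed parameters, faces closer than 0.5 m or farther than 2 m fall outside the accepted 60-240 pixel range, which is the kind of pruning the proximity constraint makes possible.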

Our experimental system proved more than 90% accurate when tested on wearable video data captured at a professional conference. Over 300 individuals were captured during social engagement, and the data were separated into independent training and test sets. We also discuss a metric for balancing the performance of face detection, localization, and recognition in the context of a wearable interface.

Recognizing social engagement with a user's wearable computer provides context data that can be useful in determining when the user is interruptible. In addition, social engagement detection may be incorporated into a user interface to improve the quality of mobile face recognition software. For example, the user may cue the face recognition system in a socially graceful way by turning slightly away and then toward a speaker when conditions for recognition are favorable.

Keywords: HMM, face detection, wearable computing

