%0 Conference Paper %B 2011 IEEE International Conference on Automatic Face Gesture Recognition and Workshops (FG 2011) %D 2011 %T The computer expression recognition toolbox (CERT) %A Littlewort, G. %A Whitehill, J. %A Wu, T. %A Fasel, I. %A Frank, M. %A Movellan, J. %A Bartlett, M. %K 3D orientation %K Accuracy %K automatic real-time facial expression recognition %K CERT %K computer expression recognition toolbox %K Detectors %K dual core laptop %K Emotion recognition %K Encoding %K extended Cohn-Kanade %K Face %K face recognition %K facial action unit coding system %K facial expression dataset %K Facial features %K FACS %K Gold %K Image coding %K software tool %K software tools %K two-alternative forced choice task %X

We present the Computer Expression Recognition Toolbox (CERT), a software tool for fully automatic real-time facial expression recognition, and officially release it for free academic use. CERT can automatically code the intensity of 19 different facial actions from the Facial Action Coding System (FACS) and 6 different prototypical facial expressions. It also estimates the locations of 10 facial features as well as the 3-D orientation (yaw, pitch, roll) of the head. On a database of posed facial expressions, Extended Cohn-Kanade (CK+ [1]), CERT achieves an average recognition performance (probability of correctness on a two-alternative forced choice (2AFC) task between one positive and one negative example) of 90.1% when analyzing facial actions. On a spontaneous facial expression dataset, CERT achieves an accuracy of nearly 80%. On a standard dual-core laptop, CERT can process 320 × 240 video images in real time at approximately 10 frames per second.
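The 2AFC performance measure quoted above is the probability that a randomly chosen positive example receives a higher detector output than a randomly chosen negative one (with ties counted as half), which is equivalent to the area under the ROC curve. A minimal sketch of computing it (the function name and inputs are illustrative, not from the paper):

```python
def two_afc(pos_scores, neg_scores):
    """2AFC score: fraction of (positive, negative) pairs where the
    positive example outscores the negative one; ties count as 0.5."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))
```

A score of 1.0 means the detector always ranks positives above negatives; 0.5 corresponds to chance.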

%B 2011 IEEE International Conference on Automatic Face Gesture Recognition and Workshops (FG 2011) %I IEEE %C Santa Barbara, CA %8 03/2011 %@ 978-1-4244-9140-7 %G eng %M 12007742 %0 Conference Paper %B IEEE 8th International Conference on Development and Learning, 2009. ICDL 2009 %D 2009 %T Learning to Make Facial Expressions %A Wu, T. %A Butko, N. %A Ruvulo, P. %A Bartlett, M. %A Movellan, J. %K Actuators %K Emotion recognition %K face detection %K face recognition %K facial motor parameters %K Feedback %K Humans %K learning (artificial intelligence) %K Machine Learning %K Magnetic heads %K Pediatrics %K real-time facial expression recognition %K Robot sensing systems %K robotic head %K Robots %K self-guided learning %K Servomechanisms %K Servomotors %X

This paper explores the process of self-guided learning of realistic facial expression production by a robotic head with 31 degrees of freedom. Facial motor parameters were learned using feedback from real-time facial expression recognition from video. The experiments show that the mapping of servos to expressions was learned in under one hour of training time. We discuss how our work may help illuminate the computational study of how infants learn to make facial expressions.
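The feedback loop described above can be sketched as a simple optimization over servo positions guided by a recognizer's confidence score. The abstract does not specify the learning algorithm, so the hill-climbing strategy, the `recognizer` callback, and the normalized servo representation below are all illustrative assumptions:

```python
import random

def learn_expression(recognizer, n_servos=31, iters=200, step=0.05, seed=0):
    """Hill-climb servo positions to maximize a recognizer's confidence
    for a target expression (a stand-in for the paper's actual method)."""
    rng = random.Random(seed)
    servos = [0.5] * n_servos            # normalized servo positions in [0, 1]
    best = recognizer(servos)            # feedback: confidence for the target
    for _ in range(iters):
        candidate = [min(1.0, max(0.0, s + rng.uniform(-step, step)))
                     for s in servos]    # small random perturbation
        score = recognizer(candidate)
        if score > best:                 # keep only improving perturbations
            servos, best = candidate, score
    return servos, best
```

With real-time recognition providing `recognizer` feedback at each step, such a loop can converge on a servo configuration for an expression without any hand-coded servo-to-expression mapping.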

%B IEEE 8th International Conference on Development and Learning, 2009. ICDL 2009 %I IEEE %C Shanghai %8 06/2009 %@ 978-1-4244-4117-4 %G eng %M 10801981