TY - CONF
T1 - Learning to Make Facial Expressions
T2 - IEEE 8th International Conference on Development and Learning, 2009. ICDL 2009
Y1 - 2009
A1 - Wu, T.
A1 - Butko, N.
A1 - Ruvolo, P.
A1 - Bartlett, M.
A1 - Movellan, J.
KW - Actuators
KW - Emotion recognition
KW - face detection
KW - face recognition
KW - facial motor parameters
KW - Feedback
KW - Humans
KW - learning (artificial intelligence)
KW - Machine Learning
KW - Magnetic heads
KW - Pediatrics
KW - real-time facial expression recognition
KW - Robot sensing systems
KW - robotic head
KW - Robots
KW - self-guided learning
KW - Servomechanisms
KW - Servomotors
AB - This paper explores the process of self-guided learning of realistic facial expression production by a robotic head with 31 degrees of freedom. Facial motor parameters were learned using feedback from real-time facial expression recognition from video. The experiments show that the mapping of servos to expressions was learned in under one hour of training time. We discuss how our work may help illuminate the computational study of how infants learn to make facial expressions.
JF - IEEE 8th International Conference on Development and Learning, 2009. ICDL 2009
PB - IEEE
CY - Shanghai
SN - 978-1-4244-4117-4
ER -

TY - CONF
T1 - Automatic cry detection in early childhood education settings
T2 - 7th IEEE International Conference on Development and Learning, 2008. ICDL 2008
Y1 - 2008
A1 - Ruvolo, P.
A1 - Movellan, J.
KW - Acoustic noise
KW - auditory moods
KW - automatic cry detection
KW - behavioural sciences computing
KW - Deafness
KW - early childhood education settings
KW - education
KW - Educational robots
KW - Emotion recognition
KW - human coders
KW - Humans
KW - learning (artificial intelligence)
KW - Machine Learning
KW - Mood
KW - preschool classrooms
KW - Prototypes
KW - Robustness
KW - Working environment noise
AB - We present results on applying a novel machine learning approach for learning auditory moods in natural environments [1] to the problem of detecting crying episodes in preschool classrooms. The resulting system achieved levels of performance approaching that of human coders and significantly outperformed previous approaches to this problem [2].
JF - 7th IEEE International Conference on Development and Learning, 2008. ICDL 2008
PB - IEEE
CY - Monterey, CA
SN - 978-1-4244-2661-4
ER -

TY - CONF
T1 - A discriminative approach to frame-by-frame head pose tracking
T2 - 8th IEEE International Conference on Automatic Face Gesture Recognition, 2008. FG '08
Y1 - 2008
A1 - Whitehill, J.
A1 - Movellan, Javier R.
KW - accuracy drift
KW - continuous video sequence
KW - controlled illumination condition
KW - discriminative approach
KW - face detection
KW - face recognition
KW - facial appearance
KW - frame-by-frame head pose tracking
KW - Humans
KW - Image analysis
KW - Image databases
KW - Laboratories
KW - Lighting
KW - Magnetic heads
KW - mean square error methods
KW - pose estimation
KW - Robustness
KW - root-mean-square error tracking
KW - System testing
KW - Video sequences
AB - We present a discriminative approach to frame-by-frame head pose tracking that is robust to a wide range of illuminations and facial appearances and that is inherently immune to accuracy drift. Most previous research on head pose tracking has been validated on continuous video sequences spanning only a small number (< 20) of subjects under controlled illumination conditions. In contrast, the system presented in this paper was both trained and tested on a much larger database, GENKI, spanning tens of thousands of different subjects, illuminations, and geographical locations from images on the Web. Our pose estimator achieves an accuracy of 5.82°, 5.65°, and 2.96° root-mean-square (RMS) error for yaw, pitch, and roll, respectively. A set of 4000 images from this dataset, labeled for pose, was collected and released for use by the research community.
JF - 8th IEEE International Conference on Automatic Face Gesture Recognition, 2008. FG '08
PB - IEEE
CY - Amsterdam
SN - 978-1-4244-2153-4
ER -

TY - CONF
T1 - Visual saliency model for robot cameras
T2 - IEEE International Conference on Robotics and Automation, 2008. ICRA 2008
Y1 - 2008
A1 - Butko, N.
A1 - Zhang, L.
A1 - Cottrell, G.
A1 - Movellan, J.
KW - Application software
KW - approximation theory
KW - Bayes methods
KW - Bayesian methods
KW - Bayesian model
KW - camera control
KW - Cameras
KW - Central Processing Unit
KW - Computational efficiency
KW - Computational modeling
KW - Explosions
KW - fast approximation
KW - human visual attention
KW - Humans
KW - Open loop systems
KW - robot cameras
KW - robot vision
KW - Robot vision systems
KW - robotic application
KW - task free conditions
KW - visual saliency model
AB - Recent years have seen an explosion of research on the computational modeling of human visual attention in task-free conditions, i.e., given an image, predict where humans are likely to look. This area of research could potentially provide general-purpose mechanisms for robots to orient their cameras. One difficulty is that most current models of visual saliency are computationally very expensive and not suited to the real-time implementations needed for robotic applications. Here we propose a fast approximation to a Bayesian model of visual saliency recently proposed in the literature. The approximation can run in real time on current computers at very little computational cost, leaving plenty of CPU cycles for other tasks. We empirically evaluate the saliency model in the domain of controlling saccades of a camera in social robotics situations. The goal was to orient a camera as quickly as possible toward human faces. We found that this simple general-purpose saliency model doubled the success rate of the camera: it captured images of people 70% of the time, compared to a 35% success rate when the camera was controlled using an open-loop scheme. After 3 saccades (camera movements), the robot was 96% likely to capture at least one person. The results suggest that visual saliency models may provide a useful front end for camera control in robotics applications.
JF - IEEE International Conference on Robotics and Automation, 2008. ICRA 2008
PB - IEEE
CY - Pasadena, CA
SN - 978-1-4244-1646-2
ER -