02299nas a2200445 4500008004100000020002200041245005500063210004900118260003700167520099600204653001901200653001301219653005401232653000901286653004401295653001401339653002101353653002401374653001301398653002501411653000901436653002101445653003701466653003001503653002001533653000901553653000901562653001701571653001801588653001901606653003901625100001901664700001801683700001101701700001401712700001401726700001701740700001701757856007901774 2011 eng d a978-1-4244-9140-700aThe computer expression recognition toolbox (CERT)0 acomputer expression recognition toolbox CERT aSanta Barbara, CAbIEEEc03/20113 a

We present the Computer Expression Recognition Toolbox (CERT), a software tool for fully automatic real-time facial expression recognition, and officially release it for free academic use. CERT can automatically code the intensity of 19 different facial actions from the Facial Action Coding System (FACS) and 6 different prototypical facial expressions. It also estimates the locations of 10 facial features as well as the 3-D orientation (yaw, pitch, roll) of the head. On a database of posed facial expressions, Extended Cohn-Kanade (CK+ [1]), CERT achieves an average recognition performance (probability of correctness on a two-alternative forced choice (2AFC) task between one positive and one negative example) of 90.1% when analyzing facial actions. On a spontaneous facial expression dataset, CERT achieves an accuracy of nearly 80%. On a standard dual-core laptop, CERT can process 320 × 240 video images in real time at approximately 10 frames per second.
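The 2AFC performance measure quoted above is the probability that a randomly chosen positive example receives a higher detector score than a randomly chosen negative one (ties counting one half), which is equivalent to the area under the ROC curve. A minimal sketch of that computation, with illustrative toy scores (not values from the paper):

```python
import numpy as np

def two_afc(pos_scores, neg_scores):
    """2AFC probability of correctness: chance that a random positive
    example outscores a random negative one, with ties counting 1/2.
    Equivalent to the area under the ROC curve (AUC)."""
    pos = np.asarray(pos_scores, dtype=float)[:, None]
    neg = np.asarray(neg_scores, dtype=float)[None, :]
    wins = (pos > neg).mean()   # fraction of pairs the positive wins
    ties = (pos == neg).mean()  # tied pairs count half
    return wins + 0.5 * ties

# Toy detector scores for AU-present vs. AU-absent frames.
print(two_afc([0.9, 0.8, 0.4], [0.7, 0.3, 0.1]))
```

Because it compares every positive against every negative, the measure is insensitive to the base rate of positive frames, which is why it is preferred over raw accuracy for sparse facial actions.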

10a3D orientation10aAccuracy10aautomatic real-time facial expression recognition10aCERT10acomputer expression recognition toolbox10aDetectors10adual core laptop10aEmotion recognition10aEncoding10aextended Cohn-Kanade10aFace10aface recognition10afacial action unit coding system10afacial expression dataset10aFacial features10aFACS10aGold10aImage coding10asoftware tool10asoftware tools10atwo-alternative forced choice task1 aLittlewort, G.1 aWhitehill, J.1 aWu, T.1 aFasel, I.1 aFrank, M.1 aMovellan, J.1 aBartlett, M. uhttps://rubi.ucsd.edu/content/computer-expression-recognition-toolbox-cert03033nas a2200565 4500008004100000022001400041245003700055210003700092260001200129300001400141490000700155520134500162653001501507653002801522653001401550653005301564653003501617653001301652653002401665653002001689653002201709653000901731653003301740653002101773653002701794653004801821653002801869653002001897653002201917653002501939653004401964653003902008653003202047653002802079653001302107653002102120653003002141653003102171653003202202653001202234653002602246653002202272653002102294100001802315700001902333700001402352700001702366700001702383856006702400 2009 eng d a0162-882800aToward Practical Smile Detection0 aToward Practical Smile Detection c11/2009 a2106-21110 v313 a

Machine learning approaches have produced some of the highest reported performances for facial expression recognition. However, to date, nearly all automatic facial expression recognition research has focused on optimizing performance on a few databases that were collected under controlled lighting conditions on a relatively small number of subjects. This paper explores whether current machine learning methods can be used to develop an expression recognition system that operates reliably in more realistic conditions. We explore the necessary characteristics of the training data set, image registration, feature representation, and machine learning algorithms. A new database, GENKI, is presented, which contains pictures, photographed by the subjects themselves, from thousands of different people in many different real-world imaging conditions. Results suggest that human-level expression recognition accuracy in real-life illumination conditions is achievable with machine learning technology. However, the data sets currently used in the automatic expression recognition literature to evaluate progress may be overly constrained and could potentially lead research into locally optimal algorithmic solutions.
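The registration/features/classifier pipeline named above can be sketched end to end. This is an illustrative toy, not the paper's system: real smile detectors use Gabor or box-filter features on large labeled sets, whereas here synthetic 24×24 "face crops" and plain logistic regression stand in for every stage:

```python
import numpy as np

def extract_features(crop):
    """Registered 24x24 grayscale crop -> mean/contrast-normalized pixel vector."""
    v = crop.astype(float).ravel()
    return (v - v.mean()) / (v.std() + 1e-8)

def train_logistic(X, y, lr=0.1, steps=500):
    """Plain batch gradient descent on the logistic log-loss."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        z = np.clip(X @ w + b, -30.0, 30.0)  # clip to keep exp() stable
        p = 1.0 / (1.0 + np.exp(-z))         # sigmoid
        g = p - y                            # gradient of the log-loss
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

# Synthetic stand-in data: "smiles" have a brighter lower-face region.
rng = np.random.default_rng(0)
def make_crop(smile):
    img = rng.normal(0.5, 0.1, (24, 24))
    if smile:
        img[16:, 6:18] += 0.3
    return img

X = np.array([extract_features(make_crop(s)) for s in [1] * 50 + [0] * 50])
y = np.array([1] * 50 + [0] * 50, dtype=float)
w, b = train_logistic(X, y)
preds = (X @ w + b) > 0
print((preds == (y == 1)).mean())  # training accuracy on this toy data
```

The point of the sketch is the division of labor the abstract investigates: registration quality and feature choice determine what the (here deliberately simple) classifier can learn from uncontrolled imagery.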

10aAlgorithms10aArtificial intelligence10aAutomated10aautomatic facial expression recognition research10aBiological Pattern Recognition10aBiometry10aComputer simulation10aComputer vision10aComputer-Assisted10aFace10aFace and gesture recognition10aface recognition10afeature representation10ahuman-level expression recognition accuracy10aillumination conditions10aImage databases10aImage Enhancement10aImage Interpretation10aimage registration image representation10alearning (artificial intelligence)10amachine learning approaches10aMachine Learning Models10an Humans10aobject detection10apractical smile detection10aReproducibility of Results10aSensitivity and Specificity10aSmiling10aSubtraction Technique10atraining data set10avisual databases1 aWhitehill, J.1 aLittlewort, G.1 aFasel, I.1 aBartlett, M.1 aMovellan, J. uhttps://rubi.ucsd.edu/content/toward-practical-smile-detection02714nas a2200361 4500008004100000020002200041245006200063210006200125260003200187520158600219653002801805653002001833653002201853653002301875653002401898653003001922653001901952653001201971653003601983653003902019653002102058653002002079653002302099653003502122653001602157653001702173653001102190653001702201100001502218700001402233700001702247856008802264 2008 eng d a978-1-4244-1646-200aAuditory mood detection for social and educational robots0 aAuditory mood detection for social and educational robots aPasadena, CAbIEEEc05/20083 a

Social robots face the fundamental challenge of detecting and adapting their behavior to the current social mood. For example, robots that assist teachers in early education must choose different behaviors depending on whether the children are crying, laughing, sleeping, or singing songs. Interactive robotic applications require perceptual algorithms that both run in real time and are adaptable to the challenging conditions of daily life. This paper explores a novel approach to auditory mood detection which was born out of our experience immersing social robots in classroom environments. We propose a new set of low-level spectral contrast features that extends a class of features which have proven very successful for object recognition in the modern computer vision literature. Features are selected and combined using machine learning approaches so as to make decisions about the ongoing auditory mood. We demonstrate excellent performance on two standard emotional speech databases (the Berlin Emotional Speech [W. Burkhardt et al., 2005], and the ORATOR dataset [H. Quast, 2001]). In addition, we establish strong baseline performance for mood detection on a database collected from a social robot immersed in a classroom of 18-24-month-old children [J. Movellan et al., 2007]. This approach operates in real time at little computational cost. It has the potential to greatly enhance the effectiveness of social robots in daily life environments.
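Spectral contrast features of the general kind mentioned above measure, per frequency band, the gap between spectral peaks and valleys, which distinguishes tonal sounds (speech, song) from broadband ones (crying, crowd noise). A hedged numpy sketch, where the octave band edges, quantile, and frame parameters are illustrative choices and not the paper's exact feature set:

```python
import numpy as np

def spectral_contrast(frame, sr=16000, n_bands=6, quantile=0.2):
    """Per octave band, the dB gap between the strongest and weakest
    spectral magnitudes (averaged over the top/bottom `quantile` bins).
    Band edges and quantile are illustrative, not the paper's values."""
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    edges = 200.0 * 2.0 ** np.arange(n_bands + 1)  # octave-spaced edges from 200 Hz
    feats = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = np.sort(spec[(freqs >= lo) & (freqs < hi)])
        if band.size == 0:          # band lies above Nyquist
            feats.append(0.0)
            continue
        k = max(1, int(quantile * band.size))
        valley = np.mean(band[:k]) + 1e-10  # average of weakest bins
        peak = np.mean(band[-k:]) + 1e-10   # average of strongest bins
        feats.append(20.0 * np.log10(peak / valley))
    return np.array(feats)
```

A frame containing a pure tone yields a large contrast in the band holding the tone and small contrast elsewhere; noise-like frames score low everywhere, which is the separation the downstream mood classifier exploits.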

10aauditory mood detection10aComputer vision10aeducational robot10aEducational robots10aEmotion recognition10aemotional speech database10aface detection10ahearing10ainteractive robotic application10alearning (artificial intelligence)10aMachine Learning10aMood Prototypes10aobject recognition10aRobotics and Automation Robots10asocial mood10asocial robot10aSpeech10aUSA Councils1 aRuvolo, P.1 aFasel, I.1 aMovellan, J. uhttps://rubi.ucsd.edu/content/auditory-mood-detection-social-and-educational-robots00495nas a2200109 4500008004100000245007700041210006900118260005200187490002600239100001400265856010600279 2006 eng d00aLearning Real-Time Object Detectors: Probabilistic Generative Approaches0 aLearning RealTime Object Detectors Probabilistic Generative Appr aSan Diegob University of California, San Diego0 vDoctoral dissertation1 aFasel, I. uhttps://rubi.ucsd.edu/content/learning-real-time-object-detectors-probabilistic-generative-approaches00346nas a2200109 4500008004100000245004000041210003900081100001400120700002000134700001700154856006500171 2005 eng d00aMPT: the Machine Perception Toolbox0 aMPT the Machine Perception Toolbox1 aFasel, I.1 aFortenberry, B.1 aMovellan, J. uhttps://rubi.ucsd.edu/content/mpt-machine-perception-toolbox00540nas a2200133 4500008004100000245010000041210006900141100001900210700001700229700001400246700001400260700001700274856011500291 2004 eng d00aAnalysis of machine learning methods for real-time recognition of facial expressions from video0 aAnalysis of machine learning methods for realtime recognition of1 aLittlewort, G.1 aBartlett, M.1 aFasel, I.1 aChenu, J.1 aMovellan, J. 
uhttps://rubi.ucsd.edu/content/analysis-machine-learning-methods-real-time-recognition-facial-expressions-video00668nas a2200181 4500008004100000245012300041210006900164300001400233490000700247100001900254700001700273700001400290700001400304700001400318700001700332700001700349856012000366 2004 eng d00aTowards social robots: Automatic evaluation of human-robot interaction by face detection and expression classification0 aTowards social robots Automatic evaluation of humanrobot interac a1563-15700 v161 aLittlewort, G.1 aBartlett, M.1 aChenu, J.1 aFasel, I.1 aKanda, T.1 aIshiguro, H.1 aMovellan, J. uhttps://rubi.ucsd.edu/content/towards-social-robots-automatic-evaluation-human-robot-interaction-face-detection-and