Littlewort, G., Whitehill, J., Wu, T., Fasel, I., Frank, M., Movellan, J., and Bartlett, M. (2011). The Computer Expression Recognition Toolbox (CERT). Santa Barbara, CA: IEEE, March 2011. ISBN 978-1-4244-9140-7.
Abstract: We present the Computer Expression Recognition Toolbox (CERT), a software tool for fully automatic real-time facial expression recognition, and officially release it for free academic use. CERT can automatically code the intensity of 19 different facial actions from the Facial Action Coding System (FACS) and 6 different prototypical facial expressions. It also estimates the locations of 10 facial features as well as the 3-D orientation (yaw, pitch, roll) of the head. On a database of posed facial expressions, the Extended Cohn-Kanade dataset (CK+ [1]), CERT achieves an average recognition performance (probability of correctness on a two-alternative forced choice (2AFC) task between one positive and one negative example) of 90.1% when analyzing facial actions. On a spontaneous facial expression dataset, CERT achieves an accuracy of nearly 80%. On a standard dual-core laptop, CERT can process 320 × 240 video images in real time at approximately 10 frames per second.
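The 2AFC figure quoted in this abstract has a concrete reading: it is the probability that a randomly drawn positive example receives a higher detector output than a randomly drawn negative one (ties counted as half-correct), which is equivalent to the area under the ROC curve. A minimal illustrative sketch in Python (not code from the paper):

    import numpy as np

    def two_afc_score(pos_scores, neg_scores):
        """Probability that a random positive outscores a random negative,
        counting ties as half-correct; equivalent to ROC area (AUC)."""
        pos = np.asarray(pos_scores, dtype=float)
        neg = np.asarray(neg_scores, dtype=float)
        wins = (pos[:, None] > neg[None, :]).sum()   # compare every pos/neg pair
        ties = (pos[:, None] == neg[None, :]).sum()
        return (wins + 0.5 * ties) / (pos.size * neg.size)

    # Toy check: detector outputs for frames with and without a facial action.
    print(two_afc_score([0.9, 0.7, 0.4], [0.6, 0.2, 0.1]))  # 8/9, about 0.889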
Keywords: 3D orientation; Accuracy; automatic real-time facial expression recognition; CERT; computer expression recognition toolbox; Detectors; dual core laptop; Emotion recognition; Encoding; extended Cohn-Kanade; Face; face recognition; facial action unit coding system; facial expression dataset; Facial features; FACS; Gold; Image coding; software tool; software tools; two-alternative forced choice task
URL: https://rubi.ucsd.edu/content/computer-expression-recognition-toolbox-cert

Wu, T., Butko, N., Ruvulo, P., Bartlett, M., and Movellan, J. (2009). Learning to Make Facial Expressions. Shanghai: IEEE, June 2009. ISBN 978-1-4244-4117-4.
Abstract: This paper explores the process of self-guided learning of realistic facial expression production by a robotic head with 31 degrees of freedom. Facial motor parameters were learned using feedback from real-time facial expression recognition from video. The experiments show that the mapping of servos to expressions was learned in under one hour of training time. We discuss how our work may help illuminate the computational study of how infants learn to make facial expressions.
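The abstract describes a closed loop in which servo settings are adjusted using the output of a video-based expression recognizer as the reward signal. The paper's actual learning algorithm is not given here, so the following is only a hypothetical hill-climbing sketch; the robot and recognizer are replaced by a toy simulator, and every interface name is an assumption rather than the authors' API:

    import random

    NUM_SERVOS = 31  # degrees of freedom of the robotic head (from the abstract)

    # Toy stand-ins for the hardware and the recognizer: a hidden servo
    # configuration plays the role of "what the target expression looks
    # like", and the score grows as the current pose approaches it.
    _target = [random.random() for _ in range(NUM_SERVOS)]
    _pose = [0.5] * NUM_SERVOS

    def set_servos(params):
        _pose[:] = params

    def expression_score():
        err = sum((p - t) ** 2 for p, t in zip(_pose, _target))
        return 1.0 / (1.0 + err)

    def learn_expression(iterations=5000, step=0.05):
        """Keep random one-servo perturbations that the recognizer rewards,
        revert the rest (simple stochastic hill climbing)."""
        params = [0.5] * NUM_SERVOS
        set_servos(params)
        best = expression_score()
        for _ in range(iterations):
            i = random.randrange(NUM_SERVOS)
            cand = params[:]
            cand[i] = min(1.0, max(0.0, cand[i] + random.uniform(-step, step)))
            set_servos(cand)
            score = expression_score()
            if score > best:
                params, best = cand, score
            else:
                set_servos(params)  # undo the unhelpful move
        return params, best

    _, score = learn_expression()
    print(f"final recognizer score: {score:.3f}")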
Keywords: Actuators; Emotion recognition; face detection; face recognition; facial motor parameters; Feedback; Humans; learning (artificial intelligence); Machine Learning; Magnetic heads; Pediatrics; real-time facial expression recognition; Robot sensing systems; robotic head; Robots; self-guided learning; Servomechanisms; Servomotors
URL: https://rubi.ucsd.edu/content/learning-make-facial-expressions

Whitehill, J., Littlewort, G., Fasel, I., Bartlett, M., and Movellan, J. (2009). Toward Practical Smile Detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31, 2106-2111, November 2009. ISSN 0162-8828.
Abstract: Machine learning approaches have produced some of the highest reported performances for facial expression recognition. However, to date, nearly all automatic facial expression recognition research has focused on optimizing performance on a few databases that were collected under controlled lighting conditions from a relatively small number of subjects. This paper explores whether current machine learning methods can be used to develop an expression recognition system that operates reliably in more realistic conditions. We explore the necessary characteristics of the training data set, image registration, feature representation, and machine learning algorithms. A new database, GENKI, is presented, which contains pictures, photographed by the subjects themselves, from thousands of different people in many different real-world imaging conditions. Results suggest that human-level expression recognition accuracy in real-life illumination conditions is achievable with machine learning technology. However, the data sets currently used in the automatic expression recognition literature to evaluate progress may be overly constrained and could potentially lead research into locally optimal algorithmic solutions.
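Of the pipeline stages this abstract lists, image registration is the most mechanical: each detected face is warped so that landmarks such as the eyes land at fixed canonical positions before features are extracted. A minimal eye-based registration sketch in Python; the canonical eye positions and patch size are illustrative assumptions, not the paper's settings:

    import numpy as np

    def register_face(img, left_eye, right_eye, out_size=24):
        """Crop and rescale a face so the eyes land at fixed canonical
        positions in an out_size x out_size patch.
        img: 2-D grayscale array; eyes given as (x, y) pixel coordinates."""
        # Canonical eye locations in the output patch (assumed values).
        dst_l = complex(0.3 * out_size, 0.35 * out_size)
        dst_r = complex(0.7 * out_size, 0.35 * out_size)
        src_l = complex(*left_eye)
        src_r = complex(*right_eye)
        # Similarity transform (rotation+scale+shift), written with complex
        # numbers, mapping output coordinates back to source coordinates.
        a = (src_r - src_l) / (dst_r - dst_l)
        b = src_l - a * dst_l
        ys, xs = np.mgrid[0:out_size, 0:out_size]
        z = (xs + 1j * ys) * a + b
        sx = np.clip(z.real.round().astype(int), 0, img.shape[1] - 1)
        sy = np.clip(z.imag.round().astype(int), 0, img.shape[0] - 1)
        return img[sy, sx]  # nearest-neighbour warp of the aligned patch

    # e.g. patch = register_face(gray, left_eye=(112, 130), right_eye=(148, 128))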
Keywords: Algorithms; Artificial intelligence; Automated; automatic facial expression recognition research; Biological Pattern Recognition; Biometry; Computer simulation; Computer vision; Computer-Assisted; Face; Face and gesture recognition; face recognition; feature representation; human-level expression recognition accuracy; illumination conditions; Image databases; Image Enhancement; Image Interpretation; image registration; image representation; learning (artificial intelligence); machine learning approaches; Machine Learning Models; Humans; object detection; practical smile detection; Reproducibility of Results; Sensitivity and Specificity; Smiling; Subtraction Technique; training data set; visual databases
URL: https://rubi.ucsd.edu/content/toward-practical-smile-detection

Littlewort, G., Bartlett, M., Fasel, I., Chenu, J., and Movellan, J. (2004). Analysis of machine learning methods for real-time recognition of facial expressions from video.
URL: https://rubi.ucsd.edu/content/analysis-machine-learning-methods-real-time-recognition-facial-expressions-video

Littlewort, G., Bartlett, M., Chenu, J., Fasel, I., Kanda, T., Ishiguro, H., and Movellan, J. (2004). Towards social robots: Automatic evaluation of human-robot interaction by face detection and expression classification. Vol. 16, pp. 1563-1570.
URL: https://rubi.ucsd.edu/content/towards-social-robots-automatic-evaluation-human-robot-interaction-face-detection-and