02299nas a2200445 4500008004100000020002200041245005500063210004900118260003700167520099600204653001901200653001301219653005401232653000901286653004401295653001401339653002101353653002401374653001301398653002501411653000901436653002101445653003701466653003001503653002001533653000901553653000901562653001701571653001801588653001901606653003901625100001901664700001801683700001101701700001401712700001401726700001701740700001701757856007901774 2011 eng d a978-1-4244-9140-700aThe computer expression recognition toolbox (CERT)0 acomputer expression recognition toolbox CERT aSanta Barbara, CAbIEEEc03/20113 a
We present the Computer Expression Recognition Toolbox (CERT), a software tool for fully automatic real-time facial expression recognition, and officially release it for free academic use. CERT can automatically code the intensity of 19 different facial actions from the Facial Action Coding System (FACS) and 6 different prototypical facial expressions. It also estimates the locations of 10 facial features as well as the 3-D orientation (yaw, pitch, roll) of the head. On a database of posed facial expressions, Extended Cohn-Kanade (CK+ [1]), CERT achieves an average recognition performance (probability of correctness on a two-alternative forced choice (2AFC) task between one positive and one negative example) of 90.1% when analyzing facial actions. On a spontaneous facial expression dataset, CERT achieves an accuracy of nearly 80%. On a standard dual-core laptop, CERT can process 320 × 240 video images in real time at approximately 10 frames per second.
10a3D orientation10aAccuracy10aautomatic real-time facial expression recognition10aCERT10acomputer expression recognition toolbox10aDetectors10adual core laptop10aEmotion recognition10aEncoding10aextended Cohn-Kanade10aFace10aface recognition10afacial action unit coding system10afacial expression dataset10aFacial features10aFACS10aGold10aImage coding10asoftware tool10asoftware tools10atwo-alternative forced choice task1 aLittlewort, G.1 aWhitehill, J.1 aWu, T.1 aFasel, I.1 aFrank, M.1 aMovellan, J.1 aBartlett, M. uhttps://rubi.ucsd.edu/content/computer-expression-recognition-toolbox-cert01688nas a2200385 4500008004100000020002200041245004000063210004000103260002800143520060700171653001400778653002400792653001900816653002100835653002800856653001300884653001100897653003900908653002100947653001900968653001500987653004401002653002601046653001701072653001101089653002501100653002001125653001601145100001101161700001401172700001501186700001701201700001701218856006701235 2009 eng d a978-1-4244-4117-400aLearning to Make Facial Expressions0 aLearning to Make Facial Expressions aShanghaibIEEEc06/20093 aThis paper explores the process of self-guided learning of realistic facial expression production by a robotic head with 31 degrees of freedom. Facial motor parameters were learned using feedback from real-time facial expression recognition from video. The experiments show that the mapping of servos to expressions was learned in under one hour of training time. We discuss how our work may help illuminate the computational study of how infants learn to make facial expressions.
10aActuators10aEmotion recognition10aface detection10aface recognition10afacial motor parameters10aFeedback10aHumans10alearning (artificial intelligence)10aMachine Learning10aMagnetic heads10aPediatrics10areal-time facial expression recognition10aRobot sensing systems10arobotic head10aRobots10aself-guided learning10aServomechanisms10aServomotors1 aWu, T.1 aButko, N.1 aRuvulo, P.1 aBartlett, M.1 aMovellan, J. uhttps://rubi.ucsd.edu/content/learning-make-facial-expressions