Title: The Computer Expression Recognition Toolbox (CERT)
Published: Santa Barbara, CA: IEEE, March 2011 (ISBN 978-1-4244-9140-7)

Abstract:

We present the Computer Expression Recognition Toolbox (CERT), a software tool for fully automatic real-time facial expression recognition, and officially release it for free academic use. CERT can automatically code the intensity of 19 different facial actions from the Facial Action Coding System (FACS) and 6 prototypical facial expressions. It also estimates the locations of 10 facial features as well as the 3-D orientation (yaw, pitch, roll) of the head. On a database of posed facial expressions, the Extended Cohn-Kanade dataset (CK+) [1], CERT achieves an average recognition performance (probability of correctness on a two-alternative forced choice (2AFC) task between one positive and one negative example) of 90.1% when analyzing facial actions. On a spontaneous facial expression dataset, CERT achieves an accuracy of nearly 80%. On a standard dual-core laptop, CERT can process 320 × 240 video images in real time at approximately 10 frames per second.
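
The 2AFC figure above is the probability that, given one positive and one negative example, the detector output ranks them correctly; averaged over all positive/negative pairs it equals the area under the ROC curve. A minimal sketch of how such a score can be computed (the scores below are made-up examples, not CERT outputs):

    # 2AFC score: fraction of (positive, negative) pairs in which the
    # positive example receives the higher detector output; ties count half.
    def two_afc(pos_scores, neg_scores):
        wins = 0.0
        for p in pos_scores:
            for n in neg_scores:
                if p > n:
                    wins += 1.0
                elif p == n:
                    wins += 0.5
        return wins / (len(pos_scores) * len(neg_scores))

    pos = [0.9, 0.7, 0.8]   # hypothetical outputs on frames where the action is present
    neg = [0.2, 0.75, 0.1]  # hypothetical outputs on frames where it is absent
    print(two_afc(pos, neg))  # 0.888...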

Keywords: 3D orientation; Accuracy; automatic real-time facial expression recognition; CERT; computer expression recognition toolbox; Detectors; dual core laptop; Emotion recognition; Encoding; extended Cohn-Kanade; Face; face recognition; facial action coding system; facial expression dataset; Facial features; FACS; Gold; Image coding; software tool; software tools; two-alternative forced choice task
Authors: Littlewort, G.; Whitehill, J.; Wu, T.; Fasel, I.; Frank, M.; Movellan, J.; Bartlett, M.
URL: https://rubi.ucsd.edu/content/computer-expression-recognition-toolbox-cert

----------------------------------------------------------------------

Title: Toward Practical Smile Detection
Published: IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, pp. 2106-2111, November 2009 (ISSN 0162-8828)

Abstract:

Machine learning approaches have produced some of the highest reported performances for facial expression recognition. However, to date, nearly all automatic facial expression recognition research has focused on optimizing performance on a few databases that were collected under controlled lighting conditions from a relatively small number of subjects. This paper explores whether current machine learning methods can be used to develop an expression recognition system that operates reliably in more realistic conditions. We explore the necessary characteristics of the training data set, image registration, feature representation, and machine learning algorithms. A new database, GENKI, is presented, containing pictures photographed by the subjects themselves, of thousands of different people in many different real-world imaging conditions. Results suggest that human-level expression recognition accuracy in real-life illumination conditions is achievable with machine learning technology. However, the data sets currently used in the automatic expression recognition literature to evaluate progress may be overly constrained and could potentially lead research into locally optimal algorithmic solutions.
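
The experimental axes named above (training set, registration, feature representation, learning algorithm) fit a simple train/evaluate skeleton. The sketch below is a generic illustration of such a pipeline using scikit-learn with placeholder features and labels; it is not the paper's actual system and does not use the GENKI data:

    # Sketch: train a binary smile classifier and measure holdout accuracy.
    # The "features" and "labels" here are synthetic stand-ins for the
    # registered, feature-extracted face images a real pipeline would produce.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 64))                               # stand-in image features
    y = (X[:, 0] + 0.5 * rng.normal(size=1000) > 0).astype(int)   # stand-in smile labels

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print("holdout accuracy:", clf.score(X_te, y_te))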

Keywords: Algorithms; Artificial intelligence; Automated; automatic facial expression recognition research; Biological Pattern Recognition; Biometry; Computer simulation; Computer vision; Computer-Assisted; Face; Face and gesture recognition; face recognition; feature representation; human-level expression recognition accuracy; illumination conditions; Image databases; Image Enhancement; Image Interpretation; image registration; image representation; learning (artificial intelligence); machine learning approaches; Machine Learning Models; Humans; object detection; practical smile detection; Reproducibility of Results; Sensitivity and Specificity; Smiling; Subtraction Technique; training data set; visual databases
Authors: Whitehill, J.; Littlewort, G.; Fasel, I.; Bartlett, M.; Movellan, J.
URL: https://rubi.ucsd.edu/content/toward-practical-smile-detection

----------------------------------------------------------------------

Title: Building a More Effective Teaching Robot Using Apprenticeship Learning
Published: Monterey, CA: IEEE, August 2008 (ISBN 978-1-4244-2661-4)

Abstract:

What defines good teaching? While attributes such as timing, responsiveness to social cues, and pacing of material clearly play a role, it is difficult to create a comprehensive specification of what it means to be a good teacher. On the other hand, it is relatively easy to obtain examples of expert teaching behavior by observing a real teacher. With this inspiration as our guide, we investigated apprenticeship learning methods [1] that use data recorded from expert teachers as a means of improving the teaching abilities of RUBI, a social robot immersed in a classroom of 18- to 24-month-old children. While this approach has achieved considerable success in mechanical control, such as automated helicopter flight [2], until now there has been little work on applying it to the field of social robotics. This paper explores two particular approaches to apprenticeship learning, and analyzes the models of teaching that each approach learns from the data of the human teacher. Empirical results indicate that the apprenticeship learning paradigm, though still nascent in its use in the social robotics field, holds promise, and that our proposed methods can already extract meaningful teaching models from demonstrations of a human expert.
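
The paper analyzes two specific apprenticeship-learning approaches. As a generic illustration of the underlying idea, learning a teaching policy from expert demonstrations, the sketch below uses behavioral cloning, one simple form of apprenticeship learning, with hypothetical state features and actions; it is not the authors' model:

    # Sketch: behavioral cloning from expert demonstrations. Fit a classifier
    # mapping observed classroom state features to the action the expert
    # teacher took; the fitted model is then a (crude) teaching policy.
    # Features and actions here are hypothetical placeholders.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(1)
    states = rng.normal(size=(500, 8))       # e.g., engagement cues, timing features
    actions = rng.integers(0, 3, size=500)   # e.g., 0=wait, 1=prompt, 2=reward

    policy = RandomForestClassifier(n_estimators=50, random_state=0)
    policy.fit(states, actions)
    print(policy.predict(states[:5]))        # actions the cloned policy would take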

Keywords: apprenticeship learning; automated helicopter flight; Automatic control; Data mining; Delay; education; Educational robots; expert teaching; Helicopters; Human-robot interaction; humanoid robots; Humans; Learning systems; mechanical control; robot teaching; Robotics and Automation; RUBI social robot; time 18 month to 24 month; timing
Authors: Ruvolo, P.; Whitehill, J.; Virnes, M.; Movellan, J.
URL: https://rubi.ucsd.edu/content/building-more-effective-teaching-robot-using-apprenticeship-learning

----------------------------------------------------------------------

Title: A Discriminative Approach to Frame-by-Frame Head Pose Tracking
Published: Amsterdam: IEEE, September 2008 (ISBN 978-1-4244-2153-4)

Abstract:

We present a discriminative approach to frame-by-frame head pose tracking that is robust to a wide range of illuminations and facial appearances and that is inherently immune to accuracy drift. Most previous research on head pose tracking has been validated on test datasets spanning only a small number (< 20) of subjects under controlled illumination conditions on continuous video sequences. In contrast, the system presented in this paper was both trained and tested on a much larger database, GENKI, spanning tens of thousands of different subjects, illuminations, and geographical locations from images on the Web. Our pose estimator achieves root-mean-square (RMS) errors of 5.82°, 5.65°, and 2.96° for yaw, pitch, and roll, respectively. A set of 4000 images from this dataset, labeled for pose, was collected and released for use by the research community.
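
The per-angle RMS error reported above is a standard measure; a minimal sketch of how it is computed (the angle values below are made-up examples, not the paper's data):

    # Sketch: root-mean-square (RMS) error for a single pose angle, in degrees.
    import numpy as np

    def rms_error(predicted_deg, true_deg):
        predicted_deg = np.asarray(predicted_deg, dtype=float)
        true_deg = np.asarray(true_deg, dtype=float)
        return np.sqrt(np.mean((predicted_deg - true_deg) ** 2))

    yaw_pred, yaw_true = [3.0, -7.5, 12.0], [1.0, -5.0, 10.0]
    print("yaw RMS error (deg):", rms_error(yaw_pred, yaw_true))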

Keywords: accuracy drift; continuous video sequence; controlled illumination condition; discriminative approach; face detection; face recognition; facial appearance; frame-by-frame head pose tracking; Humans; Image analysis; Image databases; Laboratories; Lighting; Magnetic heads; mean square error methods; pose estimation; Robustness; root-mean-square error tracking; System testing; Video sequences
Authors: Whitehill, J.; Movellan, J. R.
URL: https://rubi.ucsd.edu/content/discriminative-approach-frame-frame-head-pose-tracking