<?xml version="1.0" encoding="UTF-8"?><xml><records><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Littlewort, G.</style></author><author><style face="normal" font="default" size="100%">Whitehill, J.</style></author><author><style face="normal" font="default" size="100%">Wu, T.</style></author><author><style face="normal" font="default" size="100%">Fasel, I.</style></author><author><style face="normal" font="default" size="100%">Frank, M.</style></author><author><style face="normal" font="default" size="100%">Movellan, J.</style></author><author><style face="normal" font="default" size="100%">Bartlett, M.</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">The computer expression recognition toolbox (CERT)</style></title><secondary-title><style face="normal" font="default" size="100%">2011 IEEE International Conference on Automatic Face Gesture Recognition and Workshops (FG 2011)</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">3D orientation</style></keyword><keyword><style  face="normal" font="default" size="100%">Accuracy</style></keyword><keyword><style  face="normal" font="default" size="100%">automatic real-time facial expression recognition</style></keyword><keyword><style  face="normal" font="default" size="100%">CERT</style></keyword><keyword><style  face="normal" font="default" size="100%">computer expression recognition toolbox</style></keyword><keyword><style  face="normal" font="default" size="100%">Detectors</style></keyword><keyword><style  face="normal" font="default" size="100%">dual core laptop</style></keyword><keyword><style  face="normal" font="default" size="100%">Emotion recognition</style></keyword><keyword><style  face="normal" font="default" size="100%">Encoding</style></keyword><keyword><style  face="normal" font="default" size="100%">extended Cohn-Kanade</style></keyword><keyword><style  face="normal" font="default" size="100%">Face</style></keyword><keyword><style  face="normal" font="default" size="100%">face recognition</style></keyword><keyword><style  face="normal" font="default" size="100%">facial action unit coding system</style></keyword><keyword><style  face="normal" font="default" size="100%">facial expression dataset</style></keyword><keyword><style  face="normal" font="default" size="100%">Facial features</style></keyword><keyword><style  face="normal" font="default" size="100%">FACS</style></keyword><keyword><style  face="normal" font="default" size="100%">Gold</style></keyword><keyword><style  face="normal" font="default" size="100%">Image coding</style></keyword><keyword><style  face="normal" font="default" size="100%">software tool</style></keyword><keyword><style  face="normal" font="default" size="100%">software tools</style></keyword><keyword><style  face="normal" font="default" size="100%">two-alternative forced choice task</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2011</style></year><pub-dates><date><style  face="normal" font="default" size="100%">03/2011</style></date></pub-dates></dates><publisher><style face="normal" font="default" size="100%">IEEE</style></publisher><pub-location><style face="normal" font="default" size="100%">Santa Barbara, CA</style></pub-location><isbn><style face="normal" font="default" 
size="100%">978-1-4244-9140-7</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p class=&quot;p1&quot;&gt;We present the Computer Expression Recognition Toolbox (CERT), a software tool for fully automatic real-time facial expression recognition, and officially release it for free academic use. CERT can automatically code the intensity of 19 different facial actions from the Facial Action Unit Coding System (FACS) and 6 different prototypical facial expressions. It also estimates the locations of 10 facial features as well as the 3-D orientation (yaw, pitch, roll) of the head. On a database of posed facial expressions, Extended Cohn-Kanade (CK+[1]), CERT achieves an average recognition performance (probability of correctness on a two-alternative forced choice (2AFC) task between one positive and one negative example) of 90.1% when analyzing facial actions. On a spontaneous facial expression dataset, CERT achieves an accuracy of nearly 80%. In a standard dual core laptop, CERT can process 320 × 240 video images in real time at approximately 10 frames per second.&lt;/p&gt;
</style></abstract><accession-num><style face="normal" font="default" size="100%">12007742</style></accession-num></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Tanaka, F.</style></author><author><style face="normal" font="default" size="100%">Movellan, J.</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">A barebones communicative robot based on social contingency and Infomax Control</style></title><secondary-title><style face="normal" font="default" size="100%">The 17th IEEE International Symposium on Robot and Human Interactive Communication, 2008. RO-MAN 2008</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">Actuators</style></keyword><keyword><style  face="normal" font="default" size="100%">barebones communicative robot</style></keyword><keyword><style  face="normal" font="default" size="100%">Communication system control</style></keyword><keyword><style  face="normal" font="default" size="100%">Delay</style></keyword><keyword><style  face="normal" font="default" size="100%">Detectors</style></keyword><keyword><style  face="normal" font="default" size="100%">Human robot interaction</style></keyword><keyword><style  face="normal" font="default" size="100%">human-model updating capability</style></keyword><keyword><style  face="normal" font="default" size="100%">humanoid robots</style></keyword><keyword><style  face="normal" font="default" size="100%">Hydrogen</style></keyword><keyword><style  face="normal" font="default" size="100%">Infomax control</style></keyword><keyword><style  face="normal" font="default" size="100%">man-machine systems</style></keyword><keyword><style  face="normal" font="default" size="100%">Pediatrics</style></keyword><keyword><style  face="normal" font="default" size="100%">policy improvement</style></keyword><keyword><style  face="normal" font="default" size="100%">Robot control</style></keyword><keyword><style  face="normal" font="default" size="100%">Robot sensing systems</style></keyword><keyword><style  face="normal" font="default" size="100%">Scheduling</style></keyword><keyword><style  face="normal" font="default" size="100%">social contingency</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2008</style></year><pub-dates><date><style  face="normal" font="default" size="100%">08/2008</style></date></pub-dates></dates><publisher><style face="normal" font="default" size="100%">IEEE</style></publisher><pub-location><style face="normal" font="default" size="100%">Munich</style></pub-location><isbn><style face="normal" font="default" size="100%">978-1-4244-2212-8</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">In this paper, we present a barebones robot which is capable of interacting with humans based on social contingency. It extends previous work on a contingency detector by adding both human-model updating (a developmental capability) and policy improvement (a learning capability), based on the framework of Infomax control.
The proposed controller interacts with humans in both active and responsive ways, handling the turn-taking between them.
</style></abstract><accession-num><style face="normal" font="default" size="100%">10174337</style></accession-num></record></records></xml>