<?xml version="1.0" encoding="UTF-8"?><xml><records><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Butko, N.</style></author><author><style face="normal" font="default" size="100%">Theocharous, G.</style></author><author><style face="normal" font="default" size="100%">Philipose, M.</style></author><author><style face="normal" font="default" size="100%">Movellan, J.</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Automated facial affect analysis for one-on-one tutoring applications</style></title><secondary-title><style face="normal" font="default" size="100%">2011 IEEE International Conference on Automatic Face Gesture Recognition and Workshops (FG 2011)</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">automated facial affect analysis</style></keyword><keyword><style  face="normal" font="default" size="100%">automated tutoring system</style></keyword><keyword><style  face="normal" font="default" size="100%">behavioural sciences computing</style></keyword><keyword><style  face="normal" font="default" size="100%">computer vision technique</style></keyword><keyword><style  face="normal" font="default" size="100%">Context</style></keyword><keyword><style  face="normal" font="default" size="100%">decision making</style></keyword><keyword><style  face="normal" font="default" size="100%">education</style></keyword><keyword><style  face="normal" font="default" size="100%">Emotion recognition</style></keyword><keyword><style  face="normal" font="default" size="100%">face recognition</style></keyword><keyword><style  face="normal" font="default" size="100%">Human</style></keyword><keyword><style  face="normal" font="default" size="100%">human computer interaction</style></keyword><keyword><style  face="normal" font="default" 
size="100%">Labeling</style></keyword><keyword><style  face="normal" font="default" size="100%">Machine Learning</style></keyword><keyword><style  face="normal" font="default" size="100%">Mood</style></keyword><keyword><style  face="normal" font="default" size="100%">Histograms</style></keyword><keyword><style  face="normal" font="default" size="100%">one-on-one tutoring application</style></keyword><keyword><style  face="normal" font="default" size="100%">Intelligent tutoring systems</style></keyword><keyword><style  face="normal" font="default" size="100%">student mood analysis</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2011</style></year><pub-dates><date><style  face="normal" font="default" size="100%">03/2011</style></date></pub-dates></dates><publisher><style face="normal" font="default" size="100%">IEEE</style></publisher><pub-location><style face="normal" font="default" size="100%">Santa Barbara, CA</style></pub-location><isbn><style face="normal" font="default" size="100%">978-1-4244-9140-7</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;In this paper, we explore the use of computer vision techniques to analyze students' moods during one-on-one teaching interactions. The eventual goal is to create automated tutoring systems that are sensitive to the student's mood and affective state. We find that the problem of accurately determining a child's mood from a single video frame is surprisingly difficult, even for humans. However, when the system is allowed to make decisions based on information from 10 to 30 seconds of video, excellent performance may be obtained.&lt;/p&gt;
</style></abstract><accession-num><style face="normal" font="default" size="100%">12007758</style></accession-num></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Ruvolo, P.</style></author><author><style face="normal" font="default" size="100%">Movellan, J.</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Automatic cry detection in early childhood education settings</style></title><secondary-title><style face="normal" font="default" size="100%">7th IEEE International Conference on Development and Learning, 2008. ICDL 2008</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">Acoustic noise</style></keyword><keyword><style  face="normal" font="default" size="100%">auditory moods</style></keyword><keyword><style  face="normal" font="default" size="100%">automatic cry detection</style></keyword><keyword><style  face="normal" font="default" size="100%">behavioural sciences computing</style></keyword><keyword><style  face="normal" font="default" size="100%">Deafness</style></keyword><keyword><style  face="normal" font="default" size="100%">early childhood education settings</style></keyword><keyword><style  face="normal" font="default" size="100%">education</style></keyword><keyword><style  face="normal" font="default" size="100%">Educational robots</style></keyword><keyword><style  face="normal" font="default" size="100%">Emotion recognition</style></keyword><keyword><style  face="normal" font="default" size="100%">human coders</style></keyword><keyword><style  face="normal" font="default" size="100%">Humans</style></keyword><keyword><style  face="normal" font="default" size="100%">learning (artificial intelligence)</style></keyword><keyword><style  face="normal" font="default" size="100%">Machine 
Learning</style></keyword><keyword><style  face="normal" font="default" size="100%">Mood</style></keyword><keyword><style  face="normal" font="default" size="100%">preschool classrooms</style></keyword><keyword><style  face="normal" font="default" size="100%">Prototypes</style></keyword><keyword><style  face="normal" font="default" size="100%">Robustness</style></keyword><keyword><style  face="normal" font="default" size="100%">Working environment noise</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2008</style></year><pub-dates><date><style  face="normal" font="default" size="100%">08/2008</style></date></pub-dates></dates><publisher><style face="normal" font="default" size="100%">IEEE</style></publisher><pub-location><style face="normal" font="default" size="100%">Monterey, CA</style></pub-location><isbn><style face="normal" font="default" size="100%">978-1-4244-2661-4</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;We present results on applying a novel machine learning approach for learning auditory moods in natural environments [1] to the problem of detecting crying episodes in preschool classrooms. The resulting system achieved levels of performance approaching those of human coders and also significantly outperformed previous approaches to this problem [2].&lt;/p&gt;
</style></abstract><accession-num><style face="normal" font="default" size="100%">10367600</style></accession-num></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Ruvolo, P.</style></author><author><style face="normal" font="default" size="100%">Whitehill, J.</style></author><author><style face="normal" font="default" size="100%">Virnes, M.</style></author><author><style face="normal" font="default" size="100%">Movellan, J.</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Building a more effective teaching robot using apprenticeship learning</style></title><secondary-title><style face="normal" font="default" size="100%">7th IEEE International Conference on Development and Learning, 2008. ICDL 2008</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">apprenticeship learning</style></keyword><keyword><style  face="normal" font="default" size="100%">automated helicopter flight</style></keyword><keyword><style  face="normal" font="default" size="100%">Automatic control</style></keyword><keyword><style  face="normal" font="default" size="100%">Data mining</style></keyword><keyword><style  face="normal" font="default" size="100%">Delay</style></keyword><keyword><style  face="normal" font="default" size="100%">education</style></keyword><keyword><style  face="normal" font="default" size="100%">Educational robots</style></keyword><keyword><style  face="normal" font="default" size="100%">expert teaching</style></keyword><keyword><style  face="normal" font="default" size="100%">Helicopters</style></keyword><keyword><style  face="normal" font="default" size="100%">Human-robot interaction</style></keyword><keyword><style  face="normal" font="default" size="100%">humanoid robots</style></keyword><keyword><style  face="normal" font="default" 
size="100%">Humans</style></keyword><keyword><style  face="normal" font="default" size="100%">Learning systems</style></keyword><keyword><style  face="normal" font="default" size="100%">mechanical control</style></keyword><keyword><style  face="normal" font="default" size="100%">robot teaching</style></keyword><keyword><style  face="normal" font="default" size="100%">Robotics and Automation</style></keyword><keyword><style  face="normal" font="default" size="100%">RUBI social robot</style></keyword><keyword><style  face="normal" font="default" size="100%">time 18 month to 24 month</style></keyword><keyword><style  face="normal" font="default" size="100%">timing</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2008</style></year><pub-dates><date><style  face="normal" font="default" size="100%">08/2008</style></date></pub-dates></dates><publisher><style face="normal" font="default" size="100%">IEEE</style></publisher><pub-location><style face="normal" font="default" size="100%">Monterey, CA</style></pub-location><isbn><style face="normal" font="default" size="100%">978-1-4244-2661-4</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;What defines good teaching? While attributes such as timing, responsiveness to social cues, and pacing of material clearly play a role, it is difficult to create a comprehensive specification of what it means to be a good teacher. On the other hand, it is relatively easy to obtain examples of expert teaching behavior by observing a real teacher. With this inspiration as our guide, we investigated apprenticeship learning methods [1] that use data recorded from expert teachers as a means of improving the teaching abilities of RUBI, a social robot immersed in a classroom of 18- to 24-month-old children. While this approach has achieved considerable success in mechanical control, such as automated helicopter flight [2], until now there has been little work on applying it to the field of social robotics. This paper explores two particular approaches to apprenticeship learning and analyzes the models of teaching that each approach learns from the data of the human teacher. Empirical results indicate that the apprenticeship learning paradigm, though still nascent in its use in the social robotics field, holds promise, and that our proposed methods can already extract meaningful teaching models from demonstrations of a human expert.&lt;/p&gt;
</style></abstract><accession-num><style face="normal" font="default" size="100%">10367601</style></accession-num></record></records></xml>