<?xml version="1.0" encoding="UTF-8"?><xml><records><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Ruvolo, P.</style></author><author><style face="normal" font="default" size="100%">Fasel, I.</style></author><author><style face="normal" font="default" size="100%">Movellan, J.</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Auditory mood detection for social and educational robots</style></title><secondary-title><style face="normal" font="default" size="100%">IEEE International Conference on Robotics and Automation, 2008. ICRA 2008</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">auditory mood detection</style></keyword><keyword><style  face="normal" font="default" size="100%">Computer vision</style></keyword><keyword><style  face="normal" font="default" size="100%">educational robot</style></keyword><keyword><style  face="normal" font="default" size="100%">Educational robots</style></keyword><keyword><style  face="normal" font="default" size="100%">Emotion recognition</style></keyword><keyword><style  face="normal" font="default" size="100%">emotional speech database</style></keyword><keyword><style  face="normal" font="default" size="100%">face detection</style></keyword><keyword><style  face="normal" font="default" size="100%">hearing</style></keyword><keyword><style  face="normal" font="default" size="100%">interactive robotic application</style></keyword><keyword><style  face="normal" font="default" size="100%">learning (artificial intelligence)</style></keyword><keyword><style  face="normal" font="default" size="100%">Machine Learning</style></keyword><keyword><style  face="normal" font="default" size="100%">Mood Prototypes</style></keyword><keyword><style  face="normal" font="default" size="100%">object recognition</style></keyword><keyword><style  face="normal" font="default" size="100%">Robotics and Automation Robots</style></keyword><keyword><style  face="normal" font="default" size="100%">social mood</style></keyword><keyword><style  face="normal" font="default" size="100%">social robot</style></keyword><keyword><style  face="normal" font="default" size="100%">Speech</style></keyword><keyword><style  face="normal" font="default" size="100%">USA Councils</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2008</style></year><pub-dates><date><style  face="normal" font="default" size="100%">05/2008</style></date></pub-dates></dates><publisher><style face="normal" font="default" size="100%">IEEE</style></publisher><pub-location><style face="normal" font="default" size="100%">Pasadena, CA</style></pub-location><isbn><style face="normal" font="default" size="100%">978-1-4244-1646-2</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;&lt;span style=&quot;color: rgb(68, 68, 68); font-family: 'Lucida Grande', Verdana, sans-serif; font-size: 14px;&quot;&gt;Social robots face the fundamental challenge of detecting and adapting their behavior to the current social mood. For example, robots that assist teachers in early education must choose different behaviors depending on whether the children are crying, laughing, sleeping, or singing songs. 
Interactive robotic applications require perceptual algorithms that both run in real time and are adaptable to the challenging conditions of daily life. This paper explores a novel approach to auditory mood detection that was born out of our experience immersing social robots in classroom environments. We propose a new set of low-level spectral contrast features that extends a class of features which have proven very successful for object recognition in the modern computer vision literature. Features are selected and combined using machine learning approaches so as to make decisions about the ongoing auditory mood. We demonstrate excellent performance on two standard emotional speech databases (the Berlin Emotional Speech database [F. Burkhardt et al., 2005] and the ORATOR dataset [H. Quast, 2001]). In addition, we establish strong baseline performance for mood detection on a database collected from a social robot immersed in a classroom of 18-24 month old children [J. Movellan et al., 2007]. This approach operates in real time at little computational cost. It has the potential to greatly enhance the effectiveness of social robots in daily life environments.
</style></abstract><accession-num><style face="normal" font="default" size="100%">10014826</style></accession-num></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Ruvolo, P.</style></author><author><style face="normal" font="default" size="100%">Movellan, J.</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Automatic cry detection in early childhood education settings</style></title><secondary-title><style face="normal" font="default" size="100%">7th IEEE International Conference on Development and Learning, 2008. ICDL 2008</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">Acoustic noise</style></keyword><keyword><style  face="normal" font="default" size="100%">auditory moods</style></keyword><keyword><style  face="normal" font="default" size="100%">automatic cry detection</style></keyword><keyword><style  face="normal" font="default" size="100%">behavioural sciences computing</style></keyword><keyword><style  face="normal" font="default" size="100%">Deafness</style></keyword><keyword><style  face="normal" font="default" size="100%">early childhood education settings</style></keyword><keyword><style  face="normal" font="default" size="100%">education</style></keyword><keyword><style  face="normal" font="default" size="100%">Educational robots</style></keyword><keyword><style  face="normal" font="default" size="100%">Emotion recognition</style></keyword><keyword><style  face="normal" font="default" size="100%">human coders</style></keyword><keyword><style  face="normal" font="default" size="100%">Humans</style></keyword><keyword><style  face="normal" font="default" size="100%">learning (artificial intelligence)</style></keyword><keyword><style  face="normal" font="default" size="100%">Machine Learning</style></keyword><keyword><style  face="normal" font="default" size="100%">Mood</style></keyword><keyword><style  face="normal" font="default" size="100%">preschool classrooms</style></keyword><keyword><style  face="normal" font="default" size="100%">Prototypes</style></keyword><keyword><style  face="normal" font="default" size="100%">Robustness</style></keyword><keyword><style  face="normal" font="default" size="100%">Working environment noise</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2008</style></year><pub-dates><date><style  face="normal" font="default" size="100%">08/2008</style></date></pub-dates></dates><publisher><style face="normal" font="default" size="100%">IEEE</style></publisher><pub-location><style face="normal" font="default" size="100%">Monterey, CA</style></pub-location><isbn><style face="normal" font="default" size="100%">978-1-4244-2661-4</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">We present results on applying a novel machine learning approach for learning auditory moods in natural environments [1] to the problem of detecting crying episodes in preschool classrooms.
The resulting system achieved levels of performance approaching those of human coders and also significantly outperformed previous approaches to this problem [2].
</style></abstract><accession-num><style face="normal" font="default" size="100%">10367600</style></accession-num></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Ruvolo, P.</style></author><author><style face="normal" font="default" size="100%">Whitehill, J.</style></author><author><style face="normal" font="default" size="100%">Virnes, M.</style></author><author><style face="normal" font="default" size="100%">Movellan, J.</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Building a more effective teaching robot using apprenticeship learning</style></title><secondary-title><style face="normal" font="default" size="100%">7th IEEE International Conference on Development and Learning, 2008. ICDL 2008</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">apprenticeship learning</style></keyword><keyword><style  face="normal" font="default" size="100%">automated helicopter flight</style></keyword><keyword><style  face="normal" font="default" size="100%">Automatic control</style></keyword><keyword><style  face="normal" font="default" size="100%">Data mining</style></keyword><keyword><style  face="normal" font="default" size="100%">Delay</style></keyword><keyword><style  face="normal" font="default" size="100%">education</style></keyword><keyword><style  face="normal" font="default" size="100%">Educational robots</style></keyword><keyword><style  face="normal" font="default" size="100%">expert teaching</style></keyword><keyword><style  face="normal" font="default" size="100%">Helicopters</style></keyword><keyword><style  face="normal" font="default" size="100%">Human-robot interaction</style></keyword><keyword><style  face="normal" font="default" size="100%">humanoid robots</style></keyword><keyword><style  face="normal" font="default" size="100%">Humans</style></keyword><keyword><style  face="normal" font="default" size="100%">Learning systems</style></keyword><keyword><style  face="normal" font="default" size="100%">mechanical control</style></keyword><keyword><style  face="normal" font="default" size="100%">robot teaching</style></keyword><keyword><style  face="normal" font="default" size="100%">Robotics and Automation</style></keyword><keyword><style  face="normal" font="default" size="100%">RUBI social robot</style></keyword><keyword><style  face="normal" font="default" size="100%">time 18 month to 24 month</style></keyword><keyword><style  face="normal" font="default" size="100%">timing</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2008</style></year><pub-dates><date><style  face="normal" font="default" size="100%">08/2008</style></date></pub-dates></dates><publisher><style face="normal" font="default" size="100%">IEEE</style></publisher><pub-location><style face="normal" font="default" size="100%">Monterey, CA</style></pub-location><isbn><style face="normal" font="default" size="100%">978-1-4244-2661-4</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">What defines good teaching?
While attributes such as timing, responsiveness to social cues, and pacing of material clearly play a role, it is difficult to create a comprehensive specification of what it means to be a good teacher. On the other hand, it is relatively easy to obtain examples of expert teaching behavior by observing a real teacher. With this inspiration as our guide, we investigated apprenticeship learning methods [1] that use data recorded from expert teachers as a means of improving the teaching abilities of RUBI, a social robot immersed in a classroom of 18-24 month old children. While this approach has achieved considerable success in mechanical control, such as automated helicopter flight [2], until now there has been little work on applying it to the field of social robotics. This paper explores two particular approaches to apprenticeship learning, and analyzes the models of teaching that each approach learns from the data of the human teacher. Empirical results indicate that the apprenticeship learning paradigm, though still nascent in its use in the social robotics field, holds promise, and that our proposed methods can already extract meaningful teaching models from demonstrations of a human expert.
</style></abstract><accession-num><style face="normal" font="default" size="100%">10367601</style></accession-num></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>10</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Tanaka, F.</style></author><author><style face="normal" font="default" size="100%">Movellan, J.</style></author><author><style face="normal" font="default" size="100%">Taylor, C.</style></author><author><style face="normal" font="default" size="100%">Ruvolo, P.</style></author><author><style face="normal" font="default" size="100%">Eckhardt, M.</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">The RUBI Project: A Progress Report</style></title><secondary-title><style face="normal" font="default" size="100%">Proceedings of the 2nd ACM/IEEE International Conference on Human-Robot Interaction</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2007</style></year></dates><language><style face="normal" font="default" size="100%">eng</style></language></record></records></xml>