<?xml version="1.0" encoding="UTF-8"?><xml><records><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Butko, N.</style></author><author><style face="normal" font="default" size="100%">Movellan, J.</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Infomax Control of Eye Movements</style></title><secondary-title><style face="normal" font="default" size="100%">IEEE Transactions on Autonomous Mental Development</style></secondary-title></titles><keywords><keyword><style face="normal" font="default" size="100%">active information gathering</style></keyword><keyword><style face="normal" font="default" size="100%">autonomous computer program</style></keyword><keyword><style face="normal" font="default" size="100%">autonomous physical agent</style></keyword><keyword><style face="normal" font="default" size="100%">Computer vision</style></keyword><keyword><style face="normal" font="default" size="100%">dynamic tracking task</style></keyword><keyword><style face="normal" font="default" size="100%">Eye movement</style></keyword><keyword><style face="normal" font="default" size="100%">eye movement strategy</style></keyword><keyword><style face="normal" font="default" size="100%">face detection</style></keyword><keyword><style face="normal" font="default" size="100%">faces</style></keyword><keyword><style face="normal" font="default" size="100%">Infomax control</style></keyword><keyword><style face="normal" font="default" size="100%">motor system</style></keyword><keyword><style face="normal" font="default" size="100%">object detection</style></keyword><keyword><style face="normal" font="default" size="100%">optimal control</style></keyword><keyword><style face="normal" font="default" size="100%">optimal eye movement controller</style></keyword><keyword><style face="normal" font="default" size="100%">policy gradient</style></keyword><keyword><style face="normal" font="default" size="100%">probabilistic model</style></keyword><keyword><style face="normal" font="default" size="100%">sensory system</style></keyword><keyword><style face="normal" font="default" size="100%">static scenes</style></keyword><keyword><style face="normal" font="default" size="100%">Visual Perception</style></keyword><keyword><style face="normal" font="default" size="100%">visual search</style></keyword><keyword><style face="normal" font="default" size="100%">visual system</style></keyword></keywords><dates><year><style face="normal" font="default" size="100%">2010</style></year></dates><volume><style face="normal" font="default" size="100%">2</style></volume><pages><style face="normal" font="default" size="100%">91-107</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Recently, infomax methods of optimal control have begun to reshape how we think about active information gathering. We show how such methods can be used to formulate the problem of choosing where to look. We show how an optimal eye movement controller can be learned from subjective experiences of information gathering, and we explore in simulation properties of the optimal controller. 
This controller outperforms other eye movement strategies proposed in the literature. The learned eye movement strategies are tailored to the specific visual system of the learner; we show that agents with different kinds of eyes should follow different eye movement strategies. Then we use these insights to build an autonomous computer program that follows this approach and learns to search for faces in images faster than current state-of-the-art techniques. The context of these results is search in static scenes, but the approach extends easily, and gives further efficiency gains, to dynamic tracking tasks. A limitation of infomax methods is that they require probabilistic models of uncertainty of the sensory system, the motor system, and the external world. In the final section of this paper, we propose future avenues of research by which autonomous physical agents may use developmental experience to subjectively characterize the uncertainties they face.
</style></abstract><issue><style face="normal" font="default" size="100%">2</style></issue><section><style face="normal" font="default" size="100%">91</style></section></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Wu, T.</style></author><author><style face="normal" font="default" size="100%">Butko, N.</style></author><author><style face="normal" font="default" size="100%">Ruvolo, P.</style></author><author><style face="normal" font="default" size="100%">Bartlett, M.</style></author><author><style face="normal" font="default" size="100%">Movellan, J.</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Learning to Make Facial Expressions</style></title><secondary-title><style face="normal" font="default" size="100%">IEEE 8th International Conference on Development and Learning, 2009. ICDL 2009</style></secondary-title></titles><keywords><keyword><style face="normal" font="default" size="100%">Actuators</style></keyword><keyword><style face="normal" font="default" size="100%">Emotion recognition</style></keyword><keyword><style face="normal" font="default" size="100%">face detection</style></keyword><keyword><style face="normal" font="default" size="100%">face recognition</style></keyword><keyword><style face="normal" font="default" size="100%">facial motor parameters</style></keyword><keyword><style face="normal" font="default" size="100%">Feedback</style></keyword><keyword><style face="normal" font="default" size="100%">Humans</style></keyword><keyword><style face="normal" font="default" size="100%">learning (artificial intelligence)</style></keyword><keyword><style face="normal" font="default" size="100%">Machine Learning</style></keyword><keyword><style face="normal" font="default" size="100%">Magnetic heads</style></keyword><keyword><style face="normal" font="default" size="100%">Pediatrics</style></keyword><keyword><style face="normal" font="default" size="100%">real-time facial expression recognition</style></keyword><keyword><style face="normal" font="default" size="100%">Robot sensing systems</style></keyword><keyword><style face="normal" font="default" size="100%">robotic head</style></keyword><keyword><style face="normal" font="default" size="100%">Robots</style></keyword><keyword><style face="normal" font="default" size="100%">self-guided learning</style></keyword><keyword><style face="normal" font="default" size="100%">Servomechanisms</style></keyword><keyword><style face="normal" font="default" size="100%">Servomotors</style></keyword></keywords><dates><year><style face="normal" font="default" size="100%">2009</style></year><pub-dates><date><style face="normal" font="default" size="100%">06/2009</style></date></pub-dates></dates><publisher><style face="normal" font="default" size="100%">IEEE</style></publisher><pub-location><style face="normal" font="default" size="100%">Shanghai</style></pub-location><isbn><style face="normal" font="default" size="100%">978-1-4244-4117-4</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">This paper explores the process of self-guided learning of realistic facial expression production by a robotic head with 31 
degrees of freedom. Facial motor parameters were learned using feedback from real-time facial expression recognition from video. The experiments show that the mapping of servos to expressions was learned in under one hour of training time. We discuss how our work may help illuminate the computational study of how infants learn to make facial expressions.
</style></abstract><accession-num><style face="normal" font="default" size="100%">10801981</style></accession-num></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Ruvolo, P.</style></author><author><style face="normal" font="default" size="100%">Fasel, I.</style></author><author><style face="normal" font="default" size="100%">Movellan, J.</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Auditory mood detection for social and educational robots</style></title><secondary-title><style face="normal" font="default" size="100%">IEEE International Conference on Robotics and Automation, 2008. ICRA 2008</style></secondary-title></titles><keywords><keyword><style face="normal" font="default" size="100%">auditory mood detection</style></keyword><keyword><style face="normal" font="default" size="100%">Computer vision</style></keyword><keyword><style face="normal" font="default" size="100%">educational robot</style></keyword><keyword><style face="normal" font="default" size="100%">Educational robots</style></keyword><keyword><style face="normal" font="default" size="100%">Emotion recognition</style></keyword><keyword><style face="normal" font="default" size="100%">emotional speech database</style></keyword><keyword><style face="normal" font="default" size="100%">face detection</style></keyword><keyword><style face="normal" font="default" size="100%">hearing</style></keyword><keyword><style face="normal" font="default" size="100%">interactive robotic application</style></keyword><keyword><style face="normal" font="default" size="100%">learning (artificial intelligence)</style></keyword><keyword><style face="normal" font="default" size="100%">Machine Learning</style></keyword><keyword><style face="normal" font="default" size="100%">Mood Prototypes</style></keyword><keyword><style face="normal" font="default" size="100%">object recognition</style></keyword><keyword><style face="normal" font="default" size="100%">Robotics and Automation Robots</style></keyword><keyword><style face="normal" font="default" size="100%">social mood</style></keyword><keyword><style face="normal" font="default" size="100%">social robot</style></keyword><keyword><style face="normal" font="default" size="100%">Speech</style></keyword><keyword><style face="normal" font="default" size="100%">USA Councils</style></keyword></keywords><dates><year><style face="normal" font="default" size="100%">2008</style></year><pub-dates><date><style face="normal" font="default" size="100%">05/2008</style></date></pub-dates></dates><publisher><style face="normal" font="default" size="100%">IEEE</style></publisher><pub-location><style face="normal" font="default" size="100%">Pasadena, CA</style></pub-location><isbn><style face="normal" font="default" size="100%">978-1-4244-1646-2</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Social robots face the fundamental challenge of detecting and adapting their behavior to the current social mood. For example, robots that assist teachers in early education must choose different behaviors depending on whether the children are crying, laughing, sleeping, or singing songs. 
Interactive robotic applications require perceptual algorithms that both run in real time and are adaptable to the challenging conditions of daily life. This paper explores a novel approach to auditory mood detection which was born out of our experience immersing social robots in classroom environments. We propose a new set of low-level spectral contrast features that extends a class of features which have proven very successful for object recognition in the modern computer vision literature. Features are selected and combined using machine learning approaches so as to make decisions about the ongoing auditory mood. We demonstrate excellent performance on two standard emotional speech databases (the Berlin Emotional Speech database [W. Burkhardt et al., 2005], and the ORATOR dataset [H. Quast, 2001]). In addition, we establish strong baseline performance for mood detection on a database collected from a social robot immersed in a classroom of 18-24 month old children [J. Movellan et al., 2007]. This approach operates in real time at little computational cost. It has the potential to greatly enhance the effectiveness of social robots in daily life environments.
</style></abstract><accession-num><style face="normal" font="default" size="100%">10014826</style></accession-num></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Whitehill, J.</style></author><author><style face="normal" font="default" size="100%">Movellan, Javier R.</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">A discriminative approach to frame-by-frame head pose tracking</style></title><secondary-title><style face="normal" font="default" size="100%">8th IEEE International Conference on Automatic Face &amp; Gesture Recognition, 2008. FG '08</style></secondary-title></titles><keywords><keyword><style face="normal" font="default" size="100%">accuracy drift</style></keyword><keyword><style face="normal" font="default" size="100%">continuous video sequence</style></keyword><keyword><style face="normal" font="default" size="100%">controlled illumination condition</style></keyword><keyword><style face="normal" font="default" size="100%">discriminative approach</style></keyword><keyword><style face="normal" font="default" size="100%">face detection</style></keyword><keyword><style face="normal" font="default" size="100%">face recognition</style></keyword><keyword><style face="normal" font="default" size="100%">facial appearance</style></keyword><keyword><style face="normal" font="default" size="100%">frame-by-frame head pose tracking</style></keyword><keyword><style face="normal" font="default" size="100%">Humans</style></keyword><keyword><style face="normal" font="default" size="100%">Image analysis</style></keyword><keyword><style face="normal" font="default" size="100%">Image databases</style></keyword><keyword><style face="normal" font="default" size="100%">Laboratories</style></keyword><keyword><style face="normal" font="default" size="100%">Lighting</style></keyword><keyword><style face="normal" font="default" size="100%">Magnetic heads</style></keyword><keyword><style face="normal" font="default" size="100%">mean square error methods</style></keyword><keyword><style face="normal" font="default" size="100%">pose estimation</style></keyword><keyword><style face="normal" font="default" size="100%">Robustness</style></keyword><keyword><style face="normal" font="default" size="100%">root-mean-square error tracking</style></keyword><keyword><style face="normal" font="default" size="100%">System testing</style></keyword><keyword><style face="normal" font="default" size="100%">Video sequences</style></keyword></keywords><dates><year><style face="normal" font="default" size="100%">2008</style></year><pub-dates><date><style face="normal" font="default" size="100%">09/2008</style></date></pub-dates></dates><publisher><style face="normal" font="default" size="100%">IEEE</style></publisher><pub-location><style face="normal" font="default" size="100%">Amsterdam</style></pub-location><isbn><style face="normal" font="default" size="100%">978-1-4244-2153-4</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">We present a discriminative approach to frame-by-frame head pose tracking that is robust to a wide range of illuminations 
and facial appearances and that is inherently immune to accuracy drift. Most previous research on head pose tracking has been validated on test datasets spanning only a small number (&lt; 20) of subjects under controlled illumination conditions on continuous video sequences. In contrast, the system presented in this paper was both trained and tested on a much larger database, GENKI, spanning tens of thousands of different subjects, illuminations, and geographical locations from images on the Web. Our pose estimator achieves root-mean-square (RMS) errors of 5.82°, 5.65°, and 2.96° for yaw, pitch, and roll, respectively. A set of 4000 images from this dataset, labeled for pose, was collected and released for use by the research community.
</style></abstract><accession-num><style face="normal" font="default" size="100%">10571793</style></accession-num></record></records></xml>