Butko, N., Theocharous, G., Philipose, M., & Movellan, J. (2011). Automated facial affect analysis for one-on-one tutoring applications. Santa Barbara, CA: IEEE, March 2011. ISBN 978-1-4244-9140-7.

Abstract: In this paper, we explore the use of computer vision techniques to analyze students' moods during one-on-one teaching interactions. The eventual goal is to create automated tutoring systems that are sensitive to the student's mood and affective state. We find that the problem of accurately determining a child's mood from a single video frame is surprisingly difficult, even for humans. However, when the system is allowed to make decisions based on information from 10 to 30 seconds of video, excellent performance may be obtained.

Keywords: automated facial affect analysis; automated tutoring system; behavioural sciences computing; computer vision technique; Context; decision making; education; Emotion recognition; face recognition; Human; human computer interaction; Labeling; Machine Learning; Mood; Histograms; one-on-one tutoring application; Intelligent tutoring systems; student mood analysis
URL: https://rubi.ucsd.edu/content/automated-facial-affect-analysis-one-one-tutoring-applications
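
A minimal sketch of the pooling idea in the abstract above: single-frame mood estimates are noisy, but averaging them over a 10 to 30 second window stabilizes them. The frame rate, window length, and simulated per-frame scores are all hypothetical stand-ins, not the authors' pipeline:

    import numpy as np

    def pooled_mood_score(frame_scores, fps=30, window_seconds=10):
        # Moving average of noisy per-frame mood estimates over a sliding
        # window (here 10 s at 30 fps; both numbers are assumptions).
        window = int(fps * window_seconds)
        if len(frame_scores) < window:
            raise ValueError("need at least one full window of frames")
        # Cumulative-sum trick: every window average in O(n) total.
        c = np.cumsum(np.insert(np.asarray(frame_scores, dtype=float), 0, 0.0))
        return (c[window:] - c[:-window]) / window

    # Toy demo: a weak per-frame signal (true mood level 0.6) buried in noise.
    rng = np.random.default_rng(0)
    frames = np.clip(0.6 + 0.3 * rng.standard_normal(30 * 60), 0.0, 1.0)
    print(frames.std(), pooled_mood_score(frames).std())  # pooled spread is far smaller

The cumulative-sum trick keeps the pooling at O(n) overall, so it adds essentially no cost on top of the per-frame detector.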

Butko, N., & Movellan, J. (2010). Infomax Control of Eye Movements. IEEE Transactions on Autonomous Mental Development, vol. 2, pp. 91-107. ISSN 1943-0604.

Abstract: Recently, infomax methods of optimal control have begun to reshape how we think about active information gathering. We show how such methods can be used to formulate the problem of choosing where to look. We show how an optimal eye movement controller can be learned from subjective experiences of information gathering, and we explore in simulation the properties of the optimal controller. This controller outperforms other eye movement strategies proposed in the literature. The learned eye movement strategies are tailored to the specific visual system of the learner; we show that agents with different kinds of eyes should follow different eye movement strategies. We then use these insights to build an autonomous computer program that follows this approach and learns to search for faces in images faster than current state-of-the-art techniques. The context of these results is search in static scenes, but the approach extends easily, and gives further efficiency gains, to dynamic tracking tasks. A limitation of infomax methods is that they require probabilistic models of the uncertainty in the sensory system, the motor system, and the external world. In the final section of this paper, we propose future avenues of research by which autonomous physical agents may use developmental experience to subjectively characterize the uncertainties they face.

Keywords: active information gathering; autonomous computer program; autonomous physical agent; Computer vision; dynamic tracking task; Eye movement; eye movement strategy; face detection; faces; Infomax control; motor system; object detection; optimal control; optimal eye movement controller; policy gradient; probabilistic model; sensory system; static scenes; Visual Perception; visual search; visual system
URL: https://rubi.ucsd.edu/content/infomax-control-eye-movements
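
The paper learns a long-horizon controller by policy gradient; the sketch below shows only the myopic infomax objective at its core: fixate the location with the greatest expected information gain about where the target is. The discrete grid and the detector reliabilities (p_hit, p_false) are assumptions for illustration:

    import numpy as np

    def entropy(p):
        nz = p[p > 0]
        return -np.sum(nz * np.log(nz))

    def expected_info_gain(belief, a, p_hit=0.9, p_false=0.2):
        # Expected entropy reduction about the target's location if the eye
        # fixates cell a and a noisy binary detector then reports hit/miss.
        like = np.full(len(belief), p_false)  # detector often misfires off-fovea
        like[a] = p_hit                       # ...and is most reliable at the fovea
        p_fire = belief @ like
        h_post = 0.0
        for p_obs, obs_like in ((p_fire, like), (1.0 - p_fire, 1.0 - like)):
            if p_obs > 1e-12:
                h_post += p_obs * entropy(belief * obs_like / p_obs)  # Bayes posterior
        return entropy(belief) - h_post

    def infomax_fixation(belief):
        # Greedy one-step infomax rule: look where you expect to learn the most.
        return int(np.argmax([expected_info_gain(belief, a) for a in range(len(belief))]))

    belief = np.array([0.05, 0.05, 0.40, 0.40, 0.10])  # target location uncertain
    print(infomax_fixation(belief))  # picks one of the two ambiguous cells

A learned long-horizon controller, as in the paper, can also position the eye for informative future fixations, which this greedy rule cannot.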

Wu, T., Butko, N., Ruvulo, P., Bartlett, M., & Movellan, J. (2009). Learning to Make Facial Expressions. Shanghai: IEEE, June 2009. ISBN 978-1-4244-4117-4.

Abstract: This paper explores the process of self-guided learning of realistic facial expression production by a robotic head with 31 degrees of freedom. Facial motor parameters were learned using feedback from real-time facial expression recognition from video. The experiments show that the mapping of servos to expressions was learned in under one hour of training time. We discuss how our work may help illuminate the computational study of how infants learn to make facial expressions.

Keywords: Actuators; Emotion recognition; face detection; face recognition; facial motor parameters; Feedback; Humans; learning (artificial intelligence); Machine Learning; Magnetic heads; Pediatrics; real-time facial expression recognition; Robot sensing systems; robotic head; Robots; self-guided learning; Servomechanisms; Servomotors
URL: https://rubi.ucsd.edu/content/learning-make-facial-expressions
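
The learning loop described above, perturbing the motors and keeping changes that score better under an expression recognizer, can be sketched as stochastic hill climbing over the 31 servo positions. The recognizer below is a made-up smooth stand-in peaked at a hidden target pose; the actual system used real-time expression recognition from video as the feedback signal:

    import numpy as np

    def learn_expression(score_fn, n_servos=31, iters=2000, step=0.05, seed=0):
        # Stochastic hill climbing over servo positions in [0, 1]^n_servos:
        # perturb the pose, keep it if the recognizer scores it higher.
        rng = np.random.default_rng(seed)
        pose = rng.uniform(0.0, 1.0, n_servos)  # random initial face
        best = score_fn(pose)
        for _ in range(iters):
            trial = np.clip(pose + step * rng.standard_normal(n_servos), 0.0, 1.0)
            s = score_fn(trial)
            if s > best:
                pose, best = trial, s
        return pose, best

    # Made-up recognizer: score peaks at a hidden "smile" servo configuration.
    target = np.linspace(0.2, 0.8, 31)
    smile_score = lambda pose: float(np.exp(-np.sum((pose - target) ** 2)))
    pose, score = learn_expression(smile_score)
    print(round(score, 3))  # climbs from near 0 toward 1.0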

Butko, N., Zhang, L., Cottrell, G., & Movellan, J. (2008). Visual saliency model for robot cameras. Pasadena, CA: IEEE, May 2008. ISBN 978-1-4244-1646-2.

Abstract: Recent years have seen an explosion of research on the computational modeling of human visual attention in task-free conditions, i.e., given an image, predict where humans are likely to look. This area of research could potentially provide general-purpose mechanisms for robots to orient their cameras. One difficulty is that most current models of visual saliency are computationally very expensive and not suited to the real-time implementations needed for robotic applications. Here we propose a fast approximation to a Bayesian model of visual saliency recently proposed in the literature. The approximation can run in real time on current computers at very little computational cost, leaving plenty of CPU cycles for other tasks. We empirically evaluate the saliency model in the domain of controlling saccades of a camera in social robotics situations. The goal was to orient a camera as quickly as possible toward human faces. We found that this simple general-purpose saliency model doubled the success rate of the camera: it captured images of people 70% of the time, compared to a 35% success rate when the camera was controlled using an open-loop scheme. After 3 saccades (camera movements), the robot was 96% likely to have captured at least one person. The results suggest that visual saliency models may provide a useful front end for camera control in robotics applications.

Keywords: Application software; approximation theory; Bayes methods; Bayesian methods; Bayesian model; camera control; Cameras; Central Processing Unit; Computational efficiency; Computational modeling; Explosions; fast approximation; human visual attention; Humans; Open loop systems; robot cameras; robot vision; Robot vision systems; robotic application; task free conditions; visual saliency model
URL: https://rubi.ucsd.edu/content/visual-saliency-model-robot-cameras
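
A rough sketch of the kind of fast, general-purpose saliency front end the abstract describes, though not the paper's exact Bayesian model: center-surround contrast computed with box filters over integral images (cheap enough for real time), scored by how rare each response is in the current image:

    import numpy as np

    def box_filter(img, r):
        # Mean over a (2r+1) x (2r+1) box at every pixel via an integral image;
        # constant work per pixel, which is what makes the front end fast.
        H, W = img.shape
        ii = np.zeros((H + 1, W + 1))
        ii[1:, 1:] = img.cumsum(0).cumsum(1)
        y0 = np.clip(np.arange(H) - r, 0, H); y1 = np.clip(np.arange(H) + r + 1, 0, H)
        x0 = np.clip(np.arange(W) - r, 0, W); x1 = np.clip(np.arange(W) + r + 1, 0, W)
        area = (y1 - y0)[:, None] * (x1 - x0)[None, :]
        return (ii[y1][:, x1] - ii[y0][:, x1] - ii[y1][:, x0] + ii[y0][:, x0]) / area

    def saliency(img, r_center=2, r_surround=8):
        # Rarity-scored center-surround contrast: salience = -log p(feature),
        # with p estimated from the feature histogram of the current image.
        feat = np.abs(box_filter(img, r_center) - box_filter(img, r_surround))
        hist, edges = np.histogram(feat, bins=64, density=True)
        idx = np.clip(np.digitize(feat, edges[1:-1]), 0, 63)
        return -np.log(hist[idx] + 1e-9)

    rng = np.random.default_rng(1)
    img = rng.uniform(size=(120, 160))
    img[50:60, 70:85] += 2.0                      # one bright blob in noise
    s = saliency(img)
    print(np.unravel_index(s.argmax(), s.shape))  # fixation target lands near the blob

Directing each saccade to the most salient point, as in the last two lines, is the closed-loop strategy that the abstract compares against an open-loop scheme.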

Butko, N., & Movellan, J. (2007). Learning to Learn.
URL: https://rubi.ucsd.edu/content/learning-learn