Movellan, J.; Malmir, M.; Forster, D. (2014). Grounding a Sociable Robot’s Movements in Multimodal, Situational Engagements. Springer International Publishing, pp. 267-281. ISBN 978-3-319-10060-9, 978-3-319-10061-6.
Abstract: To deal with the question of what a sociable robot is, we describe how an educational robot is encountered by children, teachers, and designers in a preschool. We consider the importance of the robot’s body by focusing on how its movements are contingently embedded in interactional situations. We point out that the effects of agency that these movements generate are inseparable from their grounding in locally coordinated, multimodal actions and interactions. Sociable robots benefit from machine perception systems that automatically recognize social behavior (e.g., detecting and recognizing people, their facial expressions, and their gestures).
https://rubi.ucsd.edu/content/hri-tool-monitor-socio-emotional-development-early-childhood-education

Malmir, M.; Forster, D.; Youngstrom, K.; Morrison, L.; Movellan, J. (2013). Home Alone: Social Robots for Digital Ethnography of Toddler Behavior. IEEE.
https://rubi.ucsd.edu/content/home-alone-social-robots-digital-ethnography-toddler-behavior

Johnson, D.; Malmir, M.; Forster, D.; Alač, M.; Movellan, J. (2012). Design and early evaluation of the RUBI-5 sociable robots. IEEE, San Diego, CA, 11/2012. ISBN 978-1-4673-4964-2.
https://rubi.ucsd.edu/content/design-and-early-evaluation-rubi-5-sociable-robots

Butko, N.; Theocharous, G.; Philipose, M.; Movellan, J. (2011). Automated facial affect analysis for one-on-one tutoring applications. IEEE, Santa Barbara, CA, 03/2011. ISBN 978-1-4244-9140-7.
Abstract: In this paper, we explore the use of computer vision techniques to analyze students' moods during one-on-one teaching interactions. The eventual goal is to create automated tutoring systems that are sensitive to the student's mood and affective state. We find that the problem of accurately determining a child's mood from a single video frame is surprisingly difficult, even for humans. However, when the system is allowed to make decisions based on information from 10 to 30 seconds of video, excellent performance may be obtained.
Keywords: automated facial affect analysis; automated tutoring system; behavioural sciences computing; computer vision technique; Context; decision making; education; Emotion recognition; face recognition; Human; human computer interaction; Labeling; Machine Learning; Mood; Histograms; one-on-one tutoring application; Intelligent tutoring systems; student mood analysis.
https://rubi.ucsd.edu/content/automated-facial-affect-analysis-one-one-tutoring-applications
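The pooling idea in the abstract above (single frames are unreliable, but 10 to 30 seconds of video work well) can be illustrated with a minimal sketch. The per-frame classifier and the moving-average pooling rule are assumptions for illustration, not taken from the paper:

```python
import numpy as np

def pooled_mood_score(frame_scores, fps=30, window_seconds=20):
    """Average hypothetical per-frame mood scores over a temporal window.

    frame_scores: 1-D array of per-frame classifier outputs in [0, 1]
    (assumed; the paper does not publish its exact pooling rule).
    Returns one pooled score per window position.
    """
    window = int(fps * window_seconds)
    if len(frame_scores) < window:
        return np.array([np.mean(frame_scores)])
    # Simple moving average: one pooled decision per frame once the window
    # is full, illustrating why 10-30 s of video is more reliable than any
    # single frame.
    kernel = np.ones(window) / window
    return np.convolve(frame_scores, kernel, mode="valid")
```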
Littlewort, G.; Whitehill, J.; Wu, T.; Fasel, I.; Frank, M.; Movellan, J.; Bartlett, M. (2011). The computer expression recognition toolbox (CERT). IEEE, Santa Barbara, CA, 03/2011. ISBN 978-1-4244-9140-7.
Abstract: We present the Computer Expression Recognition Toolbox (CERT), a software tool for fully automatic real-time facial expression recognition, and officially release it for free academic use. CERT can automatically code the intensity of 19 different facial actions from the Facial Action Coding System (FACS) and 6 different prototypical facial expressions. It also estimates the locations of 10 facial features as well as the 3-D orientation (yaw, pitch, roll) of the head. On a database of posed facial expressions, Extended Cohn-Kanade (CK+ [1]), CERT achieves an average recognition performance (probability of correctness on a two-alternative forced choice (2AFC) task between one positive and one negative example) of 90.1% when analyzing facial actions. On a spontaneous facial expression dataset, CERT achieves an accuracy of nearly 80%. On a standard dual-core laptop, CERT can process 320 × 240 video images in real time at approximately 10 frames per second.
Keywords: 3D orientation; Accuracy; automatic real-time facial expression recognition; CERT; computer expression recognition toolbox; Detectors; dual core laptop; Emotion recognition; Encoding; extended Cohn-Kanade; Face; face recognition; facial action coding system; facial expression dataset; Facial features; FACS; Image coding; software tools; two-alternative forced choice task.
https://rubi.ucsd.edu/content/computer-expression-recognition-toolbox-cert
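The 2AFC figure quoted for CERT is the probability that a randomly chosen positive example outscores a randomly chosen negative one (equivalent to the area under the ROC curve). The sketch below shows the generic measure computed from classifier scores; it is not CERT code:

```python
import numpy as np

def two_afc_score(pos_scores, neg_scores):
    """Probability that a random positive example outscores a random
    negative one, counting ties as 1/2; equivalent to ROC AUC. This is
    the 2AFC performance measure referred to in the CERT abstract."""
    pos = np.asarray(pos_scores, dtype=float)[:, None]
    neg = np.asarray(neg_scores, dtype=float)[None, :]
    wins = (pos > neg).mean()
    ties = (pos == neg).mean()
    return wins + 0.5 * ties
```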
Alač, M.; Movellan, J.; Tanaka, F. (2011). When a robot is social: Spatial arrangements and multimodal semiotic engagement in the practice of social robotics. Vol. 41, pp. 893-926, 12/2011. ISSN 0306-3127, 1460-3659.
Abstract: Social roboticists design their robots to function as social agents in interaction with humans and other robots. Although we do not deny that the robot’s design features are crucial for attaining this aim, we point to the relevance of spatial organization and coordination between the robot and the humans who interact with it. We recover these interactions through an observational study of a social robotics laboratory and examine them by applying a multimodal interactional analysis to two moments of robotics practice. We describe the vital role of roboticists and of the group of preverbal infants, who are involved in a robot’s design activity, and we argue that the robot’s social character is intrinsically related to the subtleties of human interactional moves in laboratories of social robotics. This human involvement in the robot’s social agency is not simply controlled by individual will. Instead, the human–machine couplings are demanded by the situational dynamics in which the robot is lodged.
Keywords: body; design; gesture; human–robot interaction; laboratory; social agency; social robotics; spatial organization.
https://rubi.ucsd.edu/content/when-robot-social-spatial-arrangements-and-multimodal-semiotic-engagement-practice-social
Butko, N.; Movellan, J. (2010). Infomax Control of Eye Movements. Vol. 2, pp. 91-107. ISSN 1943-0604.
Abstract: Recently, infomax methods of optimal control have begun to reshape how we think about active information gathering. We show how such methods can be used to formulate the problem of choosing where to look. We show how an optimal eye movement controller can be learned from subjective experiences of information gathering, and we explore in simulation properties of the optimal controller. This controller outperforms other eye movement strategies proposed in the literature. The learned eye movement strategies are tailored to the specific visual system of the learner: we show that agents with different kinds of eyes should follow different eye movement strategies. Then we use these insights to build an autonomous computer program that follows this approach and learns to search for faces in images faster than current state-of-the-art techniques. The context of these results is search in static scenes, but the approach extends easily, and gives further efficiency gains, to dynamic tracking tasks. A limitation of infomax methods is that they require probabilistic models of uncertainty of the sensory system, the motor system, and the external world. In the final section of this paper, we propose future avenues of research by which autonomous physical agents may use developmental experience to subjectively characterize the uncertainties they face.
Keywords: active information gathering; autonomous computer program; autonomous physical agent; Computer vision; dynamic tracking task; Eye movement; eye movement strategy; face detection; faces; Infomax control; motor system; object detection; optimal control; optimal eye movement controller; policy gradient; probabilistic model; sensory system; static scenes; Visual Perception; visual search; visual system.
https://rubi.ucsd.edu/content/infomax-control-eye-movements
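As a toy illustration of the one-step infomax idea (choose the fixation expected to reduce uncertainty about the target location the most), the sketch below assumes a discrete belief over candidate locations and a made-up foveal detection model; the paper's learned controller is considerably richer:

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def next_fixation_infomax(belief, locations, p_hit=0.9, p_false=0.1):
    """One-step infomax fixation selection over discrete candidate locations.

    belief: prior probability that the target is at each location.
    The sensor is modeled as a simple foveal detector (assumed): it fires
    with probability p_hit if the fixated location holds the target and
    p_false otherwise. We pick the fixation minimizing expected posterior
    entropy, i.e. maximizing expected information gain.
    """
    belief = np.asarray(belief, dtype=float)
    best_f, best_h = 0, np.inf
    for f in range(len(locations)):
        # Likelihood of a detection under each hypothesized target location.
        like_det = np.where(np.arange(len(locations)) == f, p_hit, p_false)
        p_det = (like_det * belief).sum()
        post_det = like_det * belief / p_det
        post_no = (1 - like_det) * belief / (1 - p_det)
        exp_h = p_det * entropy(post_det) + (1 - p_det) * entropy(post_no)
        if exp_h < best_h:
            best_f, best_h = f, exp_h
    return locations[best_f]
```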
Movellan, J. (2010). Warning: The author of this document may have no mental states. Read at your own risk. Vol. 11, pp. 238-245.
https://rubi.ucsd.edu/content/warning-author-document-may-have-no-mental-states-read-your-own-risk

Meltzoff, A.; Kuhl, P.; Movellan, J.; Sejnowski, T. (2009). Foundations for a New Science of Learning. Vol. 325, pp. 284-288.
Abstract: Human learning is distinguished by the range and complexity of skills that can be learned and the degree of abstraction that can be achieved compared with those of other species. Homo sapiens is also the only species that has developed formal ways to enhance learning: teachers, schools, and curricula. Human infants have an intense interest in people and their behavior and possess powerful implicit learning mechanisms that are affected by social interaction. Neuroscientists are beginning to understand the brain mechanisms underlying learning and how shared brain systems for perception and action support social learning. Machine learning algorithms are being developed that allow robots and computers to learn autonomously. New insights from many different fields are converging to create a new science of learning that may transform educational practices.
https://rubi.ucsd.edu/content/foundations-new-science-learning
Wu, T.; Butko, N.; Ruvolo, P.; Bartlett, M.; Movellan, J. (2009). Learning to Make Facial Expressions. IEEE, Shanghai, 06/2009. ISBN 978-1-4244-4117-4.
Abstract: This paper explores the process of self-guided learning of realistic facial expression production by a robotic head with 31 degrees of freedom. Facial motor parameters were learned using feedback from real-time facial expression recognition from video. The experiments show that the mapping of servos to expressions was learned in under one hour of training time. We discuss how our work may help illuminate the computational study of how infants learn to make facial expressions.
Keywords: Actuators; Emotion recognition; face detection; face recognition; facial motor parameters; Feedback; Humans; learning (artificial intelligence); Machine Learning; Pediatrics; real-time facial expression recognition; Robot sensing systems; robotic head; Robots; self-guided learning; Servomechanisms; Servomotors.
https://rubi.ucsd.edu/content/learning-make-facial-expressions
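A minimal sketch of self-guided learning of a servo-to-expression mapping follows. The score_expression recognizer callback and the hill-climbing loop are assumptions for illustration; the paper's 31-DOF head and learning procedure are only loosely approximated here:

```python
import numpy as np

def learn_expression(servo_dim, score_expression, iters=500, step=0.05, rng=None):
    """Hill-climb a servo configuration that maximizes a recognizer's
    confidence for a target expression.

    score_expression: hypothetical callable mapping a servo vector in
    [0, 1]^servo_dim to the recognizer's confidence for the target
    expression (the feedback signal described in the abstract).
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    best = rng.uniform(0, 1, servo_dim)
    best_score = score_expression(best)
    for _ in range(iters):
        # Perturb the current best configuration and keep it if the
        # recognizer rates the resulting facial expression more highly.
        cand = np.clip(best + rng.normal(0, step, servo_dim), 0, 1)
        s = score_expression(cand)
        if s > best_score:
            best, best_score = cand, s
    return best, best_score
```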
Movellan, J.; Eckhardt, M.; Virnes, M.; Rodriguez, A. (2009). Sociable robot improves toddler vocabulary skills. IEEE, La Jolla, CA, 03/2009. ISBN 978-1-60558-404-1.
Abstract: We report results of a study in which a low-cost sociable robot was immersed at an Early Childhood Education Center for a period of 2 weeks. The study was designed to investigate whether the robot, which operated fully autonomously during the intervention period, could improve target vocabulary skills of 18- to 24-month-old toddlers. The results showed a 27% improvement in knowledge of the target words taught by the robot when compared to a matched set of control words. The results suggest that sociable robots may be an effective and low-cost technology to enrich Early Childhood Education environments.
Keywords: Algorithms; autonomously operated robot; Early Childhood Education Center; Educational institutions; Educational robots; Games; human factors; Human-robot interaction; intervention period; Pediatrics; Robot sensing systems; robotics; sociable robot; social aspects of automation; toddler vocabulary skills; Ubiquitous computing; Vocabulary.
https://rubi.ucsd.edu/content/sociable-robot-improves-toddler-vocabulary-skills
Whitehill, J.; Littlewort, G.; Fasel, I.; Bartlett, M.; Movellan, J. (2009). Toward Practical Smile Detection. Vol. 31, pp. 2106-2111, 11/2009. ISSN 0162-8828.
Abstract: Machine learning approaches have produced some of the highest reported performances for facial expression recognition. However, to date, nearly all automatic facial expression recognition research has focused on optimizing performance on a few databases that were collected under controlled lighting conditions on a relatively small number of subjects. This paper explores whether current machine learning methods can be used to develop an expression recognition system that operates reliably in more realistic conditions. We explore the necessary characteristics of the training data set, image registration, feature representation, and machine learning algorithms. A new database, GENKI, is presented which contains pictures, photographed by the subjects themselves, from thousands of different people in many different real-world imaging conditions. Results suggest that human-level expression recognition accuracy in real-life illumination conditions is achievable with machine learning technology. However, the data sets currently used in the automatic expression recognition literature to evaluate progress may be overly constrained and could potentially lead research into locally optimal algorithmic solutions.
Keywords: Algorithms; Artificial intelligence; automatic facial expression recognition research; Biological Pattern Recognition; Biometry; Computer simulation; Computer vision; Face; Face and gesture recognition; face recognition; feature representation; human-level expression recognition accuracy; Humans; illumination conditions; Image databases; Image Enhancement; Image Interpretation, Computer-Assisted; image registration; image representation; learning (artificial intelligence); machine learning approaches; Machine Learning Models; object detection; practical smile detection; Reproducibility of Results; Sensitivity and Specificity; Smiling; Subtraction Technique; training data set; visual databases.
https://rubi.ucsd.edu/content/toward-practical-smile-detection
Ruvolo, P.; Fasel, I.; Movellan, J. (2008). Auditory mood detection for social and educational robots. IEEE, Pasadena, CA, 05/2008. ISBN 978-1-4244-1646-2.
Abstract: Social robots face the fundamental challenge of detecting and adapting their behavior to the current social mood. For example, robots that assist teachers in early education must choose different behaviors depending on whether the children are crying, laughing, sleeping, or singing songs. Interactive robotic applications require perceptual algorithms that both run in real time and are adaptable to the challenging conditions of daily life. This paper explores a novel approach to auditory mood detection which was born out of our experience immersing social robots in classroom environments. We propose a new set of low-level spectral contrast features that extends a class of features which have proven very successful for object recognition in the modern computer vision literature. Features are selected and combined using machine learning approaches so as to make decisions about the ongoing auditory mood. We demonstrate excellent performance on two standard emotional speech databases (the Berlin Emotional Speech database [W. Burkhardt et al., 2005] and the ORATOR dataset [H. Quast, 2001]). In addition, we establish strong baseline performance for mood detection on a database collected from a social robot immersed in a classroom of 18-24-month-old children [J. Movellan et al., 2007]. This approach operates in real time at little computational cost. It has the potential to greatly enhance the effectiveness of social robots in daily life environments.
Keywords: auditory mood detection; Computer vision; Educational robots; Emotion recognition; emotional speech database; face detection; hearing; interactive robotic application; learning (artificial intelligence); Machine Learning; Mood; Prototypes; object recognition; Robotics and Automation; social mood; social robot; Speech.
https://rubi.ucsd.edu/content/auditory-mood-detection-social-and-educational-robots
Ruvolo, P.; Movellan, J. (2008). Automatic cry detection in early childhood education settings. IEEE, Monterey, CA, 08/2008. ISBN 978-1-4244-2661-4.
Abstract: We present results on applying a novel machine learning approach for learning auditory moods in natural environments [1] to the problem of detecting crying episodes in preschool classrooms. The resulting system achieved levels of performance approaching that of human coders and also significantly outperformed previous approaches to this problem [2].
Keywords: Acoustic noise; auditory moods; automatic cry detection; behavioural sciences computing; Deafness; early childhood education settings; education; Educational robots; Emotion recognition; human coders; Humans; learning (artificial intelligence); Machine Learning; Mood; preschool classrooms; Prototypes; Robustness; working environment noise.
https://rubi.ucsd.edu/content/automatic-cry-detection-early-childhood-education-settings
Tanaka, F.; Movellan, J. (2008). A barebones communicative robot based on social contingency and Infomax control. IEEE, Munich, 08/2008. ISBN 978-1-4244-2212-8.
Abstract: In this paper, we present a barebones robot which is capable of interacting with humans based on social contingency. It expands the previous work on a contingency detector into having both human-model updating (developmental capability) and policy improvement (learning capability) based on the framework of Infomax control. The proposed new controller interacts with humans in both active and responsive ways, handling the turn-taking between them.
Keywords: Actuators; barebones communicative robot; Communication system control; Delay; Detectors; Human robot interaction; human-model updating capability; humanoid robots; Infomax control; man-machine systems; Pediatrics; policy improvement; Robot control; Robot sensing systems; Scheduling; social contingency.
https://rubi.ucsd.edu/content/barebones-communicative-robot-based-social-contingency-and-infomax-control
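A toy version of contingency detection (is a human response more likely right after a robot action than the overall response rate would predict?) might look like the sketch below. The event-time inputs and the baseline estimate are illustrative assumptions, not the paper's detector or its Infomax controller:

```python
import numpy as np

def contingency_score(action_times, response_times, window=1.5):
    """Ratio of the observed rate of human responses within `window`
    seconds of a robot action to the rate expected from the overall
    response frequency. Values well above 1 suggest responses are
    contingent on the robot's behavior.
    """
    action_times = np.asarray(action_times, dtype=float)
    response_times = np.asarray(response_times, dtype=float)
    if len(action_times) == 0 or len(response_times) == 0:
        return 0.0
    hits = sum(
        np.any((response_times > t) & (response_times <= t + window))
        for t in action_times
    )
    p_following = hits / len(action_times)
    duration = max(response_times.max(), action_times.max()) + window
    p_baseline = min(1.0, len(response_times) * window / duration)
    return p_following / (p_baseline + 1e-8)
```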
Ruvolo, P.; Whitehill, J.; Virnes, M.; Movellan, J. (2008). Building a more effective teaching robot using apprenticeship learning. IEEE, Monterey, CA, 08/2008. ISBN 978-1-4244-2661-4.
Abstract: What defines good teaching? While attributes such as timing, responsiveness to social cues, and pacing of material clearly play a role, it is difficult to create a comprehensive specification of what it means to be a good teacher. On the other hand, it is relatively easy to obtain examples of expert teaching behavior by observing a real teacher. With this inspiration as our guide, we investigated apprenticeship learning methods [1] that use data recorded from expert teachers as a means of improving the teaching abilities of RUBI, a social robot immersed in a classroom of 18-24-month-old children. While this approach has achieved considerable success in mechanical control, such as automated helicopter flight [2], until now there has been little work on applying it to the field of social robotics. This paper explores two particular approaches to apprenticeship learning and analyzes the models of teaching that each approach learns from the data of the human teacher. Empirical results indicate that the apprenticeship learning paradigm, though still nascent in its use in the social robotics field, holds promise, and that our proposed methods can already extract meaningful teaching models from demonstrations of a human expert.
Keywords: apprenticeship learning; automated helicopter flight; Automatic control; Data mining; Delay; education; Educational robots; expert teaching; Helicopters; Human-robot interaction; humanoid robots; Humans; Learning systems; mechanical control; robot teaching; Robotics and Automation; RUBI social robot; timing.
https://rubi.ucsd.edu/content/building-more-effective-teaching-robot-using-apprenticeship-learning
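In its simplest form, learning from an expert teacher's demonstrations reduces to imitating recorded state-action pairs. The nearest-neighbor policy below is only meant to convey that data-driven idea; it is not one of the apprenticeship-learning methods the paper investigates:

```python
import numpy as np

def clone_policy(demo_states, demo_actions):
    """Build a policy that acts as the expert did in the most similar
    recorded situation (1-nearest-neighbor behavioral cloning).

    demo_states: array of shape (n, d) of recorded classroom states
    (hypothetical features); demo_actions: the expert's action in each.
    """
    demo_states = np.asarray(demo_states, dtype=float)
    demo_actions = list(demo_actions)

    def policy(state):
        # Choose the action taken in the closest recorded state.
        d = np.linalg.norm(demo_states - np.asarray(state, dtype=float), axis=1)
        return demo_actions[int(np.argmin(d))]

    return policy
```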
Butko, N.; Zhang, L.; Cottrell, G.; Movellan, J. (2008). Visual saliency model for robot cameras. IEEE, Pasadena, CA, 05/2008. ISBN 978-1-4244-1646-2.
Abstract: Recent years have seen an explosion of research on the computational modeling of human visual attention in task-free conditions, i.e., given an image, predict where humans are likely to look. This area of research could potentially provide general-purpose mechanisms for robots to orient their cameras. One difficulty is that most current models of visual saliency are computationally very expensive and not suited to real-time implementations needed for robotic applications. Here we propose a fast approximation to a Bayesian model of visual saliency recently proposed in the literature. The approximation can run in real time on current computers at very little computational cost, leaving plenty of CPU cycles for other tasks. We empirically evaluate the saliency model in the domain of controlling saccades of a camera in social robotics situations. The goal was to orient a camera as quickly as possible toward human faces. We found that this simple general-purpose saliency model doubled the success rate of the camera: it captured images of people 70% of the time, compared to a 35% success rate when the camera was controlled using an open-loop scheme. After 3 saccades (camera movements), the robot was 96% likely to capture at least one person. The results suggest that visual saliency models may provide a useful front end for camera control in robotics applications.
Keywords: Application software; approximation theory; Bayes methods; Bayesian methods; Bayesian model; camera control; Cameras; Central Processing Unit; Computational efficiency; Computational modeling; fast approximation; human visual attention; Humans; Open loop systems; robot cameras; robot vision; Robot vision systems; robotic application; task-free conditions; visual saliency model.
https://rubi.ucsd.edu/content/visual-saliency-model-robot-cameras
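A cheap rarity-based saliency map in the spirit of the fast Bayesian approximation described above can be sketched as follows; the center-surround response and the histogram probability model are assumptions for illustration, not the paper's model:

```python
import numpy as np

def saliency_map(gray):
    """Score each pixel by the self-information (-log probability) of its
    center-surround response under the image-wide response distribution:
    improbable responses are salient."""
    g = gray.astype(float)
    # Center-surround style response: pixel minus a 9x9 local mean.
    pad = np.pad(g, 4, mode="edge")
    local_mean = np.zeros_like(g)
    for dy in range(9):
        for dx in range(9):
            local_mean += pad[dy:dy + g.shape[0], dx:dx + g.shape[1]]
    response = g - local_mean / 81.0
    # Histogram the responses and convert each pixel's response to -log p.
    hist, edges = np.histogram(response, bins=64, density=True)
    probs = hist * np.diff(edges) + 1e-12
    idx = np.clip(np.digitize(response, edges[1:-1]), 0, 63)
    return -np.log(probs[idx])

def next_saccade(gray):
    """Point the camera at the most salient location (row, col)."""
    s = saliency_map(gray)
    return np.unravel_index(np.argmax(s), s.shape)
```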
Butko, N.; Movellan, J. (2007). Learning to Learn.
https://rubi.ucsd.edu/content/learning-learn

Tanaka, F.; Movellan, J.; Taylor, C.; Ruvolo, P.; Eckhardt, M. (2007). The RUBI Project: A Progress Report.
https://rubi.ucsd.edu/content/rubi-project-progress-report

Tanaka, F.; Cicourel, A.; Movellan, J. (2007). Socialization between toddlers and robots at an early childhood education center. Vol. 104.
https://rubi.ucsd.edu/content/socialization-between-toddlers-and-robots-early-childhood-education-center

Tanaka, F.; Movellan, J. (2006). Behavior Analysis of Children’s Touch on a Small Humanoid Robot: Long-term Observation at a Daily Classroom over Three Months. Hatfield, United Kingdom, 09/2006.
https://rubi.ucsd.edu/content/behavior-analysis-children%E2%80%99s-touch-small-humanoid-robot-long-term-observation-daily

Tanaka, F.; Movellan, J.; Fortenberry, B.; Aisaka, K. (2006). Daily HRI evaluation at a classroom environment: Reports from dance interaction experiments. Salt Lake City.
https://rubi.ucsd.edu/content/daily-hri-evaluation-classroom-environment-reports-dance-interaction-experiments

Tanaka, F.; Movellan, J. (2006). The RUBI Project: Designing Everyday Robots by Immersion. Bloomington, U.S.A., 06/2006.
https://rubi.ucsd.edu/content/rubi-project-designing-everyday-robots-immersion

Tanaka, F.; Fortenberry, B.; Aisaka, K.; Movellan, J. (2005). Developing Dance Interaction between QRIO and Toddlers in a Classroom Environment: Plans for the First Steps. Nashville, U.S.A., 08/2005, pp. 223-228. Best Paper Award.
https://rubi.ucsd.edu/content/developing-dance-interaction-between-qrio-and-toddlers-classroom-environment-plans-first
Fasel, I.; Fortenberry, B.; Movellan, J. (2005). MPT: the Machine Perception Toolbox.
https://rubi.ucsd.edu/content/mpt-machine-perception-toolbox

Tanaka, F.; Fortenberry, B.; Aisaka, K.; Movellan, J. (2005). Plans for developing real-time dance interaction between QRIO and toddlers in a classroom environment. Osaka, Japan.
https://rubi.ucsd.edu/content/plans-developing-real-time-dance-interaction-between-qrio-and-toddlers-classroom-environment

Movellan, J.; Tanaka, F.; Fortenberry, B.; Aisaka, K. (2005). The RUBI project: Origins, principles and first steps. Osaka, Japan.
https://rubi.ucsd.edu/content/rubi-project-origins-principles-and-first-steps

Littlewort, G.; Bartlett, M.; Fasel, I.; Chenu, J.; Movellan, J. (2004). Analysis of machine learning methods for real-time recognition of facial expressions from video.
https://rubi.ucsd.edu/content/analysis-machine-learning-methods-real-time-recognition-facial-expressions-video

Littlewort, G.; Bartlett, M.; Chenu, J.; Fasel, I.; Kanda, T.; Ishiguro, H.; Movellan, J. (2004). Towards social robots: Automatic evaluation of human-robot interaction by face detection and expression classification. Vol. 16, pp. 1563-1570.
https://rubi.ucsd.edu/content/towards-social-robots-automatic-evaluation-human-robot-interaction-face-detection-and