TY - CHAP T1 - Grounding a Sociable Robot’s Movements in Multimodal, Situational Engagements T2 - New Frontiers in Artificial Intelligence Y1 - 2014 A1 - Alač, M. A1 - Movellan, J. A1 - Malmir, M. A1 - Nakano, Y. A1 - Satoh, K. A1 - Bekki, D. KW - Artificial Intelligence (incl. Robotics) KW - Computer Appl. in Administrative Data Processing KW - Data Mining and Knowledge Discovery KW - Information Storage and Retrieval KW - Information Systems Applications (incl. Internet) KW - Mathematical Logic and Formal Languages AB -
To deal with the question of what a sociable robot is, we describe how an educational robot is encountered by children, teachers and designers in a preschool. We consider the importance of the robot’s body by focusing on how its movements are contingently embedded in interactional situations. We point out that the effects of agency that these movements generate are inseparable from their grounding in locally coordinated, multimodal actions and interactions.
Sociable robots are benefiting from machine perception systems that automatically recognize social behavior (e.g., detect and recognize people, recognize their facial expressions and gestures).
JF - HRI 2014 2nd Workshop on "Applications for Emotional Robot" CY - Bielefeld ER - TY - CONF T1 - Home Alone: Social Robots for Digital Ethnography of Toddler Behavior T2 - Proceedings of the IEEE International Conference on Computer Vision Workshops Y1 - 2013 A1 - Malmir, M. A1 - Forster, D. A1 - Youngstrom, K. A1 - Morrison, L. A1 - Movellan, J. JF - Proceedings of the IEEE International Conference on Computer Vision Workshops PB - IEEE ER - TY - CONF T1 - Design and early evaluation of the RUBI-5 sociable robots T2 - Development and Learning and Epigenetic Robotics (ICDL), 2012 IEEE International Conference on Y1 - 2012 A1 - Johnson, D. A1 - Malmir, M. A1 - Forster, D. A1 - Alač, M. A1 - Movellan, J. JF - Development and Learning and Epigenetic Robotics (ICDL), 2012 IEEE International Conference on PB - IEEE CY - San Diego, CA SN - 978-1-4673-4964-2 ER - TY - CONF T1 - Automated facial affect analysis for one-on-one tutoring applications T2 - 2011 IEEE International Conference on Automatic Face Gesture Recognition and Workshops (FG 2011) Y1 - 2011 A1 - Butko, N. A1 - Theocharous, G. A1 - Philipose, M. A1 - Movellan, J. KW - automated facial affect analysis KW - automated tutoring system KW - behavioural sciences computing KW - computer vision technique KW - Context KW - decision making KW - education KW - Emotion recognition KW - face recognition KW - Human KW - human computer interaction KW - Labeling KW - Machine Learning KW - Mood KW - Histograms KW - one-on-one tutoring application KW - Intelligent tutoring systems KW - student mood analysis AB -In this paper, we explore the use of computer vision techniques to analyze students' moods during one-on-one teaching interactions. The eventual goal is to create automated tutoring systems that are sensitive to the student's mood and affective state. We find that the problem of accurately determining a child's mood from a single video frame is surprisingly difficult, even for humans.
However when the system is allowed to make decisions based on information from 10 to 30 seconds of video, excellent performance may be obtained.
JF - 2011 IEEE International Conference on Automatic Face Gesture Recognition and Workshops (FG 2011) PB - IEEE CY - Santa Barbara, CA SN - 978-1-4244-9140-7 ER - TY - CONF T1 - The computer expression recognition toolbox (CERT) T2 - 2011 IEEE International Conference on Automatic Face Gesture Recognition and Workshops (FG 2011) Y1 - 2011 A1 - Littlewort, G. A1 - Whitehill, J. A1 - Wu, T. A1 - Fasel, I. A1 - Frank, M. A1 - Movellan, J. A1 - Bartlett, M. KW - 3D orientation KW - Accuracy KW - automatic real-time facial expression recognition KW - CERT KW - computer expression recognition toolbox KW - Detectors KW - dual core laptop KW - Emotion recognition KW - Encoding KW - extended Cohn-Kanade KW - Face KW - face recognition KW - facial action unit coding system KW - facial expression dataset KW - Facial features KW - FACS KW - Gold KW - Image coding KW - software tool KW - software tools KW - two-alternative forced choice task AB -We present the Computer Expression Recognition Toolbox (CERT), a software tool for fully automatic real-time facial expression recognition, and officially release it for free academic use. CERT can automatically code the intensity of 19 different facial actions from the Facial Action Unit Coding System (FACS) and 6 different prototypical facial expressions. It also estimates the locations of 10 facial features as well as the 3-D orientation (yaw, pitch, roll) of the head. On a database of posed facial expressions, Extended Cohn-Kanade (CK+[1]), CERT achieves an average recognition performance (probability of correctness on a two-alternative forced choice (2AFC) task between one positive and one negative example) of 90.1% when analyzing facial actions. On a spontaneous facial expression dataset, CERT achieves an accuracy of nearly 80%. In a standard dual core laptop, CERT can process 320 × 240 video images in real time at approximately 10 frames per second.
JF - 2011 IEEE International Conference on Automatic Face Gesture Recognition and Workshops (FG 2011) PB - IEEE CY - Santa Barbara, CA SN - 978-1-4244-9140-7 ER - TY - JOUR T1 - When a robot is social: Spatial arrangements and multimodal semiotic engagement in the practice of social robotics JF - Social Studies of Science Y1 - 2011 A1 - Alač, M. A1 - Movellan, J. A1 - Tanaka, F. KW - body KW - design KW - gesture KW - human–robot interaction KW - laboratory KW - social agency KW - social robotics KW - spatial organization AB -Social roboticists design their robots to function as social agents in interaction with humans and other robots. Although we do not deny that the robot’s design features are crucial for attaining this aim, we point to the relevance of spatial organization and coordination between the robot and the humans who interact with it. We recover these interactions through an observational study of a social robotics laboratory and examine them by applying a multimodal interactional analysis to two moments of robotics practice. We describe the vital role of roboticists and of the group of preverbal infants, who are involved in a robot’s design activity, and we argue that the robot’s social character is intrinsically related to the subtleties of human interactional moves in laboratories of social robotics. This human involvement in the robot’s social agency is not simply controlled by individual will. Instead, the human–machine couplings are demanded by the situational dynamics in which the robot is lodged.
VL - 41 IS - 6 ER - TY - JOUR T1 - Infomax Control of Eye Movements JF - IEEE Transactions on Autonomous Mental Development Y1 - 2010 A1 - Butko, N. A1 - Movellan, J. KW - active information gathering KW - autonomous computer program KW - autonomous physical agent KW - Computer vision KW - dynamic tracking task KW - Eye movement KW - eye movement strategy KW - face detection KW - faces KW - Infomax control KW - motor system KW - object detection KW - optimal control KW - optimal eye movement controller KW - policy gradient KW - probabilistic model KW - sensory system KW - static scenes KW - Visual Perception KW - visual search KW - visual system AB -Recently, infomax methods of optimal control have begun to reshape how we think about active information gathering. We show how such methods can be used to formulate the problem of choosing where to look. We show how an optimal eye movement controller can be learned from subjective experiences of information gathering, and we explore in simulation properties of the optimal controller. This controller outperforms other eye movement strategies proposed in the literature. The learned eye movement strategies are tailored to the specific visual system of the learner; we show that agents with different kinds of eyes should follow different eye movement strategies. Then we use these insights to build an autonomous computer program that follows this approach and learns to search for faces in images faster than current state-of-the-art techniques. The context of these results is search in static scenes, but the approach extends easily, and gives further efficiency gains, to dynamic tracking tasks. A limitation of infomax methods is that they require probabilistic models of uncertainty of the sensory system, the motor system, and the external world.
In the final section of this paper, we propose future avenues of research by which autonomous physical agents may use developmental experience to subjectively characterize the uncertainties they face.
VL - 2 IS - 2 ER - TY - JOUR T1 - Warning: The author of this document may have no mental states. Read at your own risk JF - Interaction Studies Y1 - 2010 A1 - Movellan, J. VL - 11 IS - 2 ER - TY - JOUR T1 - Foundations for a New Science of Learning JF - Science Y1 - 2009 A1 - Meltzoff, A. A1 - Kuhl, P. A1 - Movellan, J. A1 - Sejnowski, T. AB -Human learning is distinguished by the range and complexity of skills that can be learned and the degree of abstraction that can be achieved compared with those of other species. Homo sapiens is also the only species that has developed formal ways to enhance learning: teachers, schools, and curricula. Human infants have an intense interest in people and their behavior and possess powerful implicit learning mechanisms that are affected by social interaction. Neuroscientists are beginning to understand the brain mechanisms underlying learning and how shared brain systems for perception and action support social learning. Machine learning algorithms are being developed that allow robots and computers to learn autonomously. New insights from many different fields are converging to create a new science of learning that may transform educational practices.
VL - 325 IS - 5938 ER - TY - CONF T1 - Learning to Make Facial Expressions T2 - IEEE 8th International Conference on Development and Learning, 2009. ICDL 2009 Y1 - 2009 A1 - Wu, T. A1 - Butko, N. A1 - Ruvulo, P. A1 - Bartlett, M. A1 - Movellan, J. KW - Actuators KW - Emotion recognition KW - face detection KW - face recognition KW - facial motor parameters KW - Feedback KW - Humans KW - learning (artificial intelligence) KW - Machine Learning KW - Magnetic heads KW - Pediatrics KW - real-time facial expression recognition KW - Robot sensing systems KW - robotic head KW - Robots KW - self-guided learning KW - Servomechanisms KW - Servomotors AB -This paper explores the process of self-guided learning of realistic facial expression production by a robotic head with 31 degrees of freedom. Facial motor parameters were learned using feedback from real-time facial expression recognition from video. The experiments show that the mapping of servos to expressions was learned in under one-hour of training time. We discuss how our work may help illuminate the computational study of how infants learn to make facial expressions.
JF - IEEE 8th International Conference on Development and Learning, 2009. ICDL 2009 PB - IEEE CY - Shanghai SN - 978-1-4244-4117-4 ER - TY - JOUR T1 - Moving Android: On Social Robots and Body-in-Interaction JF - Social Studies of Science Y1 - 2009 A1 - Alač, M. KW - embodiment KW - gesture KW - human—technology extension KW - laboratory studies KW - multimodal semiotic interaction KW - social robotics AB -Social robotics studies embodied technologies designed for social interaction. This paper examines the implied idea of embodiment using as data a sequence in which practitioners of social robotics are involved in designing a robot's movement. The moments of learning and work in the laboratory enact the social body as material, dynamic, and multiparty: the body-in-interaction. In describing subject—object reconfigurations, the paper explores how the well-known ideas of extending the body with instruments can be applied to a technology designed to function as our surrogate.
VL - 39 IS - 4 ER - TY - CONF T1 - Sociable robot improves toddler vocabulary skills T2 - 2009 4th ACM/IEEE International Conference on Human-Robot Interaction (HRI) Y1 - 2009 A1 - Movellan, J. A1 - Eckhardt, M. A1 - Virnes, M. A1 - Rodriguez, A. KW - Algorithms KW - autonomously operated robot KW - Early Childhood Education Center KW - Educational institutions KW - Educational robots KW - Games KW - human factors KW - Human-robot interaction KW - intervention period KW - Pediatrics KW - Robot sensing systems KW - robotics KW - sociable robot KW - social aspects of automation KW - time 2 week KW - toddler vocabulary skills KW - Ubiquitous computing KW - Vocabulary AB -
We report results of a study in which a low-cost sociable robot was immersed at an Early Childhood Education Center for a period of 2 weeks. The study was designed to investigate whether the robot, which operated fully autonomously during the intervention period, could improve target vocabulary skills of 18- to 24-month-old toddlers. The results showed a 27% improvement in knowledge of the target words taught by the robot when compared to a matched set of control words. The results suggest that sociable robots may be an effective and low-cost technology to enrich Early Childhood Education environments.
JF - 2009 4th ACM/IEEE International Conference on Human-Robot Interaction (HRI) PB - IEEE CY - La Jolla, CA SN - 978-1-60558-404-1 ER - TY - JOUR T1 - Toward Practical Smile Detection JF - IEEE Transactions on Pattern Analysis and Machine Intelligence Y1 - 2009 A1 - Whitehill, J. A1 - Littlewort, G. A1 - Fasel, I. A1 - Bartlett, M. A1 - Movellan, J. KW - Algorithms KW - Artificial intelligence KW - Automated KW - automatic facial expression recognition research KW - Biological Pattern Recognition KW - Biometry KW - Computer simulation KW - Computer vision KW - Computer-Assisted KW - Face KW - Face and gesture recognition KW - face recognition KW - feature representation KW - human-level expression recognition accuracy KW - illumination conditions KW - Image databases KW - Image Enhancement KW - Image Interpretation KW - image registration KW - image representation KW - learning (artificial intelligence) KW - machine learning approaches KW - Machine Learning Models KW - Humans KW - object detection KW - practical smile detection KW - Reproducibility of Results KW - Sensitivity and Specificity KW - Smiling KW - Subtraction Technique KW - training data set KW - visual databases AB -Machine learning approaches have produced some of the highest reported performances for facial expression recognition. However, to date, nearly all automatic facial expression recognition research has focused on optimizing performance on a few databases that were collected under controlled lighting conditions on a relatively small number of subjects. This paper explores whether current machine learning methods can be used to develop an expression recognition system that operates reliably in more realistic conditions. We explore the necessary characteristics of the training data set, image registration, feature representation, and machine learning algorithms.
A new database, GENKI, is presented which contains pictures, photographed by the subjects themselves, from thousands of different people in many different real-world imaging conditions. Results suggest that human-level expression recognition accuracy in real-life illumination conditions is achievable with machine learning technology. However, the data sets currently used in the automatic expression recognition literature to evaluate progress may be overly constrained and could potentially lead research into locally optimal algorithmic solutions.
VL - 31 IS - 11 ER - TY - CONF T1 - Auditory mood detection for social and educational robots T2 - IEEE International Conference on Robotics and Automation, 2008. ICRA 2008 Y1 - 2008 A1 - Ruvolo, P. A1 - Fasel, I. A1 - Movellan, J. KW - auditory mood detection KW - Computer vision KW - educational robot KW - Educational robots KW - Emotion recognition KW - emotional speech database KW - face detection KW - hearing KW - interactive robotic application KW - learning (artificial intelligence) KW - Machine Learning KW - Mood KW - Prototypes KW - object recognition KW - Robotics and Automation KW - Robots KW - social mood KW - social robot KW - Speech KW - USA Councils AB -Social robots face the fundamental challenge of detecting and adapting their behavior to the current social mood. For example, robots that assist teachers in early education must choose different behaviors depending on whether the children are crying, laughing, sleeping, or singing songs. Interactive robotic applications require perceptual algorithms that both run in real time and are adaptable to the challenging conditions of daily life. This paper explores a novel approach to auditory mood detection which was born out of our experience immersing social robots in classroom environments. We propose a new set of low-level spectral contrast features that extends a class of features which have proven very successful for object recognition in the modern computer vision literature. Features are selected and combined using machine learning approaches so as to make decisions about the ongoing auditory mood. We demonstrate excellent performance on two standard emotional speech databases (the Berlin Emotional Speech [W. Burkhardt et al., 2005], and the ORATOR dataset [H. Quast, 2001]). In addition we establish strong baseline performance for mood detection on a database collected from a social robot immersed in a classroom of 18-24 months old children [J. Movellan et al., 2007].
This approach operates in real time at little computational cost. It has the potential to greatly enhance the effectiveness of social robots in daily life environments.
JF - IEEE International Conference on Robotics and Automation, 2008. ICRA 2008 PB - IEEE CY - Pasadena, CA SN - 978-1-4244-1646-2 ER - TY - CONF T1 - Automatic cry detection in early childhood education settings T2 - 7th IEEE International Conference on Development and Learning, 2008. ICDL 2008 Y1 - 2008 A1 - Ruvolo, P. A1 - Movellan, J. KW - Acoustic noise KW - auditory moods KW - automatic cry detection KW - behavioural sciences computing KW - Deafness KW - early childhood education settings KW - education KW - Educational robots KW - Emotion recognition KW - human coders KW - Humans KW - learning (artificial intelligence) KW - Machine Learning KW - Mood KW - preschool classrooms KW - Prototypes KW - Robustness KW - Working environment noise AB -We present results on applying a novel machine learning approach for learning auditory moods in natural environments [1] to the problem of detecting crying episodes in preschool classrooms. The resulting system achieved levels of performance approaching that of human coders and also significantly outperformed previous approaches to this problem [2].
JF - 7th IEEE International Conference on Development and Learning, 2008. ICDL 2008 PB - IEEE CY - Monterey, CA SN - 978-1-4244-2661-4 ER - TY - CONF T1 - A barebones communicative robot based on social contingency and Infomax Control T2 - The 17th IEEE International Symposium on Robot and Human Interactive Communication, 2008. RO-MAN 2008 Y1 - 2008 A1 - Tanaka, F. A1 - Movellan, J. KW - Actuators KW - barebones communicative robot KW - Communication system control KW - Delay KW - Detectors KW - Human robot interaction KW - human-model updating capability KW - humanoid robots KW - Hydrogen KW - Infomax control KW - man-machine systems KW - Pediatrics KW - policy improvement KW - Robot control KW - Robot sensing systems KW - Scheduling KW - social contingency AB -In this paper, we present a barebones robot which is capable of interacting with humans based on social contingency. It expands the previous work of a contingency detector into having both human-model updating (developmental capability) and policy improvement (learning capability) based on the framework of Infomax control. The proposed new controller interacts with humans in both active and responsive ways handling the turn-taking between them.
JF - The 17th IEEE International Symposium on Robot and Human Interactive Communication, 2008. RO-MAN 2008 PB - IEEE CY - Munich SN - 978-1-4244-2212-8 ER - TY - CONF T1 - Building a more effective teaching robot using apprenticeship learning T2 - 7th IEEE International Conference on Development and Learning, 2008. ICDL 2008 Y1 - 2008 A1 - Ruvolo, P. A1 - Whitehill, J. A1 - Virnes, M. A1 - Movellan, J. KW - apprenticeship learning KW - automated helicopter flight KW - Automatic control KW - Data mining KW - Delay KW - education KW - Educational robots KW - expert teaching KW - Helicopters KW - Human-robot interaction KW - humanoid robots KW - Humans KW - Learning systems KW - mechanical control KW - robot teaching KW - Robotics and Automation KW - RUBI social robot KW - time 18 month to 24 month KW - timing AB -What defines good teaching? While attributes such as timing, responsiveness to social cues, and pacing of material clearly play a role, it is difficult to create a comprehensive specification of what it means to be a good teacher. On the other hand, it is relatively easy to obtain examples of expert teaching behavior by observing a real teacher. With this inspiration as our guide, we investigated apprenticeship learning methods [1] that use data recorded from expert teachers as a means of improving the teaching abilities of RUBI, a social robot immersed in a classroom of 18-24 month old children. While this approach has achieved considerable success in mechanical control, such as automated helicopter flight [2], until now there has been little work on applying it to the field of social robotics. This paper explores two particular approaches to apprenticeship learning, and analyzes the models of teaching that each approach learns from the data of the human teacher.
Empirical results indicate that the apprenticeship learning paradigm, though still nascent in its use in the social robotics field, holds promise, and that our proposed methods can already extract meaningful teaching models from demonstrations of a human expert.
JF - 7th IEEE International Conference on Development and Learning, 2008. ICDL 2008 PB - IEEE CY - Monterey, CA SN - 978-1-4244-2661-4 ER - TY - CONF T1 - A discriminative approach to frame-by-frame head pose tracking T2 - 8th IEEE International Conference on Automatic Face Gesture Recognition, 2008. FG '08 Y1 - 2008 A1 - Whitehill, J. A1 - Movellan, J. KW - accuracy drift KW - continuous video sequence KW - controlled illumination condition KW - discriminative approach KW - face detection KW - face recognition KW - facial appearance KW - frame-by-frame head pose tracking KW - Humans KW - Image analysis KW - Image databases KW - Laboratories KW - Lighting KW - Magnetic heads KW - mean square error methods KW - pose estimation KW - Robustness KW - root-mean-square error tracking KW - System testing KW - Video sequences AB -We present a discriminative approach to frame-by-frame head pose tracking that is robust to a wide range of illuminations and facial appearances and that is inherently immune to accuracy drift. Most previous research on head pose tracking has been validated on test datasets spanning only a small number (< 20) of subjects under controlled illumination conditions on continuous video sequences. In contrast, the system presented in this paper was both trained and tested on a much larger database, GENKI, spanning tens of thousands of different subjects, illuminations, and geographical locations from images on the Web. Our pose estimator achieves accuracy of 5.82deg, 5.65deg, and 2.96deg root-mean-square (RMS) error for yaw, pitch, and roll, respectively. A set of 4000 images from this dataset, labeled for pose, was collected and released for use by the research community.
JF - 8th IEEE International Conference on Automatic Face Gesture Recognition, 2008. FG '08 PB - IEEE CY - Amsterdam SN - 978-1-4244-2153-4 ER - TY - CONF T1 - Visual saliency model for robot cameras T2 - IEEE International Conference on Robotics and Automation, 2008. ICRA 2008 Y1 - 2008 A1 - Butko, N. A1 - Zhang, L. A1 - Cottrell, G. A1 - Movellan, J. KW - Application software KW - approximation theory KW - Bayes methods KW - Bayesian methods KW - Bayesian model KW - camera control KW - Cameras KW - Central Processing Unit KW - Computational efficiency KW - Computational modeling KW - Explosions KW - fast approximation KW - human visual attention KW - Humans KW - Open loop systems KW - robot cameras KW - robot vision KW - Robot vision systems KW - robotic application KW - task free conditions KW - visual saliency model AB -Recent years have seen an explosion of research on the computational modeling of human visual attention in task free conditions, i.e., given an image predict where humans are likely to look. This area of research could potentially provide general purpose mechanisms for robots to orient their cameras. One difficulty is that most current models of visual saliency are computationally very expensive and not suited to real time implementations needed for robotic applications. Here we propose a fast approximation to a Bayesian model of visual saliency recently proposed in the literature. The approximation can run in real time on current computers at very little computational cost, leaving plenty of CPU cycles for other tasks. We empirically evaluate the saliency model in the domain of controlling saccades of a camera in social robotics situations. The goal was to orient a camera as quickly as possible toward human faces. 
We found that this simple general purpose saliency model doubled the success rate of the camera: it captured images of people 70% of the time, when compared to a 35% success rate when the camera was controlled using an open-loop scheme. After 3 saccades (camera movements), the robot was 96% likely to capture at least one person. The results suggest that visual saliency models may provide a useful front end for camera control in robotics applications.
JF - IEEE International Conference on Robotics and Automation, 2008. ICRA 2008 PB - IEEE CY - Pasadena, CA SN - 978-1-4244-1646-2 ER - TY - CONF T1 - Learning to Learn T2 - IEEE International Conference on Development and Learning Y1 - 2007 A1 - Butko, N. A1 - Movellan, J. JF - IEEE International Conference on Development and Learning ER - TY - Generic T1 - The RUBI Project: A Progress Report T2 - Proceedings of the 2nd ACM/IEEE International Conference of Human-Robot Interaction Y1 - 2007 A1 - Tanaka, F. A1 - Movellan, J. A1 - Taylor, C. A1 - Ruvolo, P. A1 - Eckhardt, M. JF - Proceedings of the 2nd ACM/IEEE International Conference of Human-Robot Interaction ER - TY - JOUR T1 - Socialization between toddlers and robots at an early childhood education center JF - Proceedings of the National Academy of Sciences Y1 - 2007 A1 - Tanaka, F. A1 - Cicourel, A. A1 - Movellan, J. VL - 104 IS - 46 ER - TY - Generic T1 - Behavior Analysis of Children’s Touch on a Small Humanoid Robot: Long-term Observation at a Daily Classroom over Three Months T2 - Proceedings of the 15th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) Y1 - 2006 A1 - Tanaka, F. A1 - Movellan, J. JF - Proceedings of the 15th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) CY - Hatfield, United Kingdom ER - TY - Generic T1 - Daily HRI evaluation at a classroom environment: Reports from dance interaction experiments T2 - Proceedings of the 2006 Conference on Human-Robot Interaction (HRI) Y1 - 2006 A1 - Tanaka, F. A1 - Movellan, J. A1 - Fortenberry, B. A1 - Aisaka, K. JF - Proceedings of the 2006 Conference on Human-Robot Interaction (HRI) CY - Salt Lake City ER - TY - THES T1 - Learning Real-Time Object Detectors: Probabilistic Generative Approaches T2 - Department of Cognitive Science Y1 - 2006 A1 - Fasel, I.
JF - Department of Cognitive Science PB - University of California, San Diego CY - San Diego VL - Doctoral dissertation ER - TY - CONF T1 - The RUBI Project: Designing Everyday Robots by Immersion T2 - Fifth International Conference on Development and Learning (ICDL) Y1 - 2006 A1 - Tanaka, F. A1 - Movellan, J. JF - Fifth International Conference on Development and Learning (ICDL) CY - Bloomington, U.S.A. ER - TY - Generic T1 - Developing Dance Interaction between QRIO and Toddlers in a Classroom Environment: Plans for the First Steps T2 - Proceedings of the 2005 IEEE International Workshop on Robot and Human Interactive Communication (RO-MAN) Y1 - 2005 A1 - Tanaka, F. A1 - Fortenberry, B. A1 - Aisaka, K. A1 - Movellan, J. KW - Best Paper Award JF - Proceedings of the 2005 IEEE International Workshop on Robot and Human Interactive Communication (RO-MAN) CY - Nashville, U.S.A. ER - TY - COMP T1 - MPT: the Machine Perception Toolbox Y1 - 2005 A1 - Fasel, I. A1 - Fortenberry, B. A1 - Movellan, J. ER - TY - Generic T1 - Plans for developing real-time dance interaction between QRIO and toddlers in a classroom environment T2 - Proceedings of the International Conference on Development and Learning (ICDL05) Y1 - 2005 A1 - Tanaka, F. A1 - Fortenberry, B. A1 - Aisaka, K. A1 - Movellan, J. JF - Proceedings of the International Conference on Development and Learning (ICDL05) CY - Osaka, Japan ER - TY - Generic T1 - The RUBI project: Origins, principles and first steps T2 - Proceedings of the International Conference on Development and Learning (ICDL05) Y1 - 2005 A1 - Movellan, J. A1 - Tanaka, F. A1 - Fortenberry, B. A1 - Aisaka, K. JF - Proceedings of the International Conference on Development and Learning (ICDL05) CY - Osaka, Japan ER - TY - CONF T1 - Analysis of machine learning methods for real-time recognition of facial expressions from video T2 - Computer Vision and Pattern Recognition: Face Processing Workshop Y1 - 2004 A1 - Littlewort, G. A1 - Bartlett, M.
A1 - Fasel, I. A1 - Chenu, J. A1 - Movellan, J. JF - Computer Vision and Pattern Recognition: Face Processing Workshop ER - TY - JOUR T1 - Towards social robots: Automatic evaluation of human-robot interaction by face detection and expression classification JF - Advances in Neural Information Processing Systems Y1 - 2004 A1 - Littlewort, G. A1 - Bartlett, M. A1 - Chenu, J. A1 - Fasel, I. A1 - Kanda, T. A1 - Ishiguro, H. A1 - Movellan, J. VL - 16 ER -