01620nas a2200349 4500008004100000020002200041245006600063210006600129260003200195520052200227653001900749653001900768653002800787653003500815653001300850653003900863653001400902653002300916653002400939653001700963653001100980653003900991653002101030653000901051653002501060653001501085653001501100653003001115100001501145700001701160856009301177 2008 eng d a978-1-4244-2661-400aAutomatic cry detection in early childhood education settings0 aAutomatic cry detection in early childhood education settings aMonterey, CAbIEEEc08/20083 a
We present results on applying a novel machine learning approach for learning auditory moods in natural environments [1] to the problem of detecting crying episodes in preschool classrooms. The resulting system achieved levels of performance approaching that of human coders and also significantly outperformed previous approaches to this problem [2].
10aAcoustic noise10aauditory moods10aautomatic cry detection10abehavioural sciences computing10aDeafness10aearly childhood education settings10aeducation10aEducational robots10aEmotion recognition10ahuman coders10aHumans10alearning (artificial intelligence)10aMachine Learning10aMood10apreschool classrooms10aPrototypes10aRobustness10aWorking environment noise1 aRuvolo, P.1 aMovellan, J. uhttps://rubi.ucsd.edu/content/automatic-cry-detection-early-childhood-education-settings02225nas a2200373 4500008004100000020002200041245006700063210006300130260002900193520104300222653001901265653003001284653003801314653002801352653001901380653002101399653002201420653003801442653001101480653001901491653002001510653001701530653001301547653001901560653003001579653002001609653001501629653003601644653001901680653002001699100001801719700002501737856008901762 2008 eng d a978-1-4244-2153-400aA discriminative approach to frame-by-frame head pose tracking0 adiscriminative approach to framebyframe head pose tracking aAmsterdambIEEEc09/20083 aWe present a discriminative approach to frame-by-frame head pose tracking that is robust to a wide range of illuminations and facial appearances and that is inherently immune to accuracy drift. Most previous research on head pose tracking has been validated on test datasets spanning only a small number (< 20) of subjects, recorded as continuous video sequences under controlled illumination conditions. In contrast, the system presented in this paper was both trained and tested on a much larger database, GENKI, spanning tens of thousands of different subjects, illuminations, and geographical locations from images on the Web. Our pose estimator achieves accuracies of 5.82deg, 5.65deg, and 2.96deg root-mean-square (RMS) error for yaw, pitch, and roll, respectively. A set of 4000 images from this dataset, labeled for pose, was collected and released for use by the research community.
10aaccuracy drift10acontinuous video sequence10acontrolled illumination condition10adiscriminative approach10aface detection10aface recognition10afacial appearance10aframe-by-frame head pose tracking10aHumans10aImage analysis10aImage databases10aLaboratories10aLighting10aMagnetic heads10amean square error methods10apose estimation10aRobustness10aroot-mean-square error tracking10aSystem testing10aVideo sequences1 aWhitehill, J.1 aMovellan, J. R. uhttps://rubi.ucsd.edu/content/discriminative-approach-frame-frame-head-pose-tracking