Title: A discriminative approach to frame-by-frame head pose tracking
ISBN: 978-1-4244-2153-4
Published: Amsterdam: IEEE, 09/2008
Abstract:
We present a discriminative approach to frame-by-frame head pose tracking that is robust to a wide range of illuminations and facial appearances and that is inherently immune to accuracy drift. Most previous research on head pose tracking has been validated on test datasets spanning only a small number (< 20) of subjects under controlled illumination conditions on continuous video sequences. In contrast, the system presented in this paper was both trained and tested on a much larger database, GENKI, spanning tens of thousands of different subjects, illuminations, and geographical locations, drawn from images on the Web. Our pose estimator achieves an accuracy of 5.82°, 5.65°, and 2.96° root-mean-square (RMS) error for yaw, pitch, and roll, respectively. A set of 4000 images from this dataset, labeled for pose, was collected and released for use by the research community.
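The RMS error figures quoted above are the standard metric for pose estimation. As a minimal sketch of how such a figure is computed (the function name and the sample angles below are illustrative, not taken from the paper or its dataset):

```python
import math

def rms_error(predicted, actual):
    """Root-mean-square error between predicted and ground-truth angles, in degrees."""
    if len(predicted) != len(actual) or not predicted:
        raise ValueError("inputs must be non-empty and of equal length")
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(predicted))

# Hypothetical yaw estimates vs. ground-truth labels (degrees)
pred = [10.0, -5.0, 2.0, 8.0]
truth = [12.0, -4.0, 0.0, 9.0]
print(round(rms_error(pred, truth), 3))  # → 1.581
```

The same computation would be run separately over the yaw, pitch, and roll channels to produce the three per-axis RMS values reported in the abstract.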
Keywords: accuracy drift; continuous video sequences; controlled illumination conditions; discriminative approach; face detection; face recognition; facial appearance; frame-by-frame head pose tracking; humans; image analysis; image databases; laboratories; lighting; magnetic heads; mean square error methods; pose estimation; robustness; root-mean-square error tracking; system testing; video sequences
Authors: Whitehill, J.; Movellan, Javier R.
URL: https://rubi.ucsd.edu/content/discriminative-approach-frame-frame-head-pose-tracking