Gaze Estimation Using a Camera-Based Model in a Classroom


Lars Ojinnaka

Document Type


Degree Name

Master of Science (MS)


Computer Science and Information Systems

Date of Award

Spring 2020


In this study, we design and develop methods to estimate a subject's gaze vector with respect to an area of interest in an environment, using video eye tracking from ordinary camera input. Gaze estimation while a subject performs a task has great potential for enabling human-computer interactive systems that assist with, monitor, and improve task performance. Our underlying assumption is that the subject's visual cognitive processes give insight into higher-level measures such as attention and mind wandering. For example, a final version of the system we develop could estimate students' attention and engagement broken down by time period, cross-referenced with the activities occurring in the classroom at each of those moments. In this research we develop the components of a pipeline for a system that would take ordinary camera input of an environment with potentially multiple subjects to be tracked. We implement some of these components to demonstrate the feasibility of current technology for capturing and tracking eye fixations from ordinary camera input. We also explore and discuss the feasibility of implementing and assembling all of the pieces of such a system into the envisioned end-to-end gaze vector estimation for multiple subjects. We present results of training classifiers to estimate head pose from video images, and we develop and present techniques for estimating the positions of people identified in camera images. Together with other classifiers, this information can in principle be used to estimate subjects' eye gaze vectors and project them onto the area of the environment currently being gazed at.
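The final projection step described above, mapping an estimated gaze vector to a location in the environment, can be illustrated as a ray-plane intersection. The sketch below is a minimal illustration, not the thesis's implementation: the function name, coordinate frame, and whiteboard geometry are all assumptions made for the example.

```python
import numpy as np

def project_gaze_to_plane(eye_pos, gaze_dir, plane_point, plane_normal):
    """Intersect a gaze ray with a planar area of interest.

    eye_pos: 3D eye position of the subject in the room frame (meters).
    gaze_dir: gaze direction vector (e.g., derived from head pose
              and eye orientation estimates).
    plane_point, plane_normal: define the plane of the area of
              interest (e.g., a classroom whiteboard).
    Returns the 3D intersection point, or None if the gaze is
    parallel to the plane or points away from it.
    """
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    denom = np.dot(plane_normal, gaze_dir)
    if abs(denom) < 1e-9:
        return None  # gaze ray parallel to the plane
    t = np.dot(plane_normal, plane_point - eye_pos) / denom
    if t < 0:
        return None  # plane lies behind the subject
    return eye_pos + t * gaze_dir

# Hypothetical example: a subject 2 m from a whiteboard in the
# z = 0 plane, looking straight at it.
hit = project_gaze_to_plane(
    np.array([0.5, 1.2, 2.0]),   # eye position (m)
    np.array([0.0, 0.0, -1.0]),  # gaze direction toward the board
    np.array([0.0, 0.0, 0.0]),   # point on the board plane
    np.array([0.0, 0.0, 1.0]),   # board plane normal
)
# hit is the gazed-at point on the board: [0.5, 1.2, 0.0]
```

In a full pipeline, `eye_pos` would come from the person-position estimation step and `gaze_dir` from the head-pose and eye classifiers; the intersection point can then be tested against the bounds of each area of interest.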


Advisor

Derek Harter

Subject Categories

Computer Sciences | Physical Sciences and Mathematics