Introduction
We have created a large, publicly available gaze data set: 5,880 images of 56 people across varying gaze directions and head poses. For each
subject, there are 5 head poses and 21 gaze directions per head pose, giving our data set more images and fixed gaze targets
than any other publicly available gaze data set at the time of its release. Our subjects were ethnically diverse, and 21 of them wore glasses.
We created this data set to train a detector to sense eye contact in an image using a passive, appearance-based approach. However, the data set
can be used for many other gaze estimation and tracking purposes as well. For more information about the data set, please refer to the paper listed
in "Citation" below.
Citation
This data set is made available for non-commercial use only. If you use this data set, please cite the following paper:
"Gaze Locking: Passive Eye Contact Detection for Human?Object Interaction,"
B.A. Smith, Q. Yin, S.K. Feiner and S.K. Nayar,
ACM Symposium on User Interface Software and Technology (UIST),
pp. 271–280, Oct. 2013.
Detailed Statistics
Our data set contains a total of 5,880 high-resolution images of 56 different people (32 male, 24 female), and each image has a resolution of 5,184 × 3,456 pixels.
21 of our subjects were Asian, 19 were White, 8 were South Asian, 7 were Black, and 4 were Hispanic or Latino. Our subjects ranged from 18 to 36 years of age, and
21 of them wore prescription glasses.
For each subject, we acquired images for each combination of five horizontal head poses (0°, ±15°, ±30°), seven horizontal gaze directions
(0°, ±5°, ±10°, ±15°), and three vertical gaze directions (0°, ±10°). Note that this means we collected five gaze-locking
images (0° vertical and horizontal gaze direction) for each subject: one for each head pose.
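As a concrete illustration, the short Python sketch below enumerates this label space and verifies the counts above. The tuple representation is for illustration only and does not reflect the data set's file naming.

    # Enumerate the data set's label space: 5 head poses x 7 horizontal
    # gaze directions x 3 vertical gaze directions = 105 labels per subject.
    from itertools import product

    HEAD_POSES = [-30, -15, 0, 15, 30]        # horizontal head pose (degrees)
    GAZE_H = [-15, -10, -5, 0, 5, 10, 15]     # horizontal gaze (degrees)
    GAZE_V = [-10, 0, 10]                     # vertical gaze (degrees)
    NUM_SUBJECTS = 56

    labels = list(product(HEAD_POSES, GAZE_H, GAZE_V))
    assert len(labels) == 5 * 21              # 21 gaze directions per head pose

    # Gaze-locking images are those with 0 degree gaze on both axes:
    locking = [lab for lab in labels if lab[1] == 0 and lab[2] == 0]
    assert len(locking) == 5                  # one per head pose

    print(NUM_SUBJECTS * len(labels))         # 5,880 images in total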
Our data set can be compared with gaze-tracking data sets recently made by McMurrough et al., Ponz et al., and Weidenbacher et al.
McMurrough et al.'s data set is video-based and includes precise head pose measurements rather than simply calibrating head pose. Ponz et al.'s data set (Gi4E)
does not stabilize subjects' head pose. Weidenbacher et al.'s data set offers a wide variety of fixed head poses, but many of them have only two corresponding gaze directions.
Collection Procedure
We recorded each image with a Canon EOS Rebel T3i camera and a Canon EF-S 18–135 mm IS f/3.5–5.6 zoom lens. As shown in the figure below, subjects were
seated in a fixed location in front of a black background, and a grid of dots was attached to a wall in front of them. The dots were placed in 5° increments
horizontally and 10° increments vertically. There were five camera positions marked on the floor (one for each head pose), and each position was 2 m from the
subject. The dots were arranged so that each camera position had its own corresponding 7 × 3 grid.
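For a concrete sense of the geometry, the sketch below computes each dot's metric position on the wall for one camera position's grid. The subject-to-wall distance is not specified above, so WALL_DIST_M is an assumed parameter; each offset from the grid center is simply distance × tan(angle). If the wall were 2 m away, for example, neighboring dots in a row would sit roughly 17–18 cm apart.

    # Metric positions of one camera position's 7 x 3 grid of dots, given
    # their 5 deg horizontal / 10 deg vertical angular spacing as seen from
    # the subject's eyes. WALL_DIST_M is an assumption for illustration;
    # the actual subject-to-wall distance is not stated in the text.
    import math

    WALL_DIST_M = 2.0

    for v_deg in (10, 0, -10):                      # top row first
        row = []
        for h_deg in (-15, -10, -5, 0, 5, 10, 15):
            x = WALL_DIST_M * math.tan(math.radians(h_deg))  # meters
            y = WALL_DIST_M * math.tan(math.radians(v_deg))  # meters
            row.append(f"({x:+.2f}, {y:+.2f})")
        print("  ".join(row))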
The subjects used a height-adjustable chin rest to stabilize their heads and position their eyes 70 cm above the floor. The camera was placed at eye height, as was the
center row of dots. For each subject and head pose (camera position), we took three to six images of the subject gazing (in raster-scan order) at each dot of the
pose's corresponding grid. The images were captured asynchronously. To ensure that the subject was in focus, not blinking, and looking in the correct direction,
we afterwards viewed each image at full resolution and kept the best one from each set.
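This selection was done manually. As an automated first pass for anyone replicating the procedure (not part of our pipeline), a common sharpness proxy is the variance of the Laplacian, sketched below with OpenCV; blink and gaze-direction checks would still require manual review.

    # Automated focus check: keep the sharpest image of each burst using
    # the variance-of-Laplacian focus measure. This is a stand-in for the
    # manual inspection described above, not the procedure we actually used.
    import cv2

    def sharpness(path):
        """Higher variance of the Laplacian means a more in-focus image."""
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        return cv2.Laplacian(gray, cv2.CV_64F).var()

    def best_of_burst(paths):
        """Return the sharpest image from a burst of three to six shots."""
        return max(paths, key=sharpness)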
Download Link
You can download the data set here.
This data set is made available for non-commercial use only. If you use this data set, please cite the paper listed in "Citation" above.
Project Page
Gaze Locking: Passive Eye Contact Detection for Human–Object Interaction