This study presents a novel framework for 3D gaze tracking tailored to mixed-reality settings, aimed at enhancing joint attention and collaborative efforts in team-based scenarios. Conventional gaze tracking, often limited by monocular cameras and traditional eye-tracking apparatus, struggles to synchronize and analyze data from multiple participants in group contexts. The proposed framework leverages state-of-the-art computer vision and machine learning techniques to overcome these obstacles, enabling precise 3D gaze estimation without dependence on specialized hardware. Using facial recognition and deep learning, it achieves real-time tracking of gaze patterns across several individuals, mitigates common depth estimation errors, and maintains spatial and identity consistency within the dataset. This enables significant advances in behavior and interaction analysis for educational and professional training applications in dynamic, unstructured environments.
This framework integrates facial recognition and advanced gaze analysis to enable real-time 3D gaze tracking in collaborative mixed-reality environments. It precisely estimates gaze directions and maps interactions within dynamically reconstructed 3D spaces, enhancing the study of social dynamics and participant engagement.
The facial recognition module utilizes a fine-tuned Multi-task Cascaded Convolutional Network (MTCNN) to detect and track faces in real time. This module identifies and tracks participants across frames, even amid movement and occlusions. By creating vector embeddings with FaceNet, the system ensures consistent identification, linking detected faces to their respective identities. This enables detailed analysis of individual participant behavior and provides a robust foundation for subsequent gaze tracking and interaction mapping within the 3D environment.
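For illustration, below is a minimal sketch of this detection-plus-embedding step, assuming the facenet-pytorch implementations of MTCNN and FaceNet (InceptionResnetV1). The paper does not tie the module to this specific package, and the file name, gallery structure, and matching threshold are illustrative assumptions.

```python
# Sketch: face detection + identity embedding, assuming the facenet-pytorch package.
import torch
from PIL import Image
from facenet_pytorch import MTCNN, InceptionResnetV1

device = "cuda" if torch.cuda.is_available() else "cpu"
mtcnn = MTCNN(keep_all=True, device=device)                 # detect every face in a frame
embedder = InceptionResnetV1(pretrained="vggface2").eval().to(device)  # FaceNet-style embeddings

frame = Image.open("frame_0001.jpg")                        # hypothetical video frame
boxes, probs = mtcnn.detect(frame)                          # (N, 4) bounding boxes + confidences
faces = mtcnn(frame)                                        # aligned face crops, (N, 3, 160, 160)

if faces is not None:
    with torch.no_grad():
        embeddings = embedder(faces.to(device))             # (N, 512) identity vectors

    def match_identity(emb, gallery, threshold=0.9):
        """Link a detected face to a known participant by nearest embedding.
        `gallery` maps participant id -> reference embedding; the threshold is illustrative."""
        best_id, best_dist = None, float("inf")
        for pid, ref in gallery.items():
            dist = torch.norm(emb - ref).item()
            if dist < best_dist:
                best_id, best_dist = pid, dist
        return best_id if best_dist < threshold else None
```

Matching each frame's detections against a small gallery of reference embeddings is what keeps identities stable across movement and occlusions.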
The gaze estimation module provides precise 3D gaze tracking by combining advanced computer vision techniques with depth estimation. First, the module processes face crops with L2CS-Net, which outputs gaze direction as pitch and yaw angles; these angles are then converted into a 3D rotation matrix that establishes each participant's gaze orientation. To further enhance accuracy, the module employs ZoeDepth for metric depth estimation, allowing consistent and realistic 3D scene reconstruction. By reprojecting the 2D facial data into this 3D space, the system determines where each participant is looking within the reconstructed environment. Gaze interactions are thereby mapped to relevant objects or areas of interest, providing insight into participant focus and engagement in dynamic, mixed-reality settings.
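A rough geometric sketch of this step is shown below: pitch/yaw angles (as produced by a gaze network such as L2CS-Net) are turned into a 3D direction, and the face-crop center is lifted into metric camera space using a pinhole model and a per-pixel depth (as ZoeDepth would provide). The axis/sign convention, camera intrinsics, and all numeric values are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

def gaze_vector(pitch, yaw):
    """Convert pitch/yaw (radians) into a unit 3D gaze direction.
    The sign/axis convention is an assumption and must match the gaze model's convention."""
    return np.array([
        -np.cos(pitch) * np.sin(yaw),
        -np.sin(pitch),
        -np.cos(pitch) * np.cos(yaw),
    ])

def backproject(u, v, depth, fx, fy, cx, cy):
    """Lift a pixel (u, v) with metric depth into camera-space 3D coordinates
    using a pinhole camera model."""
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Hypothetical per-participant values: camera intrinsics, face-crop center pixel,
# its metric depth, and gaze angles (all numbers are illustrative).
fx, fy, cx, cy = 600.0, 600.0, 320.0, 240.0
face_center_px = (350, 210)
face_depth_m = 1.8
pitch, yaw = np.deg2rad(-5.0), np.deg2rad(20.0)

origin = backproject(*face_center_px, face_depth_m, fx, fy, cx, cy)
direction = gaze_vector(pitch, yaw)

# The gaze ray origin + t * direction can be intersected with scene geometry
# or other participants' positions to find what is being looked at.
gaze_point_1m = origin + 1.0 * direction
print(origin, direction, gaze_point_1m)
```

Intersecting each participant's gaze ray with the reconstructed scene is what maps attention onto concrete objects, regions, or other participants.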
@misc{davalos20243dgazetrackingstudying,
  title={3D Gaze Tracking for Studying Collaborative Interactions in Mixed-Reality Environments},
  author={Eduardo Davalos and Yike Zhang and Ashwin T. S. and Joyce H. Fonteles and Umesh Timalsina and Gautam Biswas},
  year={2024},
  eprint={2406.11003},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2406.11003},
}