
HAGI++: Head-Assisted Gaze Imputation and Generation

Chuhan Jiao, Zhiming Hu, Andreas Bulling

arXiv:2511.02468, 2025.


Abstract

Mobile eye tracking plays a vital role in capturing human visual attention across both real-world and extended reality (XR) environments, making it an essential tool for applications ranging from behavioural research to human-computer interaction. However, missing values due to blinks, pupil detection errors, or illumination changes pose significant challenges for further gaze data analysis. To address this challenge, we introduce HAGI++, a multi-modal diffusion-based approach to gaze data imputation that, for the first time, uses the head orientation sensors integrated into these devices to exploit the inherent correlation between head and eye movements. HAGI++ employs a transformer-based diffusion model to learn cross-modal dependencies between eye and head representations and can be readily extended to incorporate additional body movements. Extensive evaluations on the large-scale Nymeria, Ego-Exo4D, and HOT3D datasets demonstrate that HAGI++ consistently outperforms conventional interpolation methods and deep learning-based time-series imputation baselines in gaze imputation. Furthermore, statistical analyses confirm that HAGI++ produces gaze velocity distributions that closely match actual human gaze behaviour, ensuring more realistic gaze imputations. Moreover, by incorporating wrist motion captured by commercial wearable devices, HAGI++ surpasses prior methods that rely on full-body motion capture in the extreme case of 100% missing gaze data (pure gaze generation). Our method paves the way for more complete and accurate eye gaze recordings in real-world settings and has significant potential for enhancing gaze-based analysis and interaction across various application domains.
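
To make the diffusion-based imputation idea above concrete, the following is a minimal PyTorch sketch of a transformer denoiser conditioned on head orientation. It is not the authors' implementation: the 2D gaze and 3-DoF head-orientation inputs, the masked-noise training strategy, the linear DDPM schedule, and all module names and dimensions are illustrative assumptions.

# Minimal sketch (assumptions, not the HAGI++ release) of diffusion-based gaze
# imputation conditioned on head orientation.
import torch
import torch.nn as nn

class GazeImputationDenoiser(nn.Module):
    """Transformer denoiser: predicts the noise on missing gaze samples,
    conditioned on observed gaze and synchronous head orientation."""
    def __init__(self, d_model=128, n_layers=4, n_heads=4):
        super().__init__()
        self.gaze_in = nn.Linear(2, d_model)      # gaze: (x, y) per frame (assumed)
        self.head_in = nn.Linear(3, d_model)      # head orientation: yaw/pitch/roll (assumed)
        self.mask_emb = nn.Embedding(2, d_model)  # 0 = observed, 1 = missing
        self.t_emb = nn.Sequential(nn.Linear(1, d_model), nn.SiLU(),
                                   nn.Linear(d_model, d_model))
        enc = nn.TransformerEncoderLayer(d_model, n_heads,
                                         dim_feedforward=4 * d_model,
                                         batch_first=True)
        self.backbone = nn.TransformerEncoder(enc, n_layers)
        self.out = nn.Linear(d_model, 2)          # predicted noise on the gaze channels

    def forward(self, noisy_gaze, head, miss_mask, t):
        # noisy_gaze: (B, T, 2), head: (B, T, 3), miss_mask: (B, T) long in {0,1}, t: (B,)
        h = self.gaze_in(noisy_gaze) + self.head_in(head) + self.mask_emb(miss_mask)
        h = h + self.t_emb(t[:, None, None].float())   # broadcast diffusion-step embedding
        return self.out(self.backbone(h))

def training_step(model, gaze, head, miss_mask, n_steps=1000):
    """One DDPM-style step: noise only the missing gaze frames, keep observed
    frames clean as conditioning, and regress the added noise on missing frames."""
    B, T, _ = gaze.shape
    t = torch.randint(0, n_steps, (B,))
    betas = torch.linspace(1e-4, 0.02, n_steps)          # assumed linear schedule
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)[t].view(B, 1, 1)
    noise = torch.randn_like(gaze)
    noisy = alpha_bar.sqrt() * gaze + (1 - alpha_bar).sqrt() * noise
    m = miss_mask.unsqueeze(-1).float()
    mixed = m * noisy + (1 - m) * gaze                   # observed frames stay clean
    pred = model(mixed, head, miss_mask, t)
    return ((pred - noise) ** 2 * m).sum() / m.sum().clamp(min=1.0)

# Example usage: 3-second windows at 60 Hz with ~30% of gaze frames missing.
# model = GazeImputationDenoiser()
# gaze, head = torch.randn(8, 180, 2), torch.randn(8, 180, 3)
# miss = (torch.rand(8, 180) < 0.3).long()
# loss = training_step(model, gaze, head, miss); loss.backward()

At inference, the same mask-conditioned denoiser would be run through the reverse diffusion process, re-imposing the observed gaze samples at each step so that only the missing frames are generated; extending the conditioning to wrist or other body motion amounts to adding further input projections alongside the head branch.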


BibTeX

@techreport{jiao25_arxiv,
  title      = {{{HAGI}}++: {{Head-Assisted Gaze Imputation}} and {{Generation}}},
  shorttitle = {{{HAGI}}++},
  author     = {Jiao, Chuhan and Hu, Zhiming and Bulling, Andreas},
  year       = {2025},
  publisher  = {arXiv},
  doi        = {10.48550/arXiv.2511.02468}
}