
AttentionLeak: What Does Human Attention Reveal About Information Visualisation?

Malte Sönnichsen, Mayar Elfares, Yao Wang, Ralf Küsters, Alina Roitberg, Andreas Bulling

Proc. International Conference on Document Analysis and Recognition (ICDAR), 2025.


Abstract

In scenarios where direct access to displayed content, such as secured web pages or confidential documents, is restricted, eye-tracking data can serve as a side channel for information inference. Represented as human attention maps, eye-tracking data is widely used in research, for example, to quantify how users explore visual information. In this work, we focus on visual question answering (VQA) scenarios to demonstrate, for the first time, that a rich amount of information can be leaked solely from human attention maps. Specifically, we assume that an adversary has access only to the gaze attention maps and aims to derive a range of attributes about the image (e.g. the chart type), the question (e.g. the question type), and the answer (e.g. the accuracy-based complexity). This information leakage could be the first step towards potentially more complex insights about human perception and cognition. Our experiments demonstrate that deriving these attributes is feasible and that simultaneously predicting multiple attributes improves the success rate for attributes that are difficult to infer. This paper highlights potential threats, encouraging the community to address these concerns and develop appropriate privacy-preserving solutions.
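
To make the described threat model concrete, below is a minimal PyTorch sketch of how an adversary might jointly predict several attributes from a gaze attention map with a shared encoder and one head per attribute. The encoder architecture, the three attribute heads, and the class counts are illustrative assumptions for this sketch, not the model evaluated in the paper.

# Minimal sketch: multi-task attribute prediction from a gaze attention map.
# Architecture, heads, and class counts are illustrative assumptions only.
import torch
import torch.nn as nn

class AttentionAttributeClassifier(nn.Module):
    def __init__(self, n_chart_types=5, n_question_types=4, n_complexity_levels=3):
        super().__init__()
        # Shared encoder over a single-channel (grayscale) attention map.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        # One head per attribute; training all heads jointly corresponds to
        # the simultaneous multi-attribute prediction discussed in the abstract.
        self.chart_head = nn.Linear(32, n_chart_types)
        self.question_head = nn.Linear(32, n_question_types)
        self.complexity_head = nn.Linear(32, n_complexity_levels)

    def forward(self, attention_map):
        features = self.encoder(attention_map)
        return {
            "chart_type": self.chart_head(features),
            "question_type": self.question_head(features),
            "answer_complexity": self.complexity_head(features),
        }

# Example: a batch of four 224x224 attention maps.
maps = torch.rand(4, 1, 224, 224)
logits = AttentionAttributeClassifier()(maps)
print({k: v.shape for k, v in logits.items()})

In such a setup, the per-attribute losses would simply be summed during training, so the shared encoder can exploit correlations between attributes that are easy and hard to infer.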

BibTeX

@inproceedings{sonnichsen25_icdar,
  title     = {AttentionLeak: What Does Human Attention Reveal About Information Visualisation?},
  author    = {Sönnichsen, Malte and Elfares, Mayar and Wang, Yao and Küsters, Ralf and Roitberg, Alina and Bulling, Andreas},
  year      = {2025},
  pages     = {},
  booktitle = {Proc. International Conference on Document Analysis and Recognition (ICDAR)}
}