Ontology-Guided Diffusion for Zero-Shot Visual Sim2Real Transfer
Mohamed Youssef, Mayar Elfares, Anna-Maria Meer, Matteo Bortoletto, Andreas Bulling
arXiv:2603.18719, 2026.
Abstract
Bridging the simulation-to-reality (sim2real) gap remains challenging as labelled real-world data is scarce. Existing diffusion-based approaches rely on unstructured prompts or statistical alignment, which do not capture the structured factors that make images look real. We introduce Ontology-Guided Diffusion (OGD), a neuro-symbolic zero-shot sim2real image translation framework that represents realism as structured knowledge. OGD decomposes realism into an ontology of interpretable traits – such as lighting and material properties – and encodes their relationships in a knowledge graph. From a synthetic image, OGD infers trait activations and uses a graph neural network to produce a global embedding. In parallel, a symbolic planner uses the ontology traits to compute a consistent sequence of visual edits needed to narrow the realism gap. The graph embedding conditions a pretrained instruction-guided diffusion model via cross-attention, while the planned edits are converted into a structured instruction prompt. Across benchmarks, our graph-based embeddings better distinguish real from synthetic imagery than baselines, and OGD outperforms state-of-the-art diffusion methods in sim2real image translation. Overall, OGD shows that explicitly encoding realism structure enables interpretable, data-efficient, and generalisable zero-shot sim2real transfer.
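The paper itself is linked below; purely as a rough illustration of the kind of pipeline the abstract describes – trait activations propagated over a realism knowledge graph by a graph neural network, and the resulting global embedding injected into a diffusion model via cross-attention – here is a minimal PyTorch sketch. All module names, dimensions, and the simple message-passing and readout scheme are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' code): pooling trait activations over an
# ontology graph into a global embedding, then conditioning diffusion features on it
# via cross-attention. Names, sizes, and the two-round message passing are assumptions.
import torch
import torch.nn as nn

class TraitGraphEncoder(nn.Module):
    """Embeds per-trait activations and propagates them over the ontology graph."""
    def __init__(self, num_traits: int, dim: int = 256):
        super().__init__()
        self.trait_emb = nn.Embedding(num_traits, dim)  # one learned vector per ontology trait
        self.msg = nn.Linear(dim, dim)                   # message transform
        self.upd = nn.GRUCell(dim, dim)                  # node state update
        self.readout = nn.Linear(dim, dim)               # global graph embedding

    def forward(self, activations: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # activations: (B, T) trait activations inferred from the synthetic image
        # adj: (T, T) row-normalised adjacency of the realism knowledge graph
        B, T = activations.shape
        h = activations.unsqueeze(-1) * self.trait_emb.weight      # (B, T, dim)
        for _ in range(2):                                          # two rounds of message passing
            m = torch.einsum("ij,bjd->bid", adj, self.msg(h))       # aggregate neighbour messages
            h = self.upd(m.reshape(B * T, -1), h.reshape(B * T, -1)).reshape(B, T, -1)
        return self.readout(h.mean(dim=1))                          # (B, dim) global realism embedding

class CrossAttentionConditioner(nn.Module):
    """Lets diffusion UNet features attend to the graph embedding (one conditioning token)."""
    def __init__(self, feat_dim: int = 320, cond_dim: int = 256, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(feat_dim, heads, kdim=cond_dim,
                                          vdim=cond_dim, batch_first=True)

    def forward(self, unet_feats: torch.Tensor, graph_emb: torch.Tensor) -> torch.Tensor:
        # unet_feats: (B, N, feat_dim) flattened spatial features; graph_emb: (B, cond_dim)
        ctx = graph_emb.unsqueeze(1)                                # (B, 1, cond_dim)
        out, _ = self.attn(unet_feats, ctx, ctx)
        return unet_feats + out                                     # residual conditioning

# Toy usage: 12 ontology traits, batch of 2 synthetic images.
if __name__ == "__main__":
    T, B = 12, 2
    adj = torch.rand(T, T)
    adj = adj / adj.sum(dim=1, keepdim=True)                        # row-normalise the adjacency
    enc, cond = TraitGraphEncoder(T), CrossAttentionConditioner()
    emb = enc(torch.rand(B, T), adj)                                # (2, 256)
    feats = cond(torch.rand(B, 64, 320), emb)                       # conditioned UNet features
    print(emb.shape, feats.shape)
```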
Links
doi: 10.48550/arXiv.2603.18719
Paper: youssef26_arxiv.pdf
BibTeX
@techreport{youssef26_arxiv,
title = {Ontology-{{Guided Diffusion}} for {{Zero-Shot Visual Sim2Real Transfer}}},
author = {Youssef, Mohamed and Elfares, Mayar and Meer, Anna-Maria and Bortoletto, Matteo and Bulling, Andreas},
year = {2026},
doi = {10.48550/arXiv.2603.18719}
}