High Entropy Leads to Symmetry Equivariant Policies in Dec-POMDPs
Johannes Forkel, Constantin Ruhdorfer, Andreas Bulling, Jakob Foerster
arXiv:2511.22581, 2026.
Abstract
We prove that in any Dec-POMDP, sufficiently high entropy regularization ensures that policy gradient ascent with tabular softmax parametrization always converges, for any initialization, to the same joint policy, and that this joint policy is equivariant w.r.t. all symmetries of the Dec-POMDP. In particular, policies coming from different random seeds will be fully compatible, in that their cross-play returns are equal to their self-play returns. Through extensive empirical evaluation of independent PPO in the Hanabi, Overcooked, and Yokai environments, we find that the entropy coefficient has a massive influence on the cross-play returns between independently trained policies, and that the drop in self-play returns coming from increased entropy regularization can often be counteracted by greedifying the learned policies after training. In Hanabi we achieve a new SOTA in inter-seed cross-play this way. Despite clear limitations of this recipe, which we point out, both our theoretical and empirical results indicate that during hyperparameter sweeps in Dec-POMDPs, one should consider far higher entropy coefficients than is typically done.
Links
doi: 10.48550/arXiv.2511.22581
Paper: forkel26_arxiv.pdf
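As a toy illustration of the claim in the abstract (not code from the paper), the sketch below runs exact entropy-regularized policy gradient ascent with a tabular softmax parametrization on a two-action coordination game, a degenerate one-step Dec-POMDP whose only nontrivial symmetry swaps the two actions. The game, step count, learning rate, and entropy coefficients are made up for the illustration; they are not hyperparameters from the paper.

import numpy as np

# Two-player coordination game: both agents receive +1 if their actions match.
# The game is symmetric under swapping the two actions, so the unique
# symmetry-equivariant policy is the uniform one.
payoff = np.array([[1.0, 0.0],
                   [0.0, 1.0]])

def softmax(theta):
    z = np.exp(theta - theta.max())
    return z / z.sum()

def train(entropy_coef, seed, steps=20000, lr=0.1):
    rng = np.random.default_rng(seed)
    theta = [rng.normal(size=2), rng.normal(size=2)]  # tabular softmax logits per agent
    for _ in range(steps):
        p1, p2 = softmax(theta[0]), softmax(theta[1])
        q1, q2 = payoff @ p2, payoff.T @ p1  # exact expected return of each action
        # Gradient of (expected return + entropy_coef * policy entropy) w.r.t. the logits.
        g1 = p1 * (q1 - p1 @ q1) - entropy_coef * p1 * (np.log(p1) - p1 @ np.log(p1))
        g2 = p2 * (q2 - p2 @ q2) - entropy_coef * p2 * (np.log(p2) - p2 @ np.log(p2))
        theta[0] += lr * g1
        theta[1] += lr * g2
    return softmax(theta[0]), softmax(theta[1])

for coef in (0.0, 2.0):
    runs = [train(coef, s) for s in range(3)]
    print(f"entropy_coef={coef}:", [np.round(p1, 2) for p1, _ in runs])

# Without the entropy bonus, different seeds can commit to different, mutually
# incompatible conventions; with a high coefficient every seed converges to the
# same (here uniform) symmetry-equivariant policy.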
BibTeX
@techreport{forkel26_arxiv,
  title = {High Entropy Leads to Symmetry Equivariant Policies in {{Dec-POMDPs}}},
  author = {Forkel, Johannes and Ruhdorfer, Constantin and Bulling, Andreas and Foerster, Jakob},
  year = {2026},
  doi = {10.48550/arXiv.2511.22581}
}