Abstract
P. Altmann, T. Phan, M. Zorn, C. Linnhoff-Popien and S. Koenig. Dynamic Incentivized Cooperation under Changing Rewards [Extended Abstract]. In International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS), pages (in print), 2026.

Abstract: Many real-world multi-agent systems are characterized by two simultaneous challenges: strategic tension in social dilemmas and non-stationary reward signals. While peer incentivization (PI) has emerged as a decentralized mechanism to promote cooperation in multi-agent reinforcement learning (MARL), existing approaches typically rely on fixed or externally scaled incentive magnitudes. When environmental rewards change, whether through scaling, shifting, or drift, the relative strength of rewards and incentives can become misaligned, destabilizing cooperation even when the underlying strategic structure remains unchanged. We analyze this structural sensitivity and argue that reward normalization preserves gradient invariance but does not resolve incentive misalignment in social dilemmas. We then introduce Dynamic Reward Incentives for Variable Exchange (DRIVE), a reciprocal shaping mechanism that exchanges reward differences rather than fixed magnitudes. Because these differences are expressed in reward units, they scale proportionally under affine reward changes, preserving the relative influence of environmental rewards and incentives.
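The affine-scaling argument in the abstract can be illustrated with a minimal numerical sketch. This is not code from the paper; all reward values, the incentive magnitude, and the affine parameters below are illustrative assumptions. It shows why an incentive expressed as a reward difference stays proportionally aligned under an affine reward change r' = a*r + b, while a fixed incentive magnitude loses influence.

```python
# Illustrative affine reward change r' = a*r + b (values are assumptions).
a, b = 10.0, 5.0

# Example environmental rewards before the change (illustrative).
r_coop, r_defect = 3.0, 1.0

# A difference-based incentive, expressed in reward units.
diff_incentive = r_coop - r_defect

# Apply the affine change to both rewards.
r_coop2 = a * r_coop + b
r_defect2 = a * r_defect + b
diff_incentive2 = r_coop2 - r_defect2

# The shift b cancels in the difference, so the incentive scales
# by exactly the same factor a as the environmental rewards.
assert diff_incentive2 == a * diff_incentive

# A fixed incentive magnitude, by contrast, keeps its original size
# and shrinks relative to the rescaled reward gap.
fixed_incentive = 2.0
ratio_before = fixed_incentive / (r_coop - r_defect)
ratio_after = fixed_incentive / (r_coop2 - r_defect2)
print(ratio_before, ratio_after)  # the fixed incentive's relative influence drops by factor a
```

Because the shift cancels and the scale factor carries through, a difference-based incentive preserves the relative influence of rewards and incentives under any affine change, which is the invariance property the abstract attributes to DRIVE.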
Many publishers do not want authors to make their papers available electronically after the papers have been published. Please use the electronic versions provided here only if hardcopies are not yet available. If you have comments on any of these papers, please send me an email! Also, please send me your papers if we have common interests.
This page was automatically created by a bibliography maintenance system that was developed as part of an undergraduate research project, advised by Sven Koenig.