

Oral in Workshop: Multi-Agent Security: Security as Key to AI Safety

Cooperative AI via Decentralized Commitment Devices

Xyn Sun · Davide Crapis · Matt Stephenson · Jonathan Passerat-Palmbach

Keywords: [ Maximal Extractable Value (MEV) ] [ Multi-Agent Reinforcement Learning (MARL) ] [ credible commitment devices ] [ cooperative AI ] [ multi-agent security ]

[ Project Page ]
Sat 16 Dec 9:05 a.m. PST — 9:20 a.m. PST
Presentation: Multi-Agent Security: Security as Key to AI Safety
Sat 16 Dec 7 a.m. PST — 3:30 p.m. PST

Abstract:

Credible commitment devices have been a popular approach for robust multi-agent coordination. However, existing commitment mechanisms face limitations around privacy and integrity, and are susceptible to strategic behavior by mediators or users. It is unclear whether the cooperative AI techniques we study are robust to real-world incentives and attack vectors. Fortunately, decentralized commitment devices that utilize cryptography have been deployed in the wild, and numerous studies have shown their ability to coordinate algorithmic agents, especially when agents face rational or sometimes adversarial opponents with significant economic incentives, currently on the order of several million to billions of dollars. In this paper, we illustrate potential security issues in cooperative AI via examples from the decentralization literature and, in particular, Maximal Extractable Value (MEV). We call for expanded research into decentralized commitment devices to advance cooperative AI capabilities for secure coordination in open environments, and for empirical testing frameworks to evaluate multi-agent coordination ability under real-world commitment constraints.
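To make the abstract's core idea concrete, here is a minimal sketch (not from the paper) of a conditional commitment device for a one-shot Prisoner's Dilemma, in the spirit of the program-equilibrium literature: each agent can lock in "I cooperate if the other party also commits," and the device resolves actions only after commitments are recorded, making mutual cooperation an equilibrium outcome. The names `CommitmentDevice` and `ConditionalCommitment` are hypothetical; a deployed decentralized version would enforce the commitment via a smart contract rather than an in-memory object.

```python
# Toy illustration of a credible commitment device (hypothetical, for exposition only).

from dataclasses import dataclass

# Standard Prisoner's Dilemma payoffs: (row, col) for actions in {"C", "D"}.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

@dataclass
class ConditionalCommitment:
    """'I play C if my counterparty has also committed to this device.'"""
    agent: str
    fallback: str = "D"  # action taken if the condition is not met

class CommitmentDevice:
    def __init__(self):
        self.commitments = {}

    def commit(self, c: ConditionalCommitment):
        # In a deployed (e.g., smart-contract) device this step would be
        # binding and publicly verifiable; here it is simply recorded.
        self.commitments[c.agent] = c

    def resolve(self, agents):
        # Agents who committed play "C" only if everyone committed;
        # uncommitted agents play the dominant strategy "D".
        both_committed = all(a in self.commitments for a in agents)
        actions = {}
        for a in agents:
            if a in self.commitments:
                actions[a] = "C" if both_committed else self.commitments[a].fallback
            else:
                actions[a] = "D"
        return actions

device = CommitmentDevice()
device.commit(ConditionalCommitment("alice"))
device.commit(ConditionalCommitment("bob"))
acts = device.resolve(["alice", "bob"])
print(acts, PAYOFFS[(acts["alice"], acts["bob"])])  # {'alice': 'C', 'bob': 'C'} (3, 3)
```

If either agent withholds its commitment, the device resolves to mutual defection, so neither agent gains by deviating; the security questions raised in the paper concern what happens when the device itself, or the ordering of commitments, can be manipulated (as in MEV).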
