Poster in Workshop: Machine Learning and the Physical Sciences

Cooperative multi-agent reinforcement learning outperforms decentralized execution in high-dimensional nonequilibrium control for steady-state design

Shriram Chennakesavalu · Grant Rotskoff


Abstract:

Experimental advances enabling high-resolution external control create new opportunities to produce materials with exotic properties. In this work, we investigate how a multi-agent reinforcement learning approach can be used to design external control protocols for self-assembly. We find that a fully decentralized approach performs remarkably well even with a "coarse" level of external control. More importantly, we see that a partially decentralized approach, in which each agent also receives information about surrounding regions, allows us to better steer the system toward a target distribution. We explain this by analyzing our approach as a partially observed Markov decision process (POMDP). With a partially decentralized approach, the agent is able to act more presciently, both by preventing the formation of undesirable structures and by better stabilizing target structures, as compared to a fully decentralized approach.
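To illustrate the distinction between the two settings, the sketch below constructs per-agent observations on a grid of control regions. It is a minimal illustration, not the authors' implementation: the grid layout, the scalar per-region observable, the `local_observation` helper, and the zero-padding at the boundary are all assumptions made for the example. A radius of 0 corresponds to a fully decentralized observation (the agent's own region only); a radius of 1 or more appends the surrounding regions, as in the partially decentralized setting.

```python
import numpy as np

def local_observation(field, i, j, radius=0):
    """Observation for the agent controlling region (i, j).

    radius=0: fully decentralized (agent's own region only).
    radius>=1: partially decentralized (includes surrounding regions).
    Out-of-bounds neighbors are zero-padded (an assumption for this sketch).
    """
    n_rows, n_cols = field.shape
    padded = np.zeros((n_rows + 2 * radius, n_cols + 2 * radius))
    padded[radius:radius + n_rows, radius:radius + n_cols] = field
    # Window centered on region (i, j) in padded coordinates.
    window = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
    return window.ravel()

# Example: a 4x4 grid of regions, each summarized by one scalar order
# parameter (hypothetical observable; the paper's state representation
# may differ).
field = np.random.rand(4, 4)
obs_decentralized = local_observation(field, 1, 2, radius=0)  # 1 value
obs_partial = local_observation(field, 1, 2, radius=1)        # 9 values
```

Under this framing, enlarging the observation radius trades some decentralization for the additional context that, per the abstract, lets agents act more presciently.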
