What is the Actor-Critic Approach for Controlling Language Model Agents in Decision-Making?

Original title: Controlling Large Language Model-based Agents for Large-Scale Decision-Making: An Actor-Critic Approach

Authors: Bin Zhang, Hangyu Mao, Jingqing Ruan, Ying Wen, Yang Li, Shao Zhang, Zhiwei Xu, Dapeng Li, Ziyue Li, Rui Zhao, Lijuan Li, Guoliang Fan

In the article, the authors discuss how advances in large language models (LLMs) have opened new possibilities for planning and decision-making in multi-agent systems (MAS). However, they point out that as the number of agents grows, two challenges become acute: hallucination in LLMs and coordination across the MAS. They also highlight the importance of token efficiency when LLMs mediate interactions among a large number of agents.

To address these issues, the authors propose a new framework inspired by the actor-critic paradigm from multi-agent reinforcement learning. The resulting solution is modular and token-efficient, and is designed to mitigate both the hallucination and coordination challenges. To validate the approach, they conduct experiments on system resource allocation and robot grid transportation, and the results demonstrate the significant advantages of the proposed framework.
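To make the actor-critic interaction pattern concrete, here is a minimal toy sketch of such a control loop. The article does not specify the authors' implementation, so the function names, the numeric stand-ins for the LLM actor and critic, and the toy allocation objective below are all illustrative assumptions, not the paper's method.

```python
# Toy sketch of an actor-critic control loop for multi-agent decision-making.
# In the paper's setting both roles would be played by LLMs; here simple
# numeric functions stand in for them (all names and logic are assumptions).

def actor_propose(num_agents, feedback):
    # Actor: propose one action (here, a resource share) per agent,
    # nudged by the critic's feedback from the previous round.
    base = 0.9 / num_agents  # deliberately imperfect initial guess
    return [base + feedback.get(i, 0.0) for i in range(num_agents)]

def critic_evaluate(actions):
    # Critic: score the joint action centrally and return per-agent feedback.
    # Toy objective: shares should sum to 1 (a valid allocation).
    error = 1.0 - sum(actions)
    score = -abs(error)
    feedback = {i: error / len(actions) for i in range(len(actions))}
    return score, feedback

def coordinate(num_agents, max_rounds=10, tol=1e-6):
    # Iterate propose/evaluate rounds until the critic accepts the plan.
    feedback = {}
    actions = []
    for _ in range(max_rounds):
        actions = actor_propose(num_agents, feedback)
        score, feedback = critic_evaluate(actions)
        if abs(score) < tol:
            break
    return actions
```

One point this sketch illustrates is a plausible source of the token efficiency the authors emphasize: each agent exchanges messages only with a central critic rather than with every other agent, avoiding pairwise agent-to-agent communication.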

Overall, the article introduces a novel framework that enhances the coordination and decision-making capabilities of LLM-based agents in large-scale multi-agent environments, addressing the challenges these systems face.

Original article: https://arxiv.org/abs/2311.13884