Interacting Large Language Model Agents: Bayesian Social Learning Based Interpretable Models
This paper discusses the theory and algorithms for interacting large language model agents (LLMAs) using methods from statistical signal processing and microeconomics. While both fields are mature, their application to decision-making involving interacting LLMAs remains unexplored. Motivated by Bayesian sentiment analysis on online platforms, the authors construct interpretable models and stochastic control algorithms that enable LLMAs to interact and perform Bayesian inference. The full abstract appears in the description field below.
Main Authors: | Adit Jain; Vikram Krishnamurthy |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2025-01-01 |
Series: | IEEE Access |
Subjects: | Bayesian social learning; large language models; Bayesian revealed preferences; structural results; optimal stopping POMDPs; self-attention |
Online Access: | https://ieeexplore.ieee.org/document/10870230/ |
author | Adit Jain; Vikram Krishnamurthy |
collection | DOAJ |
description | This paper discusses the theory and algorithms for interacting large language model agents (LLMAs) using methods from statistical signal processing and microeconomics. While both fields are mature, their application to decision-making involving interacting LLMAs remains unexplored. Motivated by Bayesian sentiment analysis on online platforms, we construct interpretable models and stochastic control algorithms that enable LLMAs to interact and perform Bayesian inference. Because interacting LLMAs learn from both prior decisions and external inputs, they can exhibit bias and herding behavior. Thus, developing interpretable models and stochastic control algorithms is essential to understand and mitigate these behaviors. This paper has three main results. First, we show using Bayesian revealed preferences from microeconomics that an individual LLMA satisfies the necessary and sufficient conditions for rationally inattentive (bounded rationality) Bayesian utility maximization and, given an observation, the LLMA chooses an action that maximizes a regularized utility. Second, we utilize Bayesian social learning to construct interpretable models for LLMAs that interact sequentially with each other and the environment while performing Bayesian inference. Our proposed models capture the herding behavior exhibited by interacting LLMAs. Third, we propose a stochastic control framework to delay herding and improve state estimation accuracy under two settings: 1) centrally controlled LLMAs and 2) autonomous LLMAs with incentives. Throughout the paper, we numerically demonstrate the effectiveness of our methods on real datasets for hate speech classification and product quality assessment, using open-source models like LLaMA and Mistral and closed-source models like ChatGPT. 
The main takeaway of this paper, based on substantial empirical analysis and mathematical formalism, is that LLMAs act as rationally bounded Bayesian agents that exhibit social learning when interacting. Traditionally, such models are used in economics to study interacting human decision-makers. |
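The sequential Bayesian social learning and herding behavior described in the abstract can be illustrated with a standard information-cascade simulation. This is a generic sketch, not the paper's algorithm; the function and parameter names are illustrative, and observers are simplified to treat each public action as if it revealed the acting agent's private signal:

```python
import random

def social_learning(true_state=1, n_agents=30, accuracy=0.8, seed=0):
    """Sequential Bayesian social learning over a binary state.

    Each agent combines the public belief (formed from earlier agents'
    actions) with a private noisy signal and acts on its posterior.
    Once the public belief is extreme enough, a contrary private signal
    can no longer flip the decision, and the agents herd.
    """
    rng = random.Random(seed)
    public = 0.5  # public belief P(state = 1)
    actions = []
    for _ in range(n_agents):
        # Private signal: equals the true state with probability `accuracy`.
        signal = true_state if rng.random() < accuracy else 1 - true_state
        # Bayes update of the public belief with the private signal.
        like1 = accuracy if signal == 1 else 1 - accuracy
        like0 = accuracy if signal == 0 else 1 - accuracy
        posterior = like1 * public / (like1 * public + like0 * (1 - public))
        action = 1 if posterior > 0.5 else 0
        actions.append(action)
        # Observers see only the action; here they naively update the
        # public belief as if the action were the agent's private signal.
        like1 = accuracy if action == 1 else 1 - accuracy
        like0 = accuracy if action == 0 else 1 - accuracy
        public = like1 * public / (like1 * public + like0 * (1 - public))
    return actions

print(social_learning())
```

With a symmetric signal accuracy of 0.8, two concordant early actions push the public belief past the point where a contrary private signal can change the decision; every subsequent agent then herds, sometimes on the wrong state, which is the behavior the paper's stochastic control framework aims to delay.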
format | Article |
id | doaj-art-56a10052bed043369db53cddd7eeb1b2 |
institution | Kabale University |
issn | 2169-3536 |
language | English |
publishDate | 2025-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Access |
spelling | doaj-art-56a10052bed043369db53cddd7eeb1b2; IEEE Access, vol. 13, pp. 25465-25504, 2025-01-01; DOI: 10.1109/ACCESS.2025.3538599; IEEE Xplore document 10870230. Adit Jain (https://orcid.org/0009-0005-4831-0758) and Vikram Krishnamurthy (https://orcid.org/0000-0002-4170-6056), School of Electrical and Computer Engineering, Cornell University, Ithaca, NY, USA. |
title | Interacting Large Language Model Agents Bayesian Social Learning Based Interpretable Models |
topic | Bayesian social learning large language models Bayesian revealed preferences structural results optimal stopping POMDPs self-attention |
url | https://ieeexplore.ieee.org/document/10870230/ |