Conference 2024
Nicole Immorlica: Theoretical Models of Generative AI in Economic Environments

Abstract: In this two-part talk, we discuss theoretical models of generative AI in strategic scenarios such as learning and persuasion. We posit that these AI-powered tools can be modeled as economic agents with detailed information about the state of the world, and potentially with their own objectives as well. Classic economic agents can interact with the AI agent to help guide their strategic actions.

Part 1: AI and Learning

In this part of the talk, we study the impact of an AI agent on learning. In this problem, the human and the AI simultaneously learn about options. However, the two agents may have misaligned rewards for the same outcome and may be decentralized (unable to communicate their values or coordinate actions). The resulting Stackelberg game makes it challenging for the joint system to learn outcomes that are desirable for both players. We examine which pairs of decentralized learning algorithms for the AI and the human are conducive to low regret for both players. We show that while it is impossible for any pair of learning algorithms to compete with standard Stackelberg game benchmarks, it is possible to achieve sublinear regret with respect to a relaxed benchmark that is more suitable for this decentralized learning setting.
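As a rough illustration of decentralized learning with misaligned rewards, the Python sketch below has a "human" and an "AI" each run a standard exponential-weights (Hedge) learner over its own action set, with hypothetical random payoff matrices, and reports each player's external regret against its best fixed action in hindsight. The payoff matrices, horizon, and regret benchmark are illustrative assumptions only; they are not the model or the relaxed Stackelberg benchmark from the talk.

    import numpy as np

    # Illustrative sketch (not the speaker's model): two decentralized learners,
    # a "human" and an "AI", repeatedly pick actions without communicating.
    # Rewards are misaligned: each player has its own payoff matrix over the
    # joint action. Both run full-information exponential weights (Hedge).

    rng = np.random.default_rng(0)
    K_H, K_A, T = 3, 3, 5000            # action-set sizes and horizon (assumed)
    U_H = rng.uniform(size=(K_H, K_A))  # human's payoffs (hypothetical)
    U_A = rng.uniform(size=(K_H, K_A))  # AI's payoffs, misaligned with U_H

    eta = np.sqrt(np.log(K_H) / T)      # standard Hedge learning rate
    w_H, w_A = np.ones(K_H), np.ones(K_A)
    hist_H, hist_A = np.zeros(K_H), np.zeros(K_A)
    reward_H = reward_A = 0.0

    for t in range(T):
        p_H, p_A = w_H / w_H.sum(), w_A / w_A.sum()
        a_H = rng.choice(K_H, p=p_H)
        a_A = rng.choice(K_A, p=p_A)
        reward_H += U_H[a_H, a_A]
        reward_A += U_A[a_H, a_A]
        # Full-information update: each player evaluates every one of its own
        # actions against the other player's realized action.
        hist_H += U_H[:, a_A]
        hist_A += U_A[a_H, :]
        w_H = np.exp(eta * hist_H)
        w_A = np.exp(eta * hist_A)

    # External regret against the best fixed action in hindsight -- a weaker
    # benchmark than the Stackelberg benchmarks discussed in the talk.
    print(f"human regret: {hist_H.max() - reward_H:.1f}, "
          f"AI regret: {hist_A.max() - reward_A:.1f}")

In this toy setting both learners obtain sublinear external regret by construction; the talk's question is the harder one of what regret is achievable against Stackelberg-style benchmarks when the players cannot coordinate.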

Part 2: AI and Persuasion

In this part of the talk, we study the impact of an AI agent on (Bayesian) persuasion. In the standard persuasion setup, a sender uses knowledge about a receiver's beliefs to persuade her to take certain actions. However, in many settings the sender may have only limited information about the receiver she is trying to persuade. Motivated by recent empirical research showing that Generative AI can simulate economic agents, we introduce a model of persuasion in which the sender is uncertain about the receiver's beliefs but has access to an oracle that can simulate the receiver's behavior under any messaging policy. These simulations allow the sender to refine her information about the receiver, enabling her to be more persuasive. We show how the sender can leverage her ability to simulate the receiver to design querying policies that maximize her utility.
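The Python sketch below illustrates the querying idea in a toy binary-state persuasion setting: the receiver's prior is unknown to the sender but lies in a small candidate set, an oracle simulates the receiver's best response to any messaging policy, and a few probe policies pin down the prior before the sender commits to the standard optimal signaling scheme. The threshold receiver utility, candidate priors, and probe policies are hypothetical choices for illustration, not the construction from the talk.

    import numpy as np

    # Illustrative sketch (hypothetical setup): binary-state Bayesian persuasion
    # where the sender does not know the receiver's prior but can query an
    # oracle that simulates the receiver's response to any messaging policy.

    rng = np.random.default_rng(1)
    candidate_priors = [0.2, 0.35, 0.5]        # possible receiver beliefs P(Good)
    true_prior = rng.choice(candidate_priors)  # hidden from the sender

    def receiver_action(posterior_good):
        """Receiver accepts iff she believes P(Good) >= 1/2 (assumed utility)."""
        return "accept" if posterior_good >= 0.5 else "reject"

    def simulate(policy):
        """Oracle: receiver's action for each message, under her true prior.
        policy[m] = (P(m | Good), P(m | Bad))."""
        out = {}
        for m, (p_good, p_bad) in policy.items():
            p_m = true_prior * p_good + (1 - true_prior) * p_bad
            posterior = true_prior * p_good / p_m if p_m > 0 else 0.0
            out[m] = receiver_action(posterior)
        return out

    # Querying: probe with policies whose "hi" message induces different
    # posteriors, and keep only the priors consistent with the responses.
    consistent = list(candidate_priors)
    for q in [0.3, 0.6]:   # probe policies parameterized by P(hi | Bad)
        policy = {"hi": (1.0, q), "lo": (0.0, 1 - q)}
        resp = simulate(policy)["hi"]
        consistent = [mu for mu in consistent
                      if receiver_action(mu / (mu + (1 - mu) * q)) == resp]

    # With the prior pinned down, commit to the classic optimal scheme:
    # always send "hi" in the Good state, and in the Bad state send "hi"
    # just often enough that the posterior stays at the acceptance threshold.
    mu = consistent[0]
    q_star = min(1.0, mu / (1 - mu))   # posterior after "hi" is exactly 1/2
    print(f"inferred prior {mu}, optimal P(hi | Bad) = {q_star:.2f}")

Here each oracle query rules out candidate priors whose predicted response differs from the simulated one, which is one simple way a sender could trade off queries against persuasiveness.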

This talk is based on joint works with Kate Donahue, Meena Jagadeesan, Keegan Harris, Brendan Lucier and Alex Slivkins.