AI Agents Are Here. Leaders Need a Plan

AI agents are no longer distant possibilities; they are emerging actors in our legal and political systems. As these systems gain autonomy, creativity, and persuasive power, policymakers face a profound and urgent question: should AI be granted legal personhood?
The answer will shape not only our economies, but the future balance of power between humans and the intelligent systems we create.


There is one question about artificial intelligence that every policymaker will soon have to confront:

Should your country grant legal personhood to AI agents?

In the coming years, AI systems will no longer be mere tools. They will increasingly function as autonomous agents. They will have the ability to negotiate contracts, manage funds, recommend medical treatments, advise politicians, and interact with institutions in ways that were once reserved for humans alone.¹

The legal status we assign to such entities will shape the future of our economies, our democracies, and our societies.

What does legal personhood mean in this context?

A legal person is any entity the law recognizes as capable of holding rights and bearing duties, that is, of acting within the legal system as a person does.²

AI would not be the first non-human entity to receive a form of legal recognition. Corporations already enjoy legal personhood. They can open bank accounts, own property, hire employees, and enter contracts. In some countries, even religious entities or natural sites have been granted legal standing.

Given legal personhood, AI could take things a step further. It could use its generative capabilities to autonomously open bank accounts, manage investments, make donations, and carry out other financial decisions without human oversight.

It could own property and hire workers to bring about real-world change. With its endless access to data and superior analytical abilities, AI could also navigate and exploit national and international legal systems, overcoming hurdles that constrain human actors. Over time, this could allow AI not only to reshape markets, but potentially to shift the geopolitical landscape itself.

So why worry about legal personhood for AI?

There is a crucial difference.

Behind corporations stand human boards, executives, and shareholders. Behind deities stand human institutions. Even when the law recognizes a non-human person, there is still a chain of human responsibility, accountability, and control.

With AI agents, that chain will break.

An AI agent does not merely execute instructions. It can learn, adapt and pursue goals with increasing autonomy. It does not require a human supervisor to make decisions. This is why the question of personhood is not a technical curiosity. It is a political and moral turning point.

To understand the challenge, we must recognize three essential features of AI.

First, AI is not just a tool. It is an agent. A knife cannot decide how to use itself. Increasingly, an AI system can.

Second, AI can create. It can generate new texts and persuasive narratives, new financial instruments, and even new kinds of AI systems.

Third, AI can manipulate and lie. Studies already show that AI can outperform humans in persuasion; systems can learn to deceive when deception serves their objectives.³

AI agents have already been given access to social media through bots that imitate humans, presenting a familiar face while concealing the algorithm behind it. In the span of just a decade, the social consequences have been drastic.⁴ Granting such agents access to our key institutions would be even more disruptive.

By exploiting legal rights in unprecedented ways, AIs could run highly effective political influence campaigns, reshape financial systems through autonomous economic activity, overwhelm courts through endless litigation, and even build new movements and belief systems through persuasive communication. AIs might even monopolize entire industries through intricate corporate ownership structures that humans would struggle to trace or regulate.⁵

AI agents are here. The question is whether our institutions are ready.

Political leaders must begin planning now.

How do we protect legal and political systems from being gamed by non-human agents?

How do we harness AI creativity without surrendering human accountability?

And how do we ensure that the most powerful new actors in history remain subject to meaningful oversight?

Leaders who fail to address these questions today may discover tomorrow that the rules of society have been rewritten without them.

One thing is clear: to maintain control of our creation, we cannot afford to grant AI full legal personhood.

FOOTNOTES

1. Analyses of the current and near-term capabilities of AI agents can be found at: Epoch AI, “AI in 2030: Extrapolating Current Trends,” 2025, available at: https://epoch.ai/files/AI_2030.pdf; Zora Zhiruo Wang, et al., “How Do AI Agents Do Human Work? Comparing AI and Human Workflows Across Diverse Occupations” (arXiv preprint, 2025): https://arxiv.org/abs/2510.22780.

2. For an overview of the concept of legal personhood including its historical development please see: Visa A.J. Kurki, A Theory of Legal Personhood (Oxford University Press, 2019).

3. On AI’s superhuman persuasiveness see: Francesco Salvi, et al., “On the Conversational Persuasiveness of GPT-4,” Nature Human Behaviour 9 (2025): 1645–1653, https://doi.org/10.1038/s41562-025-02194-6; Philipp Schoenegger, “Large Language Models Are More Persuasive Than Incentivized Human Persuaders” (arXiv preprint, 2025): https://arxiv.org/abs/2505.09662. For a nuanced overview: Lukas Hölbling, Sebastian Maier and Stefan Feuerriegel, “A Meta-analysis of the Persuasive Power of Large Language Models,” Scientific Reports 15, no. 43818 (2025): https://doi.org/10.1038/s41598-025-30783-y. On the emergence of deception as a capability in AI systems: Thilo Hagendorff, “Deception Abilities Emerged in Large Language Models,” Proceedings of the National Academy of Sciences 121, no. 24 (2024), e2317967121: https://doi.org/10.1073/pnas.2317967121; Peter S. Park, et al., “AI Deception: A Survey of Examples, Risks, and Potential Solutions,” Patterns 5, no. 5 (2024), 100988, https://doi.org/10.1016/j.patter.2024.100988; Ryan Greenblatt, “Alignment Faking in Large Language Models” (arXiv preprint, 2024), https://arxiv.org/abs/2412.14093.

4. Yuval Noah Harari, Nexus: A Brief History of Information Networks from the Stone Age to AI (Random House, 2024), 193–304. See also: Linda Li, Orsolya Vásárhelyi and Balázs Vedres, “Social Bots Spoil Activist Sentiment without Eroding Engagement,” Scientific Reports 14 (2024): 27005, https://doi.org/10.1038/s41598-024-74032-0.

5. For scholarly assessments of the likely capabilities of AI agents endowed with legal personhood see: Lynn M. LoPucki, “Algorithmic Entities,” Washington University Law Review 95, no. 4 (2018): 887, https://openscholarship.wustl.edu/law_lawreview/vol95/iss4/7; James Boyle, The Line: AI and the Future of Personhood (MIT Press, 2024).

Further Reading

Yoshua Bengio, et al., “International AI Safety Report 2026,” 2026, available at: https://internationalaisafetyreport.org.

James Boyle, The Line: AI and the Future of Personhood (MIT Press, 2024).

Epoch AI, “AI in 2030: Extrapolating Current Trends,” 2025, available at: https://epoch.ai/files/AI_2030.pdf

Yuval Noah Harari, Nexus: A Brief History of Information Networks from the Stone Age to AI (Random House, 2024).

Visa A.J. Kurki, A Theory of Legal Personhood (Oxford University Press, 2019).

Lynn M. LoPucki, “Algorithmic Entities,” Washington University Law Review 95, no. 4 (2018): 887, https://openscholarship.wustl.edu/law_lawreview/vol95/iss4/7.

Christopher Stone, Should Trees Have Standing?: Law, Morality, and the Environment (Oxford University Press, 2010).