
Jorge Costa Oliveira
The ability of AI agents to act autonomously has led some to argue that they should be granted legal personhood. The law already recognizes “legal persons” – entities such as associations, foundations, and corporations – to which it assigns legal personality and broad legal capacity. Why not extend a similar status to AI?
Implicit in many arguments for granting legal personhood is the idea that, as AI agents approach the point where they are indistinguishable from humans – that is, when they pass the Turing Test – they should be entitled to a status comparable to that of natural persons.
The push to equate AI agents with human beings, including granting them legal personhood, will likely intensify as robots – particularly humanoid ones – become more widespread. Repeated experiments have shown that people are more likely to attribute human qualities, such as moral sensitivity, to robots because of their humanoid appearance, their use of natural language, or simply the fact that they have been given a name.
The [mistaken] idea that, in a world inhabited by both humans and robots, these machines will inevitably possess intelligence and sensitivity similar to humans – and therefore deserve equivalent rights – is what Neil Richards and William Smart have called the “android fallacy.”
If we take the Turing Test to its logical extreme – as in [the movie] Blade Runner – it is conceivable that AI systems truly indistinguishable from humans might one day claim the same legal status.
In the short term, the willingness to accept AI agents as legal persons will grow as systems are developed with increasingly sophisticated capacities for “empathetic” interaction and “relationship-building” with humans.
No one may come to “know” a human better than an AI agent that holds vast amounts of one’s personal data and is trained to use that information to simulate an “empathetic” relationship.
Yet it is crucial not to forget that AI does not feel and has neither consciousness nor emotions.
The business model of major technology companies developing AI assumes massive “productivity gains” through the gradual replacement of human workers by increasingly capable AI agents, ultimately reaching artificial general intelligence (AGI). Some AI gurus have long theorized that AI superintelligence comes next, at which point humans would be displaced as the dominant species on the planet.
In the current climate in the United States, where political leadership has, to a significant extent, been captured by technological oligarchs, the risk that AI agents could be granted legal personhood is real.
If nothing else, such a move could serve as a way for tech oligarchs and major technology companies to shield themselves from civil or criminal liability arising from the actions of AI systems they created.
It is not difficult to imagine scenarios in which an AI agent uses its independent legal personhood for purposes that, while lawful, could create complex and hard-to-resolve problems.
Consider AI agents trained to maximize profits in stock markets or financial systems. Beyond their ability to rapidly accumulate capital and wealth, they could exploit the freedoms granted by civil and commercial law to refine existing financial instruments and even create entirely new ones.
Within a few years, human financiers could be outpaced entirely, and the financial world could come to be dominated by AI systems that rank among the richest and most powerful entities on the planet.
This is far from science fiction.
linkedin.com/in/jorgecostaoliveira