
Jorge Costa Oliveira
There is no doubt that Artificial Intelligence (AI) will revolutionize industries, supercharge scientific research, and make many aspects of life and work more efficient. Yet there is no consensus on whether we should regulate its development and use, on the scope and extent of such regulation, or on how far we should allow tech companies to self-regulate a field of such economic and social relevance, one that also carries enormous risks.
Many proselytizers of AI staunchly oppose regulation of the sector, offering several arguments: (i) limitation of innovation (excessive regulation could stifle innovation and AI development, undermining economic growth potential and improvements in quality of life); (ii) difficulty of regulation (AI is a complex and constantly evolving technology, making it hard for governments to design effective and up-to-date regulations); (iii) risk of violations (regulation may not be able to fully prevent AI abuses and could even create new vulnerabilities); (iv) cost and complexity (implementing and maintaining regulations can be expensive and complex, especially for small businesses and start-ups).
Naturally, the industry’s tech oligarchs push hard for self-regulation. “New technology often brings new challenges, and it’s up to companies to make sure they build and deploy products responsibly,” said Meta CEO Mark Zuckerberg before the U.S. Senate in 2023, defending self-regulation in AI: “We’re able to build safeguards into these systems.”
The same mantra – “self-regulation is important” – was repeated by Sam Altman, CEO of OpenAI, during a visit to New Delhi that same year, as hype over ChatGPT was building globally. However, he also conceded that “the world should not be left entirely in the hands of the companies either, given what we think is the power of this technology.”
On the pro-regulation side, there are strong arguments: (i) protection of human rights (ensuring AI is developed and used in ways that safeguard copyright, data protection, privacy, fair labor practices, non-discrimination, and the environment); (ii) prevention of abuses (preventing violations of the law or uses that endanger public safety – such as generative deepfakes, the creation of chemical and biological weapons, and cyberattacks); (iii) promotion of trust (ensuring that technologies are developed and deployed transparently and responsibly); (iv) fostering responsible innovation (ensuring companies develop technologies that are safe and beneficial to society).
Clearly, the task is to weigh the pros and cons of AI regulation and to develop approaches that promote responsible innovation, protect human rights, and ensure that AI systems are safe and effective.
China (in 2023), the EU (in 2024), and South Korea (in 2025) have already enacted AI legislation. Canada, Brazil, and other countries are finalizing laws of their own, while the OECD and UNESCO have adopted non-binding principles and recommendations.
India is postponing its decision in order to attract AI investment. The current U.S. leadership, which came to power in cahoots with the major tech oligarchs, opposes regulation and seeks to interfere in other countries’ decision-making under the tired banner of “persecution of American companies.”
The battle over regulation – and its purpose, scope, flexibility, balance, and international coordination – is pivotal to the evolution of our society, and much of our near future will depend on its outcome.
linkedin.com/in/jorgecostaoliveira