
Jorge Costa Oliveira
In the coming years, we should witness the emergence of numerous artificial intelligence (AI) agents designed for specific tasks or domains – IT/programming, healthcare, chemistry, finance, management, the military, and so on – which are expected to bring benefits to people's lives and to companies. These specialized AIs – whether grandmasters of chess, experts in medical care, assessors of banking credit risk, or advisors on stock market investments – do not pose a major risk to humanity.
However, the picture changes when we turn to the notion of artificial general intelligence (AGI) – an AI that could perform any intellectual task a human being can (such systems do not yet exist, though leaders of major companies developing AI claim they are imminent). An AGI would understand, learn, and apply knowledge in ways similar to human intelligence: reasoning logically, solving problems, and adapting to new situations autonomously, carrying out complex tasks efficiently and effectively without direct human intervention. Here the risks to humanity become serious: (i) loss of control and the surpassing of human intelligence; (ii) full autonomy and unpredictable behavior; (iii) algorithmic bias, undermining fairness and equality; (iv) security risks, including vulnerability to cyberattacks or use for malicious ends; (v) replacement of human jobs, in many professions and in large numbers; (vi) ethical challenges, such as responsibility for decisions made by AI and the protection of personal data; (vii) manipulation of information and undue influence over human decisions; (viii) the development of autonomous weapons.
The risks become existential once an AGI evolves into a Superintelligence – that is, when one or more AGIs significantly surpass human intelligence in every respect, including creativity, problem-solving, and social abilities. Such a system would be able to reason, learn, create, and adapt, carrying out complex tasks far more efficiently and effectively than humans, while potentially generating solutions and innovations that not only exceed human capability but could also result in decisions and actions misaligned with human values and goals.
Strictly speaking, it is doubtful that even qualified human beings will be able to comprehend the reasoning of a Superintelligence – whether due to the limits of human cognition, the complexity of its information processing and decision-making, or simply because its perspective and overall way of reasoning would be alien to humans.
We live in an era in which many leaders of AI corporations sell us the great benefits to be expected from AI. That may be true for specialized AIs (though the pros and cons are still not fully clear). However, we must be very careful with AGI, a kind of entity (an agent, not a tool – let's not forget it) that humanity has never encountered before. And we must be particularly vigilant about the development of Superintelligence.
Why does a Superintelligent AI represent an existential risk to humanity? Because history shows what happens to less intelligent beings when they engage with far more intelligent ones.
linkedin.com/in/jorgecostaoliveira