
Jorge Costa Oliveira
Artificial general intelligence (AGI) – understood as the point at which a machine can perform any intellectual task a human can – is Silicon Valley's "Holy Grail." While there is no shortage of voices warning of existential risks, most leaders of companies developing artificial intelligence (AI) see it as the solution to humanity's problems.
Sam Altman, CEO of OpenAI, is one of its most vocal enthusiasts, describing AGI as the most powerful tool ever created by mankind, capable of dramatically raising living standards, curing diseases, and helping solve the climate crisis. He has also publicly stated, however, that "the worst case is the end of everything" (extinction), and has therefore argued for the creation of a global governance body to monitor AGI systems, similar to the oversight applied to nuclear energy.
For Demis Hassabis, CEO of Google DeepMind, AGI will be the "ultimate accelerator" of science, discovering new materials and medicines and understanding fundamental physics in ways humans alone cannot. Jensen Huang, CEO of NVIDIA, who defines AGI as the ability to pass a medical, law, or engineering exam with distinction, sees it as the foundation of the "AI factory," in which AI agents improve and recreate themselves autonomously, transforming every productive industry. Elon Musk, CEO of Tesla, X, SpaceX, and xAI, founded xAI to create an "AI that seeks ultimate truth" and frequently describes AGI as the greatest threat to civilization if it is not aligned with human values.
Yann LeCun, Meta's chief AI scientist, argues that we are still very far from AGI because current large language models (LLMs) do not understand physical reality, lack persistent memory, and are unable to plan.
The transition from current models (known as Narrow AI) to AGI represents a leap from a tool that "predicts the next step" to an agent that "understands and reasons." Today's models, such as ChatGPT or Gemini, are statistical: they are extremely good at predicting the next word in a sequence based on patterns drawn from trillions of data points, operating through a kind of "statistical intuition." If the training data does not contain the logic of a new problem, the model may "hallucinate."
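To make the "predict the next word" idea concrete, here is a minimal sketch: a toy bigram model, vastly simpler than any real LLM (which uses a neural network over billions of parameters), with the training text and all names purely illustrative. It counts which words follow which in its training data and samples continuations from those frequencies, with no understanding of meaning – exactly the pattern-matching the paragraph above describes.

```python
# Toy illustration of next-word prediction from statistical patterns.
# Hypothetical example; not how any production model is implemented.
import random
from collections import Counter, defaultdict

training_text = (
    "the cat sat on the mat . the cat ate the fish . "
    "the dog sat on the rug ."
).split()

# Count, for every word, which words follow it and how often (a bigram table).
follows = defaultdict(Counter)
for current, nxt in zip(training_text, training_text[1:]):
    follows[current][nxt] += 1

def predict_next(word: str) -> str:
    """Sample the next word in proportion to how often it followed `word`."""
    candidates = follows[word]
    if not candidates:
        return "."  # pattern never seen: the model has nothing to draw on
    words, counts = zip(*candidates.items())
    return random.choices(words, weights=counts, k=1)[0]

# Generate a short continuation: pure pattern-matching, no reasoning.
word, output = "the", ["the"]
for _ in range(8):
    word = predict_next(word)
    output.append(word)
print(" ".join(output))
```

Ask this toy model about anything outside its tiny training text and it can only emit whatever pattern is statistically nearest – a miniature version of why a model "hallucinates" when the logic of a new problem is absent from its data.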
LeCun argues that before reaching "human-level" intelligence, it is necessary to achieve intelligence "at the level of a cat or a rat," something AI has not yet mastered in terms of autonomy and common sense. AGI is expected to possess deductive and systemic reasoning, going beyond pattern recognition to grasp the underlying rules – physical, mathematical, or logical – and thus solve problems it has never encountered before. LeCun believes such a breakthrough remains distant.
Until a few years ago, estimates suggested that AGI might arrive around 2060. That date has since been brought forward, with predicted timelines shortening as massive funds are raised for its creation.
The American tech executives raising billions of dollars for AGI development promise a radiant future for all humanity. Yet the time has come to question whether it is right to treat the creation of AGI as inevitable. With each passing year, more investors are asking whether the miraculous AGI may, after all, belong to the realm of fantasy. Let's hope that, in the meantime, this chimera doesn't trigger a crisis in the financial markets.
linkedin.com/in/jorgecostaoliveira