
Jorge Costa Oliveira
Although achieving Artificial General Intelligence (AGI) – intelligence comparable to humans across a wide range of domains – has become a priority for some leading American technology companies, there is still no consensus among experts about when AGI might arrive.
Only a few years ago, before the advances in large language models (LLMs), many scientists predicted AGI would emerge around 2060. A 2023 survey of 2,778 AI researchers, reported by AIMultiple, found that many now expect AGI around 2040.
Ilya Sutskever, co-founder and former chief scientist at OpenAI, has suggested AGI could emerge within the next five to ten years, though he acknowledges uncertainty. More recently, Roman Yampolskiy, director of the Cyber Security Lab at the University of Louisville, predicted in September 2025 that by 2030 “we will likely have humanoid robots with sufficient flexibility and dexterity to compete with humans in all domains.” Some technology entrepreneurs are even more optimistic. Elon Musk has said that “an AI smarter than the smartest human” could arrive as early as 2026.
Yet an AGI capable of understanding physical reality, maintaining persistent memory and performing deductive reasoning – going beyond pattern recognition to grasp underlying rules of physics, mathematics or logic and solve unfamiliar problems – is something Yann LeCun, chief AI scientist at Meta Platforms, believes remains decades away.
For LeCun, LLMs are not a direct path to AGI because current AI systems lack a “world model.” Humans and animals learn how the world works through observation and physical interaction, grasping concepts such as causality, gravity and object permanence.
Models like GPT-4 do not understand the physical reality behind those patterns. Without such a world model, AI cannot reliably plan complex actions or foresee the real-world consequences of its decisions.
LeCun highlights the contrast between biological and artificial learning. A four-year-old child has seen roughly 10⁹ bytes of visual information and already grasps basic physical and social concepts; a teenager can learn to drive in about twenty hours. By contrast, an AI requires billions of text tokens merely to reach reasonable fluency and still makes logical mistakes a child would not.
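To make the scale of that comparison concrete, here is a minimal back-of-envelope sketch. The child's roughly 10⁹ bytes and the phrase "billions of tokens" come from the paragraph above; the specific token count and the bytes-per-token figure are assumptions chosen only to make the arithmetic runnable, not figures from the article or from LeCun.

```python
# Back-of-envelope comparison of the data budgets in LeCun's argument
# (illustrative only). From the article: a four-year-old has seen ~10^9
# bytes of visual input, while an LLM needs "billions" of text tokens.
# Assumptions (not from the article): 2e9 tokens as a low-end reading of
# "billions", and ~4 bytes per token as a rough average for English text.

child_visual_bytes = 1e9    # article's figure for a four-year-old
llm_tokens = 2e9            # assumed low-end reading of "billions"
bytes_per_token = 4         # assumed rough encoding size

llm_text_bytes = llm_tokens * bytes_per_token

print(f"Child visual input : {child_visual_bytes:.0e} bytes")
print(f"LLM training text  : {llm_text_bytes:.0e} bytes")
print(f"Ratio (LLM / child): {llm_text_bytes / child_visual_bytes:.0f}x")
```

Even under these deliberately conservative assumptions, the text corpus is several times larger than the child's entire visual diet – the sample-efficiency gap LeCun points to.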
Moreover, intelligence based solely on language has limits because, as LeCun argues, “most human knowledge is not linguistic.” Attempting to reach human-level intelligence purely through text is like trying to explain the color blue to someone who has never seen it.
It is easy to understand the optimism reflected in increasingly shorter AGI timelines – likely driven in part by the need to sustain investor enthusiasm so that massive funding continues to flow into AI development. Yet when I read predictions of radical transformations in the near future, I am reminded of the film “Jonas Who Will Be 25 in the Year 2000.”
In the opening scenes of Alain Tanner’s 1976 film, which I saw in a theater that same year, the camera wanders through the London we knew in 1975 before a caption appears: “London, year 2000.”
Beyond LeCun's skepticism, Tanner reminds us that social inertia is powerful and that change in societies unfolds slowly – far more slowly than many theorists or prophets predict. This is especially true when bold predictions conveniently help justify raising billions for AI companies whose progress toward AGI remains slow and whose financial returns from AI are often disappointing.
linkedin.com/in/jorgecostaoliveira