The pursuit of artificial general intelligence (AGI)—machines capable of understanding and performing any intellectual task that a human can—has been a longstanding goal within the field of artificial intelligence (AI). However, a growing chorus of experts now contends that the current methodologies employed in AI research are unlikely to lead us to this objective. They argue that an overemphasis on scaling existing models and reliance on specific technologies may be steering the field away from achieving truly human-like intelligence.
Surveying Expert Opinions
A comprehensive survey conducted by the Association for the Advancement of Artificial Intelligence (AAAI) in 2025 revealed significant skepticism among AI researchers regarding the current trajectory toward AGI. The report highlighted that 79% of respondents believe public perceptions of AI’s capabilities are misaligned with the reality of AI research and development. Furthermore, 90% indicated that this misconception hampers progress, with 74% asserting that research directions are disproportionately influenced by hype rather than grounded scientific objectives.
Similarly, a study published in Nature reported that more than three-quarters of surveyed AI professionals doubt that merely enlarging current AI systems will lead to AGI. An even greater proportion expressed skepticism that neural networks—the cornerstone of many contemporary AI models—can, by themselves, achieve or surpass human intelligence.
Critiques of Current Approaches
One of the central critiques centers on the limitations of large language models (LLMs). Emily M. Bender, a professor at the University of Washington, has been vocal about the shortcomings of these models. She argues that while LLMs can generate human-like text, they lack genuine understanding and intentionality, essentially functioning as “stochastic parrots” that mimic language patterns without comprehension. This raises concerns about the reliability and ethical implications of deploying such models in real-world applications.
Yann LeCun, Chief AI Scientist at Meta and a pioneer in the field, has also highlighted the inadequacies of current AI systems. He points out that despite advancements, today’s AI lacks the ability to model the physical world and predict outcomes, which are essential components of human intelligence. LeCun emphasizes that achieving AGI will require fundamentally new approaches that go beyond scaling existing architectures.
Alternative Perspectives and New Directions
In response to these challenges, some researchers are advocating for alternative methodologies. François Chollet, a software engineer at Google and creator of the Keras deep learning library, has launched initiatives like the ARC Prize. This competition encourages the development of AI systems capable of solving complex reasoning problems that are currently easy for humans but challenging for machines. Chollet’s approach underscores the need for AI to develop abstract reasoning abilities rather than relying solely on pattern recognition.
Additionally, companies like OpenAI are exploring techniques such as “test-time compute,” which allocates extra computation while a model is answering a query rather than during training. By letting a model reason for longer at inference time, this approach aims to imbue AI systems with more human-like reasoning capabilities, addressing some of the limitations inherent in current models.
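One widely used form of test-time compute is self-consistency sampling: draw several candidate answers from a model and return the most frequent one, so that extra inference-time computation buys reliability without retraining. The sketch below illustrates only the general idea; `stub_model`, its 70% accuracy, and the prompt are hypothetical stand-ins, not a description of any company's actual system.

```python
import random
from collections import Counter

def stub_model(prompt: str) -> str:
    """Hypothetical stand-in for a language model: returns a noisy answer
    to a fixed question. A real system would sample an LLM here."""
    if random.random() < 0.7:          # assumed per-sample accuracy
        return "42"
    return random.choice(["41", "43", "40"])  # occasional wrong answers

def majority_vote(prompt: str, n_samples: int = 25) -> str:
    """Self-consistency: sample the model n_samples times and return the
    most common answer. Spending more compute at inference (larger
    n_samples) tends to raise reliability without changing the model."""
    answers = [stub_model(prompt) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

if __name__ == "__main__":
    random.seed(0)  # fixed seed so the demo is reproducible
    print(majority_vote("What is 6 * 7?"))
```

Even though each individual sample is wrong 30% of the time in this toy setup, the majority over 25 samples is almost always correct, which is the core trade-off test-time compute exploits: more inference cost for higher answer quality.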
Ethical and Philosophical Considerations
Beyond technical challenges, the quest for human-like AI raises profound ethical and philosophical questions. Iason Gabriel, a political theorist working on AI ethics at Google DeepMind, emphasizes the importance of aligning AI actions with human values. He advocates for clear guidelines to ensure that AI agents operate transparently and respect user autonomy, highlighting the necessity of ethical frameworks as AI systems become more integrated into society.
Conclusion
The message from many AI experts is clear: the current path, heavily reliant on scaling existing models and technologies, is unlikely to lead to human-like artificial intelligence. Addressing this challenge requires a paradigm shift toward approaches that prioritize genuine understanding, reasoning, and ethical alignment. As the field progresses, it is imperative to critically assess and adapt our methodologies to ensure that the pursuit of AGI remains both scientifically grounded and ethically responsible.