


Is The Obsession With Attaining AGI And AI Superintelligence Actually Derailing Progress In AI?
Jul 07, 2025 11:13 AM

Let’s talk about it.
This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
Heading Toward AGI And ASI
First, some basics are needed to frame this important discussion.
There is a lot of research aimed at advancing AI further. The overall aim is to either achieve artificial general intelligence (AGI), or perhaps even reach the more ambitious possibility of artificial superintelligence (ASI).
AGI refers to AI that matches human-level intelligence and can seemingly replicate our cognitive abilities. ASI would surpass human intelligence and be superior in many, if not all, ways. The idea is that ASI could outthink humans at every level. For a deeper dive into conventional AI versus AGI and ASI, see my analysis at the link here.
We have not yet achieved AGI.
In fact, whether we will ever reach AGI remains uncertain. Some believe it may take decades, others centuries. Predictions about when AGI might be realized vary wildly and lack solid evidence or logical foundation. ASI remains even more distant from where current AI stands today.
Obsession With AGI As A North Star
Not everyone believes that chasing AGI is beneficial.
In fact, some argue that focusing too much on AGI can be harmful. In a research paper titled “Stop Treating ‘AGI’ As The North-Star Goal Of AI Research” (Borhane Blili-Hamelin, Christopher Graziul, Leif Hancox-Li, Hananel Hazan, El-Mahdi El-Mhamdi, Avijit Ghosh, Katherine Heller, Jacob Metcalf, Fabricio Murai, Eryk Salvaggio, Andrew Smart, Todd Snider, Mariame Tighanimine, Talia Ringer, Margaret Mitchell, Shiri Dori-Hacohen; arXiv, February 7, 2025), the authors make several compelling arguments (excerpts):
- “In this position paper, we argue that focusing on the highly contested topic of ‘artificial general intelligence’ (‘AGI’) undermines our ability to choose effective goals.”
- “We identify six key traps—obstacles to productive goal setting—that are exacerbated by AGI discourse: Illusion of Consensus, Supercharging Bad Science, Presuming Value-Neutrality, Goal Lottery, Generality Debt, and Normalized Exclusion.”
- “To avoid these traps, we argue that the AI research community needs to (1) prioritize specificity in engineering and societal goals, (2) center pluralism about multiple worthwhile approaches to multiple valuable goals, and (3) foster innovation through greater inclusion of disciplines and communities.”
These six traps warrant serious attention.
Unpacking The Key Traps
I’ll briefly summarize the six traps in my own words. I encourage you to read the full research paper for their detailed explanations.
First, the illusion of consensus suggests that AI developers mistakenly believe they are all working toward the same objective—AGI. But there is no universal agreement on what AGI actually means. Some redefine it to suit their own agendas, diluting its original intent (see my thoughts at the link here).
Second, the frantic race to be the first to achieve AGI has led to poor scientific practices. Researchers often try random approaches without rigorous methodology, aiming instead for flashy announcements rather than meaningful progress.
Third, the pursuit of AGI is framed as purely technical, but it's deeply influenced by political, economic, and social motivations. Countries and corporations may use AGI as a tool for global dominance. These underlying interests are masked behind technological rhetoric.
Fourth, under the banner of AGI, companies can justify nearly any action or investment. For instance, claiming that new hardware is essential for reaching AGI can attract funding, even if it doesn’t clearly support AGI development.
Fifth, impressive but narrow AI achievements—like beating humans at chess—are used to suggest that AGI is near. However, true AGI should generalize across many domains, not just excel in one area.
Sixth, the drive toward AGI often sidelines concerns about safety, ethics, and existential risks. These issues are downplayed in favor of promoting the benefits of AI advancement.
Hearing From The Other Side
As with most complex topics, there are two sides. Supporters of AGI argue that despite the mentioned pitfalls—which they acknowledge—there are strong reasons to continue pursuing AGI.
First, having a long-term aspirational goal helps motivate researchers and developers. Even if AGI isn’t precisely defined, the concept of AI matching human intelligence serves as a powerful vision for guiding innovation.
Second, many researchers are genuinely committed to responsible and rigorous AI development. Painting all efforts as reckless or misguided is unfair and dismissive of real progress being made.
Third, without AGI as a unifying goal, AI research could become fragmented. Without direction, the field might focus too narrowly on specific applications like physics or genetics, neglecting broader intelligence challenges. Alternatively, it could splinter into thousands of disconnected efforts.
Practicality Will Prevail
Trying to shift the AI community away from AGI as its guiding star is unlikely to succeed.
The dream of AGI continues to inspire both professionals and the public. Momentum is hard to stop unless progress stalls and people lose faith. If AGI fails to materialize, then yes, it may eventually be abandoned.
But what would replace it?
Given humanity’s tendency to hope and rename ideas, a new label would likely emerge to take AGI’s place. It would be essentially the same goal with a different name. Perhaps the best path forward is to accept AGI as a symbolic target while pushing the AI community to recognize and mitigate the associated risks and drawbacks.
As Jimmy Dean once said: “I can’t change the direction of the wind, but I can adjust my sails to always reach my destination.”