Artificial general intelligence (AGI) is a hypothetical form of AI that could match or exceed human performance across the full range of cognitive tasks, rather than excelling only at narrow, specialized ones.
The problem with AGI is that we have no scientifically accepted definition for it. Different experts may understand different AI capabilities under the AGI moniker.
Some researchers believe they have spotted glimpses of AGI in GPT-4, the latest iteration of the OpenAI technology behind ChatGPT.
AGI’s implications are even further-reaching and more complex than those of ChatGPT’s current capabilities. And the world already has problems digesting the presence of non-general artificial intelligence.
Why Some Researchers Think GPT-4 May Show Glimpses of AGI
When specialists tasked with integrating GPT-4 into Bing search asked the text-based AI to write a program that would depict a unicorn, it produced code that arranged geometric shapes into a unicorn-like figure.
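To get a feel for what "arranging geometric shapes into a unicorn" means, here is a hand-made sketch in the same spirit. It is purely illustrative, not GPT-4's actual output (the published experiment used TikZ drawing code; this version composes a few SVG primitives instead):

```python
# Illustrative only -- not GPT-4's actual code. Compose basic
# geometric primitives (ellipse, circle, polygon, line) into a
# unicorn-like figure and emit it as an SVG string.
def unicorn_svg() -> str:
    shapes = [
        '<ellipse cx="100" cy="100" rx="60" ry="35" fill="pink"/>',  # body
        '<circle cx="170" cy="55" r="22" fill="pink"/>',             # head
        '<polygon points="140,80 170,55 150,50" fill="pink"/>',      # neck
        # four legs, evenly spaced under the body
        *(f'<line x1="{x}" y1="130" x2="{x}" y2="170" '
          'stroke="pink" stroke-width="6"/>' for x in (65, 90, 115, 140)),
        '<polygon points="178,38 185,10 192,38" fill="gold"/>',      # horn
        '<circle cx="176" cy="50" r="3" fill="black"/>',             # eye
    ]
    return ('<svg xmlns="http://www.w3.org/2000/svg" '
            'width="220" height="180">' + "".join(shapes) + "</svg>")

# Save the drawing so it can be opened in any browser.
with open("unicorn.svg", "w") as f:
    f.write(unicorn_svg())
```

The interesting part of the original experiment was not the drawing itself but that a model trained only on text could map the visual concept "unicorn" onto spatial code like this.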
To achieve such a feat, the AI must be able to transfer concepts between modalities, and GPT-4 proved that it could. To qualify as true AGI, however, it would have to meet additional requirements.
- It would have to master abstract thinking.
- It would have to be able to build background knowledge.
- It would have to master common sense.
- It would have to identify cause and effect.
- It would have to be able to plan for the future.
So far, no GPT version has mastered all these elements of general intelligence. They have shown glimpses of transfer learning, however, and that is impressive in and of itself.
Others Disagree
Attributing AGI-like characteristics to GPT-4 may be a commercial ploy to hype the capabilities of the platform, according to AI experts and researchers. The paper that extolled GPT-4's capabilities also downplayed its shortcomings. Microsoft has invested over $10 billion in OpenAI, the developer of the generative pre-trained transformers, so it wouldn't be farfetched to speculate that it may want to protect that investment.
Trying to prove or disprove the AGI claims about GPT-4 is scientifically infeasible. Experts have pointed out that experiments based on interacting with such AI systems cannot be reliably reproduced.
The problem with hyping AI systems is that it encourages users to trust these large language models, even when they are far from infallible. We are already overreliant on often unreliable sources of information. Blindly trusting chatbots like ChatGPT would further erode critical thinking and our ability to make sense of the world around us.
What about Self-Awareness?
Despite its advanced capabilities, AGI is not self-aware artificial superintelligence. It would be merely a stepping stone in that direction. However, many researchers believe that AGI may not be possible at all.
If AGI turns out to be impossible to achieve, self-aware AI would be impossible as well. That would lay to rest the fears prominent scientists have voiced concerning the dangers a superintelligent and self-aware AI would pose to humanity.
What Can AGI Accomplish?
If an organization manages to develop AGI, what would such a system be capable of accomplishing? How would we recognize it for what it is?
Artificial general intelligence would have no problem accomplishing the following.
- Creativity. AGI could understand art or code that humans create, improve or modify it, and treat it as a source of inspiration.
- Mastering fine motor skills. Scratching the tip of your nose or fishing coins out of your pocket may seem like mundane deeds, but they require fine perception and motor planning that machines find difficult to replicate.
- Understanding symbol and belief systems.
- Using different kinds of knowledge.
- Understanding natural language. Being able to carry a conversation through typed sentences is impressive enough. Understanding context-heavy spoken language often rife with humor is on an entirely different level, however.
- Subjective sensory perception.
Is Human-like AI Possible?
ChatGPT, the most human-like manifestation of AI we can currently access, doesn't work like the human mind at all. Its human-like answers and fluent generated language may create that impression, but the reality is different.
These AI systems do not learn like humans. They have no use for meaningful interaction or educational dialog. Their creators train them by shoveling the knowledge available on the internet into them, using an enormous amount of data to reach the point where they can mimic human interaction.
The currently available systems have glaring flaws in the following areas.
- Understanding social relationships and interaction
- Understanding how the physical world works
- Figuring out how people think and what drives their actions
Unlike the AI we currently have, humans have motivation and foresight. They invent goals for themselves that can improve their lives in the future. Experts believe that with the current technologies used in its development, AI cannot master these capabilities.
By going beyond machine learning to develop AI, developers may create AGI in the future. The implications of such a technology are so far-reaching, however, that we probably cannot currently comprehend them. At that point, the balance between its benefits and "side effects" may not justify its existence.
We See AGI Because It Is in Our Nature
The way humans use language assumes the existence of an equally intelligent entity as an interlocutor. We can only converse meaningfully with others endowed with human intelligence.
When we receive deceptively human-like answers from a machine, we tend to let our imagination endow it with humanity. That would explain why many see glimpses of AGI in GPT-4. By any scientifically substantiated, objective measure, however, those glimpses do not exist for now.