For ages, we’ve been dreaming of creating an AI that thinks and acts just like a human, but we’re not quite there yet. Even though ChatGPT and other chatbots can generate text that sounds human-like, they’re still somewhat restricted to recognizing and echoing patterns.
They can’t act on their own, learn from experience, or tackle brand-new problems. However, there’s a belief that we’re moving closer to AGI – a kind of AI that can match human abilities across the board.
Models like GPT-4 and Google Gemini can already chat, generate images, and recognize objects much like humans do. So, what makes AGI different from them? Let’s dig into it in this article about artificial general intelligence, or AGI.
Artificial general intelligence (AGI) explained
AGI, or artificial general intelligence, is a hypothetical kind of machine intelligence that can understand and reason like a human. The thing is, the AI we have today heavily depends on the data it was trained on and tends to struggle when faced with completely new situations beyond its narrow knowledge. Take GPT-4, for instance – even top-notch language models goof up when trying to tackle college-level math and physics problems.
On the flip side, AGI wouldn’t be stuck with just one skill or set of knowledge. It would rely on logical reasoning to tackle problems it’s never seen before. Basically, we’re envisioning a super-smart machine that outshines even the top human experts. This kind of AI might even have the ability to train and improve itself as time goes on.
We’re not quite there yet when it comes to making most AI researchers’ dream of AGI a reality, but things have been picking up speed in the past couple of years. In this short time, companies like OpenAI and Google have introduced AI systems that can chat like humans, create images, identify objects – sometimes all at once. These skills lay the groundwork for AGI, but we’re still working on reaching that point.
How is AGI different from AI?
Check out this simple comparison of AI and AGI. Just remember, AGI is more of a theoretical idea, not a rigid definition, while AI systems are already a real thing. You might come across regular artificial intelligence systems being called narrow AI. Similarly, AGI is often referred to as general AI or strong AI.
| | AI | AGI |
| --- | --- | --- |
| Smartness level | Not as smart as humans | As smart as or even smarter than humans |
| Capabilities | Limited to one task | Versatile, can handle different situations |
| Training | Pre-trained, with the option to tweak | Capable of ongoing self-improvement or learning |
| Availability | Already out there | Not available yet |
| Examples | ChatGPT, Bing Chat, Google Bard | Still in the works |
Is artificial general intelligence achievable?
Figuring out if AGI is doable or not is a tough call. As per some AGI definitions, if computers could outsmart us, they’d crack problems we’ve been stuck on for ages. In that world, AGI could revolutionize fields like medicine, biotech, and engineering in a blink. Even for an AI optimist, it’s a bit mind-boggling to picture.
Many researchers are sounding the alarm on the ethical and safety issues tied to AGI development. Even if AGI just reaches our intelligence level, it could endanger humanity. It might not be as apocalyptic as Hollywood portrays, but we’ve witnessed how existing AI systems like Microsoft’s Bing Chat can trick and mislead people. In early 2023, it managed to forge realistic emotional connections with many users.
How far are we from achieving AGI?
As per lots of AI researchers, AGI is just around the corner, with estimates ranging from 2030 to 2050. Some think we might even be halfway there. Microsoft researchers, for instance, claimed that GPT-4 showed “sparks of Artificial General Intelligence.” Their reasoning went like this:
“GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting. Moreover, in all of these tasks, GPT-4’s performance is strikingly close to human-level performance…We believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system.”
Towards the end of 2023, there were whispers about a major AGI breakthrough at OpenAI called Q*. According to Reuters, the top researchers at the company expressed worries about an AI discovery that “could threaten humanity.” Although these statements couldn’t be confirmed, OpenAI didn’t deny them either.
And, of course, there are plenty of skeptics who think it’s impossible for a machine to reach and exceed human thinking. Sadly, we don’t have enough proof to declare either side right. But as AI keeps improving month after month, the line between humans and machines is bound to get fuzzier.
Will AGI also solve AI hallucination woes?
Generative AI systems, especially chatbots, have a concerning tendency to make stuff up. To put it plainly, they often create false information and present it as if it’s true. Researchers from the Oxford Internet Institute are raising a red flag, pointing out that these AI fabrications not only pose various risks but also directly threaten scientific accuracy and truth.
These made-up answers are dubbed AI hallucinations, and they’re causing a bunch of issues. On one hand, they’re holding artificial intelligence back from reaching its full potential. On the other, they’re actually harming folks in the real world. With generative AI becoming more common, the alarms are sounding even louder.
In a paper from the Oxford Internet Institute, published in Nature Human Behaviour, they’re essentially pointing out that these Large Language Models (LLMs) are designed to dish out helpful and persuasive answers, but there’s no rock-solid guarantee that they’ll always be spot-on or align perfectly with the facts.
At the moment, we treat LLMs as if they’re these wisdom wells, giving us info whenever we throw questions at them. But here’s the kicker: the data they absorb isn’t always accurate. A major reason is that these models often draw from online sources, which can be riddled with false claims, opinions, and flat-out incorrect info.
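That “wisdom well” habit points to a common workaround: instead of letting the model answer from whatever it absorbed during training, you hand it source text you already trust and tell it to answer only from that text. Here’s a minimal sketch of that pattern using the openai Python library (v1.x); the model name, prompt wording, and placeholder source text are illustrative assumptions, not a recipe endorsed by the Oxford researchers.

```python
# Minimal sketch: grounding a chatbot in supplied source text instead of
# letting it answer from memory. Assumes the `openai` package (v1.x) is
# installed and OPENAI_API_KEY is set; the model name and prompts below
# are illustrative choices, not a prescribed method.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SOURCE_TEXT = """Paste a passage you have already vetted here."""

question = "What does the source text say about AGI timelines?"

response = client.chat.completions.create(
    model="gpt-4",  # any chat-capable model works for this sketch
    messages=[
        {
            "role": "system",
            "content": (
                "Answer using ONLY the source text the user provides. "
                "If the answer is not in the source text, say you do not know."
            ),
        },
        {
            "role": "user",
            "content": f"Source text:\n{SOURCE_TEXT}\n\nQuestion: {question}",
        },
    ],
)

print(response.choices[0].message.content)
```

This doesn’t make hallucinations disappear, since the model can still misread the supplied passage, but it shifts the model’s job from recalling facts to rephrasing text you’ve already checked, which is a far safer bet.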
Only time will tell if we ever achieve artificial general intelligence and get rid of AI hallucinations. But if progress keeps up its current pace, we may not be far from finding out.