Humanity has not always been at the top of the food chain, but we have always occupied top spot on Earth’s innovation chain; as the cleverest things on two or four legs, Homo sapiens has successfully carved out a dominant position for itself on the planet based not on strength, but on smarts.
Alarmingly, that dominant position now seems under threat thanks to our latest and potentially most impressive innovation, artificial intelligence. Many are concerned that we're rapidly approaching the next stage of AI development, known as AGI (artificial general intelligence), and are calling for an immediate pause to all development of advanced AI systems, to allow time for regulation and safeguarding. So, what exactly is AGI, and how concerned should we be?
MAJOR INTELLIGENCE, SINGULAR THREATS
As ever with the field of AI, there is some disagreement on the precise definition of AGI, but “an intelligent agent that can understand or learn any intellectual task that human beings can” broadly covers the concept’s meaning. It’s a phrase that’s been kicked around ever since the first rush of AI research in the 1950s and early 60s; pioneers in the field such as Marvin Minsky believed AGI would be developed “within a generation”.
They were also aware of the dangers AGI could pose, and during this period the brilliant mathematician John von Neumann discussed a related concept: the “singularity”, a point at which accelerating technological development reaches an inflection “beyond which human affairs, as we know them, could not continue”, as his colleague Stanisław Ulam recalled in a 1958 tribute.
Over time, this has come to be closely associated with AIs, and in particular a scenario where AIs are able to improve their own capabilities; many believe that once this point is reached, it will become impossible to halt the spread of AI as they augment and replicate themselves. AGI is seen by many as the tipping point for the singularity; once we pass that point, there may be no turning back, and the consequences for the human race could be catastrophic.
POWER SURGES
Thanks to the loose and shifting definition of AGI, some argue that we’re already at that point, or teetering on the brink of it. Some of the tests for AGI that were once considered benchmarks have now been passed with ease by modern AI systems. The Turing test - in which an interrogator converses with two hidden subjects and must determine which is the AI and which the human - was arguably passed some time ago, and more modern hurdles have also been comfortably cleared by the latest generation of AIs.
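The blind setup of the Turing test can be sketched as a simple harness. The sketch below is purely illustrative (the function names, canned replies, and random-guessing interrogator are all invented for this example, not part of any real evaluation): the interrogator sees answers labelled only “A” and “B” and must name the machine.

```python
import random

def turing_test(interrogator, human_reply, machine_reply, questions):
    """Run one blind trial: the interrogator sees answers from 'A' and 'B'
    without knowing which label hides the human and which the machine."""
    # Randomly assign the hidden labels so the interrogator cannot cheat.
    if random.random() < 0.5:
        subjects, machine_label = {"A": human_reply, "B": machine_reply}, "B"
    else:
        subjects, machine_label = {"A": machine_reply, "B": human_reply}, "A"

    # Put every question to both subjects and record the labelled answers.
    transcript = []
    for q in questions:
        for label in ("A", "B"):
            transcript.append((label, q, subjects[label](q)))

    guess = interrogator(transcript)   # interrogator names the machine
    return guess == machine_label      # True if the machine was caught

# Toy run: an interrogator guessing at random catches the machine only
# about half the time - exactly the chance baseline.
caught = turing_test(
    interrogator=lambda transcript: random.choice(["A", "B"]),
    human_reply=lambda q: "I'd have to think about that.",
    machine_reply=lambda q: "As a language model, I find that interesting.",
    questions=["What did you dream about last night?"],
)
```

The point of the structure is that passing the test is defined relative to the interrogator: a machine “passes” only when skilled questioners can do no better than that chance baseline.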
In 2012, Ben Goertzel proposed the Robot College Student Test, stating that “when a robot can enroll in a human university and take classes in the same way as humans, and get its degree, then I’ll consider we’ve created a human-level artificial general intelligence.” Goertzel predicted this would happen within 10 to 15 years, and just over a decade later, GPT-4 has the ability to pass all manner of high-level examinations, including the Uniform Bar Exam.
There are still many things that GPT-4 cannot do, and as such it is still far from meeting the strict definition of AGI, which requires an AI to be able to master any human-capable task, rather than just some.
Crucially, one capability humans possess but GPT-4 lacks is effective power-seeking - the ability to understand, desire, and pursue influence over others. OpenAI tested GPT-4's ability to exert influence on the world by assigning tasks such as self-replication, covering its tracks, and hiring accomplices online, and the AI performed poorly. As things currently stand, even if GPT-4 had the desire to seek power, it wouldn’t get very far.
WHY SO GLOOMY?
We’re clearly some way from an AGI capable of overthrowing its fleshy masters, and some prominent figures in AI research believe that we will never arrive at that point. Yann LeCun - one of the three “Godfathers” of AI, now chief scientist at Meta, and a pointed critic of the open letter calling for a pause on AI development - has dismissed AGI as a fantasy, saying “There is no such thing as Artificial General Intelligence because there is no such thing as General Intelligence. Human intelligence is very specialized.”
Many also argue that the way current AI models like GPT-4 are constructed - as opaque black boxes - limits their potential for long-term growth.
There is also the more optimistic view that if AGI is achieved, it’s more likely to be a positive than a negative for humanity. The potential of such computing power to revolutionize everything from medical research to the battle against climate change cannot be overstated, and there is a convincing argument to say that AGI is more likely to save humanity than harm it. It can be viewed as analogous to another great tipping point in human history, where humanity began farming instead of hunting, and harnessed the enormous power of animals to help us work the fields.
THE HUMAN VARIABLE
Ultimately, this can only be speculation, but very few researchers believe that AGI will inevitably lead to catastrophe, and much of the debate centers around the timescale over which these anticipated events may play out.
The remarkable pace of development in AI in recent times has caused alarmed scientists to revise their predictions, with one of the most well-known, Joseph Carlsmith, upping his assessment of the likelihood of a “full disempowerment of humanity” happening by 2070 from 5 percent to 10 percent. That is a worrying jump, but a 10 percent chance still leaves catastrophe an unlikely outcome.
Whether that outcome comes to pass depends on the actions that humans take to safeguard ourselves, and it seems sensible to consider a pause in AI development to establish a solid and safe basis on which to proceed.