Threat Assessment: Frontier AI, the Bletchley Declaration, and You

SPYSCAPE

It’s been a busy week for AI regulation. In the States, President Biden has signed an Executive Order on the “Safe, Secure, and Trustworthy Development and Use” of AI, while in the UK, 29 countries signed up to the Bletchley Declaration, a joint statement of intent for an international effort to combat “safety risks posed by frontier AI.” 

So, what exactly is “frontier AI”? As is often the case with AI, it depends on who you ask. The term first appeared in Chinese newspaper reports in 2018, where it referred specifically to the race between leading US AI developers and their Chinese counterparts - more of a geopolitical term than a technological one. More recently, anglophone AI safety researchers have begun using the phrase - possibly because it’s harder to pin down than similar expressions such as AGI - and it made its debut on the popular academic preprint site arXiv in a paper from July 2023. That paper defines the phrase as referring to “highly capable foundation models that could possess dangerous capabilities sufficient to pose severe risks to public safety.” The Bletchley Declaration takes the definition to even vaguer places, citing AIs capable of a “wide variety of tasks… which match or exceed the capabilities present in today’s most advanced models,” but international agreements with almost 30 signatories tend to be short on specifics, for obvious reasons.

Non-fiction or sci-fi?

So, we’ve heard from the people warning us about frontier AI, and it’s fair to say those warnings are short on specifics. That isn’t hugely surprising in the case of the Bletchley Declaration - international agreements tend to be vague - but even the academic discourse around frontier AI suffers from the same lack of specificity. The only consistent factor in this discussion is the idea that the problems with AI lie in our future, not the present.

This assertion annoys a great many people on all sides of the discussion, and has fuelled accusations of distraction tactics levied at the major Western AI companies. There are echoes of the last big discussion around AI safety, which followed GPT-4’s launch in March, when an open letter from the Future of Life Institute called for a six-month pause in the development of “systems more powerful than GPT-4.” Many claimed that the letter was not just an attempt to portray existing AI models as harmless, but would also allow the major AI companies to close the door behind them. One of the loudest voices campaigning against this view has been Yann LeCun, Meta’s Chief AI Scientist and, more notably, one of the three “Godfathers of AI.”

Threat Assessment with The Godfathers

LeCun makes no bones about his belief that calls for AI regulation are mainly driven by tech firms attempting to gatekeep their hugely profitable new business models. He is also engaged in active discussion with his fellow Godfathers on social media, and a recent exchange with Geoffrey Hinton - the most pro-regulation of the three - has proven highly revealing. Hinton asks LeCun for the probability that “if AI is not strongly regulated it will lead to human extinction in the next 30 years? … My current estimate is 0.1. I suspect Yann's is <0.01,” to which LeCun replies that his estimate is “considerably less than most other potential causes of human extinction,” adding: “Because we have agency in this. It's not like some sort of natural phenomenon that we can't stop. Conversely, AI could actually save humanity from extinction. What is your estimate for that probability?” Sadly, Hinton has not yet replied to this question.

It’s a fascinating exchange that reveals the complexity of the debate, not least because both LeCun and Hinton share a negative assessment of Silicon Valley’s influence; as often happens in these discussions, people with broadly similar positions can end up on opposite sides, while parties with diametrically opposed motives can find themselves in agreement. Then there’s the irony of two of the principal innovators of machine learning being so far apart in their assessments of probability - a gap that is largely due to an absence of reliable data on the subject. LeCun would later post that one aspect of the Bletchley summit he warmly welcomed was the foundation of the UK’s new AI Safety Institute, which he believes “is poised to conduct studies that will hopefully bring hard data to a field that is currently rife with wild speculations and methodologically dubious studies.” Until we get that data, it may be difficult to ascertain exactly what the threat from frontier AI is, and wiser to focus on the issues AI is creating in the present.
