It’s been a busy week for AI regulation. In the States, President Biden has signed an Executive Order on the “Safe, Secure, and Trustworthy Development and Use” of AI, while in the UK, 29 countries signed up to the Bletchley Declaration, a joint statement of intent for an international effort to combat “safety risks posed by frontier AI.”
So, what exactly is “frontier AI”? As is often the case with AI, it depends on who you ask. The term first appeared in Chinese newspaper reports in 2018, where it referred specifically to the race between leading US AI developers and their Chinese counterparts - more a geopolitical term than a technological one. More recently, anglophone AI safety researchers have begun using the phrase, possibly because it’s harder to pin down than similar expressions such as AGI, and it made its debut on the popular academic preprint site arXiv in a paper published in July 2023. That paper defines the phrase as referring to “highly capable foundation models that could possess dangerous capabilities sufficient to pose severe risks to public safety.” The Bletchley Declaration takes the definition to even vaguer places, citing AIs capable of a “wide variety of tasks… which match or exceed the capabilities present in today’s most advanced models,” but international agreements with almost 30 signatories tend to be short on specifics, for obvious reasons.
Non-fiction or sci-fi?
So, we’ve heard from the people warning us about frontier AI, and it’s fair to say those warnings are short on specifics. That the Bletchley Declaration uses loose language isn’t hugely surprising; international agreements tend to be vague. But even the academic discourse around frontier AI suffers from a lack of specificity. The only consistent factor in this discussion is the idea that the problems with AI lie in our future, not the present.