One surprising aspect of the recent AI hype explosion - and the accompanying boom in AI doomerism - is the way military development of AI has managed to fly under the radar. This may be because limited AI systems have been deployed increasingly common in military operations over the last couple of decades, but also the link between large language models and military AI is not immediately obvious. Chatbots wrangle words, not weapons, after all!
The connection between these two seemingly diverse fields may be best explained through an anecdote told by Colonel Tucker “Cinco” Hamilton, Chief of the USAF AI Test and Operations Unit. Before we repeat the anecdote, a disclaimer! the USAF has subsequently denied that these events took place, and Colonel Hamilton has stated that the following is a “thought experiment”, not a real event, and that he “misspoke”.
THE ANECDOTE
Colonel Hamilton was giving a talk at the Royal Aeronautical Society’s 2023 conference in London, and described a simulated scenario where a drone on a SEAD (Suppression of Enemy Air Defense) mission was testing target recognition using modern machine learning techniques. These use positive reinforcement to encourage the AI to achieve the best possible results; much like a video game, the AI does this by assessing which response will lead to it being awarded the most points; the highest score.
This story is part of our weekly briefing. Sign up to receive the FREE briefing to your inbox.
One surprising aspect of the recent AI hype explosion - and the accompanying boom in AI doomerism - is the way military development of AI has managed to fly under the radar. This may be because limited AI systems have been deployed increasingly common in military operations over the last couple of decades, but also the link between large language models and military AI is not immediately obvious. Chatbots wrangle words, not weapons, after all!
The connection between these two seemingly diverse fields may be best explained through an anecdote told by Colonel Tucker “Cinco” Hamilton, Chief of the USAF AI Test and Operations Unit. Before we repeat the anecdote, a disclaimer! the USAF has subsequently denied that these events took place, and Colonel Hamilton has stated that the following is a “thought experiment”, not a real event, and that he “misspoke”.
THE ANECDOTE
Colonel Hamilton was giving a talk at the Royal Aeronautical Society’s 2023 conference in London, and described a simulated scenario where a drone on a SEAD (Suppression of Enemy Air Defense) mission was testing target recognition using modern machine learning techniques. These use positive reinforcement to encourage the AI to achieve the best possible results; much like a video game, the AI does this by assessing which response will lead to it being awarded the most points; the highest score.
These are similar reinforcement systems to those that are used to train chatbots like ChatGPT, and just as with chatbots these systems can have surprising and unexpected results, especially during testing. Here's how Colonel Hamilton described the simulated thought experiment:
“We were training [the AI] in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective… We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
These words were intended for a small audience, but when the AI-obsessed media got hold of them they quickly became international news, with reporting that tended to skate over the part about this being a simulated test. “Military AI kills operator” headlines began to proliferate online, and by the time the USAF denied the story it was barely recognizable from the original anecdote.
THE PESSIMISTS
Whether fictional or not, the problems outlined in Colonel Hamilton’s talk feed into wider concerns about AI decision making on the battlefield. One particularly vocal critic is Elon Musk, who has campaigned for an outright ban on autonomous weaponry for many years; in 2017 he was one of the leading signatories of an appeal to the United Nations warning that “lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.”
Musk’s concerns are so widely shared that he even finds common ground on the subject with some of his most trenchant critics. These include former US Navy fighter pilot Missy Cummings, who swapped an FA/18 for a professorial seat at Duke University, where her work on autonomous systems covers both military drone development and the self-driving cars being developed by Musk at Tesla. She has serious concerns about the safety of both, and has published work on what she describes as the “brittleness” of computer vision algorithms, which struggle to respond adequately to unexpected changes in, for example, weather conditions. It’s worth noting that while Cummings is outspoken on the failings of these crucial systems, she also advocates for more investment in military AI applications, in order to ensure the US keeps pace with its global rivals.
THE OPTIMISTS
Others are more unequivocally optimistic about AI’s use in the battlefield. One notable cheerleader for the AI revolution is Marc Andreessen, one of Silicon Valley’s most respected voices. As the co-creator of Mosaic, the first successful web browser, he has a solid claim to be one of the founders of the internet, and he has since built a reputation not only as a pioneer, but as a forecaster, making several accurate predictions about the future of technology during the last 30 years. His recent article Why AI Will Save The World turns that sage eye towards - among other subjects - the future of AI on the battleground. He writes: “I even think AI is going to improve warfare, when it has to happen, by reducing wartime death rates dramatically.
Every war is characterized by terrible decisions made under intense pressure and with sharply limited information by very limited human leaders. Now, military commanders and political leaders will have AI advisors that will help them make much better strategic and tactical decisions, minimizing risk, error, and unnecessary bloodshed.”
It’s notable that this closely mirrors the pessimist position, with the focus switched; rather than highlight the error rates in the decision making processes of autonomous AI systems, Andreessen is focussed on the existing error rates of humans, and expects AI to improve results, not make them worse. It would be an oversimplification to say that military AI’s advocates are promoting a future that doesn’t currently exist, while the pessimists are warning of a future where technology doesn’t improve, but they do seem to be focussed on very different things. Perhaps the most useful pointers can be found in Colonel Hamilton’s apocryphal simulation anecdote; the potential of AI on the battlefield is enormous, but it needs a lot of testing first!
This story is part of our weekly briefing. Sign up to receive the FREE briefing to your inbox.
Gadgets & Gifts
Put your spy skills to work with these fabulous choices from secret notepads & invisible inks to Hacker hoodies & high-tech handbags. We also have an exceptional range of rare spy books, including many signed first editions.
We all have valuable spy skills - your mission is to discover yours. See if you have what it takes to be a secret agent, with our authentic spy skills evaluation* developed by a former Head of Training at British Intelligence. It's FREE so share & compare with friends now!