Do We Need A Break? Pros And Cons Of A Pause In AI Development

Last month, an organization called the Future of Life Institute posted an open letter, co-signed by several star names in the world of technology, calling for a six-month pause in the development of advanced AI systems. The letter lays out the case for viewing AI as a potential existential threat to the human race, and demands a moratorium in order to better prepare for - and guard against - that outcome. It has sparked a furious debate in the AI community, with arguments ranging from “AI will save humanity, not destroy it” to “we should be ready to launch nuclear strikes on AI data centers.” Given the speculative nature of this debate, it’s impossible to say who is correct, but this guide to the pros and cons of pausing AI development should help you make up your own mind.

THE ELEPHANT IN HUMANITY’S ROOM 

Let’s start with the main event: the prospect of the complete overthrow of humanity by sentient AI. This could conceivably happen, but it is far from a guaranteed outcome, and much of the debate that follows depends on your tolerance for risk. Nobody can agree on the level of risk humanity faces in the long term, and to complicate matters further, many who believe that AI poses a long-term risk also believe that the Future of Life Institute is trying to distract attention from the harms AI is already causing in 2023.

The Institute is regarded with mistrust by many due to its association with the “longtermism” movement, popular with Silicon Valley CEOs who adhere to the concept of “effective altruism”. Critics of the movement note that it places responsibility for humanity’s future in the hands of a small group of wealthy philanthropists, and that it can be used to justify actions that harm people in the short term. Some of those critics - including researchers cited in the Institute’s own letter - have alleged that its real goal is to deflect attention from the short-term harms of AI, such as bias against minority groups.

It is of course possible for AI to simultaneously be a long-term threat and a short-term harm; both factions may well be correct! So far, we’ve only heard from the pessimists. What do optimists have to say?

AI UTOPIA, OR AI WINTER?

Few deny outright the possibility of an extinction-level event being triggered by AI, but many feel the risk is worth taking because of a more pressing threat facing humanity: climate change. The hope is that rapid AI development will provide the tools we need to avert climate disaster, and there are signs that this process is already underway: AI has accelerated research that helps reduce emissions, optimize energy usage, and develop renewable energy sources. Even setting aside other scientific benefits, particularly in the field of medical research, the potential benefits of AI may well offset the potential risks.

[Image: Midjourney v5 generation from the prompt “An AI Saves Humanity”]

Another, less optimistic argument is that the current pace of AI development is not sustainable, a viewpoint backed by precedent. The history of practical AI began in the 1950s with pioneers like Alan Turing and Marvin Minsky, whose work led to a gold rush of investment in the burgeoning field and breathless predictions that human-level AI would arrive before 1980. The reality was very different: unexpected problems hampered progress, and a series of failed projects led to what is now called the first “AI winter”, when investment and interest in AI dried up between 1974 and 1980. A second cold snap put a chill on AI from 1987 to 1993, and there have been several smaller lulls in development since. The history of the field is characterized by dramatic spurts of progress interspersed with periods of dormancy, and for this reason many feel the hype generated by the spurt of the last 12 months is unjustified.

One other interesting perspective comes from Yann LeCun, one of the three “Godfathers of AI” whose pioneering work on deep learning helped kickstart AI’s current growth spurt. He is highly skeptical of the hype around the current generation of large language models, which he believes are fundamentally limited and will never achieve any form of meaningful sentience. While he has stated that “better systems will come”, he has also described the current fears around existential threats as “insanely overstated”, and has said that “thinking that somehow we’re smart enough to build systems to be super-intelligent, and not smart enough to design good objectives so they behave properly… is a strong assumption that is very low probability”.

OPENAI IS A CLOSED BOOK

Finally, we come to the practical considerations: how would a pause be implemented, and who would regulate it? Many leading voices in the AI community oppose the pause because they reject state interference. One of the most prominent is Andrew Ng, founder of DeepLearning.AI, who has said that “having governments pause emerging technologies they don’t understand is anti-competitive, sets a terrible precedent, and is awful innovation policy.”

This is an interesting objection, because historically state intervention has tended to boost emerging technology, most notably with the development of the early internet. When packet switching, the fundamental technology of computer networking, was being developed in the early 1960s, US telecom companies were approached to invest in and develop the technology, but they failed to see its potential and chose to stick with their existing, successful business model. Consequently, internet infrastructure and technology were almost entirely developed by US and UK universities and state bodies. One consequence of this is that much of the development of the internet was open-source; anyone with the necessary technology and knowledge could see exactly what was being developed.
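For readers unfamiliar with the term, packet switching simply means breaking a message into small, independently routed chunks that carry sequence numbers and are reassembled at the destination. The toy Python sketch below illustrates only that core idea; it is not how any real network protocol is implemented (real networks use stacks like TCP/IP), and names such as packetize and reassemble are invented here purely for illustration.

```python
# Toy illustration of packet switching: a message is split into
# fixed-size, numbered packets; the packets may arrive in any order,
# and the receiver reassembles them by sequence number.
import random

PACKET_SIZE = 8  # bytes of payload per packet (arbitrary for this demo)

def packetize(message: bytes) -> list[tuple[int, bytes]]:
    """Split a message into (sequence_number, payload) packets."""
    return [
        (seq, message[i:i + PACKET_SIZE])
        for seq, i in enumerate(range(0, len(message), PACKET_SIZE))
    ]

def reassemble(packets: list[tuple[int, bytes]]) -> bytes:
    """Rebuild the original message, whatever order the packets arrived in."""
    return b"".join(payload for _, payload in sorted(packets))

message = b"Packets can take different routes across the network."
packets = packetize(message)
random.shuffle(packets)  # simulate out-of-order delivery over the network
assert reassemble(packets) == message
```

The key design point, which the early telecom companies missed, is that no single fixed circuit is needed: because each packet is self-describing, the network can route them independently and still deliver a coherent message.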

Silicon Valley is not struggling to see the financial benefits of modern AI systems, and ChatGPT developer OpenAI has become steadily less open as the stakes have risen. With the recent launch of GPT-4, it has tightly controlled information about the model’s inner workings, particularly the details of how, and on what data, it was trained. OpenAI says this is partly an attempt to prevent an arms race with other AI developers, which is a reasonable argument, but many other AI developers continue to make their code open-source.

Another argument against a pause relates to a wider form of AI arms race: some claim that delaying development will cause Western countries to lag behind rival nations. Recent developments in China - where state interference is far more widespread - have undermined this argument, as the Chinese authorities have stepped in to regulate their own AI developers, requiring “legitimacy of the source of pre-training data” and demanding that AI outputs “reflect the core values of socialism.” Meanwhile, OpenAI faces immediate regulatory challenges in the West too, with ChatGPT already banned in Italy and several other countries considering similar action over data privacy concerns. The AI industry may wish to avoid regulation, but that may not be an option in either the short or the long term.
