OpenAI’s Text-to-Video Tool Sora Met With Shock & Awe

“We are working with experts in areas like misinformation, hateful content, and bias, who are testing Sora,” OpenAI says.


OpenAI’s cutting-edge AI tool Sora can produce highly realistic, impressive 60-second videos from text prompts, but there are concerns about potential misuse and political manipulation in a year when dozens of elections are being held globally, including the November 2024 US election.

Sora shock

“This is simultaneously really impressive and really frightening at the same time and it is hitting me in ways I didn’t really expect,” YouTuber Marques Brownlee told his 18.4M subscribers. Brownlee reminded viewers how far AI has come since 2023, when Will Smith’s spaghetti-eating deepfake went viral.


'Revolutionary' text-to-video tool

While OpenAI hasn’t rolled out the tool widely yet - it’s on limited release to a handful of visual artists and developers - the company demonstrated Sora (which means ‘sky’ in Japanese) through sample videos, including a lifelike scene of a woman strolling through a snowy Tokyo street. When OpenAI’s Sam Altman asked his X followers to come up with suggestions, Sora quickly produced a Bling Zoo and a bicycle race on the ocean.

To build Sora, OpenAI adapted technology from DALL-E 3, the latest version of its flagship text-to-image model. DALL-E 3 uses a diffusion model trained to turn random pixels into a picture. Wired called Sora ‘an impressive first step’, MIT Technology Review called it ‘amazing’, and TechMonitor described it as ‘revolutionary’.
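For readers curious about what ‘turning random pixels into a picture’ means in practice, here is a deliberately simplified sketch of the diffusion idea: start from pure noise and repeatedly denoise it toward an image. The denoise_step function below is a hypothetical stand-in for the trained neural network a real system would use - this is an illustration of the concept, not OpenAI’s implementation.

```python
# Toy sketch of the diffusion idea behind DALL-E 3 / Sora: begin with random
# pixels and repeatedly "denoise" them until an image emerges. The denoiser
# here is a hypothetical stand-in (a blend toward a fixed target), NOT the
# neural network a real diffusion model would use.
import numpy as np

rng = np.random.default_rng(0)

# Pretend this is the image the text prompt describes: a white square.
target = np.zeros((8, 8))
target[2:6, 2:6] = 1.0

# Step 0: pure random pixels.
x = rng.normal(size=(8, 8))

def denoise_step(x, t, total_steps):
    """Hypothetical denoiser: nudge the noisy image toward the target.
    A real diffusion model would instead predict and remove noise with
    a trained neural network conditioned on the text prompt."""
    alpha = (total_steps - t) / total_steps
    return alpha * x + (1 - alpha) * target

steps = 50
for t in range(steps):
    x = denoise_step(x, t, steps)

# After many small denoising steps, the noise has resolved into the image.
print(np.round(x, 1))
```

In a real text-to-video model the same repeated-denoising loop runs over a stack of video frames at once, which is what keeps the motion coherent from frame to frame.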

Concerns about Sora & OpenAI Video

It’s not the world’s first text-to-video AI tool; Google and Runway have their own models with similar functions. But OpenAI’s impressive debut was met with both applause and concern from experts, who worry the tech could be exploited to create deepfake videos and chatbots that spread misinformation.

A future of misinformation and deepfakes?

Ethical hacker Rachel Tobac, a member of the technical advisory council of the US government’s Cybersecurity and Infrastructure Security Agency (CISA), tweeted about the potential for trickery and manipulation. Adversaries might, for example, use Sora to fabricate videos showing non-existent vaccine side effects, or to exaggerate adverse weather conditions to discourage voting.

OpenAI said it is working with experts in misinformation, hateful content, and bias to test the model adversarially. It is also implementing safety measures to prevent harmful use, such as restrictions on extreme violence, celebrity likenesses, and hateful imagery.

Not everyone is convinced, though. Tobac called on OpenAI to collaborate with social media platforms to develop automatic recognition and labeling of AI-generated videos, and to establish guidelines for labeling such content.
