“We are working with experts in areas like misinformation, hateful content, and bias, who are testing Sora,” OpenAI says.
OpenAI’s cutting-edge AI tool, Sora, can produce highly realistic and impressive 60-second videos from text prompts. But there are concerns about potential misuse and political manipulation in a year when dozens of elections are being held globally, including the November 2024 US election.

Sora shock
“This is simultaneously really impressive and really frightening at the same time and it is hitting me in ways I didn’t really expect,” YouTuber Marques Brownlee told his 18.4M subscribers. Brownlee reminded viewers how far AI has come since 2023, when Will Smith’s spaghetti-eating deepfake went viral.
'Revolutionary' text-to-video tool
While OpenAI hasn’t rolled out the tool widely yet - it’s on limited release to a handful of visual artists and developers - the company demonstrated Sora (which means ‘sky’ in Japanese) through sample videos, including a lifelike scene of a woman strolling through a snowy Tokyo street. When OpenAI’s Sam Altman asked his X followers to suggest prompts, Sora promptly produced a Bling Zoo and a bicycle race on the ocean.
To build Sora, OpenAI adapted technology from DALL-E 3, the latest version of its flagship text-to-image model. DALL-E 3 uses a diffusion model, which is trained to turn random pixels into a picture. Wired called Sora ‘an impressive first step’, MIT Technology Review called it ‘amazing’, and TechMonitor described it as ‘revolutionary’.
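For readers curious what ‘turning random pixels into a picture’ means in practice, here is a deliberately simplified sketch of the iterative denoising loop that diffusion models rely on. This is not OpenAI’s code: the ‘denoiser’ below is a hypothetical stand-in for the trained neural network that would normally predict the noise to strip away at each step.

```python
# Toy illustration of iterative denoising, the core idea behind diffusion models.
# The "denoiser" here is a hypothetical stand-in: it nudges pixels toward a fixed
# target pattern, where a real model would use a trained network's noise prediction.
import numpy as np

rng = np.random.default_rng(0)

target = np.zeros((8, 8))           # pretend this is the "clean" image
target[2:6, 2:6] = 1.0              # a simple square pattern

image = rng.normal(size=(8, 8))     # start from pure random pixels

steps = 50
for t in range(steps):
    # A real diffusion model predicts the noise present at step t;
    # here we fake that prediction as the difference from the target.
    predicted_noise = image - target
    image = image - (1.0 / (steps - t)) * predicted_noise  # remove a fraction of it

print(np.round(image, 2))           # ends up close to the target pattern
```

The point of the sketch is only the shape of the process: start from noise, repeatedly remove a little of the estimated noise, and a coherent image emerges at the end.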