
What deepfake risks does Sora pose?

Sora presented unprecedented deepfake risks because its photorealism and ease of use put convincing deceptive content within almost anyone’s reach:

Authenticity Crisis: Sora’s primary risk was that its output was indistinguishable from authentic video. The model generated photorealistic footage that casual viewers couldn’t reliably identify as synthetic. This lowered the barrier to creating convincing deepfakes from expensive software and specialized expertise to a text prompt and a few minutes.

Misinformation Potential: Research by NewsGuard revealed that Sora 2 could be prompted to generate false or misleading videos 80% of the time. Researchers accessed Sora 2 and within minutes produced false videos related to major news events. The model could be directed to advance false claims about elections, public health, financial markets, or political figures with minimal technical expertise required.

Identity Fraud and Impersonation: Sora could generate videos of real people saying or doing things they never actually did. While facial quality wasn’t perfect, it was convincing enough for many fraud scenarios—synthetic CEO videos authorizing wire transfers, deepfake testimony in legal proceedings, or nonconsensual synthetic content featuring real individuals.

Nonconsensual Content: The combination of photorealism and ease of use enabled mass production of nonconsensual synthetic media. Bad actors could generate embarrassing, defamatory, or sexually explicit videos of real people without consent or technical expertise.

Talent and Estate Abuse: Talent agencies, estates, and unions raised concerns about Sora enabling unauthorized use of celebrities’ likenesses. While initial Sora policies restricted depictions of public figures after outcry from talent representatives, enforcement mechanisms were limited. Actors’ unions formally protested the synthetic recreation of their members’ performances.

Specific Threat Sectors:

  • Legal: AI-generated evidence is difficult to distinguish from authentic footage. Deepfakes could corrupt trials, manipulate testimony, or create false alibis. Defending against synthetic evidence requires forensic expertise most legal systems lack.
  • Healthcare: Many medical services are delivered virtually. Deepfakes could impersonate doctors, manipulate telemedicine consultations, or create false medical records.
  • Finance: Synthetic videos of executives could manipulate stock prices, authorize fraudulent transactions, or facilitate social engineering attacks on financial institutions.
  • Politics: Election interference through deepfake videos of candidates making inflammatory statements could swing electoral outcomes in tight races.

As video generation becomes integrated into broader AI systems, the need to index and retrieve video content grows. Milvus is designed to handle vector embeddings from multimodal data, including videos and frames, so organizations using Zilliz Cloud can build retrieval pipelines over generated video content.

Scale and Accessibility: Unlike previous deepfake technology requiring advanced AI expertise and significant compute resources, Sora democratized synthetic video generation. This amplified risk because bad actors without technical sophistication could now create convincing deepfakes at scale.

Detection Challenges: While forensic tools exist to identify some synthetic media, Sora’s quality made detection difficult. The model didn’t leave obvious artifacts or signatures that automated systems could reliably identify.

Regulatory Response: Growing concerns about deepfakes and nonconsensual synthetic content prompted regulatory action. Governments moved toward requiring AI-generated content disclosure, mandatory labeling, and liability frameworks for platforms enabling deepfake distribution. These regulatory pressures contributed to Sora’s eventual shutdown.

OpenAI’s Mitigation Attempts: After mounting pressure, OpenAI added restrictions on generating videos of real public figures and attempted to block nonconsensual content. However, enforcement was imperfect, and determined bad actors could circumvent the restrictions through jailbreak prompts or other technical workarounds.

The Ultimate Risk: Sora’s shutdown in March 2026 occurred partly because the combination of deepfake potential, misinformation capability, and regulatory pressure made the product untenable. Every viral Sora video simultaneously demonstrated the model’s capability and handed regulators ammunition to restrict or ban the technology.
