
What regulations affected Sora's operations?

Multiple converging regulatory and policy pressures contributed to Sora’s shutdown:

AI-Generated Content Labeling Requirements:

Spain proposed fines of up to €35 million or 7% of global turnover for failing to properly label AI-generated content. The EU, US, Japan, and South Korea were developing or had developed legislation requiring:

  • Mandatory disclosure that videos were AI-generated
  • Opt-in consent frameworks for using individuals’ likenesses in training data
  • Clear attribution and provenance tracking
  • Restrictions on generating content without creator authorization

Sora’s initial permissive policies—allowing copyrighted material unless explicitly opted out—violated emerging standards that required explicit opt-in rather than opt-out frameworks.
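
To make the disclosure and provenance requirements concrete, here is a minimal sketch of how a generator might bind an "AI-generated" label to a video file. It is loosely inspired by C2PA-style content credentials, but the field names, the sidecar-file layout, and the model name are illustrative assumptions, not a real standard’s schema.

```python
# Sketch of a disclosure manifest for an AI-generated video, loosely
# modeled on C2PA-style content credentials. Field names and the
# sidecar-file layout are illustrative assumptions, not a real schema.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def build_disclosure_manifest(video_path: Path, model_name: str) -> dict:
    """Bind a provenance record to the file via its content hash."""
    digest = hashlib.sha256(video_path.read_bytes()).hexdigest()
    return {
        "asset_sha256": digest,            # ties the label to these exact bytes
        "ai_generated": True,              # the mandatory-disclosure flag
        "generator": model_name,
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }

def write_sidecar(video_path: Path, manifest: dict) -> Path:
    """Store the manifest next to the video as a sidecar file."""
    sidecar = video_path.with_suffix(".provenance.json")
    sidecar.write_text(json.dumps(manifest, indent=2))
    return sidecar

if __name__ == "__main__":
    clip = Path("output.mp4")                   # hypothetical generated clip
    clip.write_bytes(b"\x00fake video bytes")   # stand-in payload for the demo
    manifest = build_disclosure_manifest(clip, model_name="video-gen-v1")
    print(write_sidecar(clip, manifest).read_text())
```

Hashing the file contents means the label survives renames and travels with the bytes; anyone can recompute the hash to check that the manifest describes this exact video.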

Deepfake Restrictions:

Multiple jurisdictions moved to restrict nonconsensual synthetic media:

  • EU: Proposed restrictions on generating deepfakes without consent
  • US States: Several states (California, Texas, Virginia) passed laws restricting nonconsensual deepfake creation and distribution
  • UK: The Online Safety Act 2023 included provisions addressing synthetic media harms

These regulations criminalized or created civil liability for generating nonconsensual synthetic content—exactly what Sora enabled with minimal friction. Compliance required restrictive guardrails that reduced product appeal.

Copyright and Intellectual Property Protection:

Regulators globally moved toward protecting copyright holders:

  • EU Copyright Directive Article 17: Requires platforms to prevent copyright infringement by users
  • UK Online Safety Act: Establishes platform accountability for illegal user-generated content, extending the broader trend toward host liability
  • US Legislative Proposals: Congress considered legislation holding AI providers liable for user-generated infringing content

These frameworks made OpenAI potentially liable for infringement by Sora users. Unlike hosting platforms shielded by DMCA safe harbors, which protect hosts of third-party content rather than the tools that generate it, OpenAI’s own model produced the allegedly infringing output, a legally fraught position.

Publicity Rights and Personality Protection:

Multiple jurisdictions strengthened protections for publicity rights and personality:

  • US State Laws: Many states have “right of publicity” laws preventing commercial use of someone’s likeness without permission
  • EU GDPR: Treats biometric data (including facial imagery) as a special category, limiting its use without explicit consent
  • California AB 602: Created a private right of action over nonconsensual sexually explicit deepfakes

These regulations created civil, administrative, and in some cases criminal exposure for generating videos of real people without consent, a core Sora use case. An opt-in consent gate, sketched below, is the kind of guardrail compliance would have demanded.
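
The sketch below shows opt-in semantics in miniature: generation is blocked unless affirmative consent is on file. The registry, its fields, and the person IDs are all hypothetical; a real system would need verified identity claims, revocation handling, and audit logging.

```python
# Sketch of an opt-in consent gate for likeness generation. The registry,
# its fields, and the IDs are hypothetical; a real system would need
# verified identity claims, revocation handling, and audit logging.
from dataclasses import dataclass

@dataclass(frozen=True)
class ConsentRecord:
    person_id: str
    allows_likeness: bool
    allows_voice: bool

# Illustrative in-memory registry. Opt-in semantics: absence == no consent.
CONSENT_REGISTRY: dict[str, ConsentRecord] = {
    "person-123": ConsentRecord("person-123", allows_likeness=True, allows_voice=False),
}

def may_generate_likeness(person_id: str) -> bool:
    """Block generation unless affirmative consent is on file."""
    record = CONSENT_REGISTRY.get(person_id)
    return record is not None and record.allows_likeness

assert may_generate_likeness("person-123")      # explicit opt-in on file
assert not may_generate_likeness("person-999")  # unknown person: blocked by default
```

The design choice worth noting is the default: under opt-out, the unknown person at the end would have been generated; under opt-in, silence means refusal.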

Content Moderation and Safety Requirements:

Regulators mandated content moderation:

  • EU Digital Services Act: Requires platforms to moderate illegal content, including deepfakes
  • UK Online Safety Act: Duty-of-care provisions require platforms to mitigate harms from synthetic media
  • US Proposals: Congress considered legislation imposing content safety requirements on AI systems

Sora’s openness, which let users generate virtually any content, clashed with these emerging requirements. Compliance required restrictive guardrails.

Sectoral Restrictions:

Emerging regulations targeted specific sectors:

  • Healthcare: Regulations restricting AI-generated medical content produced without professional licensing
  • Finance: Rules preventing AI-generated financial advice or market manipulation
  • Elections: Laws criminalizing AI-generated deepfakes of candidates

Sora could generate content in all these restricted domains, exposing OpenAI to regulatory penalties.
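
As a rough illustration of the guardrails such sectoral rules force on a generator, here is a minimal pre-generation screening gate. The category names and keyword lists are placeholder assumptions; a production system would rely on trained classifiers and human review rather than substring matching.

```python
# Sketch of a pre-generation screening gate for restricted sectors. The
# categories and keyword lists are placeholder assumptions; production
# systems would use trained classifiers and human review, not substrings.
RESTRICTED_TOPICS: dict[str, tuple[str, ...]] = {
    "medical_advice": ("diagnose", "prescription"),
    "financial_advice": ("guaranteed returns", "insider tip"),
    "election_deepfake": ("candidate", "ballot"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the restricted categories a prompt appears to touch."""
    lowered = prompt.lower()
    return [
        category
        for category, keywords in RESTRICTED_TOPICS.items()
        if any(keyword in lowered for keyword in keywords)
    ]

def generate_video(prompt: str) -> str:
    flagged = screen_prompt(prompt)
    if flagged:
        # Refuse before spending compute; log the event for compliance review.
        raise PermissionError(f"prompt blocked, flagged categories: {flagged}")
    return "generation would proceed here"      # stand-in for the model call

print(generate_video("a cat surfing a wave at sunset"))
```

Even this toy gate shows the product tension the section describes: every screening rule blocks some legitimate prompts, so each new restricted sector degrades the experience for compliant users too.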

Cumulative Effect:

No single regulation killed Sora, but the converging wave of restrictions made the product progressively harder to operate:

  1. Mandatory Labeling: Reduced user value—many wanted unlabeled content for misinformation or impersonation
  2. Opt-In Consent: Eliminated the Disney partnership and many user scenarios
  3. Copyright Liability: Made user-generated infringing content OpenAI’s legal liability
  4. Deepfake Restrictions: Criminalized nonconsensual content generation
  5. Content Moderation Burden: Forced investment in moderation systems that degraded user experience

Each added friction and regulatory risk. Combined, they made the product economically unviable and legally perilous.

Strategic Decision:

OpenAI faced a choice: invest heavily in compliance infrastructure (moderation, labeling, verification systems) or kill the product. Given Sora’s already-negative unit economics and declining user engagement, compliance investment was unjustifiable. Killing the product was the rational decision.

Broader Lesson:

Regulation didn’t kill Sora directly, but it created an environment where an already-unprofitable product became untenable. AI startups should monitor emerging regulatory trends early. Products that optimize for user value while ignoring broader stakeholder concerns (governments, creators, affected populations) face regulatory headwinds that can accelerate product death even for well-capitalized companies.
