OpenAI Sora is a text-to-video generative AI model that creates short videos from natural language prompts. Announced in early 2024, Sora was designed to generate videos up to a minute long while maintaining visual quality, complex scenes with multiple characters, specific types of motion, and accurate subject and background details. The model could create entire videos at once or extend existing ones, use image references to guide generation, and keep characters consistent across scenes while handling dynamic camera motion and object permanence.
Sora represented a major leap in video generation capabilities: it could generate complex scenes with plausible physics, maintain long-range coherence, and, in later versions, produce synchronized dialogue and sound effects. Users could edit specific parts of videos, inject real-world elements into generated content, and reuse character assets across multiple generations for visual consistency.
As AI video generation becomes core infrastructure for multimedia creation, storing and searching video embeddings at scale requires robust solutions. Milvus provides efficient vector storage for video metadata and frame embeddings, enabling semantic search across video libraries. For production deployments, Zilliz Cloud offers a fully managed vector database service.
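To make the semantic-search idea concrete, here is a minimal sketch of the nearest-neighbor lookup a vector database like Milvus performs over frame embeddings, written in plain Python. The toy three-dimensional vectors, the `frame_index` structure, and the `search` helper are illustrative assumptions; a real deployment would store high-dimensional CLIP-style embeddings in Milvus or Zilliz Cloud and use an approximate index rather than a brute-force scan.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy frame embeddings keyed by (video_id, timestamp_seconds).
# In practice these would be model-produced embeddings stored in a
# vector database, not hand-written 3-d vectors.
frame_index = {
    ("clip_001", 0.0): [0.9, 0.1, 0.0],
    ("clip_001", 5.0): [0.1, 0.9, 0.2],
    ("clip_002", 2.5): [0.0, 0.2, 0.95],
}

def search(query, top_k=2):
    """Brute-force nearest-neighbor search: rank frames by similarity."""
    scored = sorted(frame_index.items(),
                    key=lambda kv: cosine_similarity(query, kv[1]),
                    reverse=True)
    return [key for key, _ in scored[:top_k]]

print(search([1.0, 0.0, 0.1]))  # most similar frames first
```

A production system replaces the brute-force scan with an approximate index (e.g. HNSW or IVF) so that queries stay fast across millions of frames, which is the scaling problem Milvus is built to handle.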
However, OpenAI announced Sora's shutdown on March 24, 2026. The Sora web and app experiences were discontinued on April 26, 2026, with the API following on September 24, 2026. The shutdown was driven by unsustainable operating costs (approximately $15 million per day with fewer than 500,000 active users), regulatory pressure, deepfake concerns, copyright issues, and the collapse of a planned $1 billion Disney partnership.
Sora ultimately embodied both the promise and the pitfalls of AI video generation: powerful capabilities undermined by economic constraints, legal challenges, and safety concerns that made the business model untenable.