Midjourney Drops V1: Short-Form Video in 21 Seconds Flat
Midjourney has officially entered the AI video race with the launch of its first image-to-video model, V1. Released on June 18, 2025, the tool animates static images into short 5-second videos, extendable to 21 seconds. Available through Midjourney’s web app and Discord, the feature comes at a fraction of the cost of competing models like OpenAI’s Sora or Runway Gen-4. Users can either let the model apply motion automatically or supply their own movement prompts, with a setting for low or high motion. Though the video quality isn’t cinematic, the results are visually engaging and surreal, true to Midjourney’s distinctive style.
Unlike competitors chasing photorealism, Midjourney positions V1 as an accessible, artistic tool for creators working with limited budgets. There’s no timeline editor or audio support yet, just raw visual generation, but the barrier to entry is low. According to founder David Holz, this marks an early step toward a full “world model” that will eventually combine image, motion, 3D, and real-time interactivity. The company describes the new feature as a “magic flipbook” rather than a filmmaking engine, though its creative potential has already caught the attention of artists, animators, and marketers seeking lightweight visual assets.
However, V1 launches in the shadow of an intensifying legal storm. Days before the release, Disney and Universal filed a lawsuit against Midjourney, accusing the platform of enabling mass copyright infringement. Despite basic guardrails that block some IP, such as Elsa and Mickey Mouse, Wired reports that the model still generates characters like Yoda and Homer Simpson in offbeat scenarios, including drug use. The lawsuit may set a precedent for how AI tools are held responsible for user-generated content. Still, Midjourney’s entry into the video space signals a broader shift: AI-generated media is becoming more immediate, more dynamic, and more accessible than ever.