Runway’s World Models Are Training Robots—Here’s Why 👀🤖

AI-generated

Hi Creatives! 👋 

This week’s AI updates show just how fast the field is branching out. Runway, the tool we know for videos, is now training robots. OpenAI is backing a full-length AI feature film, China’s giants are flexing trillion-parameter models, and Warner Bros. has officially sued Midjourney over AI-generated images of its iconic characters. ⚡

From robotics labs to Hollywood courtrooms, the reach of creative AI keeps expanding, and for us, that means the skills we’re building today may shape far more than just content.

This Week’s Highlights:

  • Runway’s not just generating videos anymore—it’s eyeing robotics. 👀🤖

  • 🎥 OpenAI Backs an AI-Made Animated Film

  • 🚀 China’s Giants Drop Trillion-Parameter Models

  • ⚡️ Warner Bros. Sues Midjourney Over AI Images

  • 🎬 Creative Feature

    • The Chroma Awards

    • Runway’s 48-Hour AI Film

    • AI Music Video Experiment

Runway’s not just generating videos anymore—it’s eyeing robotics. 🤖

AI-generated Image

Runway, best known for its creative AI video tools, is now stepping into the robotics and self-driving space. Why? Their powerful world models, originally built for content creation, turn out to be perfect for training robots in simulated environments.

What this means for us creatives:

  • Tools we love are scaling up: the same tech behind stunning videos is teaching machines how to move.

  • Crossovers are real: artistry and engineering are blending—your creative workflows may fuel real-world impact.

  • Big growth ahead: Runway has raised over $500M and is building a team dedicated to robotics.

❓ Will Runway stop making image & video tools?

NOPE! Runway isn’t ditching its creative roots. Image and video generation remains their core focus for creators.

What’s new is that they’ve discovered their world models are also perfect for training robots and self-driving systems. Instead of replacing creative tools, they’re expanding—serving both filmmakers and robotics companies with the same underlying tech.

Think of it as Runway doubling its stage: 🎬 for creators, 🤖 for robotics.

✨ Insight

Runway’s pivot into robotics signals a bigger trend—creative AI tools aren’t confined to entertainment anymore. They’re becoming infrastructure for industries far beyond content. For creatives, this means the skills you’re sharpening today—like crafting simulations, visuals, or environments—could translate into entirely new fields tomorrow, from designing digital worlds to shaping how autonomous machines see and move.

🎥 OpenAI Backs an AI-Made Animated Film

Photo: Critterz Film Ltd.

The Wall Street Journal reports that OpenAI is backing Critterz, an animated feature film created with heavy use of AI. Directed by Chad Nelson (OpenAI creative specialist) and co-developed with Nik Kleverov of Native Foreign, the project is being produced by Vertigo Films (UK) and Native Foreign (LA), with a script from James Lamont & Jon Foster (Paddington in Peru). Executive producers Allan Niblo, James Richardson, and Jane Moore are leading the production, supported by Federation Studios (Paris). The film is being created in just nine months on a budget of under $30 million.

  •  Speed & affordability → A full-length film produced in months, not years, with a fraction of the budget.

  •  Collaboration over replacement → Human sketches and voices combined with AI enhancements keep creativity and copyright intact.

  •  Industry ripple effect → Hollywood is watching closely; expect more debates on ethics, jobs, and the future of storytelling.

  •  Opportunities for independents → Lower costs and faster timelines could empower smaller studios and individual creators.

🚀 China’s Giants Drop Trillion-Parameter Models

Two of China’s biggest players just made bold moves in AI.

Alibaba unveiled Qwen-3-Max-Preview, a trillion-parameter model designed to rival frontier systems from OpenAI and Google DeepMind. It’s built for multilingual nuance, long-form storytelling, and complex prompts, and it’s available through Alibaba Cloud and OpenRouter. Backed by a $52B investment in AI infrastructure, this isn’t just a flex; it’s a commitment to building serious creative tools.

👉 You can try Qwen yourself here.

Meanwhile, Moonshot AI released Kimi-K2-Instruct-0905, another trillion-parameter powerhouse. It uses a Mixture-of-Experts architecture with a massive 256K token context window, making it perfect for sprawling projects. It’s also strong at agentic coding, front-end generation, and tool-calling—helpful if you’re blending code with creative work. Access is live through Groq and OpenRouter.

Try it here.

Why Creatives Should Care 🎨

  •  Deeper comprehension → great for layered stories or multilingual projects

  •  Long-form ready → holds context across big scripts or novels

  •  Coding + creative blend → helps design, prototype, or integrate tools

  •  On-demand access → pay-as-you-go through OpenRouter or Groq, easier to experiment (see the quick sketch below)
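If you want to poke at either model from a script, both are reachable through OpenRouter’s OpenAI-compatible API. Here’s a minimal sketch using the official openai Python client; the model slug, prompt, and token limit are illustrative assumptions, so check OpenRouter’s catalog for the exact IDs of Kimi-K2-Instruct-0905 or Qwen-3-Max-Preview before running.

# Minimal sketch: calling a trillion-parameter model through OpenRouter's
# OpenAI-compatible API. Requires `pip install openai` and an OpenRouter key.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",  # OpenRouter's OpenAI-compatible endpoint
    api_key="YOUR_OPENROUTER_API_KEY",        # replace with your own key
)

response = client.chat.completions.create(
    # Assumed model slug for illustration; look up the exact ID on OpenRouter.
    model="moonshotai/kimi-k2-0905",
    messages=[
        {"role": "system", "content": "You are a story consultant for short films."},
        {"role": "user", "content": "Outline a three-act structure for a five-minute AI-made short."},
    ],
    max_tokens=800,
)

print(response.choices[0].message.content)

Swap in the Qwen slug to run the same brief against both models and compare how each handles long-form, multilingual, or code-heavy prompts.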

⚡️ Warner Bros. Sues Midjourney Over AI Images

Image Credits: Victor J. Blue / Bloomberg / Getty Images

Warner Bros. Discovery has just filed a lawsuit against Midjourney, claiming the AI image platform is generating unauthorized versions of Superman, Batman, Bugs Bunny, and more. This case could be a turning point for how AI platforms operate—and how creatives like us use them.

🔑 Key Highlights

  • The Lawsuit: Filed in LA federal court on Sept. 4, 2025, alleging Midjourney trained on Warner Bros. content without permission.

  • Character Use: Users can prompt “classic superhero battle” and still get DC heroes in high detail.

  • High Stakes: Warner Bros. is seeking up to $150,000 per infringing work plus an injunction to stop further use.

  • Bigger Picture: This follows similar lawsuits from Disney and Universal earlier this year—meaning the pressure is mounting across Hollywood.

For creatives, this isn’t just another legal headline—it’s a signal of where the industry is headed. If the studios succeed, we can expect AI tools to tighten their filters, restrict character generation, and introduce more guardrails. That means fewer shortcuts, but also a push for more original, distinctive work from artists who embrace AI as a creative partner rather than a cloning machine.

Read the full article here.

When AI Product Placement Gets Tricky 🎥

NanoBanana nails single-product shots—but struggles when more items are added, especially products with complex text labels.

Check it here.

🎬 Creative Feature

The Chroma Awards

The Chroma Awards is the first global competition celebrating AI-powered creativity across Film, Music Videos, and Games. Season 1 is now live, bringing together creators worldwide with over $150,000 in cash prizes and $1M in AI tool credits.

What’s Happening Now

Submissions are officially open until November 2, 2025. Winners will be announced in a livestream ceremony on December 7, 2025.

How to Join

  • Create an original project (after Feb 1, 2025).

  • Post it publicly on YouTube, TikTok, or Vimeo.

  • Submit your entry through the Chroma Awards portal.

The Chroma Awards is more than a competition; it’s a global stage for creatives to prove how AI can amplify storytelling, not replace it. With cash prizes, tool credits, and even career opportunities on the line, Chroma is showing that the future of creativity is collaboration between human imagination and AI tools.

💡 If you’ve been experimenting with AI in your films, music videos, or game concepts—this is your moment to step forward, share your work, and get recognized.

👉 Don’t just watch from the sidelines. Submit your project and join the movement at the Chroma Awards.

Runway’s 48-Hour AI Film

Ron Baranov recently won Runway’s 48-hour film challenge (CONGRATULATIONS!), creating an entire short film from scratch in just two days. Instead of a traditional crew of dozens, Ron leaned on AI tools (Runway, Midjourney, LTX Studio, ElevenLabs, and Epidemic Sound) to bring his story, characters, and multiple locations to life.

We especially liked what he said about the importance of storytelling:
“The fundamentals of storytelling matter most. Tools will keep shifting, techniques will keep evolving. But script, logic, and visual storytelling are timeless. If you know how to tell a story, everything else can be learned.”

AI Music Video Experiment

Sara Nemati recently shared her latest experiment — a full AI-generated music video. Built with first-frame/last-frame sequences in NanoBanana, enhanced with PixVerse and Higgsfield AI, and scored with original music composed in Suno, the project shows how seamlessly multiple tools can come together. Final image cleanup in Photoshop and editing in DaVinci Resolve tied it all into one flowing piece.

💡 Industry Insight

What stood out to me this week is how creative AI is pushing into places we didn’t see coming. Runway’s world models are teaching robots, OpenAI is investing in films, China’s trillion-parameter releases are rewriting what’s possible, and now Warner Bros. is challenging Midjourney in court.

For creatives, that’s a clear signal: the projects we’re testing now (mock-ups, visuals, short films) aren’t just experiments. The skills behind them could shape new industries, new opportunities, and even new rules.

So keep creating, keep experimenting, and most importantly, keep your perspective at the center. That’s what makes these updates more than features—it’s what makes them matter.

That’s a wrap for this week!

🔔 Stay in the Loop!

Did you know that you can now read all our past newsletters about The AI Stuff Creators and Artists Should Know that you might have missed?

Curious about C.A.S.H Camp? Click here to book a free 20-minute discovery call with me. It's the perfect no-strings-attached way to explore whether this is the path for you.

Don't forget to follow us on Instagram 📸: @clairexue and @moodelier. Stay connected with us for all the latest and greatest! 🎨👩‍🎨

Stay in the creative mood and harness the power of AI,

Moodelier and Claire 🌈✨