Tighter loops this week: Sora 2, ChatGPT Apps, Runway proves it in post

AI-generated video

Hi Creatives! 👋 

This week is about native audio, tighter control, and quicker handoffs. Sora 2 levels up motion realism and sound, plus a social-style app for fast remixing. ChatGPT Apps let you build inside the chat instead of tab-hopping. Runway’s case study shows AI nesting cleanly in a traditional post pipeline. Meta’s AI signals are shifting how work gets discovered. In short, better tools, shorter loops, clearer results.

This Week’s Highlights:

  • 🎬 Sora 2 is live — and there’s a new social-style Sora app

  • OpenAI DevDay: ChatGPT Apps + Apps SDK (built on MCP) 🚀

  • How Eggplant Picture & Sound finished Life After People with Runway 🎬

  • Meta will use your AI chats to shape your feed and ads

  • 🎬 Creative Feature

    • GOLF le FLEUR Maritime ⛴️

    • Generating three HD trailers

    • Gucci’s THE TIGER

Chroma Awards — the global stage for AI-powered creativity

I’m proud to announce I’m judging the Chroma Awards finals for Short Form, Social, and Film — the global stage for AI-powered creativity across film, music videos, and games.

  • 💰 $175,000 in cash prizes

  • 🤝 Backed by ElevenLabs, Freepik, fal, Google Cloud, Dreamina, and CapCut

  • 🚀 Open to AI creators pushing what’s possible

🎬 Sora 2 is live — and there’s a new social-style Sora app

OpenAI rolled out Sora 2, a big step up in text-to-video, and paired it with a TikTok-like iOS app where you can make 10-second clips, remix others, and even add your own face and voice with a one-time “cameo” setup. Think short, swipeable, AI-generated videos built for play, prototyping, and quick storytelling.

What’s new in Sora 2

  • More realistic motion and physics. Fewer weird warps on action shots.

  • Native audio. Synchronized dialogue, sound effects, and ambient tracks generated with the video.

  • Better control. Clearer adherence to prompts and continuity across shots.

What’s inside the Sora app

  • Vertical feed with like, comment, and remix. All videos are AI-generated. Clips are up to 10 seconds.

  • Cameos. Verify your likeness once, then drop yourself or friends into scenes. You’ll be notified when your cameo is used.

  • Early rollout on iOS in the U.S. and Canada. Invite-based access for now. Free to start.

Who’s already tapping into it

  • Krea added Sora 2 as a selectable model in its Video tool, with a promo noted for Pro/Max users.

  • Picsart published a blog post walking through Sora 2 and Sora 2 Pro inside the Picsart AI Video Generator on web and mobile.

  • Artlist announced “Sora 2 lands on Artlist,” with a model dropdown for Sora 2 / Sora 2 Pro in its Image & Video Generator; included on certain plans.

  • Artificial Studio says Sora 2 and Sora 2 Pro are available in-app, plus an API for embedding.

  • Mattel is piloting Sora 2 so designers can turn toy sketches into shareable video concepts faster. This is a real signal that big brands see Sora as a rapid pre-viz tool.

  • Freepik: Sora 2 is listed on pricing with “1,600 credits per 4s” for paid tiers.

Creative caution checklist

  • Copyright. If your concept touches known characters, franchises, or artist styles, do a rights pass first and document approvals. The opt-out system means owners may need to take action, and policies could evolve.

  • Brand safety. Early feeds on new platforms can surface messy content before moderation catches up. Keep a human review layer on anything public-facing.

  • Territory and access. The iOS app rollout is staged. Plan for mixed team access and keep alt workflows ready.

Read full details here.

OpenAI DevDay: ChatGPT Apps + Apps SDK (built on MCP) 🚀

ChatGPT is now an app platform. You can call apps like Canva right in the chat, and developers can build their own using the new Apps SDK, which sits on the Model Context Protocol (MCP).

  • Apps run inside ChatGPT with interactive UIs. Early partners available today include Canva, Spotify, Zillow, Booking.com, Coursera, Expedia, Figma. Available to logged-in users outside the EU on Free, Go, Plus, and Pro.

  • Apps SDK is in preview and is built on MCP, the open spec for connecting LLMs to tools and data. OpenAI says the SDK is open source so apps can run anywhere that adopts the standard. App submissions and monetization come later this year via an Agentic Commerce Protocol.

  • Live demo used Canva to generate designs directly in chat.

Why it matters for creatives 🎨

  • Fewer context switches. Brainstorm in chat, then have Canva turn an outline into slides or posters without leaving the thread.

  • New distribution. Apps surface contextually inside conversations, giving builders access to a very large ChatGPT audience.

  • Commerce is coming. Devs will be able to charge for in-chat experiences once monetization opens.

MCP in one minute 🔧

  • What it is: An open spec that lets a model call your backend “tools” and optionally render your UI inline.

  • How Apps SDK uses it: Your MCP server lists tools, handles call_tool requests, and can return structured data plus a component reference that ChatGPT renders in an iframe. Transport is SSE or streaming HTTP.

  • Official SDKs: Python and TypeScript are provided, with guides to design components, auth, and state.

Build notes for your devs 🛠️

  • Spin up an MCP server in Python or TypeScript and define tools with JSON Schemas.

  • Return a UI component with your tool response so ChatGPT renders it inline.

  • Test locally, expose with ngrok, enable Developer Mode, then add your app in Settings → Connectors. Sample repos are provided.
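
To make those steps concrete, here is a minimal plain-Python sketch of the list-tools / call-tool cycle the Apps SDK builds on. The tool name, schema, and component URI are hypothetical examples, and a real app would use the official Python or TypeScript SDK with an MCP transport (SSE or streaming HTTP) rather than direct function calls:

```python
import json

# Hypothetical tool registry: each entry pairs a JSON Schema with a handler.
# The tool name and component URI are illustrative, not from the official docs.
TOOLS = {
    "create_poster": {
        "description": "Generate a poster layout from a title and palette.",
        "input_schema": {
            "type": "object",
            "properties": {
                "title": {"type": "string"},
                "palette": {"type": "string", "enum": ["warm", "cool"]},
            },
            "required": ["title"],
        },
        "handler": lambda args: {
            # Structured data the model can reason about...
            "structuredContent": {
                "title": args["title"],
                "palette": args.get("palette", "warm"),
            },
            # ...plus a component reference ChatGPT renders in an iframe.
            "component": "ui://poster/preview.html",
        },
    }
}

def list_tools():
    """Shape of a tool listing: name, description, and input schema."""
    return [
        {"name": name, "description": t["description"],
         "inputSchema": t["input_schema"]}
        for name, t in TOOLS.items()
    ]

def call_tool(name, arguments):
    """Dispatch a call_tool request to the matching handler."""
    tool = TOOLS.get(name)
    if tool is None:
        raise KeyError(f"unknown tool: {name}")
    return tool["handler"](arguments)

if __name__ == "__main__":
    print(json.dumps(list_tools(), indent=2))
    print(call_tool("create_poster", {"title": "Maritime"}))
```

The key design point: the model only ever sees the schemas from the listing step, so clear property names and descriptions are what make your tool discoverable in conversation.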

Read full details in OpenAI’s announcement and docs.

How Eggplant Picture & Sound finished Life After People with Runway 🎬

Eggplant Picture & Sound is a Toronto-based full-service post-production studio offering picture, sound, motion graphics, animation, and VFX. They helped bring an 8-episode History Channel series back from the shelf using Runway’s Gen-4, Aleph, and reference tools. The team cut weeks of VFX time, kept quality high, and slotted AI into a traditional pipeline without blowing budget or schedule.

Concrete wins

  • 1 month of post-production time saved overall.

  • 4 hours of compositing saved on a single “roof damage” shot by asking Aleph for “more damage on the roof, please.” It went straight into the final episode.

  • Project unshelved: AI made a previously unaffordable vision achievable.

The workflow, simply

  • Runway + classic VFX: They combined Runway generations with matte painting, Nuke, and compositing for polish. Example: adding practical trees into a hurricane scene.

  • Prompt craft matters: wording and tense changed results. “The mall is closed” removed people because the model inferred that closed malls are empty.

  • Why Runway: quality plus enterprise/legal support that fits their pipeline.

Quick Insight 💡

Use AI for fast first passes, then finish in comp to keep polish. Prototype tricky shots early to unlock scenes and save weeks. Create a prompt playbook, since tiny wording tweaks can change motion, crowds, and destruction levels. Treat AI outputs like plates and plan cleanup in Nuke/AE, pick enterprise-ready tools for smoother approvals, and keep a small R&D budget so a few AI-fluent artists can lift the entire schedule.

🔗 Read full details here.

Meta will use your AI chats to shape your feed and ads

Meta just confirmed it will start using what you talk about with its AI assistant to personalize what you see across Facebook and Instagram, starting December 16.

Key points 🔎

  • Affects suggested posts, Reels, groups, and ads.

  • Cross-app effects if your accounts are linked in Accounts Center.

  • No specific opt-out for this new signal, though you can still tweak general ad settings.

  • Not rolling out in the UK, EU, or South Korea.

  • Sensitive topics like religion, politics, health, sexual orientation, ethnicity, philosophical beliefs, and union membership are excluded.

Why it matters for creatives 🎨

  • Treat chat-style questions as keywords. Write hooks and captions like real queries your audience would ask.

  • Keep brand voice and visuals consistent across Facebook and Instagram to ride cross-app momentum.

  • Run quick A/Bs that answer a question vs inspire a mood, then watch saves and CTR.

Read full details here.
Official update from Meta.

🎬 Creative Feature

GOLF le FLEUR Maritime ⛴️

Arantxa Barcia and the MITO Studio team’s newest spec: a playful maritime sequel to their fan-favorite “Golf le Fleur Express.” This version uses real GOLF le FLEUR* products and branding, plus a tiny baby capsule — and even tests timing and continuity with a Tyler, The Creator lipsync.

Why it’s cool

  • Shows how much generative video quality jumped this year

  • Smart brand integration with authentic assets

  • Crisp lipsync and pacing that feel production-ready

Tools ⚙️ Freepik, Higgsfield AI, Kling AI, Runway, Google DeepMind, WAN 2.1 for lipsync

Watch the clip and meet the team in the links.

Generating three HD trailers

Filmmaker Chad Nelson put Sora 2 Pro mode through its paces by generating three HD trailer concepts from single prompts. Sora handled the script, shot design, edit, music, and dialogue in one pass. He produced four variations for each, then did light swaps on title cards or a shot to lock a final cut. No color grade or mix.

  • Alice’s Wonderland — a modern, darker London-set take on Alice, blending wonder with horror and ending on a towering spider Queen.

  • Fathoms Below — a 1960s slapstick musical vibe with kitschy submarine miniatures, octopus mayhem, and a catchy theme.

  • The Cosmonaut — a tense, muted historical drama around Yuri Gagarin’s Vostok 1 training, launch, and white-knuckle re-entry.

Watch the three trailers here.

Gucci’s THE TIGER

Shared by Cartel & Co., spotlighting AI artist Sam Finn for his work on THE TIGER, a short by Spike Jonze and Halina Reijn presented by Gucci. Starring Demi Moore, Edward Norton, Keke Palmer, Alia Shawkat, Elliot Page, and Ed Harris, the film blends cinematic craft with seamless AI-driven visuals led by Sam and collaborators.

Key credits: DoP Jasper Wolf. VFX Supervisor Janelle Croshaw Ralla. Color by Harbor Picture Company with Senior Colorist Damien Vandercuyssen. Produced by MJZ. Edited at Final Cut.
AI: Sam Finn @cartelandco with conartistproductions, electric_dreama, feremony, and partners Perlin, SSVFX, TEAM VFX ARTIST PVT LTD, Suplex FX.

Watch the post and full credits.

💡 Insight

Production is sliding from experiment to execution. Use Sora 2 for rapid pre-viz with audio baked in, then finish where it counts. Keep your team close to the work by testing Apps inside ChatGPT and handing off assets smoothly to post. Treat AI outputs like plates you’ll comp and grade. Watch distribution shifts as Meta learns from AI chats and tune captions like keyworded questions. Small proofs, fast iterations, cleaner finals.

That’s a wrap for this week!

🔔 Stay in the Loop!

Did you know you can now catch up on all the newsletters about The AI Stuff Creators and Artist Should Know that you might have missed?

Curious about C.A.S.H Camp? Click here to book a free 20-minute discovery call with me. It's the perfect no-strings-attached way to explore if this is the path for you.

Don't forget to follow us on Instagram 📸: @clairexue and @moodelier. Stay connected with us for all the latest and greatest! 🎨👩‍🎨

Stay in the creative mood and harness the power of AI,

Moodelier and Claire 🌈✨