Meta’s New AI Video Feed + Kling AI 2.5

AI-generated image

Hi Creatives! 👋 

This week is about tighter loops and cleaner outputs. Runway’s A2D-VL speeds up captions and scene notes. Kling 2.5 Turbo levels up prompt-following, motion, and lighting, and drops 1080p pricing to 25 credits for 5 seconds, so you can iterate more without stressing the budget. Meta’s Vibes opens new remix and discovery paths. OpenAI’s Shared Projects and Pulse streamline handoffs and daily focus. With Llama cleared for U.S. federal use, your stack is getting sturdier. It’s a good moment to move from tests to real deliverables together.

This Week’s Highlights:

  • Kling AI drops 2.5 Turbo

  • Runway’s A2D-VL: Faster Vision-Language

  • Meta’s New AI Video Feed, Vibes 🎥✨

  • Teamwork just got easier: Shared Projects + Pulse 🧩

  • 🦙Llama Lands in Gov

  • 🎬 Creative Feature

    • Something In The Heavens

    • Hyper-real talking avatars

Chroma Awards — submissions still open

Chroma Awards is the global stage for AI-powered creativity across Film, Music Videos, and Games. I will also be the judge for the Social Media category and would love to see your work in the mix.

Why submit

  • Recognition from industry judges and media partners

  • Clear categories and guidelines for AI-assisted workflows

  • Visibility for both emerging and established creators

  • Cash prizes and features for standout entries

How to enter

Deadline to submit: November 3

Kling AI drops 2.5 Turbo

Kling just rolled out 2.5 Turbo, a new version of its video model tuned for better prompt-following, more natural motion, steadier style, and cleaner lighting and composition. In blind evaluations with creative pros, it came out ahead of several well-known peers on both text-to-video and image-to-video tasks, which is exactly the kind of signal you want if you’re pitching work or moving beyond tests into real deliverables. The pricing got friendlier too. A 5-second 1080p render now costs 25 credits instead of 35, which gives you more room to iterate, try alternates, and push look-dev without sweating the spend.
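For a sense of scale: on an illustrative 1,000-credit balance, that drop means 40 five-second 1080p clips instead of 28.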

🎬 Sample made with Kling 2.5 Turbo

Filmmaker Dave Clark shared early tests using Kling 2.5 Turbo to pre-visualize a world he’s building called KYOKI. The motion in his piece is entirely generated with 2.5 Turbo, while the still images were created in Seedream 4 at 4K to keep his title character consistent. His takeaway is clear for filmmakers and creative teams: this is a practical way to pre-vis ideas and explore worldbuilding on a weekend.

Read full details here.

Runway’s A2D-VL: Faster Vision-Language

Credit to Runway

Runway introduced Autoregressive-to-Diffusion VLMs (A2D-VL), a method that adapts a strong pretrained autoregressive model (Qwen2.5-VL 7B) so it can generate tokens in parallel like a diffusion model. That means you can trade a bit of quality for a lot of speed when you need it.

Why it matters

  • Quicker loops. Faster captions, scene notes, and alt text when you’re storyboarding or building moodboards.

  • Longer, cleaner outputs. Better coherence for shot lists, product details, and edit briefs.

  • Less training cost. They adapt from a strong base instead of starting from scratch.

Creative uses right now

  • Batch caption stills or keyframes for editors and clients (see the sketch after this list).

  • Generate consistent product descriptions for lookbooks.

  • Improve accessibility with better alt text on posts and reels.
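If your team wants to prototype batch captioning today, here’s a minimal sketch. A2D-VL itself isn’t exposed as a public API as far as we know, so this uses OpenAI’s vision-capable chat endpoint as a stand-in; the model name, prompt, and folder path are placeholders to swap for whatever your pipeline actually calls.

```python
# Minimal batch-captioning sketch (stand-in for an A2D-VL-style VLM).
# Assumes the official openai Python SDK and an OPENAI_API_KEY env var.
import base64
from pathlib import Path

from openai import OpenAI

client = OpenAI()

def caption_frame(image_path: Path) -> str:
    """Return a one-line caption for a single still or keyframe."""
    b64 = base64.b64encode(image_path.read_bytes()).decode()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder: use your VLM of choice
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Write a one-sentence caption for this frame."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content.strip()

# Caption every PNG in a folder and print a tab-separated list
# an editor or client can skim.
for frame in sorted(Path("keyframes").glob("*.png")):
    print(f"{frame.name}\t{caption_frame(frame)}")
```

The same loop covers the lookbook and alt-text uses above: change the prompt, point it at a different folder, and write the output wherever your handoff lives.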

🔗 Read full details here. 

Meta’s New AI Video Feed, Vibes 🎥✨

Meta just launched Vibes, a short-form feed filled entirely with AI-generated videos inside the Meta AI app and on the web at meta.ai. It’s rolling out now and is designed to let you create from scratch, remix other people’s clips, add visuals and music, then post to Vibes or cross-share to Instagram and Facebook.

What’s new

  • Create, remix, share. Start with a prompt, upload existing footage, or hit Remix on any Vibes video to change style, visuals, or soundtrack.

  • Cross-posting built in. Publish to Vibes, then share to Reels and Stories across Instagram and Facebook in a couple of taps.

  • Prompt transparency. Many previews display the original prompt that generated the video, which helps you reverse-engineer looks and learn fast.

  • Where it’s live. Available through the Meta AI app and meta.ai in multiple regions, with some exceptions noted by press coverage.

Things to watch 👀

  • Quality vs. quantity. Media outlets note the flood of odd or low-effort AI clips on social feeds. Cutting through means strong taste, tight pacing, and a clear idea each time.

  • Attribution and rights. If you remix, keep track of sources and music choices. Expect Meta to tune policies as the feed scales. We’ll share updates as guidelines evolve.

Insight

Meta’s new Vibes feed turns AI video from a side tool into a front-row distribution channel. That matters because it links creation → remix → reach in one place, then lets you cross-post to Instagram/Facebook. For you, that means faster concept testing, built-in collaboration through remixes, and more chances to get discovered without shooting live footage.

Read full details here. 

Teamwork just got easier: Shared Projects + Pulse 🧩

Two fresh updates from OpenAI smooth out how creative teams plan, brief, and ship work.

ChatGPT Pulse
Think of it as a quick morning brief that shows up inside ChatGPT as cards you can skim, then tap for detail. It pulls from your chats, your feedback, and optional connectors like your calendar, so you start the day already oriented. Pulse is currently in preview for Pro users on mobile.

Shared Projects in ChatGPT
A single space for your team’s files, instructions, and goals so every new chat starts with the right context. Great for keeping voice, tone, and references aligned across collaborators.

Why creatives should care 🎨

  • Cleaner mornings. Pulse can collect references, key dates, and open to-dos so your standup jumps straight to decisions.

  • Fewer revision loops. Shared Projects keep the latest brand notes, client preferences, and do/don’t lists in one place, so copy, cuts, and captions stay consistent.

  • Better handoffs. Producers, editors, and designers work from the same context, which reduces copy-paste chaos and lost links.

OpenAI blog: Introducing ChatGPT Pulse — read full details here.

OpenAI Academy resource: Shared Projects — read full details here.

🦙Llama Lands in Gov

📷️ Reuters

Meta’s open-source Llama just got the green light for use across U.S. federal agencies. The General Services Administration added Llama to its list of approved AI tools, which means teams inside government can start experimenting with it for things like contract review and IT troubleshooting. GSA has recently approved other providers too, including AWS, Microsoft, Google, Anthropic, and OpenAI, often at discounted rates if they meet security standards.

Why it matters for creatives

  • Stronger security and clearer guidance often trickle into commercial tools.

  • Expect more Llama-powered integrations for research, drafting, and media prep.

  • Big buyers push prices down, which can benefit studios and freelancers.

Bottom line for creatives
This is another signal that mainstream institutions are standardizing on a handful of AI options. That usually brings better documentation, steadier APIs, and clearer licensing language. All three help creative teams pitch AI-assisted workflows with more confidence.

Read full details here.

🎬 Creative Feature

Something In The Heavens

Lewis Capaldi’s new music video, “Something In The Heavens,” reimagines love, loss, and reunion in a surreal underwater afterlife. Led by Google DeepMind in partnership with YouTube, UMG, and Wonder Studios, the piece blends Veo 3 and Nano Banana to build scenes no camera could capture. It’s not about replacing artists. It’s about expanding the canvas and telling symbolic stories in new ways.

Core team

  • Creative lead: Matthieu Lorrain

  • Director: Justin Hackney

  • Producer: Harry Moore

  • Editor: Joseph Harvey

  • Creative: Andy Knox

AI artists
Yigit Kirca, Max Larsson, Mark Isle, Giacomo Mallamaci, Issa Sissoko, Bruno DETANTE, Stephane Benini, Billy Boman, and more

Partners and collaborators
Tesäen C., Christian Haas, Jordan Schepps, Jessi Liang, Nicole Hunter, Pratik Gadamasetti, Vivien Lewit, Lyor Cohen, Jeff Chang, Cătălina Cangea, PhD, Eli Collins, Tom Hume, Douglas Eck

Hyper-real talking avatars

Sebastien Jefferies shows how to build hyper-real talking avatars without a studio team. Generate the face in Midjourney, bring it into Google Veo 3, script the lines, and you’ve got cinematic motion that reads almost human.

Check the full post here.

💡 Insight

The pattern is clear: better quality at lower friction. A2D-VL compresses the feedback cycle. Kling 2.5 Turbo improves consistency and gives you cheaper shots to explore look-dev and alternates. Vibes builds in distribution and collaboration. OpenAI’s updates keep teams aligned. Llama’s approval signals a more standardized base. Net result: faster decisions, cleaner reviews, and assets you can ship with confidence. Keep experimenting, keep it simple, keep shipping.

That’s a wrap for this week!

🔔 Stay in the Loop!

Did you know that you can now catch up on all the newsletters about The AI Stuff Creators and Artists Should Know that you might have missed?

Curious about C.A.S.H Camp? Click here to book a free 20-minute discovery call with me. It's the perfect no-strings-attached way to explore whether this is the path for you.

Don't forget to follow us on Instagram 📸: @clairexue and @moodelier. Stay connected with us for all the latest and greatest! 🎨👩‍🎨

Stay in the creative mood and harness the power of AI,

Moodelier and Claire 🌈✨