Veo 3.1 in Flow is here: real audio, smarter control, faster edits


AI-generated image

Hi Creatives! 👋
This week is all about control, sound, and speed. Veo 3.1 lands inside Flow by Google with richer native audio, stronger prompt adherence, and inline edits that feel like real post. OpenAI outlines a chips plan and a long-range buildout that point to shorter queues and steadier fidelity. Microsoft debuts MAI-Image-1 for photoreal images that slot neatly into Copilot workflows. And Google rolls Nano Banana into Search via Lens and NotebookLM, with Photos following soon. The net result for your pipeline: cleaner first passes, fewer handoffs, and more time to shape the story instead of wrestling the tools.
This Week’s Highlights:
Veo 3.1 lands in Flow: smarter controls, richer audio, more cinematic results
Big Build for Bigger Creativity: OpenAI’s new chips plan and a five-year roadmap
Microsoft’s first in-house image model just dropped: MAI-Image-1
Google’s “Nano Banana” lands in Search
🎬 Creative Feature
Veo 3.1 inside Flow by Google.
Playful mini-story
AI integrates into a rebranded product.
Kling delivers consistently.

Veo 3.1 lands in Flow: smarter controls, richer audio, more cinematic results
Google just rolled out Veo 3.1 and a big refresh to Flow, its AI-powered filmmaking workspace. If you’ve been experimenting with AI video for social, ads, or pitch films, this update levels up control, sound, and editability so your clips feel less “AI” and more “production-ready.”
What’s new
Rich, generated audio across Flow features, so clips don’t feel silent or temp. Works with Ingredients to Video, Frames to Video, and Extend.
Stronger narrative control and realism in Veo 3.1, with better prompt adherence and image-to-video quality.
Inline editing in Flow: Insert new elements with proper shadows and lighting; Remove is coming soon for clean object removal.
Where to use it: Flow, Gemini app, Gemini API, and Vertex AI.
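If you prefer to script generations rather than work in the Flow UI, the Gemini API route looks roughly like the sketch below, using Google's google-genai Python SDK. Treat it as a minimal sketch rather than an official recipe: the model ID veo-3.1-generate-preview, the prompt, and the aspect-ratio setting are illustrative assumptions, so check the current model list in the Gemini API docs before running it.

```python
# Minimal sketch: generating a clip with Veo through the Gemini API's Python SDK (google-genai).
# Assumption: "veo-3.1-generate-preview" is an illustrative model ID, not a confirmed one.
# Requires a Gemini API key set in the environment (GEMINI_API_KEY).
import time

from google import genai
from google.genai import types

client = genai.Client()  # picks up the API key from the environment

# Kick off an asynchronous video-generation job.
operation = client.models.generate_videos(
    model="veo-3.1-generate-preview",  # assumed ID for illustration
    prompt=(
        "Slow dolly-in on a rain-soaked neon street at night, "
        "reflections shimmering, ambient city sound"
    ),
    config=types.GenerateVideosConfig(aspect_ratio="16:9"),
)

# Video generation is long-running, so poll the operation until it completes.
while not operation.done:
    time.sleep(20)
    operation = client.operations.get(operation)

# Download and save the first generated clip.
video = operation.response.generated_videos[0]
client.files.download(file=video.video)
video.video.save("veo_clip.mp4")
print("Saved veo_clip.mp4")
```

The polling loop is the important habit here: video jobs take a while, so your script waits on the operation instead of expecting an instant response.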
Insight 💡
This update nudges AI video from “cool demo” to “usable draft.” Native audio gives your clips mood and pacing without a separate sound pass. Tighter control makes multi-shot pieces feel consistent. Inline edits in Flow mean you can tweak lighting or add an object instead of starting over. For social spots, product teasers, and pitch films, that’s real time saved. ✨
👉 Read full details here.
🚀 Big Build for Bigger Creativity: OpenAI’s new chips plan and a five-year roadmap

📷️OpenAI
OpenAI x Broadcom: 10 GW of custom AI accelerators
OpenAI announced a multi-year collaboration with Broadcom to build and deploy OpenAI-designed AI accelerators and Ethernet networking for next-generation clusters. The rollout targets first racks in the second half of 2026 and completion by the end of 2029. OpenAI says this helps move model know-how directly into silicon, and notes it now serves over 800 million weekly active users.
Why this matters to creatives
Faster queues and renders: more compute means shorter waits for image, video, and audio generations at higher fidelity.
Bigger canvases: expect longer video durations, richer motion, and more consistent subjects as training and inference capacity scales.
Better control: purpose-built accelerators can align hardware with model behavior, which often shows up as steadier styles, tighter motion control, and more reliable edits.
Read full details here.
💼 The bigger picture: a five-year plan for massive spend
Two days later came word that OpenAI is crafting a five-year plan to support more than one trillion dollars in pledged spending, exploring new revenue lines, debt partnerships, and additional fundraising to keep scaling its infrastructure and products.
Why this matters to creatives
More tools in one place: sustained investment typically brings new features into familiar apps, so you can ideate, edit, and publish without bouncing across tools.
Reliability as you scale: stable funding plus long-range planning reduces outages and throttling during peak launches and client deadlines.
Wider access: diversified revenue and financing can support geographies beyond the usual hubs, which means better performance wherever your team is.
Read full details here.
Microsoft’s first in-house image model just dropped: MAI-Image-1
🖼️MAI-Image-1
Microsoft AI unveiled MAI-Image-1, its first text-to-image model built entirely in-house. It focuses on photorealistic lighting, landscapes, and speed, and it’s already sitting in the Top 10 on LMArena. This signals Microsoft’s steady move to make more of its own creative models that can plug into Copilot and Windows workflows.
What happened
Microsoft announced MAI-Image-1 on October 13, 2025. It’s positioned as an image generator trained with feedback from creative pros to avoid “repetitive or generically stylized outputs.” Microsoft also calls out faster turnaround than many larger models.
Early signal of quality: MAI-Image-1 entered LMArena’s text-to-image Top 10 shortly after launch.
Context: This follows Microsoft’s August reveal of its first two in-house models, MAI-Voice-1 and MAI-1-preview, and its broader strategy of mixing models from multiple providers inside Microsoft 365.
👉 Read full details here
🍌 Google’s Nano Banana is now in Search — with NotebookLM today and Photos soon
Google is rolling Nano Banana into Search via Lens with a new Create mode, and it's already working under the hood in NotebookLM to power Video Overviews with new styles like watercolor and anime. Google also says Photos integration is "coming in the weeks ahead," and notes more than 5 billion images generated since the model's August launch. The initial rollout covers the US and India, in English.
How to try it: Open the Google app, tap Lens, then Create to transform or generate images without leaving Search.

🎬 Creative Feature
Veo 3.1 inside Flow by Google.
Filmmaker Dave Clark just shared his first tests with Veo 3.1 inside Flow by Google. His team at Promise has early API access and is already integrating it into their MUSE platform, led by Mariana Acuña Acosta.
Workflow: a mix of image-to-video and text-to-video
Noticed: more dynamic motion, with the new Expand Prompt feature proving especially useful
👉 Watch the tests.
Playful mini-story
Meet Mauricio Tonon. His latest Sora 2 test is a playful mini-story that shows you can keep tight control and still have fun. He wrote the characters' lines, sketched the sound design, and shaped a loose but coherent narrative, a great reminder that learning-by-making counts.
Watch the three trailers here.
AI integrates into a rebranded product.
Martin Haerlin uses AI to integrate a rebranded product directly into existing shots, so teams can repurpose assets without a reshoot. It’s especially smooth when the original product is already in frame, since tracking and lighting cues are there.
Check the full post here.
Kling delivers consistently.
In just 10 minutes, creator Tianyu Xu turned 2 prompts into 13 crisp video clips using Kling 2.1 with start and end frames in a Weavy workflow. His takeaway for designers and architects is simple: you don’t always need the newest model. You need the one that performs for your use case. Sora 2 and Veo 3 are exciting, but for fast, reliable image-to-video, Kling delivers consistently.
What you’ll learn next
LoRA for Architects workshop: Ar. June Chow on practical LoRA training for your preferred styles
Tianyu’s video best practices to speed up your pipeline
👉 Watch the original post and details here.

💡 Insight
Treat AI output like production plates you can trust. Start your first cut in Flow with Veo 3.1 so mood and pacing are baked in from the audio. Use inline edits to add or remove elements instead of regenerating whole shots. When you need quick stills with believable lighting, spin up MAI-Image-1 and comp as you would any plate. Keep an eye on OpenAI’s buildout because more capacity typically means longer durations, steadier motion, and fewer throttles on deadline days. For fast mockups and concept frames on the go, Nano Banana inside Lens and NotebookLM is a handy pocket tool. Small proofs, quick passes, cleaner finals — that’s the move this week.
🔔 Stay in the Loop!
Did you know you can now read all the past issues of The AI Stuff Creators and Artists Should Know that you might have missed?
Curious about C.A.S.H Camp? Click here to book a free 20-minute discovery call with me. It's the perfect no-strings-attached way to explore whether this is the path for you.
Don't forget to follow us on Instagram 📸: @clairexue and @moodelier. Stay connected with us for all the latest and greatest! 🎨👩🎨
Stay in the creative mood and harness the power of AI,
Moodelier and Claire 🌈✨