Act‐Two, Unlimited AI, Talking Images & Legal Clarity: This Week’s Creative Power‐Pack


Hi Creatives! 👋
This week’s roundup is packed with seriously cool AI upgrades—from Runway’s next-level motion capture to chatty images and full-power agents. If you're ready to push your creative boundaries and streamline your process, you're in the right place.
This Week’s Highlights:
📖 Runway’s Act‑Two
🎉 Unlimited AI Creations Just Became a Thing
🎬 Flow’s New Feature: Images that Speak!
🎉 Hot Launch: ChatGPT Agent is Here!
🌟 In the Studio: Netflix's Bold AI Move
✨ The Emergence of AI Safety Layers & Beyond
📌 Creative Picks

📖 Runway’s Act‑Two
Act‑Two is Runway’s next-gen, Gen‑4-based motion-capture tool. From a single driving performance video plus a character reference, it can animate a full upper-body performance—face, head, hands, and arms—onto any character image or video.
Key Capabilities
Gesture control: Transfer body and hand movements (when using a character image)
Environmental motion: Automatically adds subtle camera shake and background motion for realism
Facial expressiveness: Adjustable intensity, with a default level of 3 for balanced realism
🛠️ How to Use & Tutorials
Runway’s official help article provides a clear tutorial on creating with Act‑Two:
Upload a performance video
Provide character image/video input
Adjust gesture and expressiveness settings
Render and download
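If you ever want to batch several takes instead of clicking through the web app, those four steps map neatly onto a small script. The sketch below is purely illustrative: the endpoint, field names, and polling route are placeholders made up for this example, since Act-Two is described here as a web-UI workflow and Runway’s developer API may expose it differently (or not at all).

import time
import requests

API_BASE = "https://api.example-video-host.dev/v1"   # placeholder base URL, not Runway's real API
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

# Steps 1-2: submit the driving performance video and the character reference.
task = requests.post(
    f"{API_BASE}/character_performance",              # placeholder route
    headers=HEADERS,
    json={
        "performance_video_url": "https://example.com/performance.mp4",
        "character_image_url": "https://example.com/character.png",
        # Step 3: gesture transfer plus facial expressiveness (defaults to 3 in the UI).
        "gestures": True,
        "expressiveness": 3,
    },
    timeout=30,
).json()

# Step 4: poll until the render finishes, then grab the output URL.
while True:
    status = requests.get(f"{API_BASE}/tasks/{task['id']}", headers=HEADERS, timeout=30).json()
    if status["status"] in ("SUCCEEDED", "FAILED"):
        break
    time.sleep(10)

print(status.get("output_url"))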
Read the full details and get early access to Act-Two right here:
🔗 Runway Act-Two Official Page
🎉 Unlimited AI Creations Just Became a Thing
Freepik has just removed all usage caps for AI image generation on its Premium+ and Pro plans—meaning:
Truly unlimited image creations using top-tier models like Mystic, Google Imagen, Flux, Seedream, Ideogram, Runway, GPT, and Classic 🌟
Unlimited editing tools included—Retouch, Resize, Upscale, background edits, and more 🛠️
AI video creators rejoice: MiniMax 02 video model is unlimited through July, with most other models going unlimited by month's end 🎥
Pricing: Premium+ is about $24.50/month; Pro, which adds full video features and a merch license, runs roughly $39–$250/month depending on the features included
Why it matters:
Creative freedom: No tracking of credits or hit-by-limit pop-ups—just pure experimentation.
Smoother workflow: Generate, edit, and output media seamlessly in one unified space.
Big pressure on competitors: Freepik's bold move sets a new bar—others may have to follow.
What you can do now:
Rapidly test design ideas, moodboards, and iterative campaigns without worrying about credit burn
Explore AI-generated video content (MiniMax 02 is unlimited through July)
Dive into merch-ready designs if you’re on Pro
Bottom line:
With no more “credit countdowns,” Freepik is unlocking full creative flow. If you’re on Premium+ or Pro, it’s your moment to push boundaries—without breaking the bank.
🎬 Flow’s New Feature: Images that Speak!
Remember back in May when Google dropped its Flow AI video tool at I/O—turning single images into cinematic clips? Well, it just got way more magical. Now you can MAKE THEM TALK. Yep—images that SPEAK. 😲
Quick Highlights:
Speak Your Story: Simply type dialogue into your video prompts, and Flow creates realistic voiceovers.
Global Access: Now available in 140+ countries, opening doors for diverse creative storytelling worldwide.
Fast & Easy: No extra audio layering needed—Flow handles visuals and voices in one go!
Great for quick social content, engaging product demos, or just experimenting with your storytelling ideas.
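For instance, an illustrative prompt (not an official Google template) might read: “A barista slides a latte across the counter, looks up, and says: ‘First one’s on the house.’ Warm morning light, soft cafe chatter.” Flow generates the shot and the spoken line together in a single pass.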
🎉 Hot Launch: ChatGPT Agent is Here!
OpenAI just rolled out its new Agent Mode, combining deep research, web browsing, and code-gen into a single, powerful AI coworker. It’s like having a digital assistant with its own virtual computer. Here’s why it matters ⬇️
🌟 Meet the Agent Team
A dream team of OpenAI experts—Yash (Operator), Jing and Issa (Deep Research), and Casey (Agents)—came together to unite their skills. The result? A tool that thinks, browses, clicks, codes, and executes complex tasks seamlessly. No more fragmented workflows—Agent Mode brings everything under one roof.
🧠 From Proof-of-Concept to Power Tool
Earlier this year:
Operator could browse & transact online.
Deep Research mastered long-form investigations.
But users wanted both in one package. And now they’ve got it: a virtual computer equipped with:
Text browser for research
Visual browser to click and scroll
Terminal to run code, generate slides & spreadsheets
API access (Gmail, GitHub, etc.), plus an image-gen tool for visuals
👥 Who Can Use It—and How Much?
Accessible to: Pro, Plus, and Team users (and hitting Enterprise/Education soon)
Usage cap: Pro users get 400 agent prompts/month, Plus/Team get 40. Extra credits can be purchased.
🌍 Why Creatives Should Care
✨ Time saver extraordinaire: Long-form content, pitch decks, event planning, gifts, content repurposing—all made faster.
🎨 Focus on what you love: Let the AI handle the grind—research, spreadsheets, booking, code—while you bring imagination.
🤝 Collaborative superpower: Agent Mode can pause, ask questions, adapt—just like working with another human.
Bring your vision to life: From marketing campaigns to content series, the agent does the groundwork; your creativity does the rest.
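Agent Mode itself lives inside the ChatGPT apps, but if you like scripting, OpenAI’s Responses API gives you a small taste of agent-style research from your own code. Here is a minimal sketch, assuming the openai Python SDK and the hosted web-search tool (this is the developer API, not Agent Mode, and the prompt is just an example):

import os
from openai import OpenAI

# Assumes OPENAI_API_KEY is set in your environment.
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

response = client.responses.create(
    model="gpt-4.1",
    tools=[{"type": "web_search_preview"}],   # hosted web-search tool in the Responses API
    input=(
        "Find three recent examples of brands using AI-generated short video in social "
        "campaigns, and summarize each in two sentences with a source link."
    ),
)

print(response.output_text)   # the assembled text answer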
🔗 Read Full Details Here
For the official scoop and demos, check out OpenAI’s announcement: “Introducing ChatGPT Agent: bridging research and action”
🌟 In the Studio: Netflix's Bold AI Move

AI-Generated Image
Netflix has officially dropped its first final footage created with generative AI! The breakthrough scene appears in the Argentine sci-fi series El Eternauta, where a Buenos Aires building dramatically collapses—brought to life about 10× faster and at lower cost than traditional VFX methods, thanks to GenAI.
Ted Sarandos, Netflix’s co‑CEO, said it best: “It’s real people doing real work with better tools.” This scene wasn’t just a budget hack—it was proof that creativity and efficiency can dance together, even in budget‑tight projects.
🔍 Why This Matters for Creative Minds
Think of it like discovering a secret shortcut in a familiar city. Generative AI isn’t about replacing artists—it’s about giving them super‑charged tools. Here's how it applies to your creative journey:
Speedy prototyping – Visualize epic shots overnight, skipping weeks of back‑and‑forth with 3D teams.
Budget breathing room – Add flourish to indie sets—like grand collapses or futuristic landscapes—without breaking the bank.
Creative juice boost – Pre‑viz, storyboard, moodboards—AI helps explore ideas fast and flexibly.
Spaces open up – As crunched‑budget shows get ambitious, tiny teams can now tell bold stories that once needed blockbuster dollars.
💡 How You Can Bring This to Life
Explore GenAI tools: Try VFX‑enabled platforms like Runway, Stable Diffusion, or niche indie VFX apps for fast mockups.
AI pre‑pro workflows: Use generative tools to draft scene compositions, lighting moods, and camera angles before you shoot.
Blend AI into marketing: Voice search suggestions, AI-generated trailers, and smart thumbnails—all keep your audience engaged.
📣 Final Take
Netflix’s leap isn’t just about streamers—it’s a signpost for creative empowerment. AI is becoming a co‑creator: speeding up ideation, unlocking visuals, and lowering cost barriers. Whether you’re working indie or pitching big, this is a nod to lean innovation with punch.
✨ The Emergence of AI Safety Layers & Beyond

🖼️ AI Safety Layers: How AI Protects Us In The Era Ahead by Scott Belsky
🛡️ Safety Layers to Watch Over You
AI-powered safety agents built into your OS/apps can alert you to scams, phishing, voice‑cloning tricks—or even doomscrolling triggers.
They help you send fewer emotionally charged messages and nudge compliance in workplace settings.
🤖 AI as Support, Not Surveillance
These models are designed to run locally, preserving your privacy while offering real‑time, empathetic intervention—like a safety net, not Big Brother.
📿 Bridging AI & Religion
Belsky explores how AI’s ability to respond might fill a spiritual gap. It’s a shift from the silent divine to something interactive and contextual.
🤝 Founders Aren’t Solo Acts
Behind every “visionary” founder is a founding team. Belsky honors Bryan Latten (Behance) and urges celebrating the collective—your creative collaborators—who shape every project.
🔍 Why Creatives Should Care
Guarded creativity: Spend less time stressed or sidetracked, and more time in flow.
Human-centered feedback: Get gentle nudges and better context before hitting ‘send’ or publishing.
Team gratitude: Spotlight co-creators—the people who really make the magic happen.
Read the full newsletter for an immersive dive into these ideas:
“The Emergence Of ‘Safety Layers,’ Religion In The Era of AI, Angel Thesis, & Founder Truths” by Scott Belsky
📌 Creative Picks
Henry Daubrez – multi‑model AI animation workflow
Henry put together a short, stop-motion-style animation featuring a talking character and full control over look and camera. He shared the results on LinkedIn and said it “worked better than expected.”
How he did it (quick breakdown):
Backgrounds – Made painterly backdrops with a tweaked Midjourney.
Character – Created a character sheet in Imagen 3, then refined it in Flux Kontext (texture, lighting).
Lighting – Matched light between character and background using Flux.
Action scenes – Used Google Veo 2 (“Ingredients”) to set up scenes and script camera moves (using prompts like “transition to”).
Dialogue – Took a close-up, ran it through Veo 3 for synced motion and voice. He cloned a voice via ElevenLabs Pro and refined it (see the sketch after this breakdown).
Final polish – Quick compositing, added stock effects, and scored it using Udio.
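For the voice-clone step, here is a minimal sketch of how a single dialogue line could be rendered with the elevenlabs Python SDK, assuming you have already cloned a voice in your ElevenLabs account (the voice ID, line of dialogue, and file names are placeholders):

from elevenlabs.client import ElevenLabs
from elevenlabs import save

client = ElevenLabs(api_key="YOUR_ELEVENLABS_API_KEY")

audio = client.text_to_speech.convert(
    voice_id="YOUR_CLONED_VOICE_ID",        # the cloned voice from your voice library
    text="Well, that worked better than expected.",
    model_id="eleven_multilingual_v2",      # a current multilingual TTS model
    output_format="mp3_44100_128",
)

save(audio, "dialogue_take_01.mp3")         # write the generated line to disk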
Tools he used:
Midjourney, Imagen 3, Flux Kontext
Google Veo 2 & Veo 3
ElevenLabs Pro, voice modifier
Compositing tools + stock FX + Udio
Why it’s cool:
He broke the project into clear, modular steps—like a visual toolkit. What once took weeks now comes together in just a few hours, perfect for rapid experimentation.
Sebastian Grey (via DreamZero Ai)
They used Veo 3 to animate a still image of Jack “Punch” Perkins — the first Black officer and captain in Royal Navy history — into a short video clip with synced voice and ambiance.
Why it stands out:
In just a few seconds, the clip brings a historical figure to life, combining visuals, dialogue, and atmosphere. It’s a fresh take on “generative cinema,” showing you don’t need a full film crew to tell powerful stories.
Likely tools used:
Veo 3 (via Google Gemini/Flow) for photo-to-video conversion with sound.
Possibly Canva AI’s video features, which are powered by Veo 3 for easy prompt-based creation.
Yigit Kirca - Creator at Wonder Studios
Yigit used Google DeepMind’s Veo 3 Image‑to‑Video With Audio to turn their own images into a short animated clip. The result featured consistent characters, lip syncing, sound effects, and studio-level visuals—all in one go.
Tools they likely used:
Gemini app (Pro/Ultra) or Flow to run the Veo 3 model
Uploaded their image, typed a prompt describing visuals and sounds
Let Veo 3 generate the video with built-in lip sync and audio
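Yigit worked in the Gemini and Flow apps, but the Veo family is also reachable from your own scripts through Google’s Gemini API. Below is a minimal sketch using the google-genai Python SDK; the file names and prompt are placeholders, the example uses the stable Veo 2 model string (which produces silent video), and the Veo 3 audio-and-lip-sync behaviour described above may be limited to Flow, the Gemini app, or specific API tiers:

import time
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_GEMINI_API_KEY")

# The still image you want to bring to life (placeholder file name).
with open("portrait_still.png", "rb") as f:
    image = types.Image(image_bytes=f.read(), mime_type="image/png")

operation = client.models.generate_videos(
    model="veo-2.0-generate-001",   # swap in a Veo 3 model string if your account has access
    prompt="The subject looks up and smiles while harbor ambiance swells in the background.",
    image=image,
    config=types.GenerateVideosConfig(aspect_ratio="16:9"),
)

# Video generation is asynchronous: poll the long-running operation until it finishes.
while not operation.done:
    time.sleep(20)
    operation = client.operations.get(operation)

# Download and save the first generated clip.
video = operation.response.generated_videos[0]
client.files.download(file=video.video)
video.video.save("animated_portrait.mp4")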

💡 Industry Insight:
This week’s toolbox brings not just bolder creation but smarter execution. With tools like Runway Act‑Two, unlimited AI from Freepik, Flow’s voice-integrated visuals, and ChatGPT Agent Mode, you can craft richer stories faster.
Just remember: designing with purpose is key. Using AI openly, responsibly, and with respect for creative rights keeps your work authentic and future-proof. Ethical innovation isn’t optional—it’s the new standard.
That’s it for this week, folks.
🔔 Stay in the Loop!
Did you know you can now catch up on all the issues of The AI Stuff Creators and Artists Should Know that you might have missed?
Curious about C.A.S.H Camp? Click here to book a free 20-minute discovery call with me. It’s the perfect no-strings-attached way to explore whether this is the path for you.
Don't forget to follow us on Instagram 📸: @clairexue and @moodelier. Stay connected with us for all the latest and greatest! 🎨👩🎨
Stay in the creative mood and harness the power of AI,
Moodelier and Claire 🌈✨