From prompt to pipeline with nodes. Try Runway Workflows or Figma Weave

Hi Creatives! 👋 

This week is about control and continuity—research that flows straight into drafts with Atlas, brand-safe outputs at scale with Adobe’s Foundry, and node-based canvases that let you branch ideas without losing the thread in Runway Workflows and Figma Weave. We also break down likeness and consistency with Nano Banana and stress-test models in the GenAI Image Editing Showdown.

This Week’s Highlights:

  • ChatGPT Atlas — a browser with ChatGPT built in

  • GenAI Image Editing Showdown

  • Adobe’s new “AI Foundry” is here — brand-trained models for teams

  • 🎧 Nano Banana, in plain speak

  • Runway’s new Workflows

  • Figma acquires Weavy, launches “Figma Weave” for AI-native creation

  • 🎬 Creative Feature

    • Lovis Odin — Next Scene V2

    • AI-assisted circular workflow

Life Update 🎨✨

I’ve joined Krea.ai to help grow partnerships and marketing initiatives around creative AI. If you’re building or experimenting in this space, I’d love to connect and learn what you’re working on. Let’s make something beautiful — and a little bit smarter — together.

Perks: Get 20% off your first 3 months on Krea when you sign up with my link.

ChatGPT Atlas — a browser with ChatGPT built in

Atlas is a new macOS browser from OpenAI with a built-in ChatGPT sidebar, optional “Agent Mode,” and tight privacy controls. It helps you research, summarize, compare products, draft copy, and even complete tasks without leaving the page. Windows, iOS, and Android are coming soon.

What dropped

  • Atlas (macOS): Available to Free, Plus, Pro, Go. Business beta live; Enterprise/Edu via admin. Windows + mobile coming soon.

  • Sidebar: Summarize pages, grab quotes, outline, draft emails, parse tables.

  • Agent Mode: For select paid tiers, handles multi-step tasks like research and shopping.

  • Privacy: Browsing data isn’t used for training by default; optional “browser memories.”

  • Release notes: Early fixes and improvements are already rolling out.

Why this matters

  • Faster research for briefs, treatments, and pitch decks.

  • Easy comparisons for gear and tools without tab overload.

  • Draft as you browse: captions, outlines, shot lists.

  • Offload repetitive work with Agent Mode.

  • More control over what gets remembered.

👉 Read full details here: OpenAI

GenAI Image Editing Showdown

Picking an image-editing model for real work? This quick feature lines up multiple tools on the same prompts so you can spot strengths and weak spots fast. Check the site to see the models compared side by side.

Adobe’s new “AI Foundry” is here — brand-trained models for teams

Image Credits: Adobe

Adobe just introduced AI Foundry, a service that helps companies build custom generative models trained on their own brand assets and IP. These models can output text, images, video, and even 3D scenes, and the service uses usage-based pricing so teams pay for what they actually run.

Why it matters for creatives 💡

  • On-brand at scale. Train once on style guides, product shots, palettes, and tone, then spin seasonal, multilingual, and format variants without starting from zero.

  • Cleaner rights posture. Firefly is trained on licensed data, and Foundry fine-tunes with your IP to help reduce copyright headaches.

  • Faster cycles. Great for always-on social, promo cut-downs, and product page refreshes where consistency beats one-off hero shots.

Try this next 🧰

  1. Assemble a “training kit.” Logo packs, typography, color tokens, product angles, VO examples, and proof of rights in one place.

  2. Write guardrails. Prompt do’s and don’ts plus a short house style for copy and visuals.

  3. Pilot one campaign. Narrow goal, clear metrics, weekly review of outputs.

  4. Watch consumption. Set caps and track generations since pricing is usage-based.

Who’s already using it: Early adopters include Home Depot and Walt Disney Imagineering; reports also point to Red Sea Global evaluating Adobe’s enterprise AI stack.

Note: AI Foundry is enterprise-only. You access it via Adobe’s business page and a “contact sales/request info” flow; it’s positioned for organizations with Admin Console–managed licenses, not individual creators.

Read full details here.

🎧 Nano Banana, in plain speak

The team spills that Nano Banana (Gemini 2.5 Flash Image) is built to cut the boring stuff so creators can spend way more time actually creating. Think conversational edits with the image quality you want, plus likeness and consistency that hold up for real work.

  • Less tedious, more creative: Removes the grind so ideas stay front and center.

  • Origin: Imagen’s visual fidelity meets Gemini’s interactive editing.

  • Zero-shot likeness: A single reference photo is enough for outputs that look like you—no fine-tuning required.

  • Character consistency: Same subject across frames for storyboards and brand work.

  • Speed as a feature: Low latency encourages more tries and better results.

  • Raise the floor: Focus on improving the worst outputs, not cherry-picks.

  • Right UI for the job: Chat/voice for quick tweaks, workflow graphs when you need control.

  • Visual reasoning: Strong at diagrams/explainers for teaching and pitching.

  • Enterprise path: Long brand guides in context for tighter compliance over time.

  • Sleeper feature: Multi-image “interleaved” stories for consistent sequences.

Insight

For creative teams, the real advantage is control plus speed. Likeness, consistency, and rapid retries turn look-dev and comps into an afternoon, not a week. As long-context guidance improves, this fits cleanly into brand-safe pipelines.

Watch the podcast here.

Runway’s new Workflows

Runway’s new Workflows is a big, node-based canvas where you chain text→image, image→video, and video→video steps. You’re not limited to Runway models — you can mix in third-party models — then tweak shots, swap backgrounds, and keep motion and dialogue intact across steps.

Why creatives should care

  • Build repeatable pipelines for brand-safe looks

  • Branch variations fast for A/B shots and social cuts

  • Save templates so your team stays consistent across scenes

  • Mix LLM nodes to auto-polish prompts between steps

Quick start

  1. Go to Workflows, then New Workflow.

  2. Add nodes: Text, Gen-4 Image, and Gen-4 Image→Video.

  3. Connect outputs to inputs, then click Run to test a node or Run All for the chain.

  4. Save as a template and branch prompts for versions.

Heads-up: You’ll need a paid Runway account to use Workflows.

Read more here.

Figma acquires Weavy, launches “Figma Weave” for AI-native creation

Figma just bought Weavy and is rolling it into Figma Weave, a model-agnostic, node-based workspace for generating and editing images, video, animation, motion design, and VFX right on a browser canvas. Think pro controls plus AI outputs you can branch, remix, and refine. 🎬

What’s new

  • Model-agnostic: Pick the right model per task. Examples mentioned include Seedance, Sora, Veo for cinematic video, Flux and Ideogram for realism, and Nano Banana or Seedream for precision.

  • Node-based workflow: Chain steps, branch ideas, and keep versions you can tweak without breaking the flow.

  • Built-in editing: Masking, lighting tweaks, color grading, and more on the same canvas.

  • Community and scale: Already used by indie creators, startups, and Fortune 100 teams. Figma is growing the Weave team in Tel Aviv to accelerate the roadmap.

  • All inside Figma: Expands the platform from design files to a broader media pipeline spanning concept to final polish.

Early use cases to try

  • Pitch videos and mood films: Block scenes with AI shots, color grade, and export faster for stakeholder reviews.

  • Brand kits to content: Feed brand palettes and references, generate social banners, then fine-tune typography and grading.

  • Product mocks and motion tests: Swap models for realism or stylization, keep both branches for side-by-side review.

  • VFX previz: Rough in shots with Seedance or Sora, then refine elements and passes without leaving the canvas.

Read more and the official announcement here.

🎬 Creative Feature

Lovis Odin — Next Scene V2

A purpose-built LoRA for Qwen Image Edit 2509 that generates seamless “next shots” while keeping characters, lighting, and mood consistent. Trained on thousands of paired cinematic shots for smoother, more emotional transitions.

  • Stronger shot-to-shot consistency

  • Better lighting and character preservation

  • Cleaner framing with no black bar artifacts

Use it in ComfyUI or any diffusers pipeline. Type “Next Scene:” and describe what happens next.
Open-source and built for filmmakers, animators, and storytellers.
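
The trigger-phrase convention above is easy to wire into any prompting script. A minimal sketch (the helper name and wrapping are our own, illustrative only—not part of the release):

```python
# Illustrative helper, not part of the official Next Scene V2 release:
# it just builds a prompt in the "Next Scene: <description>" format the
# LoRA expects, so shot descriptions stay consistently formatted.
def next_scene_prompt(description: str) -> str:
    """Prefix a shot description with the LoRA's trigger phrase."""
    return f"Next Scene: {description.strip()}"

prompt = next_scene_prompt("the camera pulls back to reveal the city at dawn")
print(prompt)  # Next Scene: the camera pulls back to reveal the city at dawn
```

Pass the resulting string as the edit prompt in ComfyUI or your diffusers pipeline.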

Try it: Hugging Face workflow + fal thread

👉 Check it here.

AI-assisted circular workflow

László has shared a longer Director’s Cut of his Toyota spot, expanding on an AI-assisted circular workflow where ideas are generated, tested, approved, and refined in quick loops. The film traces “mobility” from life’s first movement to horse-powered travel to modern cars, showing how freedom of movement connects people and places. Created with Toyota and collaborators at Google, including Randy Han and Mitsuhito Hirose.

👉 Check it here.

Chroma Awards

Heads up: Chroma Awards deadline is now Nov 17, 11:45 PM PST. I’m on the finals jury and would love to see your AI film, music video, or game in the mix. $175,000+ in prizes and strong exposure to studios and brands.

Enter at chromaawards.com and share with a friend who needs the nudge.

💡 Insight

This week’s thread is “control without chaos.” Use ChatGPT Atlas when you need research to flow straight into first drafts and moodboards with less context-switching, but keep a human eye on nuance and citations; treat the GenAI Image Editing Showdown as your fast picker for who’s best at hair edges, product cleanup, text, and multi-pass retouch so the team isn’t guessing; reach for Adobe’s AI Foundry when you need on-brand, governed outputs at scale; lean on Nano Banana to keep faces and product forms consistent across variations while watching for subtle drift; build repeatable spots in Runway Workflows with node-based templates you can rerun for each campaign; and explore Figma Weave to keep AI image/video iterations, prompts, and feedback in the same file so handoffs shrink and creative history stays reviewable.

That is all for this week, folks.

🔔 Stay in the Loop!

Did you know you can now catch up on all our newsletters about The AI Stuff Creators and Artists Should Know that you might have missed?

Curious about C.A.S.H Camp? Click here to book a free 20-minute discovery call with me. It's the perfect no-strings-attached way to explore whether this is the path for you.

Don't forget to follow us on Instagram 📸: @clairexue and @moodelier. Stay connected with us for all the latest and greatest! 🎨👩‍🎨

Stay in the creative mood and harness the power of AI,

Moodelier and Claire 🌈✨