- The AI Stuff Creators and Artists Should Know
This One-Click Photoshop Tool Is Changing My Process—Here’s Why


Hi Creatives! 👋
This week’s updates hit both sides of the creative brain: the tools that supercharge your workflow—and the conversations that safeguard your art.
From Adobe’s Harmonize in Photoshop (yes, one click and your object melts into the scene—lighting, shadows, and all) to DreamWorks’ bold “no AI training” stance, it’s clear we’re moving into a world where creators get both power and protection.
Plus, if you’ve ever dreamed of running AI locally—no cloud, no lag, just pure image generation—AMD x Stability AI just made it real. And behind the scenes? Studios like Netflix and Disney are quietly reshaping pipelines with tools like Runway. This isn’t just theory. It’s happening now.
Whether you're building moodboards in Photoshop or editing video with Gen AI, one thing’s clear: these aren’t just tools—they're a shift in how we create, collaborate, and claim ownership.
This Week’s Highlights:
🧪 Creative Spark: Photoshop’s AI‑Powered Harmonize
🎬 DreamWorks Says “Hands Off!”
✨ Local Image Gen Just Got Real: AMD x Stable Diffusion 3.0 Medium
🎬 Hollywood’s Quiet Move: Netflix & Disney Turn to AI for Video Creation
🎤 Reflecting on My Panel at Create & Cultivate

🧪 Creative Spark: Photoshop’s AI‑Powered Harmonize

Adobe’s Harmonize (beta) – a one‑click AI tool that blends an added object or figure into a background by matching color, lighting, shadows, and overall tone for seamless compositing.
Built on Project Perfect Blend from Adobe MAX 2024 and powered by Firefly AI.
Why it matters to creatives
🎭 Eliminates manual tweaks: no more fiddling with curves or masking just to make subjects look “in‑scene.”
💡 Chips away at the skills barrier: even beginners can create natural composite scenes in seconds.
Perfect for mood boards, campaign mockups, product photography—you name it.
🗺️ Wrap-Up
This Adobe Photoshop update—especially Harmonize (beta)—is a breath of fresh air for creatives: intelligent, accessible, and workflow‑friendly. Whether you’re experimenting with surreal montages, building campaign visuals, or simply tweaking product photos—Adobe’s generative tools now lighten the technical load and elevate your playbook.
Want to dig deeper?
Read the full write‑up on The Verge: it's packed with examples, pros/cons, and visual demos 👉 "read full details here"
🎬 DreamWorks Says “Hands Off!”
Imagine putting years into crafting characters, jokes, visual style—and then some training algorithm mashes it into bland, soulless generative art. This legal move by DreamWorks is like planting a "Keep Out" flag in front of your creative studio. It says: you can’t leech off our work to train your bots.
DreamWorks just added a legal disclaimer at the end of The Bad Guys 2 that reads:
“All rights in this work are reserved... This work may not be used to train AI.”
The end credits of BAD GUYS 2 said “screw AI. Try stealing our work! It’ll be a crime to your artless ass.” All I must say to this is SAY THAT SHIT WITH YOUR WHOLE CHEST DREAMWORKS ANIMATION! SPEAK ON IT!
— Rendy Jones (@rendy_jones)
11:38 PM • Jul 29, 2025
📌 This isn't just a one-off. They did the same in the How to Train Your Dragon remake, signaling a clear studio-wide stance.

From How to Train Your Dragon
That matters because:
🎨 It preserves the value of human-made art—and supports livelihoods of animators, writers, sound artists, etc.
⚖️ It sets a standard for other creators and studios to follow. If one big player stands firm, others might follow.
🛡️ It gives us more control over how, where, and by whom our work can be used.
So yes—as creatives, we’re watching copyright law evolve in real time, and this is a bright red flag saying: this work is off-limits for AI training.
🔗 Read more:
IGN article
✨ Local Image Gen Just Got Real: AMD x Stable Diffusion 3.0 Medium
Ever dreamed of generating high-res images without the cloud? No subscriptions. No waiting. Just you, your laptop, and your creative brain?
Well… AMD and Stability AI just made that happen.
They’ve released the world’s first Stable Diffusion 3.0 Medium model optimized for Ryzen AI laptops with XDNA 2 NPUs — and it runs fully offline. That means you can now generate stunning 4MP images (2048×2048 px) directly on your device.
🖼️ No internet.
🔐 No data leaving your machine.
⚡ And it’s surprisingly fast — ~70 seconds per image.
🔧 Quick Specs:
Works on laptops with Ryzen AI 300 or AI MAX+ processors (with XDNA 2 NPU)
Needs at least 24 GB RAM (model uses ~9 GB)
Supports offline generation via Amuse 3.1 Beta
Free for personal use or businesses under $1M revenue
AMD calls this a “creator’s model,” and it shows — you can prompt anything from a hyperrealistic soda can to a stylized fashion lookbook. Add structure, style, and even camera settings in your prompts for best results.
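If you want to keep that structure-plus-style-plus-camera prompting consistent across generations, one simple trick is to template your prompts. A minimal sketch (the helper and its field names are my own illustration, not part of Amuse or Stability AI's tooling):

```python
# Hypothetical prompt-builder: assembles a comma-separated prompt from
# structured parts (subject, style, camera settings, extras) so every
# generation follows the same order and nothing gets forgotten.
def build_prompt(subject, style=None, camera=None, extras=()):
    """Join structured prompt parts into one comma-separated string."""
    parts = [subject]
    if style:
        parts.append(style)
    if camera:
        parts.append(camera)
    parts.extend(extras)
    return ", ".join(parts)

prompt = build_prompt(
    "hyperrealistic soda can on a wet marble counter",
    style="studio product photography",
    camera="85mm lens, f/1.8, soft key light",
    extras=("high detail", "4k"),
)
print(prompt)
```

Swap the subject line and keep the style and camera fields fixed, and you get a consistent "look" across a whole lookbook or campaign.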
💡 What This Means for Us:
This shift toward fully local AI image generation is a big win for creators:
It gives us more control, especially for sensitive or branded content
It lowers the barrier to entry — no need for external GPUs or huge cloud fees
And it reminds us that the future of AI isn’t just big servers — it’s personal, portable, and private
Whether you’re sketching ideas in a coffee shop or building full campaigns, tools like this are putting creative power right into our hands — no strings attached.
👉 To read more, click here
🎬 Hollywood’s Quiet Move: Netflix & Disney Turn to AI for Video Creation
In one of our recent newsletters, we shared how Netflix used generative AI to create a collapsing building scene in El Eternauta—produced in a fraction of the usual time and cost. (Revisit that story here) 🌟
Turns out, that was just the start.
🗞️ What’s New?
Netflix and Disney have both been working behind the scenes with Runway, the video tool startup that recently secured $545 million in funding. The platform is now valued at over $3 billion, and it’s not just Netflix—it’s also caught the attention of Disney, AMC, and Lionsgate.

Image generated with Imagen 4
🛠️ Netflix reportedly used Runway’s tech in El Eternauta to bring that dramatic building collapse to life.
🎥 Disney and others are testing Runway for pre-visuals and post-production workflows, with some eyeing exclusive collaborations.
📦 With this kind of backing, Runway is positioning itself as a trusted creative tool—not just for indie filmmakers, but for top-tier studio pipelines.
🔎 For Creative Folks Like Us
AI-assisted video is being used—quietly but consistently—in major productions
Studios are exploring new workflows that combine AI footage generation with traditional production
Tools like Runway are showing up in real-world pipelines, not just in experiments or side projects
💡 Our Take
With El Eternauta, Netflix confirmed that AI-generated visuals can make it to screen—not just in demos, but in actual finished scenes. While it hasn’t been disclosed which platform was used to build that moment, it’s clear that major studios are exploring a new toolbox.
And while the buzz might be happening quietly, the shift is real. These early examples remind us that now is the time to explore what’s possible—and how we as creatives can use these tools to complement our own process with intention and transparency.
👉 Read more from Yahoo Finance
🎤 Reflecting on My Panel at Create & Cultivate
Have you noticed how attitudes toward AI in media have shifted? 🤔
Just three years ago, it felt like a gimmick—or even a threat. A year later, curiosity started to grow. And today? So many of us are embracing AI as a true creative partner.

I had the honor of speaking about this evolution on the "Create, Iterate, Elevate: Your AI Journey Begins Here" panel at the sold-out Create & Cultivate Festival, presented by @qualcomm Snapdragon on July 19, 2025.
Sharing the stage with these powerhouse women was such a gift:
🌟 @erinondemand – Erin Winters, Founder of Erin On Demand
🌟 @thebemusedstudio – Lauren deVane, Founder & Creative Director of Bemused Studio
🌟 @taylor.loren – Content Strategist, Creator & Fractional CMO
And our conversation was beautifully moderated by Carmen True, VP and Head of Compute Marketing at Qualcomm.
A few heartfelt thank-yous:
• To @jaclynrjohnson and the amazing @createcultivate team for inviting me into this vibrant community
• To @qualcomm Snapdragon for sponsoring our session and showing what it looks like when technology empowers creativity
• To @karmtrue for steering such a thoughtful and intentional dialogue around AI, ethics, and opportunity
We covered a lot of ground—from real-life use cases to the challenges of ethical implementation—but one thing became clear: keeping creativity at the center is still our biggest responsibility.
I truly believe that when we combine the ingenuity of the creative and media world with the depth of AI research, we can build something sustainable, thoughtful, and bold.
Collaboration isn’t just nice to have—it’s the only way forward. 🌀

💡 Industry Insight
As AI tools become faster, smarter, and more intuitive—from Photoshop’s Harmonize to fully local image gen—our creative potential expands. But with that comes a bigger responsibility: to shape how these tools are used.
Whether it’s a legal disclaimer from DreamWorks or an AI-generated Netflix scene, the message is the same—the future of creativity will be built by those who stay intentional. Not just fast, but thoughtful. Not just seamless, but respectful.
So experiment boldly, but remember: your voice, your choices, and your ethics are what make AI artful.
That's it for this week, folks!
🔔 Stay in the Loop!
Did you know you can now read all the past newsletters of The AI Stuff Creators and Artists Should Know that you might have missed?
Curious about C.A.S.H Camp? Click here to book a free 20-minute discovery call with me. It's the perfect no-strings-attached way to explore if this is the path for you.
Don't forget to follow us on Instagram 📸: @clairexue and @moodelier. Stay connected with us for all the latest and greatest! 🎨👩🎨
Stay in the creative mood and harness the power of AI,
Moodelier and Claire 🌈✨