Node Workflows Are Turning Experiments Into Playbooks



Hi Creatives! 👋
This week is about creative control with fewer surprises, in your pipeline and in your rights. Tools are blending together faster, "motion control" is becoming a default expectation, and 2026 is shaping up to be a very real year for AI copyright decisions that will land on creators first.
This Week's Highlights:
Adobe + Runway are moving closer (what it could mean for end-to-end workflows) 🎬
2026 is a big year for AI copyright, and creators will feel it in contracts and distribution
Kling 2.6 Motion Control is now in Freepik (more direction, fewer rerolls)
🎬 Creative Feature
László Gaál Tests LTX-V2 (Open-Weight AI Video)
AI Productions x Stereocolor on Hybrid Advertising

In 2025, generative media stopped being a "cool demo" and started behaving like a real production tool. Not because the outputs suddenly became perfect, but because editing and consistency finally got usable enough for teams to iterate without restarting from scratch. Once identity, style, and continuity hold up, you can run the boring (and necessary) parts of production: shot lists, continuity notes, review cycles, controlled revisions, and clean handoffs.
As we head into 2026, the momentum is shifting from prompting in chats to building systems. Node-based workflows are turning creative experiments into repeatable pipelines, and enterprises are moving from pilots to actual playbooks with approvals, brand controls, and clearer norms around rights and disclosure. The next pressure point is already forming: less "generate more video," more "edit fewer clips with more control," especially for motion behavior and continuity. The creators who win here aren't just producing outputs; they're building direction, taste, and repeatable processes that scale.
Read the full LinkedIn post
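To make "experiments into playbooks" concrete, here is a toy sketch of what a node workflow buys you: each step is a named node with explicit inputs, so a run is repeatable and reviewable instead of a one-off chat. This is illustrative Python only, not any real tool's format (ComfyUI and friends each have their own).

```python
# Toy sketch: a "node workflow" as data plus a tiny executor.
# Each node has a function and the names of the nodes it consumes,
# so the same graph can be re-run, reviewed, and handed off.
from typing import Callable

NODES: dict[str, tuple[Callable, list[str]]] = {
    "brief":     (lambda: "30s product teaser, moody lighting", []),
    "shot_list": (lambda b: [f"{b} :: shot {i}" for i in (1, 2, 3)], ["brief"]),
    "generate":  (lambda shots: [s.upper() for s in shots], ["shot_list"]),  # stand-in for a model call
    "review":    (lambda clips: [c for c in clips if "SHOT" in c], ["generate"]),
}

def run(graph: dict) -> dict:
    """Resolve nodes in dependency order, caching results along the way."""
    results: dict = {}
    def resolve(name: str):
        if name not in results:
            fn, deps = graph[name]
            results[name] = fn(*(resolve(d) for d in deps))
        return results[name]
    for name in graph:
        resolve(name)
    return results

print(run(NODES)["review"])  # same inputs -> same pipeline output
```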
A few weeks ago, Adobe + Runway quietly did a "this changes the workflow" kind of collab 🎬
In mid-December, Adobe announced a multi-year partnership with Runway, and the very first outcome is concrete: Runway's Gen 4.5 shows up inside Adobe Firefly. The pitch is basically: generate motion where you already start ideas (Firefly), then finish where you already polish (Premiere Pro and After Effects).
Below are the upsides, plus the stuff that might bite later.
What's great for creators ✅
Less export gymnastics
If you live in Creative Cloud, having Runway video generation inside Firefly reduces the "download, re-upload, convert, repeat" loop. That's time back for actual directing and editing.
Faster concept to cut
For pitch reels, social variations, animatics, and mood tests, this is a clean path: idea → rough sequence → export → refine.
Model choice becomes more practical
Adobe is leaning into "partner models" in Firefly, meaning you can pick a model based on the job instead of forcing every brief through one engine.
A good excuse to stress test
Adobe also marketed a short "unlimited generations" window for some plans, which is useful for creatives who want to do a proper bake-off without obsessing over credits.
Quick reality check: this collab is exciting, but there are a few things to watch.
First, rights and terms. "It's in Adobe" doesn't automatically mean it's cleared for every client use, so you still want to check the fine print for the specific model.
Second, lock-in. If the best stuff shows up in Adobe first, your whole workflow starts living there, which is convenient… until you want flexibility.
Third, expectations. Clients might hear "AI video inside Adobe" and assume consistency is solved. It's not. You still need planning and post.
And last, cost. Testing is cheap and fun, but once a team relies on it, you'll want a real budget for iterations.
Read the full article here.
Here is a workflow by Giovanni Nakpil
Kling 2.6 Motion Control is now in Freepik
Freepik added Kling 2.6 Motion Control to its AI Video Generator, letting you control motion using your own reference video (performance), while swapping the character look with a reference image.
How it works (inputs)
Motion reference video = the movement, pacing, expression you want
Appearance reference image = the character/look
Text prompt = style, background, mood refinements
Key specs
Length: 3–30 seconds
Resolution: 720p (standard), 1080p (Pro)
Why creators should care
Better for sports, martial arts, dance, and fast action, where prompting motion is usually messy
Freepik claims improved hand gestures, synced facial expressions, and one-take consistency up to 30s
Quick workflow
Open the AI Video Generator → choose Kling 2.6 Motion Control → upload your motion video → add your character image → prompt → generate.
Read full details here.
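If it helps to see the three inputs as parameters, here is a hypothetical sketch of a Motion Control call. Freepik ships this as a web UI, so the endpoint and field names below are invented for illustration; only the input roles and the spec limits (3–30 s, 720p/1080p) come from the announcement.

```python
# Hypothetical sketch only: Freepik exposes Motion Control as a web UI, so the
# endpoint and field names here are invented. The three inputs and the spec
# limits (3-30 s, 720p standard / 1080p Pro) are the real part.
import requests

def motion_control(motion_video: str, character_image: str, prompt: str,
                   duration_s: int = 10, resolution: str = "720p"):
    assert 3 <= duration_s <= 30, "Kling 2.6 Motion Control supports 3-30 s"
    assert resolution in ("720p", "1080p"), "1080p requires the Pro tier"
    with open(motion_video, "rb") as vid, open(character_image, "rb") as img:
        return requests.post(
            "https://example.invalid/motion-control",  # placeholder URL
            files={"motion_reference": vid,       # the movement/pacing/expression
                   "appearance_reference": img},  # the character/look
            data={"prompt": prompt, "duration": duration_s,
                  "resolution": resolution},
        )
```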

AI-generated image
2026 is a big year for AI copyright, and creators will feel it
US courts are weighing training as fair use, while the EU is pushing clearer labels for AI-generated content.
Two rule-making forces are converging in 2026:
US courts are moving toward decisions on whether training genAI on copyrighted work is protected by fair use, or whether companies need to license and pay for it.
In 2026, US courts are set up for more rulings after fresh lawsuits and a major 2025 settlement, with billions potentially on the line depending on how fair use is applied to AI training.
So far, courts are split.
Judge William Alsup described AI training as "transformative" under fair use, but still flagged liability around storing pirated books not tied to training.
Judge Vince Chhabria ruled for Meta in one case, yet warned that AI training may often not be fair use, raising concerns about content flooding the market and harming creator incentives.
Licensing and settlement are already happening alongside the lawsuits (including big entertainment and music examples), which suggests the market may move toward paid access even before courts fully settle the doctrine.
More hearings are expected in 2026 across disputes involving music, visual art, and model developers, and outcomes could either clarify fair use or extend the uncertainty.
Insight (US)
This is heading toward a world where "Where did it train?" becomes a standard question in creative procurement, similar to "Do we have usage rights for this stock photo?"
Read full details here.

AI-generated image
The EU is formalizing a Code of Practice to help companies comply with the AI Act's transparency obligations for marking and labelling AI-generated or manipulated content, including deepfakes.
If you ship creative work for clients, publish for an audience, or build a brand, this can affect tool choice, budgets, approvals, and disclosure expectations.
The EU's Code of Practice is meant to help providers and professional deployers comply with Article 50 transparency obligations around AI-generated or manipulated content.
AI tool makers are expected to add machine-readable markings to AI outputs, ideally ones that work across different platforms (see the sketch after this list).
Deployers (professional use) are expected to label deepfakes and AI-generated or manipulated text on matters of public interest, with an exception when there's human review and editorial responsibility.
The Commission's update outlines a timeline: draft now, iteration through early 2026, final target June 2026, with transparency rules becoming applicable August 2, 2026.
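For a feel of what "machine-readable marking" means in practice, here is a minimal sketch that tags a PNG with a provenance text chunk other software can read back. The key names are made up for illustration; actual Article 50 compliance will follow the Code of Practice's own specs (likely C2PA-style provenance), not an ad-hoc tag like this.

```python
# Minimal sketch of a machine-readable marking: a PNG text chunk that other
# software can detect. Key names are illustrative, not a compliance standard.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def tag_as_ai_generated(src: str, dst: str, generator: str) -> None:
    """Embed a simple provenance tag into a PNG's metadata."""
    meta = PngInfo()
    meta.add_text("ai_generated", "true")  # hypothetical key
    meta.add_text("generator", generator)
    Image.open(src).save(dst, pnginfo=meta)

def read_tags(path: str) -> dict:
    return dict(Image.open(path).text)  # PNG text chunks come back as a dict

# tag_as_ai_generated("render.png", "render_tagged.png", "example-model")
# read_tags("render_tagged.png") -> {"ai_generated": "true", "generator": ...}
```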
Insight (EU)
"Labeling" is becoming less of a personal preference and more of a workflow requirement, especially if you work with EU-based brands, platforms, or audiences.
Read full details here.
🎬 Creative Feature
László Gaál Tests LTX-V2 (Open-Weight AI Video)
László Gaál puts Lightricks' LTX-V2 through a fast local test and explains why open-weight matters: you can train your own style, use any aspect ratio, and get tighter control with seed locking, FPS, and up to 4K renders. Their setup: local rendering on an RTX Blackwell, ControlNets (depth + canny), and Veo 2-style prompts (LTX-V2 uses a Google Gemma text encoder).
👉 Check it here.
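The post doesn't show LTX-V2's exact API, but seed locking is worth pausing on, so here is a generic sketch of the pattern. generate_video() is a hypothetical stand-in for whatever local pipeline you run.

```python
# Generic seed-locking pattern. generate_video() is a hypothetical stand-in;
# swap in your actual local pipeline call.
import torch

def generate_video(**settings):
    """Stand-in that just echoes its settings so the sketch runs."""
    return settings

def render_locked(prompt: str, seed: int, fps: int = 24, size=(1216, 704)):
    # Same seed + same settings -> same clip, so you can change one
    # variable (the prompt, a ControlNet weight) and compare like-for-like.
    gen = torch.Generator().manual_seed(seed)
    return generate_video(prompt=prompt, width=size[0], height=size[1],
                          fps=fps, generator=gen)

# Reroll the wording, not the randomness:
# a = render_locked("dawn city flyover", seed=42)
# b = render_locked("dawn city flyover, volumetric fog", seed=42)
```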
AI Productions x Stereocolor on Hybrid Advertising
Billy Boman AI Productions shares a behind-the-scenes look at their collaboration with Stereocolor AB on Allente Nordic's new concept, "Allente World of Entertainment." As the AI Creative Technologist partner, Billy helped build open-source video-to-video workflows to blend real soccer movement with Allente's brand colors, alongside traditional VFX/CGI (including those dragon scenes). It's a clean example of hybrid production done right: live-action realism, AI-assisted motion integration, and classic post all working together.
👉 Check it here.

💡 Insight
Control without chaos isn't just a workflow preference right now. It's becoming a survival skill. This week's updates all point in the same direction: tighter pipelines, more predictable motion, and more pressure to understand what you can actually ship, scale, and legally stand behind.
That is all for this week, folks.
💌 Stay in the Loop!
Did you know that you can now read all the past issues of The AI Stuff Creators and Artists Should Know that you might have missed?
Curious about C.A.S.H Camp? Click here to book a free 20-minute discovery call with me. It's the perfect no-strings-attached way to explore whether this is the path for you.
Don't forget to follow us on Instagram 📸: @clairexue and @moodelier. Stay connected with us for all the latest and greatest! 🎨👩‍🎨
Stay in the creative mood and harness the power of AI,
Moodelier and Claire ✨


