Freepik AI-generated video

Hi Creatives! 👋

This week’s highlights all point to a practical shift: generative tools are being judged less on “wow” outputs and more on whether they hold up inside real creative pipelines. You can see it across image gen (where typography and repeatability matter), video (where “directing” beats rerolling), and music (where fast drafts help you move from concept to cut sooner). The upside is speed and iteration. The trade-off is higher expectations around consistency, rights, and verification.

This Week’s Highlights:

  • Nano Banana 2: fast image gen that’s actually built for real workflows

  • Gemini’s AI music maker (Lyria 3 beta)

  • Hollywood vs. ByteDance (Seedance 2.0)

  • Paul Trillo x Asteria’s “no slot machine” pipeline

Nano Banana 2: fast image gen that’s actually built for real workflows

Nano Banana 2 (also referred to as Gemini 3.1 Flash Image) is Google DeepMind’s updated image generation and editing model. The focus is less on “one pretty image” and more on outputs that fit real creative workflows: readable text, consistency across a series, and practical sizes for delivery.

What it can do

  • Cleaner text in images
    Better at generating legible headlines, labels, and signage. It also supports translating text inside an image for localization.

  • More consistent series outputs
    Designed to keep the same characters and objects stable across multiple images, useful for storyboards, campaign sets, and recurring characters.

  • Web-grounded generation
    Can pull context from web search to help create infographics and diagrams that reference real-world information.

  • Production-friendly specs
    Supports common aspect ratios and higher-resolution outputs, up to 4K.

What we noticed:

  • Nano Banana: still had some hallucinated details, and the text didn’t land in the typography style we wanted.

  • Nano Banana 2: text looks cleaner and more legible, and the overall style is closer to what we prompted.

For the full comparison, check our previous post on Recraft vs Nano Banana vs Seedream for generating images with text here.

What to watch for

  • Web-grounded does not mean always correct
    Infographics and “data visuals” still need human verification before they ship.

  • Brand and likeness risk rises with realism
    Avoid using real people, logos, or trademarked characters unless you have permission and cleared assets.

  • Provenance is improving, but not universal
    Google highlights SynthID watermarking and C2PA-style credentials. Helpful for transparency, but many platforms and viewers still do not check for them.

Quick examples you can try

1) Poster mockup with readable type

Prompt: Create a minimalist product poster for a skincare serum. Add a bold headline: “Hydration, minus the heaviness.” Add a subhead: “Lightweight gel serum for daily use.” Keep the text perfectly legible and aligned. Output 4:5.

2) Localization pass

Prompt: Translate the headline and CTA into Spanish while keeping the typography style consistent. Update any small details so it feels natural for Spanish-speaking audiences.

3) 3-frame storyboard with the same character

Prompt: Create three frames featuring the same character with identical face, hair, and outfit. Frame 1: entering a studio. Frame 2: arranging props. Frame 3: presenting the final setup. Keep the character consistent across all frames. Output 16:9.
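If you generate series like this often, the consistency trick is simply repeating the exact same character description in every frame prompt. Here’s a minimal Python sketch of that idea as a reusable template (the character spec and frame actions are made-up examples, not anything from the model’s API):

```python
# Minimal sketch: build storyboard prompts that restate the same
# character description verbatim, so outputs are less likely to drift.

# Hypothetical character spec — swap in your own.
CHARACTER = "woman with short black hair, round glasses, mustard jacket"

def storyboard_prompts(character: str, actions: list[str], ratio: str = "16:9") -> list[str]:
    """One prompt per frame, each repeating the full character description."""
    return [
        f"Frame {i}: {character}, {action}. "
        f"Keep the character identical across all frames. Output {ratio}."
        for i, action in enumerate(actions, start=1)
    ]

prompts = storyboard_prompts(
    CHARACTER,
    ["entering a studio", "arranging props", "presenting the final setup"],
)
for p in prompts:
    print(p)
```

Same idea works for campaign sets: lock the type style, layout, and palette into the template once, and only vary the per-frame action.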

Read the full details here.

Gemini’s AI music maker (Lyria 3 beta)

Google is adding AI music generation directly inside the Gemini app, powered by Lyria 3 (beta). The promise is simple: you describe a mood, a genre, or a moment, and Gemini generates a 30-second track you can use as a quick soundtrack, plus optional cover art for sharing.

For creators, this is less about replacing music production and more about speeding up the “I need something that fits this edit” step, especially for social videos, drafts, and concept pitches.

What you can do with Lyria 3 in Gemini

  • Generate music from text prompts
    Describe genre, instruments, tempo, mood, and structure.

  • Use images and video as inspiration
    Upload a photo or short clip, then ask for a track that matches the vibe.

  • Instrumental or lyrical tracks
    Gemini can also generate lyrics, depending on the prompt and settings.

  • Shareable output
    Google positions this as a lightweight creation flow, made to live inside chat.

What to watch for (so it doesn’t backfire)

  • Rights + “style prompts”: Even if a tool claims it avoids copying, anything that references a specific artist or recognizable track can raise legal and brand risk. Keep it genre/mood-based for client work.

  • Edit control: 30 seconds is great for drafts, but you may still need a clean structure (intro/build/hit) for cutting to picture, which usually means multiple generations.

  • Training transparency: Press coverage notes an ongoing debate about what music models are trained on and how clearly companies disclose it. For commercial use, assume scrutiny can happen.

💡 Insight

For creators, this is most useful as a draft soundtrack generator for pitches, animatics, and early edits — when you need a vibe fast and you’re not ready to commission music. For anything commercial, the practical move is to keep prompts generic (mood/genre/instrumentation), track your prompt versions, and have a backup plan with licensed libraries if a project needs cleaner rights posture.
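“Track your prompt versions” can be as simple as an append-only log keyed by a hash of the prompt, so you can later show which prompt produced which draft. A minimal sketch (the field names and example prompt are our own, not part of any Gemini API):

```python
import hashlib
import json
import datetime

def log_prompt(log: list, prompt: str, tool: str, note: str = "") -> dict:
    """Append a timestamped, hash-keyed record of a generation prompt."""
    entry = {
        # Short content hash: same prompt always gets the same id.
        "id": hashlib.sha256(prompt.encode()).hexdigest()[:12],
        "tool": tool,
        "prompt": prompt,
        "note": note,
        "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    log.append(entry)
    return entry

log: list = []
entry = log_prompt(
    log,
    "warm lo-fi instrumental, 90 bpm, soft intro then a hit at 0:15",
    tool="Lyria 3 (beta)",
    note="draft for reel v2",
)
print(json.dumps(entry, indent=2))
```

A plain spreadsheet does the same job; the point is having a dated record of generic, mood-based prompts if a client ever asks where a track came from.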

Read the full details here.

Hollywood vs. ByteDance (Seedance 2.0)

A new Los Angeles Times report says the Motion Picture Association (MPA) has sent a cease-and-desist letter to ByteDance over Seedance 2.0, calling the situation “pervasive copyright infringement.” This is a notable escalation because it moves from individual studio pushback to an industry trade group stepping in with a unified position.

The flashpoint is familiar: viral AI clips that look cinematic and realistic, but appear to use actors’ likenesses and studio-owned IP without permission.

Here’s the core sequence, in plain terms:

  • Seedance 2.0 generates high-fidelity AI video that has been used to create viral clips featuring recognizable actors and well-known franchises.

  • Major studios have already issued legal threats, and the LA Times reports the MPA has now added its own cease-and-desist.

  • The MPA’s argument goes beyond “users misused the tool.” It argues the model was trained on studio works without consent and that the infringement is built into how the system performs.

  • ByteDance has said it respects IP and is working on stronger safeguards.

For creators and brand teams, the takeaway is clear: don’t use Seedance 2.0 for anything publishable or commercial when the output involves any of the following:

  • Real people’s likenesses (celebs, influencers, employees, clients) without explicit permission.

  • Recognizable IP: characters, franchises, scenes, distinctive costumes, or anything that reads like a specific studio property.

  • “In the style of” prompts tied to a known film / show / creator if the goal is to mimic a recognizable look closely enough that viewers could confuse it for the original.

Read the full details here.

Paul Trillo x Asteria’s “no slot machine” pipeline

We’ve been following Paul Trillo, and this LinkedIn post is a good look at how Asteria is using generative video in a real production pipeline, not just text prompts.

For an Aston Martin F1 project (with CoreWeave), they combined live action, miniatures, 3D, VFX, and generative rendering. The key is control: they rebuilt the workflow so they can edit specific frames on a timeline and refine shots repeatedly, instead of rerolling random outputs. Some shots used 10+ keyframes, art-directed in Cinema 4D + Photoshop, then placed back onto a 3D animatic. That made it possible to accurately track logos and brand details, which is usually where AI workflows break.

Why it matters for creatives

  • More predictable iterations: refine a shot with intention, not roulette.

  • Brand-safe direction: they mention training fully licensed custom models on Marey (no scraped data).

  • Signals the shift: generative video as an extension of 3D pipelines, not a replacement.

Trade-off

  • This is still high-effort, high-craft production. The control comes with complexity.

Check their work here.

💡 Insight

If there’s one theme to keep in mind, it’s this: tools are getting better at generating assets, but the value still comes from your decisions. Use these updates to buy time back for what actually differentiates your work: clearer creative intent, stronger brand consistency, and cleaner risk posture.

A simple way to apply this week’s highlights:

  • Treat “web-grounded” outputs and AI infographics as drafts: verify facts before you ship.

  • Keep anything client-facing IP-safe: use cleared assets, avoid recognizable likenesses/logos unless you have permission.

  • Design for repeatability: build small prompt templates and series rules (type style, layout grid, character consistency) so your outputs don’t drift from post to post.

That’s it for this week, folks.

🔔 Stay in the Loop!

Did you know you can now read all our newsletters about The AI Stuff Creators and Artists Should Know that you might have missed?

Don't forget to follow us on Instagram 📸: @clairexue and @moodelier. Stay connected with us for all the latest and greatest! 🎨👩‍🎨

Stay in the creative mood and harness the power of AI,

Moodelier and Claire 🌈

Keep Reading