SIRI GETS GEMINI. CREATORS GET CONTROL.

Hi Creatives! 👋 

This week’s theme is simple: the “default” is getting smarter, and that’s going to change what clients, teams, and audiences expect from your work.

When Apple and Google start reshaping the assistant stack (yes, Siri gets Gemini), that’s not just tech gossip; it’s a signal that creative workflows are moving closer to the OS level. At the same time, tools like LTX-2 and Freepik’s Change Camera are pushing generative video and image edits toward something that looks less like “a demo clip” and more like an editable pipeline. Add Niji 7 for creators who live in illustration and stylization, plus “Variations + Storyboard” features that nudge people toward planning instead of random spawning, and you’ve got a week that quietly screams:

Speed is cheap now. Direction is the premium.

This Week’s Highlights:

  • Apple + Google redraw the assistant map, Siri gets Gemini

  • Real-time captions that actually keep up

  • 🎬 LTX-2, the “ship-ready” AI video update

  • Niji 7, cleaner style control for illustration workflows

  • Freepik Change Camera

  • Variations + Storyboard, planning becomes a feature

Everything Everywhere AI at Once (Upscale Day LA)

I’ll be at Everything Everywhere AI at Once (Upscale Day LA), a director’s-cut-style meetup for the people building and using AI in film. Expect real talk on what’s exciting, what’s risky, and what “craft” should still mean when workflows get faster.

Why it’s worth your time

  • Keynote from Joaquín Cuenca (Freepik CEO)

  • AI in film workflow session, plus an AI video showcase

  • Panel on how AI is reshaping production, talent, ownership, and ethics

  • Networking, DJ, and a long hang for meeting collaborators

Seats are limited + approval required. If you’re in LA and working in film, VFX, directing, production, or creative tech, come through.
👉 RSVP here: Upscale Day LA event page

Apple + Google redraw the assistant map, Siri gets Gemini

Apple and Google announced a multi-year collaboration where the next generation of Apple Foundation Models will be based on Google’s Gemini models and cloud tech, and those models will power future Apple Intelligence features, including a more personalized Siri expected this year. Apple also says Apple Intelligence will continue to run on device and through its Private Cloud Compute, keeping Apple’s privacy posture.

Why this matters for creatives

A genuinely “useful” assistant changes the shape of creative work, not by replacing taste, but by compressing the annoying parts:

  • Pre-production at speed: turning a messy brief into shot lists, prop lists, alt concepts, and production checklists faster.

  • Cross app handoffs: moving from “idea” to “calendar, notes, tasks, and files” without you playing project manager for your own brain.

  • Personal context as a feature: if Siri can reference your project history, brand notes, and prior decisions, your creative direction gets more consistent across iterations.

If this actually lands, the biggest win is less about novelty and more about fewer context resets.

The tradeoffs (quick version)

  1. Style flattening is real
    When the same assistant becomes everyone’s default, outputs start to sound alike. Use it for structure, then add your taste so your work doesn’t blend into the feed.

  2. “Privacy-forward” ≠ “client-safe”
    Even with Apple’s Private Cloud Compute, don’t paste sensitive briefs, unreleased product details, contracts, or performance data into any assistant. Think lower risk, not no risk.

  3. This is a distribution deal, not just a feature
    Apple ships a stronger Siri faster. Google gets massive default placement. Creatives inherit new expectations around speed, volume, and “just ask the assistant.”

Read more details here.

Real-time captions that actually keep up

You know that moment when you’re mid-interview, mid-livestream, or mid-client review and someone says the one line you need to clip later… and you’re stuck scrubbing the timeline to find it?

ElevenLabs just released Scribe v2 Realtime to make that pain smaller: live transcription in under 150 ms, designed for real-time captioning, meeting assistants, and voice agents. 📝

What’s new (creator relevant highlights)

  • Real-time transcription under 150 ms so captions feel in sync.

  • 90+ languages plus auto language detection so bilingual sessions are easier.

  • “Negative latency” prediction that guesses next words and punctuation to keep text feeling instant.

  • Manual commit + Voice Activity Detection so you can control when text “locks” for cleaner captions and notes (see the sketch after this list).

  • Enterprise privacy options like zero-retention mode and EU and India data residency for sensitive client work.
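
If you’re the developer-curious type, here’s a minimal sketch of the shape a live-captioning loop usually takes: stream small audio chunks over a WebSocket and print partial transcripts as they arrive, committing when you want text to lock. The endpoint URL, message fields, and chunk size below are illustrative assumptions, not ElevenLabs’ documented realtime API, so check their docs for the real parameters.

    # Illustrative sketch only: the URL and message shapes below are assumptions,
    # not the documented Scribe v2 Realtime API.
    import asyncio
    import json
    import websockets  # pip install websockets

    REALTIME_URL = "wss://example.invalid/scribe-v2-realtime"  # hypothetical endpoint

    async def live_captions(pcm_chunks):
        """Stream audio chunks and print partial transcripts as they arrive."""
        async with websockets.connect(REALTIME_URL) as ws:

            async def sender():
                for chunk in pcm_chunks:       # e.g. 20 ms slices of 16 kHz PCM audio
                    await ws.send(chunk)
                    await asyncio.sleep(0.02)  # pace the stream at roughly real time
                await ws.send(json.dumps({"type": "commit"}))  # hypothetical manual-commit message

            async def receiver():
                async for message in ws:
                    event = json.loads(message)  # assumed shape: {"text": "...", "final": bool}
                    prefix = "FINAL " if event.get("final") else "...    "
                    print(prefix + event.get("text", ""))

            await asyncio.gather(sender(), receiver())

    # asyncio.run(live_captions(my_audio_chunks))

The exact code matters less than the idea: “commit control” is something you can design around, like locking captions at sentence boundaries so your lower-thirds stay clean.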

Why this matters for creatives

If you edit, produce, or run sessions live, this can tighten your whole workflow:

  • Faster clips: searchable transcript while you’re still recording.

  • Cleaner captions: fewer rewrites mid sentence if you use commit control.

  • Better collab: producer, editor, and social can pull selects from the same text doc.

Read full details here.

🎬 LTX-2, the “ship-ready” AI video update

If you’ve ever loved a generated clip… then immediately hated it in the edit because motion got weird or identity drifted, LTX-2 is trying to solve that part of the job.

Key specs creatives will care about

  • Up to 20 seconds per clip 

  • Native 4K output

  • Up to 50 FPS for smoother motion and better slowdowns

  • Audio + video generated together so timing can feel more cohesive

  • Two model options: ltx-2-fast (speed) and ltx-2-pro (quality)

Why this matters for your workflow

  • More usable iterations: Longer clips plus stronger control means fewer “restart from scratch” cycles.

  • Better for story and brand: Consistent motion and camera behavior are what make content feel intentional, not random.

  • Audio included: Even if you replace it later, synced audio can speed up early cuts and pacing decisions.

Read full details here.

Niji 7, cleaner style control for illustration workflows

Niji 7 (the newest version of Midjourney’s anime-focused Niji model) is aimed at one thing creatives actually care about: cleaner, more consistent anime outputs with fewer rerolls.

What’s new (key details)

  • Better anime coherence: faces, eyes, and small details hold together more reliably 👁️

  • Stronger prompt understanding: more literal + more controllable (great for art direction) 🧭

  • Improved text rendering: nicer for posters, panels, signage, UI-ish concepts 📝

  • Better sref performance: style reference is a bigger win in this release 🎨

  • No cref in v7: character reference isn’t supported (they hint at a different solution coming later) 👀

Why this matters for creatives

  • Production-friendly iteration: fewer “almost” images means faster client rounds and series work (key art, webtoon panels, campaign visuals).

  • More direction, less chaos: when a model is more literal, your art direction actually sticks, but you’ll want to be more specific in prompts.

Quick prompting tip

If you want consistency: add counts + placement (left/right, foreground/background) and keep the first version simple before you go cinematic.
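
Here’s an illustrative example (a made-up prompt, assuming Niji 7 keeps Midjourney’s existing --niji and --sref parameter style):

    two characters on a rainy street, girl in a red raincoat on the left (foreground), boy holding a clear umbrella on the right (background), neon signage behind them, soft flat lighting --niji 7 --sref [your style reference]

Lock the composition like this first, then layer in the cinematic adjectives on the next pass.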

Read more details here.

Freepik Change Camera 📸🔁

Ever get one perfect image… then need a different angle for an ad, deck, or storyboard?

Freepik’s Change Camera lets you take one image and generate new camera viewpoints (think: same scene, different angle) so you don’t have to start from scratch.

Why it matters for creatives

  • Faster coverage: hero shot, 3/4 view, close-up, alternate framing

  • More consistency: keeps the vibe while you explore angles

  • Better storyboarding: build a sequence from one strong frame

Read full details here: the official announcement + tutorial.

Variations + Storyboard 🎬

One image → multiple directions → a full sequence.

Start with a single frame on Freepik, open Variations to generate different takes, then use Storyboard to quickly turn them into a multi-shot flow.

Why it matters for creatives

  • Turn key art into a sequence for campaigns, pitches, and social series

  • Move like a director: pick the path, not random outputs

  • Speed: from one image to a storyboard-ready set of shots

Read full details here: the official Variations post + storyboard tutorial.

💡 Insight

Here’s the uncomfortable truth that’s also kind of freeing: as tools get better, “good enough” gets crowded. When anyone can generate a decent image, a decent clip, a decent caption, the work that stands out is the work with taste, constraints, and intent.

This week’s updates point to a shift in creative advantage:

  • Assistants are becoming infrastructure, not apps. That means creative teams will soon be judged on how well they design their workflows, not how fast they can click around.

  • Camera control and storyboarding features are a tell: the winners won’t be the people who generate the most; they’ll be the people who can art direct, revise, and keep continuity.

  • Real time speech and captions reduce friction, which is great for accessibility and speed, but also raises expectations. If everything is easier to produce, audiences get pickier faster.

So if you’re wondering “what’s my edge now,” try this for the next week:

  1. Write the brief like a director, not a prompter.

  2. Choose constraints on purpose: a style bible, camera rules, references, what you refuse to do.

  3. Use the tools for iteration, not identity. Your taste is the brand.

If any of these highlights hit your workflow in a real way, reply with the one you’re most curious about, and I’ll do a creator focused breakdown next edition.

That is all for this week, folks.

🔔 Stay in the Loop!

Did you know that you can now read all our newsletters about The AI Stuff Creators and Artists Should Know that you might have missed?

Don't forget to follow us on Instagram 📸: @clairexue and @moodelier. Stay connected with us for all the latest and greatest! 🎨👩‍🎨

Stay in the creative mood and harness the power of AI,

Moodelier and Claire 🌈✨