Virtual Styling in 3 Steps? Gemini’s Nano Banana Just Changed AI Editing

AI-generated Image

Hi Creatives! 👋 

AI news keeps stacking up, but this week it’s less about speed and more about what we can actually do with it. Google’s Gemini editor just got a major glow-up, Meta is teaming with Midjourney, and DeepMind is turning text into whole worlds you can explore. Add in Higgsfield’s new tricks and some seriously creative tests with Kling AI—and it’s clear the boundaries of what’s possible keep expanding.

At the center of it all? More control, smoother workflows, and fresh ways to bring ideas to life.

This Week’s Highlights:

  • Gemini’s image editor gets a serious upgrade 🎨

  • 🚀 Meta × Midjourney: what creators should know

  • Genie 3: text-to-world for playable scenes

  • ICYMI: Higgsfield’s Assist + Product-to-Video

  • 🎬 Creative Feature

    • Nano Banana + Photoshop

    • Clean Camera Moves with Kling AI 2.1

    • IM8 x Aryna Sabalenka — AI Twin to Times Square

    • 📺 “Vision 9” by SK2 (Kling AI 2.1)

Gemini’s image editor gets a serious upgrade 🎨

Google just pushed a big upgrade to Gemini’s built-in editor. The new model—internally nicknamed Nano Banana and officially Gemini 2.5 Flash Image—blends multiple photos, keeps faces and products consistent, supports multi-turn edits, and tags outputs with SynthID for provenance.

What’s new

  • Consistent likeness across changes to outfit, background, and location.

  • Blend multiple photos into one scene; make targeted, multi-turn edits.

  • Clear labeling: visible watermark + invisible SynthID on every AI image.

Power demo: virtual try-on in three steps
We tested Nano Banana inside Freepik’s AI Image Editor for this demo. Here’s the flow:

  1. Upload a clear reference photo of yourself as the base.

  2. Add the product image of the outfit you want to try.

  3. Generate, then rerun for complex designs until the details land.

What we saw: many results captured fabric and silhouette well, while intricate pieces benefited from a couple more passes. The potential for fast styling tests is real.
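Builders can script the same three-step flow. Here's a minimal Python sketch against the Gemini API using Google's google-genai SDK; the model ID, file names, and prompt below are our own assumptions for illustration, so check the current docs before running it.

```python
# pip install google-genai pillow
# Minimal sketch of the virtual try-on flow via the Gemini API.
# Assumes GEMINI_API_KEY is set; model ID and file names are placeholders.
from google import genai
from PIL import Image

client = genai.Client()  # reads the API key from the environment

base_photo = Image.open("me.jpg")   # step 1: clear reference photo of yourself
outfit = Image.open("outfit.jpg")   # step 2: product image of the outfit

# Step 3: generate; rerun this call for complex designs until the details land.
response = client.models.generate_content(
    model="gemini-2.5-flash-image-preview",  # assumed ID for Gemini 2.5 Flash Image
    contents=[
        "Dress the person in the first photo in the outfit from the second photo. "
        "Keep their face, pose, and the background unchanged.",
        base_photo,
        outfit,
    ],
)

# Save any image parts the model returns.
for i, part in enumerate(response.candidates[0].content.parts):
    if part.inline_data:  # generated images come back as inline bytes
        with open(f"try_on_{i}.png", "wb") as f:
            f.write(part.inline_data.data)
```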

Where you can use Nano Banana now

  • Gemini app on web and mobile, with the updated editor built in.

  • Adobe Firefly & Adobe Express via partner models. Select Gemini 2.5 Flash Image in Firefly’s Text-to-Image or Boards, then iterate in Express.

  • Freepik AI Suite. Freepik announced Google Nano Banana support for image generation and edits.

  • Higgsfield. Choose Nano Banana for precise image edits and blends.

  • FLORA. The team introduced “Google Nanobanana in FLORA” with example workflows.

  • Replicate. Run google/nano-banana in the Playground or via API, complete with schema and examples (see the sketch below).
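On Replicate, the same model is one call away with the official Python client. A minimal sketch follows; the input field names are assumptions, so confirm them against the model's schema on its Replicate page.

```python
# pip install replicate
# Assumes REPLICATE_API_TOKEN is set. The input field names below are
# assumptions; check the google/nano-banana schema for exact parameters.
import replicate

output = replicate.run(
    "google/nano-banana",
    input={
        "prompt": "Blend these photos into one scene with consistent lighting",
        "image_input": [                    # assumed name for reference images
            "https://example.com/me.jpg",
            "https://example.com/outfit.jpg",
        ],
    },
)
print(output)  # typically a URL or file handle for the generated image
```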

For builders

  • Ship it through the Gemini API and Google AI Studio, or deploy on Vertex AI. Google also partnered with OpenRouter and fal.ai for wider access.

Why it matters for creators

  • Faster lookbooks, mood boards, and pre-viz from a single reference.

  • Quicker campaign mocks while keeping faces, products, and styling consistent.

  • Built-in provenance markers for safer asset handling.

Read the full article: Google’s announcement.

🚀 Meta × Midjourney: what creators should know


Meta is partnering with Midjourney to license its “aesthetic technology” for use across Meta’s models and products. The companies say this is a technical collaboration between research teams, not an acquisition.

Why does this matter?

  • 📷 Midjourney’s aesthetic has become a creative standard—and now Meta has direct access to it.

  • 🤝 The deal is a licensing partnership, not a model integration: Meta isn’t merging models, it’s licensing Midjourney’s technology.

  • 💬 This raises new questions about how platforms might use creator-generated content as training material.

  • 🔍 The bigger trend? Tech giants are securing rights to AI-generated assets—expect more deals like this in the coming months.

For creators, this is a double-edged brush: exposure for the aesthetic, but also growing pressure to define rights over what’s made with AI tools.

Genie 3: text-to-world for playable scenes

Google DeepMind just introduced Genie 3, a real-time “world model” that turns a text prompt into a navigable environment. It runs at 720p, 24 fps, and keeps a world consistent for a few minutes — a big step from earlier Genie demos.

Why it matters for creatives

  • Rapid previz: sketch interactive shots and camera paths before a shoot.

  • Playable moodboards: walk through look-and-feel instead of static boards.

  • Agent testing: drop DeepMind’s SIMA into these worlds to prototype goals and sequences.

Read more: Genie 3

ICYMI: Higgsfield’s Assist + Product-to-Video

If you haven’t tried Higgsfield yet, it’s a creative studio for AI images and video with tools like Soul ID for consistent characters and Draw to Video for sketch-based motion control. A few weeks back they shipped two updates we just caught up on:

  • Assist now runs on GPT-5 for fast prompt ideas, tool picks, and quick workflow tips right inside the app.

  • Product-to-Video inside Draw to Video. Drop a product or outfit into the frame, add arrows or short notes, and it turns into a cinematic shot without long prompts. Works with Veo 3 and MiniMax Hailuo 02.

Why you’ll care
Quick way to mock up ads, story clips, or UGC concepts in one place.


🎬 Creative Feature

Nano Banana + Photoshop

Arminas put Nano Banana to work inside Photoshop and the results are 🔥. His quick test shows how far the model has come for creators: conversational prompts, strong character consistency, smart use of reference images, and tight color control, all without leaving your editor.

Why we loved this test

  • Tool stack: Nano Banana + Photoshop workflow

  • Standout moves: pose-matching and image-to-image refinements with clean, consistent outputs

  • Why it matters: faster iteration in your native editing space means less context switching and more time crafting the final look

Clean Camera Moves with Kling AI 2.1

Rory Flynn puts Kling AI 2.1 keyframes to work, building smooth drone-style moves while keeping subjects and environments consistent. He shares a clear prompt pattern for start and end frames, camera motion, lighting, and style, then stress-tests it with a full 360° drone shot. Great takeaways for anyone testing keyframes on brand visuals or short spots.

Tool stack: Midjourney for image refs, Kling AI 2.1 for video, Udio for music, Topaz Astra for upscaling and FPS boosts.

Why we like it: practical prompt structure, strong motion control, and results that hold up even with ambitious moves.

IM8 x Aryna Sabalenka — AI Twin to Times Square

An AI ad for David Beckham’s IM8 racked up 85M Instagram views in 48 hours and played in Times Square during the US Open. P.J. Accetturo and team reimagined a Matrix “choice” moment with an AI twin of Aryna Sabalenka and a tight, production-ready pipeline. 🎬

How they built it

  • Concept: A Matrix-style decision scene tailored to IM8’s ambassador, scripted by Nate Dern, directed by Theo Dudley.

  • Look dev and prompts: Shot list and references in Figma, style frames in Midjourney, prompts refined with Gemini.

  • Digital twin and VO: Likeness created in Ideogram and the final line voiced with ElevenLabs.

  • Animation and motion: Sequences animated with Veo 3 and Kling, designed to avoid on-camera lip sync for cleaner realism.

Why we like it
Clear creative idea, smart tool choices, and constraints that keep results believable at scale.

📺 “Vision 9” by SK2 (Kling AI 2.1)

Samuele (@visualsk2) delivers “Vision 9,” crafted entirely with Kling AI 2.1 using start and end frames. The piece shows crisp, coherent motion and color-forward styling, making it a smart reference for anyone testing controlled image-to-video workflows for ads, teasers, or title sequences. 🎥

💡 Industry Insight

This week really showed how wide the creative AI world is stretching. Gemini’s editor is becoming a serious styling tool, Meta is cozying up with Midjourney, and Genie 3 is pushing text-to-world into interactive scenes. Add in Higgsfield’s product-to-video and the new tests with Kling AI—and it feels like every corner of creative work is getting a nudge forward.

For me, what stands out is how these tools aren’t just adding shiny features—they’re changing how we plan, mock up, and even play with ideas. Whether it’s trying on outfits virtually, building smoother camera moves, or testing worlds you can actually walk through, the gap between imagination and execution is shrinking fast.

Try them out, mix things up, and see what works best for you. The updates are cool, but it’s your creativity that makes them unforgettable.

That’s a wrap for this week!

🔔 Stay in the Loop!

Did you know you can now read all our past newsletters about The AI Stuff Creators and Artists Should Know? Catch up on anything you might have missed.

Curious about C.A.S.H Camp? Click here to book a free 20-minute discovery call with me. It's the perfect no-strings-attached way to explore if this is the path for you.

Don't forget to follow us on Instagram 📸: @clairexue and @moodelier. Stay connected with us for all the latest and greatest! 🎨👩‍🎨

Stay in the creative mood and harness the power of AI,

Moodelier and Claire 🌈✨