Nano Banana 2: Putting Your Own Photos Inside Gemini Images
Google's Nano Banana 2 powers personalized image generation in the Gemini app, letting users place themselves and loved ones in generated images by drawing on their Google Photos library.
Google released Nano Banana 2 as part of the Gemini app’s Personal Intelligence update on April 16, 2026. The model integrates with Google Photos to generate images that include specific people, pets, and personal context from the user’s library. For developers working on personalized AI features, the rollout demonstrates how to connect user-specific data to a generative model without requiring manual uploads or detailed prompts.
Personal Context in Image Generation
The main friction in AI image generation has been prompt engineering. Getting a result that feels personal previously required long descriptions and manually uploaded reference photos. Nano Banana 2 removes that step by pulling context directly from the user’s connected Google apps.
If a user asks Gemini to “design my dream house” or “create a picture of my desert island essentials,” the model draws on their interests and preferences to ground the output in things they actually care about. There is no additional setup required beyond the Google app connections the user has already configured.
Using Google Photos for Identity
The deeper integration involves Google Photos. Users who connect their photo library to Personal Intelligence can generate images featuring themselves and people they know. The model uses the labeled face groups (organized under “Frequent Faces” or “Family & Friends”) that users already maintain in Google Photos.
A prompt like "create a claymation image of me and my family enjoying our favorite activity" produces results grounded in actual photos from the library. Users can experiment with different visual styles, including watercolors, charcoal sketches, and oil paintings. If the model picks the wrong reference photo, users can click the '+' icon to select a different one, or use the Sources button to see which image was auto-selected.
Privacy Controls
Google states that the Gemini app does not directly train its models on the user’s private Google Photos library. The company trains on limited information, such as specific prompts and model responses, to improve functionality. Connecting Google apps to Gemini is an opt-in experience that users can disable in settings at any time.
Availability
Personalized image generation in the Gemini app is rolling out to eligible Google AI Plus, Pro, and Ultra subscribers in the U.S., with plans to expand to Gemini in Chrome on desktop and to additional users. Nano Banana 2 is also available through Google AI Studio and the Gemini API for developers building with the model directly.
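For developers going the API route, a minimal sketch of an image-generation call with the google-genai Python SDK looks roughly like the following. The model identifier is an assumption (check the model list in Google AI Studio for the published id), and some image-capable models may additionally require a response-modality setting in the request config.

```python
def generate_image(prompt: str, model: str = "nano-banana-2") -> bytes:
    """Generate one image via the Gemini API and return its raw bytes.

    The default model id is hypothetical; substitute the id published
    in Google AI Studio. Requires GEMINI_API_KEY in the environment.
    """
    from google import genai  # pip install google-genai

    client = genai.Client()  # picks up GEMINI_API_KEY automatically
    response = client.models.generate_content(model=model, contents=prompt)

    # Image-capable models return binary image parts alongside any text.
    for part in response.candidates[0].content.parts:
        if part.inline_data is not None:
            return part.inline_data.data
    raise RuntimeError("No image part in the response")
```

A caller would then write the returned bytes to a file, e.g. `open("out.png", "wb").write(generate_image("a watercolor beach scene"))`.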
If you build applications that combine user data with generative models, the key takeaway is the labeled data approach. Relying on structured metadata the user has already created (face labels, grouped contacts) reduces onboarding friction compared to requiring dedicated training sessions or manual uploads.
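As a purely hypothetical illustration of that pattern (every name and data structure below is illustrative, not Google's implementation), grounding a request in labels the user already maintains can be as simple as mapping mentioned names to stored reference photos:

```python
import re

# Illustrative stand-in for labels the user already maintains,
# e.g. face groups from a photo library mapped to reference photos.
FACE_GROUPS = {
    "me": ["photos/me_001.jpg", "photos/me_014.jpg"],
    "mom": ["photos/mom_003.jpg"],
}

def ground_prompt(prompt: str, face_groups: dict[str, list[str]]) -> dict:
    """Attach a reference photo for each labeled person the prompt mentions."""
    # Word-boundary matching avoids false hits like "me" inside "time".
    words = set(re.findall(r"[a-z]+", prompt.lower()))
    mentioned = [name for name in face_groups if name in words]
    return {
        "prompt": prompt,
        # First photo per matched label serves as the identity reference.
        "references": {name: face_groups[name][0] for name in mentioned},
    }

request = ground_prompt("a watercolor of me and mom at the beach", FACE_GROUPS)
# request["references"] now maps "me" and "mom" to stored photo paths.
```

The point of the sketch is that the user supplies nothing at request time: the labels were created once, during normal photo organization, and every subsequent generation reuses them.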