Google Nano Banana 2 AI Tool Unveiled: Here's What's New & How To Use It

Newspoint

Google has unveiled Nano Banana 2, its newest state-of-the-art AI image generation model, promising a significant leap in both speed and visual intelligence. Officially designated as Gemini 3.1 Flash Image, the model merges the advanced creative capabilities of the premium Nano Banana Pro with the rapid processing speeds of Google’s Flash architecture. Nano Banana 2 is likely to make Pro-tier image generation accessible to far more users than ever before.

Naina Raisinghani, Product Manager at Google DeepMind, announced the new model on the company's official blog, marking the third entry in what has become one of Google's most viral AI product lines. The original Nano Banana debuted in August 2025 and rapidly became a sensation; Nano Banana Pro followed in November, catering to users who need studio-quality output. Now, Nano Banana 2 is designed to be even better.

Google says that all Nano Banana 2 outputs will be watermarked using the company's proprietary SynthID technology, now paired with interoperable C2PA Content Credentials. Nano Banana 2 also introduces a raft of new capabilities. Here's a lowdown on what's new with the latest AI tool.

1. Real-time knowledge integration


Perhaps the most significant upgrade is the model’s ability to access real-world information and live web search imagery when generating images. This allows Nano Banana 2 to accurately render specific subjects - from a particular museum’s facade to a specific city street - with a factual grounding that previous Flash-tier models lacked. Practical applications include generating infographics, turning handwritten notes into diagrams, and producing data visualisations from raw inputs. A demo app called “Window Seat”, built to showcase this feature, generates photorealistic window views based on specific global locations and live weather data.

2. Precision text rendering and localisation

The model can generate accurate, legible text within images - a notoriously difficult task for AI image generators. This makes it suitable for marketing mockups, greeting cards, banners, and dynamic UI generation. Going further, it can also translate and localise text within an image, adapting copy for different markets without needing external tools. A companion demo, the 'Global Ad Localizer', demonstrates this by converting advertising material into region-appropriate versions, adjusting both language and visual context in a single workflow.

3. Subject consistency across complex scenes

One of the most technically demanding challenges in AI-generated imagery is maintaining character and object fidelity across a sequence of images. Nano Banana 2 now supports up to five distinct characters and 14 objects within a single workflow, preserving their appearance consistently. This opens the door for storyboarding, narrative art, and multi-scene creative projects. Google’s “Pet Passport” demo illustrates this by taking a single reference photo of a pet and rendering it accurately across famous global landmarks.

4. Enhanced instruction following

Google says Nano Banana 2 adheres more strictly to complex or nuanced prompts, narrowing the gap between what a user asks for and what the model produces. Developers also gain new control through configurable 'thinking levels': toggling between Minimal (the default) and High or Dynamic settings lets the model spend more processing time on intricate prompts before rendering, improving output accuracy.
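As a rough illustration, a developer might expose the thinking-level toggle as a request parameter. This is a minimal sketch only: the field name `thinking_level` and the `build_generation_request` helper are assumptions drawn from the Minimal / High / Dynamic settings described above, not confirmed API parameters.

```python
# Hypothetical sketch of a request builder for the thinking-level setting.
# The "thinking_level" field and its accepted values are assumptions based
# on the settings described in the article, not a documented API surface.

VALID_THINKING_LEVELS = {"minimal", "high", "dynamic"}

def build_generation_request(prompt: str, thinking_level: str = "minimal") -> dict:
    """Assemble a request payload, defaulting to the low-latency Minimal level."""
    level = thinking_level.lower()
    if level not in VALID_THINKING_LEVELS:
        raise ValueError(f"Unsupported thinking level: {thinking_level!r}")
    return {"prompt": prompt, "thinking_level": level}

# Example: a complex prompt that benefits from extra processing time.
request = build_generation_request(
    "An infographic comparing five museum facades", thinking_level="high"
)
```

Defaulting to Minimal mirrors the trade-off the article describes: pay for extra reasoning time only when the prompt is intricate enough to need it.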

5. Production-ready specs: From 512px to 4K

The model supports a comprehensive range of aspect ratios and resolutions. New native support has been added for ultra-wide and ultra-tall formats - 4:1, 1:4, 8:1, and 1:8 - alongside a new 512px resolution tier optimised for low-latency pipelines. Existing 1K, 2K, and 4K output options remain available, giving creators and developers full control over how their images are sized and formatted.
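The formats above could be guarded in a pipeline with a small validator. In this sketch, only the listed ratios and resolution tiers come from the announcement; the parameter names (`aspect_ratio`, `resolution`) and the square/landscape/portrait "carry-over" ratios are illustrative assumptions.

```python
# Hypothetical validator for the output formats described above. Exact
# parameter names are assumptions; the new ultra-wide/ultra-tall ratios
# and the 512px/1K/2K/4K tiers are taken from the announcement.

SUPPORTED_ASPECT_RATIOS = {
    "1:1", "16:9", "9:16",       # common ratios (assumed carry-overs)
    "4:1", "1:4", "8:1", "1:8",  # new ultra-wide / ultra-tall formats
}
SUPPORTED_RESOLUTIONS = {"512px", "1K", "2K", "4K"}

def validate_output_format(aspect_ratio: str, resolution: str) -> dict:
    """Return an image-config fragment, rejecting unsupported values early."""
    if aspect_ratio not in SUPPORTED_ASPECT_RATIOS:
        raise ValueError(f"Unsupported aspect ratio: {aspect_ratio}")
    if resolution not in SUPPORTED_RESOLUTIONS:
        raise ValueError(f"Unsupported resolution: {resolution}")
    return {"aspect_ratio": aspect_ratio, "resolution": resolution}

# Example: a low-latency banner render in the new ultra-wide format.
config = validate_output_format("8:1", "512px")
```

Failing fast on an unsupported combination keeps errors out of the (slower, billed) generation step, which is the point of the new 512px low-latency tier.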


6. Improved visual fidelity

Beyond technical specifications, Nano Banana 2 delivers visible improvements in image quality. Vibrant lighting, richer textures, and sharper fine details characterise its output – all without sacrificing the speed that defines the Flash family of models.

Nano Banana 2: Who gets it?

Google is rolling out Nano Banana 2 broadly across its ecosystem, with the model now available on the following platforms:

- Gemini App: Nano Banana 2 replaces Nano Banana Pro as the default image model across the Fast, Thinking, and Pro tiers. Google AI Pro and Ultra subscribers retain access to the original Pro model for specialised, high-fidelity tasks via the “regenerate” option in the three-dot menu.

- Google Search: Available in AI Mode and Google Lens, across the Google app as well as mobile and desktop browsers. The rollout covers 141 new countries and territories and adds support for eight new languages.

- AI Studio and Gemini API: Available now in developer preview, with pricing published on the Gemini API documentation page.

- Google Cloud / Vertex AI: Available in preview for enterprise deployments.

- Flow: Nano Banana 2 is now the default image generation model in Google's AI filmmaking platform, available to all Flow users at no cost in credits.

- Google Ads: The model is live in the Ads platform, powering asset suggestions during campaign creation.

- Google Antigravity and Firebase: Also available for developers building custom applications.