Gemini 3 Rolls Out: New Interface, Stronger Coding Tools & Google Antigravity Explained

Newspoint
Google has officially begun rolling out its much-anticipated Gemini 3 model, marking a major step in the company's race against OpenAI and Anthropic. CEO Sundar Pichai described Gemini 3 as a "state-of-the-art" release and the company's "most intelligent" model yet, designed to deliver superior coding, reasoning, and analysis.


The company has made Gemini 3 widely accessible across platforms. Users can try it directly in the Gemini app through Google AI Pro and Ultra subscriptions and explore it in AI Mode in Search. Developers can access it via the Gemini API in AI Studio, through Google Antigravity, its new agentic development platform, and in the Gemini CLI. Enterprises can use it through Vertex AI and Gemini Enterprise, putting the model's multimodal capabilities within reach of consumer, developer, and enterprise environments.
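For readers curious what developer access looks like in practice, here is a minimal sketch of a single-turn request body for the Gemini API's `generateContent` REST endpoint. The model ID shown is a placeholder assumption; the exact Gemini 3 model names available to an account are listed in AI Studio.

```python
import json

# Placeholder model ID -- check AI Studio for the exact Gemini 3
# model name available to your account.
MODEL = "gemini-3-pro-preview"
ENDPOINT = (
    "https://generativelanguage.googleapis.com/v1beta/"
    f"models/{MODEL}:generateContent"
)

def build_request(prompt: str) -> dict:
    """Return the JSON body for a single-turn generateContent call."""
    return {"contents": [{"parts": [{"text": prompt}]}]}

body = build_request("Translate this handwritten recipe into English.")
print(json.dumps(body))
```

Sending this body as JSON to the endpoint (with an API key in the `x-goog-api-key` header) returns the model's reply; Google's `google-genai` SDK wraps the same call for Python developers.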

In just two years, Gemini has grown into a mainstream AI tool. The Gemini app now has over 650 million users, and AI Overviews reach 2 billion users a month, a testament to the product's rapid global expansion.


What can Gemini 3 do?

Google highlights that Gemini 3 is built for powerful multimodal understanding, making it one of the most capable models for "vibe coding." According to Google, Gemini 3 outperforms its predecessors across major benchmarks: it topped the LMArena leaderboard with a score of 1501 and posted leading results on Humanity's Last Exam and GPQA Diamond, tests designed to probe PhD-level reasoning.

The performance doesn't stop there. Gemini 3 Pro excelled in multimodal reasoning, scoring 81% on MMMU-Pro and 87.6% on Video-MMMU, showing that the model can work through complex questions spanning science, mathematics, and real-world applications across text, images, and video.


During a briefing, Google showcased how Gemini 3 integrates visual and spatial understanding. It can, for example, read and translate handwritten recipes in multiple languages and convert them into a neatly formatted cookbook. When fed academic papers and video lectures, it can write code to create interactive flashcards, making digital learning more engaging.

For developers, Google is positioning Gemini 3 as its most advanced agentic coding model. It outperforms the earlier Gemini 2.5 Pro and is deeply integrated across Google AI Studio, Vertex AI, the Gemini CLI, and Google Antigravity, Google's new agentic development platform.

Google also claims that Gemini 3 is its most secure AI model yet. According to the company, "The model shows reduced sycophancy, increased resistance to prompt injection, and improved protection against misuse via cyberattacks."

A new interface and Dynamic View

Users opening the Gemini app will notice a refreshed look. Google has introduced "generative interfaces," which redesign the layout dynamically based on what you ask. Request a "three-day trip to Rome next summer," for example, and Gemini presents the results as a magazine-style itinerary with visuals and organised modules.

Google has also added Dynamic View, a feature that designs a tailored user interface in real time. Ask Gemini to "explain the Van Gogh gallery with life context for each piece," and it builds a rich, scrollable interactive display.

Another major addition is Gemini Agent, powered by Gemini 3. This tool can handle multi-step tasks inside the app, such as syncing with Google apps, managing calendar events, creating reminders, prioritising daily tasks, and drafting replies. It is currently rolling out to Google AI Ultra subscribers.

Gemini 3 in Search and Antigravity

Google is bringing Gemini 3 to Search through AI Mode. Selecting "Thinking" from the model menu lets the AI work through complex, multi-layered questions, especially long queries that require context. The rollout has begun in the U.S., with higher usage limits for AI Pro and Ultra subscribers.

Google also introduced Google Antigravity, a new agentic development platform that lets developers work at a higher, task-oriented level by delegating coding work to AI agents.

Aggressive AI race

The global AI race has intensified in 2025, with every major AI company competing to build faster, smarter LLMs. Google is moving aggressively to keep pace with, and surpass, OpenAI and Anthropic, both of which recently launched next-generation models of their own.


While Gemini 3 offers strong multimodal reasoning, coding intelligence, and agentic capabilities, Google remains under pressure as new players enter the market. Chinese models, such as DeepSeek's open-source breakthrough earlier this year, have added new challengers. Meanwhile, Meta has formed a new superintelligence team led by top former OpenAI researchers, adding to a fast-evolving competitive landscape.

