Google Launches Gemini 3 and Antigravity: Coding Just Went Autonomous
Google has unveiled Gemini 3 and its new AI-powered coding platform Antigravity, marking a major shift toward autonomous software development.
Google launched Gemini 3, its newest AI model, along with a new coding platform called Antigravity on Tuesday. Despite the name, the project has nothing to do with defying physical gravity or launching objects into space. Instead, “Antigravity” refers to an AI-assisted software development platform.
The launch comes only seven months after Gemini 2.5 and just days after OpenAI released GPT-5.1.
Google rolled out Gemini 3 across its full AI ecosystem. The model is available in the Gemini app, Search, AI Studio and Vertex AI. According to Google, the Gemini app has more than 650 million monthly users.
Google also introduced Antigravity, an AI-driven workspace for developers and coders.
The platform is free to use in public preview.
Google says Gemini 3 delivers stronger reasoning and planning. Tulsee Doshi, Google’s head of product for the Gemini model, said, “With Gemini 3, we’re seeing this massive jump in reasoning.”
Google CEO Sundar Pichai added,
“It’s amazing to think that, in just two years, AI has evolved from simply reading text and images to reading the room.”
Gemini 3 introduces Generative UI, which creates interactive interfaces inside Search results.
Users can get dynamic layouts, maps, simulations and calculators directly in responses.
For the first time, Google shipped its latest Gemini model into Search on launch day.
In AI Mode, the model builds visual responses and interactive elements.
Josh Woodward, product lead for Gemini, said, “With generative UI (user interface), Gemini 3 can dynamically create the overall response layout.”
Paid Google AI Pro and Ultra subscribers get early access to these features.
Google is also testing Gemini 3 Deep Think, a new reasoning mode.
What is Antigravity?
Antigravity is a coding workspace powered by AI agents. It gives the AI access to the editor, terminal and browser.
Google said in a statement, “Antigravity transforms AI assistance from a tool in a developer’s toolkit into an active partner.”
The AI can now write, test and run code, catch bugs, validate results and plan multi-step tasks, all with less human input.
Google says Antigravity is built on VS Code and gives developers two views: an Editor view for writing code, and a Manager view for running multiple AI agents at the same time.
The system also creates “Artifacts,” which are proof documents, task lists, screenshots, test results, and browser recordings so developers can see exactly what the AI did.
In one of the demos on the official website, Antigravity built a full flight-tracking app from scratch: it wrote the code, launched the app in a real browser, and recorded the result.
Google engineers call Antigravity part of the move toward “agentic coding,” where developers supervise AI instead of typing every line of code. The company says this is meant to speed up development, improve accuracy, and make software creation accessible to more people.
With third-party tools such as Cursor, Replit and Manus also adopting Gemini 3, the larger trend is clear: coding is shifting from manual work to AI-driven teamwork. Google says Antigravity will keep improving as its agents learn from each project.
Big Tech is expected to spend $600 billion on AI infrastructure this year. Google is positioning Gemini as the core of its next phase of AI-native products.
According to Pichai, more than 70% of Google Cloud customers already use Google’s AI tools, and AI Overviews in Search now reach 2 billion users every month.
The race is competitive. OpenAI and Anthropic are pushing their own advanced models.
But Google says its strength comes from multimodality and real-time reasoning.
For now, Gemini 3 marks Google’s strongest attempt to regain AI leadership. And it signals the company’s shift from AI research to AI-powered products.