Google announced multiple updates to the Gemini AI chatbot app during Google I/O 2025, including a wider range of multimodal AI capabilities, an updated AI model, and deeper integration with Google’s product suite.
Starting Tuesday, Google is rolling out Gemini Live's camera and screen-sharing capabilities to all users on iOS and Android. Powered by Project Astra, the feature lets users hold near real-time verbal conversations with Gemini while streaming video from their smartphone’s camera or screen to the AI model.
For example, while walking around a new city, users can point their phones at a building, ask Gemini Live about its architecture or history, and get answers with little delay.
Over the next few weeks, Google said, Gemini Live will also begin to integrate more deeply with its other apps. The company said Gemini Live will soon be able to pull up directions from Google Maps, create events in Google Calendar, and make to-do lists in Google Tasks.
The sweeping updates to Gemini are part of the company's effort to compete with OpenAI's ChatGPT, Apple's Siri, and other digital assistants. The rise of AI chatbots gives users a new way to interact with the internet and their devices, putting pressure on several of Google's core businesses, including Google Search and Google Assistant. Google announced during I/O 2025 that Gemini now has 400 million active users, a base the company hopes to grow with these updates.
Google is introducing two new AI subscriptions: Google AI Pro, a rebranding of the $20-per-month Gemini premium plan, and Google AI Ultra, a $250-per-month tier that competes with ChatGPT Pro. The Ultra plan gives users higher rate limits, early access to new AI models, and exclusive access to certain features.
Google announced Tuesday that Pro and Ultra subscribers who have set English as their language in Chrome will also get access to Gemini in the Chrome browser. The integration is designed to let users ask Gemini to summarize information or answer questions about what appears on their screen.
Google says it is updating Deep Research, Gemini's AI agent for generating comprehensive research reports, by letting users upload their own private PDFs and images. Deep Research will cross-reference these private PDFs with public data to create more personalized reports. The company said users will soon be able to connect Google Drive and Gmail directly to Deep Research.
Gemini's free users are getting the updated AI image model Imagen 4, which Google says renders text in images more accurately. Subscribers to the company's new $250 AI Ultra plan will also get access to Google's latest AI video model, Veo 3, which natively generates audio to match the video scene.
Google also updated the default model in Gemini to Gemini 2.5 Flash, which the company said delivers higher-quality responses with lower latency.
To cater to the growing number of students using AI chatbots, Google says Gemini will now create personalized quizzes that focus on areas users find challenging. When users answer a question incorrectly, Gemini will help build follow-up quizzes and action plans to strengthen those areas.