Gemini AI Can Now Analyze Your Phone Screen and Camera Feed – Here's How It Works
Google is introducing exciting new capabilities for Gemini, its AI-powered chatbot, as part of its Project Astra initiative. Among the latest additions is Gemini Live, which can now interpret content displayed on your phone screen or through the camera viewfinder, providing instant responses based on what it sees.
The feature was first spotted by 9to5Google after a Reddit user reported its appearance on their Xiaomi smartphone. The sighting confirms that Gemini can analyze on-screen information and answer related queries in real time.
Another key feature tied to Project Astra allows users to leverage their device’s camera for real-world object recognition. By activating the full-screen Gemini Live interface and initiating a video stream, the AI can identify and provide details about objects, animals, or anything in view—as demonstrated in the video below.
This functionality proves especially useful for quick visual searches, eliminating the need for manual input. However, access to these new Gemini features is currently limited to a small group of users subscribed to Google One AI Premium, priced at ₹1,950 per month.
Google initially announced that these enhancements would debut on Pixel devices, but the rollout now appears to be reaching devices in no particular order. As a result, users on the free version of Gemini may have to wait before gaining access to these advanced tools.