
Google has begun rolling out significant updates to its AI assistant, Gemini Live, introducing real-time screen-sharing and live video interaction capabilities. These features, developed under Google’s Project Astra, enable Gemini to “see” through a user’s smartphone screen or camera, providing immediate, context-aware responses.
The screen-sharing functionality allows users to share their device screens with Gemini Live, facilitating dynamic interactions. For instance, users can seek assistance with app navigation or request information about on-screen content, enhancing the overall user experience.
The live video feature enables Gemini to interpret real-time camera feeds, allowing users to point their smartphone cameras at objects or scenes and receive instant information or guidance. This capability is particularly beneficial for tasks such as identifying landmarks, translating text, or receiving real-time assistance with physical tasks.
These features are currently rolling out to Gemini Advanced subscribers as part of the Google One AI Premium plan, and early adopters have reported gaining access on select devices. 9to5Google reported that a Reddit user was able to access the feature on a Xiaomi phone.
The rollout underscores Google’s push to advance AI assistant capabilities, positioning Gemini Live ahead of competitors such as Amazon’s Alexa Plus and Apple’s Siri, which are still developing comparable functionality.
As these features become more widely available, they are expected to change how users interact with their devices, offering a more intuitive and responsive AI experience across everyday tasks and queries.