Google has unveiled Gemini 2.0, the latest iteration of its AI model, built for the "agentic era" with tool integration and native image and audio output. Agentic AI models are systems that can complete tasks autonomously with adaptive decision-making; think of a single prompt that handles a chore such as booking an appointment or doing the shopping.
Gemini 2.0 will power multiple agents that assist in different ways, from offering real-time suggestions while you play games like Clash of Clans to picking out a gift and adding it to your shopping cart in response to a prompt.
Like other AI agents, the agents in Gemini 2.0 exhibit goal-oriented behaviour: they can draw up a list of tasks and complete them on their own. One of them is Project Astra, a multimodal AI assistant for Android phones that integrates Google Search, Lens, and Maps.
Project Mariner is another experimental AI agent, one that can navigate a web browser on its own. Mariner is currently available to "trusted testers" as a Chrome extension in an early preview.
Beyond the agents, Gemini 2.0 Flash is the first model in the Gemini 2.0 family. This experimental version outperforms the Gemini 1.0 and 1.5 models on benchmarks, with lower latency and improved coding, mathematical reasoning, and comprehension. It also offers native image generation, drawing on Google DeepMind's Imagen 3 text-to-image model.
All users can access Gemini 2.0 Flash Experimental on the web, with support in the Gemini mobile app to follow. To try it, select Gemini 2.0 Flash Experimental from the model dropdown menu.
The new model is also available to developers through Vertex AI and Google AI Studio.
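For developers, access through Google AI Studio uses the standard Gemini API. The minimal sketch below assumes the Python SDK (google-generativeai) and the experimental model identifier "gemini-2.0-flash-exp"; the exact model name and availability may differ depending on your account and region.

```python
# Minimal sketch: calling Gemini 2.0 Flash Experimental via the Gemini API.
# Assumes the google-generativeai package is installed and an API key from
# Google AI Studio is set in the GEMINI_API_KEY environment variable.
import os

import google.generativeai as genai

# Configure the client with your API key.
genai.configure(api_key=os.environ["GEMINI_API_KEY"])

# "gemini-2.0-flash-exp" is assumed to be the experimental model ID;
# check the model list in Google AI Studio for the current name.
model = genai.GenerativeModel("gemini-2.0-flash-exp")

# Send a simple text prompt and print the generated response.
response = model.generate_content("Summarize what an agentic AI model is in two sentences.")
print(response.text)
```

On Vertex AI, the same model is reached through the Vertex AI SDK and a Google Cloud project rather than an AI Studio API key, but the request/response pattern is similar.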