Google is giving its powerful AI image model a much broader stage. The company has announced that Nano Banana, officially known as Gemini 2.5 Flash Image, is expanding beyond the Gemini app and will soon appear across several key Google products. The rollout begins with Google Search and NotebookLM, with Google Photos next in line to receive the feature in the coming weeks.
Originally launched in August, Nano Banana has quickly become one of Google’s most widely used creative tools. Developed by Google DeepMind, Alphabet’s AI research arm, the model specializes in realistic and stylistically rich image generation. According to Google, users have created more than five billion images with Nano Banana since its debut, signaling enormous engagement and growing interest in AI-powered visual tools.
The expansion marks a significant step in Google’s strategy to weave generative AI more tightly into its everyday products. By making Nano Banana accessible through widely used services like Search and Photos, the company aims to turn casual interactions into creative opportunities.
In Google Search, Nano Banana now appears as part of AI Mode and Google Lens for users in the United States and India. Through Lens, available on both Android and iOS, users can enter the “Create” mode to generate or modify images simply by typing a prompt. They can also take a photo and ask the AI to make specific edits — such as adjusting lighting, changing objects, or reimagining the entire scene. The integration makes image creation a seamless part of everyday search behavior, bridging the gap between visual search and generative creativity.
Meanwhile, NotebookLM — Google’s AI-powered note-taking and research assistant — is getting one of the biggest boosts from Nano Banana’s rollout. The app’s Video Overviews feature, which automatically summarizes content in video form, is being upgraded with new visual capabilities. With Nano Banana, these summaries will now include contextual imagery, animations, and stylistic flourishes designed to make the generated videos more engaging and informative.
NotebookLM users will also gain access to six new visual styles for their video summaries: Papercraft, Watercolor, Anime, Whiteboard, Retro Print, and Heritage. These additions give creators more flexibility in how their AI-generated videos look and feel, offering a distinct artistic tone for different use cases — from educational explainers to creative storytelling.
The update also introduces two new video formats. The first, called “Explainer,” produces detailed and structured videos built from a user’s own source materials, designed to offer an in-depth understanding of complex topics. The second, “Brief,” creates short, snappy summaries that highlight the main points of a document or discussion. Together, these formats position NotebookLM as a powerful multimedia tool for both professionals and students who want to digest and present information visually.
Under the hood, Nano Banana continues to showcase Google’s advancements in AI-driven image synthesis. The model can alter a person’s appearance — such as changing outfits or aging them a few years — merge multiple photos into a cohesive composite, and transfer visual styles between images. These capabilities make it ideal not just for creative projects, but also for practical applications like design visualization, education, and content production.
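For developers, the same model is also reachable through the public Gemini API, so edits like the outfit change described above can be scripted rather than done in an app. The sketch below is a rough illustration using the Python google-genai SDK; the model ID string, file names, and prompt are assumptions for illustration, and the article itself only covers the consumer-facing integrations.

```python
# Rough sketch: an outfit-change edit of the kind described above, done through
# the public Gemini API with the Python google-genai SDK. The model ID, file
# names, and prompt are illustrative assumptions, not details from the article.
from io import BytesIO

from google import genai
from PIL import Image

client = genai.Client()  # expects an API key in the environment (e.g. GEMINI_API_KEY)

source = Image.open("portrait.jpg")  # hypothetical local photo to edit

response = client.models.generate_content(
    model="gemini-2.5-flash-image",  # assumed API ID for Gemini 2.5 Flash Image
    contents=[source, "Change the outfit to a denim jacket; keep the face and pose unchanged."],
)

# Image-capable models return the edited picture as inline binary data among the response parts.
for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        Image.open(BytesIO(part.inline_data.data)).save("edited.png")
```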
Google’s community engagement around Nano Banana has been equally ambitious. Last month, the company hosted a Nano Banana hackathon, inviting developers and artists to experiment with the model’s potential. The event drew thousands of participants, and 50 winners shared a total of $400,000 in prizes — an indication of how strongly Google is investing in building a creative ecosystem around its AI technologies.