Google is continuing its rapid push into AI-driven productivity tools by bringing one of its most advanced search features, Deep Research, into NotebookLM. The company’s AI-powered note-taking and research assistant is already evolving quickly, and this latest integration positions it as a more powerful hub for academic work, creative projects, and professional research.
Deep Research is not entirely new: Google introduced it earlier in the Gemini app and in Google Search's AI Mode. There, the tool is known for generating highly detailed reports by performing layered, multi-step searches that go far beyond a standard query. Moving Deep Research into NotebookLM, however, transforms the workflow: instead of simply returning an answer, the feature can now tie its findings directly into a user's research environment.
Within NotebookLM, Deep Research takes on three major roles. First, it can design a structured research plan tailored to the topic you’re investigating. That includes identifying key questions, suggesting avenues of exploration, and outlining a logical sequence in which to tackle them. Second, it can scan hundreds of websites to surface relevant resources, offering article recommendations that you might not find through traditional browsing. And finally, it can automatically assemble an organized, well-sourced report that draws from the materials you have loaded into your notebook.
What makes this integration particularly useful is that Deep Research can run continuously in the background. Users can start a query, add the resulting report and its citations directly to their notebook, and then continue feeding NotebookLM additional sources while the system keeps working. This uninterrupted, iterative process is designed to mimic the way real research often unfolds: with new information arriving mid-stream and the need to revise or expand earlier findings.
Accessing the feature is relatively simple. On the left side of the NotebookLM interface, there is a panel labeled Sources. From there, users can select Web as the input type. A drop-down menu next to it now includes a Deep Research option. After choosing it, all that’s required is typing a question or prompt into the text field. NotebookLM then launches a deep, multi-layered search that goes far beyond typical summary tools.
Alongside this major update, Google unveiled support for several new file types that can be added as sources within NotebookLM. This expansion significantly broadens the range of materials the app can analyze.
Images are now supported, allowing users to upload photos of handwritten notes, worksheets, brochures, whiteboard snapshots, and more. NotebookLM can extract information from the images and incorporate it into its responses.
Google Drive integration has also been improved. Users can add PDFs directly from Drive, and can attach Drive file URLs the same way they already add web links or YouTube videos. This will be particularly useful for students and professionals who rely heavily on cloud-stored documents.
Google Sheets support is another major enhancement. Users can upload spreadsheets and ask questions about their contents, enabling NotebookLM to analyze data tables, spot trends, or summarize numerical information without external tools.
NotebookLM now also accepts Word files in .docx format. This means research drafts, meeting notes, and written reports can be uploaded and dissected just as easily as PDFs.
These new file types will be fully available to users over the next week, with one exception: image support will arrive later, over the coming weeks. Even so, the expanded file compatibility represents one more step in Google's goal of turning NotebookLM into a comprehensive research assistant capable of handling everything from documents to datasets.
Deep Research and broader file support are only part of a series of updates Google has been pushing recently. NotebookLM now includes conversation memory, allowing the system to remember previous exchanges and maintain context across long sessions. This makes the chat experience more fluid and reduces the need to repeat background details or instructions.
The app has also improved its Video Overviews feature using the Nano Banana model, providing more accurate and digestible summaries of long videos. For users in technical fields, native LaTeX rendering is now supported, offering clearer handling of equations and complex mathematical expressions. And the Learning Guide feature allows students or self-learners to ask open-ended questions that help break down complicated problems into manageable steps.
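To illustrate what native LaTeX rendering handles, an expression like the quadratic formula, which plain-text summaries tend to mangle, can now display as properly typeset math. The source markup is the standard LaTeX form (this example is generic, not taken from NotebookLM's documentation):

```latex
% Standard LaTeX math markup for the quadratic formula
x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
```

Fractions, radicals, subscripts, and similar constructs are exactly where rendered output is clearer than ASCII approximations like `x = (-b +/- sqrt(b^2 - 4ac)) / 2a`.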