When AI Mode started letting users switch models and select Gemini 2.5 Pro last month, it caught my attention. This followed Google’s launch of Deep Search, which arrived seven months after Deep Research became available in the Gemini app.
Earlier this week, AI Mode started testing "Search Live" with video input, a feature similar to Gemini Live but without screen sharing. While these last two features were teased back at Google I/O 2025, the unexpected addition of Canvas in AI Mode raised eyebrows.
Canvas is an interactive workspace where you can highlight sections of a document and drop prompts into a side chat for instant edits. Gemini introduced it in March, supporting both text and coding workflows.
When I asked Google about these overlapping launches, their best explanation was that users now expect LLM-powered tools to have deep research capabilities and features like Canvas. That reasoning tracks—another new AI Mode feature is the ability to upload PDFs with prompts.
And this duplication isn’t slowing down. With Project Mariner, AI Mode is set to gain agent-like capabilities, such as helping you purchase stocks; Gemini, however, will have its own "Agent Mode." Meanwhile, thanks to personal context integration, AI Mode will soon be able to access Gmail and other Google apps to tailor answers to your preferences.
(As a side note, the speed of new AI Mode feature rollouts is impressive, but Google urgently needs to expand availability beyond just three countries.)
Most people still see Google primarily as a search engine. Yet it must evolve into a tool that aligns with shifting user expectations over time.
The problem is optics. It’s easy to see why many believe AI Mode and Search should merge with Gemini, making it clearer which Google AI app they should be using. Dropping Gemini’s distinct branding in favor of a four-color Google logo hasn’t helped.
Gemini itself handles tasks that you wouldn’t want a search engine to manage, especially on mobile—where users are accustomed to a smart assistant that can control devices, access notes and files, manage calendars, set reminders, and more.
This separation actually matters. As LLM-driven assistants evolve, they’ll increasingly function like proto-AGI agents, assisting with complex, everyday tasks.
We’re still years away from that future. Until then, the best thing the Gemini app can do is keep shipping next-gen assistant features—better phone control, richer interactive experiences. Frontier capabilities like Deep Think, while exciting for power users, won’t define its success.
What will truly matter are practical, integrated features—like a future Gemini detecting an upcoming exam or event in your calendar and proactively helping you prepare. That’s the kind of advancement that will cement the difference between Gemini and Google’s broader search AI.