{"id":5207,"date":"2026-05-12T00:10:29","date_gmt":"2026-05-12T00:10:29","guid":{"rendered":"https:\/\/suprmind.ai\/hub\/gemini\/features\/"},"modified":"2026-05-12T02:41:34","modified_gmt":"2026-05-12T02:41:34","slug":"features","status":"publish","type":"page","link":"https:\/\/suprmind.ai\/hub\/gemini\/features\/","title":{"rendered":"How Gemini Works: Deep Research, Gems, Canvas, Imagen, Veo, and Live"},"content":{"rendered":"<div style=\"padding-top: 40px;\">\n<p>    <!-- \u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550 --><br \/>\n    <!-- SECTION 1: HERO --><br \/>\n    <!-- \u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550 --><\/p>\n<section class=\"hero\">\n<div class=\"hero-content\">\n<div class=\"hero-label\">Gemini Features Deep Dive<\/div>\n<h1>How Gemini Works: <br \/>Deep Research, Gems, Canvas, <br \/>Imagen, Veo, and Live<\/h1>\n<p class=\"hero-subtitle\" style=\"padding-top: 30px;\">\n                Gemini ships ten distinct user-facing features split across five categories: research and reasoning (Deep Research, Deep Research Max), customization (Gems, Canvas), conversational and audio interfaces (Audio Overviews, NotebookLM, Live, Project Astra), workspace integration (Gmail, Docs, Sheets, Slides, Meet), and media generation (Imagen 4, Veo 3.1).\n            <\/p>\n<p class=\"hero-subtitle\" style=\"margin-top: 24px; font-size: 19px; color: #9ca3af; max-width: 800px;\">\n                This guide covers what each feature actually does, how it works mechanically, when to use it, when not to, and the documented limitations and transparency gaps. For tier requirements, see the <a href=\"\/hub?page_id=5206\" style=\"color: #fff; text-decoration: underline;\">Gemini Pricing Guide<\/a>. For comparisons against Claude, ChatGPT, Grok, and Perplexity equivalents, see <a href=\"\/hub\/gemini\/vs-other-ai\/\" style=\"color: #fff; text-decoration: underline;\">Gemini vs Other AI Models<\/a>.\n            <\/p>\n<p style=\"margin-top: 16px; font-size: 14px; color: #9ca3af;\">\n                Last verified May 10, 2026. Next refresh due August 10, 2026.\n            <\/p>\n<\/p><\/div>\n<\/section>\n<p>    <!-- suprmind-demo-injection --><\/p>\n<h2 style=\"text-align:center; max-width:800px; margin:0 auto 24px;\">See How Gemini Works with Four Other Frontier AI Models in a Multi-AI Orchestrated Business Discussion<\/h2>\n<p>    <style>@media (max-width: 768px){#suprmind-demo{margin-left:calc(-1 * var(--suprmind-demo-mobile-bleed, 8px))!important;margin-right:calc(-1 * var(--suprmind-demo-mobile-bleed, 8px))!important;width:calc(100% + var(--suprmind-demo-mobile-bleed, 8px) * 2)!important;}}<\/style><div id=\"suprmind-demo\" style=\"width:100%; overflow:hidden;\"><\/div><br \/>\n    <!-- \/suprmind-demo-injection --><\/p>\n<p>    <!-- \u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550 --><br \/>\n    <!-- SECTION 2: DEEP RESEARCH --><br \/>\n    <!-- 
\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550 --><\/p>\n<section style=\"padding: 100px 48px;\">\n<div style=\"max-width: 1000px; margin: 0 auto;\">\n<div style=\"text-align: center;\">\n<div class=\"section-label\">Deep Research and Deep Research Max<\/div>\n<h2>How multi-step research works <br \/>at the agentic layer.<\/h2>\n<\/p><\/div>\n<p style=\"font-size: 19px; line-height: 1.8; color: rgba(255,255,255,0.9); margin: 40px 0 24px 0;\">\n                Deep Research is the feature that turns Gemini from a chat model into a research agent. Activated through a UI toggle in the Gemini app or via the Deep Research model selection in the model picker, it fires an iterative retrieval-augmented-generation loop. The agent decomposes the query into sub-topics, browses up to hundreds of websites iteratively (plus the user&#8217;s Gmail, Drive, and Chat if permitted), follows fresh links, summarizes findings in an internal scratchpad, and synthesizes the result into a multi-page cited report.\n            <\/p>\n<p style=\"font-size: 18px; line-height: 1.8; color: rgba(255,255,255,0.9); margin: 0 0 24px 0;\">\n                The output is a structured research document with numbered source citations. Reports can be converted to Audio Overview format (two-host podcast-style audio), to Canvas for further editing, to interactive exploration formats, or to quizzes for retention testing. 
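The agentic loop described above (decompose the query into sub-topics, browse iteratively, accumulate findings in a scratchpad, synthesize a cited report) can be sketched in miniature. Everything below is an illustrative Python stub, not the Gemini API; the helper names `decompose`, `browse`, and `deep_research` are ours:

```python
# Minimal simulation of the Deep Research loop: decompose, iterative
# retrieval into a scratchpad, then synthesis with numbered citations.
# The sub-topics and sources are stubbed; a real agent would call the
# model and browse live web pages at each step.

def decompose(query: str) -> list[str]:
    # A real agent asks the model for sub-topics; stubbed here.
    return [f"{query}: background", f"{query}: current state", f"{query}: open questions"]

def browse(subtopic: str) -> list[dict]:
    # A real agent fetches pages and follows links; stubbed as one source.
    slug = subtopic.replace(" ", "-").replace(":", "")
    return [{"url": f"https://example.com/{slug}", "summary": f"Notes on {subtopic}"}]

def deep_research(query: str) -> str:
    scratchpad: list[dict] = []
    for subtopic in decompose(query):        # 1. decompose into sub-topics
        scratchpad.extend(browse(subtopic))  # 2. iterative retrieval
    # 3. synthesize: body text with [n] markers, then the numbered source list
    body = [f"{src['summary']} [{i}]" for i, src in enumerate(scratchpad, 1)]
    sources = [f"[{i}] {src['url']}" for i, src in enumerate(scratchpad, 1)]
    return "\n".join([f"Report: {query}", ""] + body + [""] + sources)

report = deep_research("solid-state batteries")
```

The structure of the output mirrors what the real feature returns: a report body whose claims carry citation markers, followed by the sources those markers resolve to.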
The conversion options sit at the top of the report when generation completes.\n            <\/p>\n<p style=\"font-size: 18px; line-height: 1.8; color: rgba(255,255,255,0.9); margin: 0 0 40px 0;\">\n                Deep Research Max launched 2026-04-20 as the higher-tier variant. It runs longer iterations, traverses deeper through linked sources, and adds Model Context Protocol (MCP) server integration plus native visualizations to the synthesis stage. The API exposes two model variants as of 2026-04-21: <code style=\"background: rgba(139,92,246,0.1); padding: 2px 6px; border-radius: 4px; font-size: 0.9em;\">deep-research-preview-04-2026<\/code> for speed and streaming, and <code style=\"background: rgba(139,92,246,0.1); padding: 2px 6px; border-radius: 4px; font-size: 0.9em;\">deep-research-max-preview-04-2026<\/code> for maximum comprehensiveness at higher cost.\n            <\/p>\n<h3 style=\"font-size: 22px; font-weight: 600; margin: 40px 0 24px;\">Tier Availability<\/h3>\n<ul class=\"feature-list\" style=\"margin-bottom: 40px;\">\n<li><span class=\"check\"><\/span><strong>Free tier:<\/strong> 5 reports per month.<\/li>\n<li><span class=\"check\"><\/span><strong>Google AI Plus:<\/strong> more access (exact number not disclosed).<\/li>\n<li><span class=\"check\"><\/span><strong>Google AI Pro:<\/strong> 5x more Audio Overviews than Free, implying higher Deep Research quota.<\/li>\n<li><span class=\"check\"><\/span><strong>Google AI Ultra:<\/strong> highest limits, plus the visual exploration output that lower paid tiers do not get.<\/li>\n<li><span class=\"check\"><\/span><strong>API:<\/strong> paid tier with model-specific pricing.<\/li>\n<\/ul>\n<h3 style=\"font-size: 22px; font-weight: 600; margin: 40px 0 24px;\">Documented Limitations<\/h3>\n<p style=\"font-size: 18px; line-height: 1.8; color: rgba(255,255,255,0.9); margin: 0 0 24px 0;\">\n                Source quality varies. 
Deep Research surfaces blogs alongside peer-reviewed sources, marketing pages alongside primary government documents. The synthesis layer cites accessed URLs but does not independently verify whether the claims at those URLs are accurate. The user-side verification load is real: the report contains citations that the user must validate against the original sources before relying on the conclusions for any high-stakes decision.\n            <\/p>\n<p style=\"font-size: 18px; line-height: 1.8; color: rgba(255,255,255,0.9); margin: 0 0 0 0;\">\n                The hard limits: maximum sources browsed is &#8220;up to hundreds&#8221; per Google&#8217;s official language with no specific cap published. The API file size limit is 100 MB (increased from 20 MB on 2026-01-08). The Free tier cap of 5 reports per month is the firmest published constraint.\n            <\/p>\n<\/p><\/div>\n<\/section>\n<p>    <!-- \u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550 --><br \/>\n    <!-- SECTION 3: GEMS --><br \/>\n    <!-- \u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550 --><\/p>\n<section style=\"padding: 100px 48px;\">\n<div style=\"max-width: 1000px; margin: 0 auto;\">\n<div style=\"text-align: center;\">\n<div class=\"section-label\">Gems<\/div>\n<h2>Custom AI personas <br \/>with the four-field construction 
model.<\/h2>\n<\/p><\/div>\n<p style=\"font-size: 19px; line-height: 1.8; color: rgba(255,255,255,0.9); margin: 40px 0 24px 0;\">\n                Gems are customizable Gemini chat instances built through the Gem Builder. The construction model defines four fields: Persona (the role the Gem plays), Task (what the Gem should do), Context (how the Gem performs the task), and Format (how the output should be presented). Up to 10 reference files can be attached to each Gem and used across all interactions.\n            <\/p>\n<p style=\"font-size: 18px; line-height: 1.8; color: rgba(255,255,255,0.9); margin: 0 0 24px 0;\">\n                Gems persist across sessions and retain their configured instructions. A user can create a Gem for &#8220;weekly Python code reviewer&#8221; with attached coding standards documents, a Gem for &#8220;meal planner with my dietary restrictions&#8221; with attached preferences, and a Gem for &#8220;writing coach in my style&#8221; with attached samples. Each Gem operates in its own conversation namespace.\n            <\/p>\n<p style=\"font-size: 18px; line-height: 1.8; color: rgba(255,255,255,0.9); margin: 0 0 40px 0;\">\n                Google also provides pre-built Gems in the Gems Manager. The pre-built set covers common use cases (writing coach, code helper, brainstorm partner). The functional comparison: Gems are Google&#8217;s equivalent of ChatGPT&#8217;s Custom GPTs, with comparable construction patterns and a 10-file reference attachment limit.\n            <\/p>\n<h3 style=\"font-size: 22px; font-weight: 600; margin: 40px 0 24px;\">Tier Availability and Workspace Integration<\/h3>\n<p style=\"font-size: 18px; line-height: 1.8; color: rgba(255,255,255,0.9); margin: 0 0 24px 0;\">\n                Available on the Free tier with limits. Full Gem creation is confirmed for paid tiers, though specific per-day or per-month creation limits are not publicly enumerated. 
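The four-field construction model and the 10-file cap can be expressed as a small data model. This is an illustrative Python sketch of the Gem Builder's fields, not a Google SDK class; the `Gem` dataclass and `MAX_REFERENCE_FILES` constant are ours:

```python
from dataclasses import dataclass, field

MAX_REFERENCE_FILES = 10  # documented per-Gem attachment limit

@dataclass
class Gem:
    persona: str   # the role the Gem plays
    task: str      # what the Gem should do
    context: str   # how the Gem performs the task
    format: str    # how the output should be presented
    reference_files: list[str] = field(default_factory=list)

    def __post_init__(self) -> None:
        # Enforce the documented cap at construction time.
        if len(self.reference_files) > MAX_REFERENCE_FILES:
            raise ValueError(
                f"Gems accept at most {MAX_REFERENCE_FILES} reference files; "
                "larger corpora are a better fit for NotebookLM."
            )

reviewer = Gem(
    persona="Senior Python reviewer",
    task="Review weekly pull requests",
    context="Apply the attached coding standards",
    format="Bulleted findings ordered by severity",
    reference_files=["coding_standards.pdf"],
)
```

Attempting an eleventh reference file raises, which matches the hard limit discussed in this section: workflows with larger reference corpora cannot use Gems alone.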
Gems can be integrated into Google Workspace apps including Gmail, Docs, and Drive, surfacing inside those apps as configured assistants rather than only inside the Gemini chat interface.\n            <\/p>\n<p style=\"font-size: 18px; line-height: 1.8; color: rgba(255,255,255,0.9); margin: 0 0 0 0;\">\n                The hard limit worth noting: the 10-file reference attachment cap means workflows that depend on a larger reference corpus cannot use Gems alone. For corpus sizes above 10 files, NotebookLM is the firmer fit since it accepts larger source sets and grounds responses in the source material rather than parametric knowledge.\n            <\/p>\n<\/p><\/div>\n<\/section>\n<p>    <!-- \u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550 --><br \/>\n    <!-- SECTION 4: CANVAS --><br \/>\n    <!-- \u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550 --><\/p>\n<section style=\"padding: 100px 48px;\">\n<div style=\"max-width: 1000px; margin: 0 auto;\">\n<div style=\"text-align: center;\">\n<div class=\"section-label\">Canvas<\/div>\n<h2>Side-by-side workspace <br \/>with the targeted-edit pattern.<\/h2>\n<\/p><\/div>\n<p style=\"font-size: 19px; line-height: 1.8; color: rgba(255,255,255,0.9); margin: 40px 0 24px 0;\">\n                Canvas opens a split-panel interface inside the Gemini app. 
The chat sits on the left, and the document, code, slides, or app prototype sits on the right. Users can type directly in the Canvas panel or issue edit instructions through the prompt box. Changes auto-save. The panel supports documents, code, slides, web apps, and app prototypes.\n            <\/p>\n<p style=\"font-size: 18px; line-height: 1.8; color: rgba(255,255,255,0.9); margin: 0 0 24px 0;\">\n                The targeted-edit pattern is the differentiator. Users can select a section of text or code in the Canvas panel and prompt Gemini to revise that specific section. The model reads the selection plus the surrounding context and proposes an edit without regenerating the entire document. The pattern is comparable in function to Claude&#8217;s Artifacts feature.\n            <\/p>\n<p style=\"font-size: 18px; line-height: 1.8; color: rgba(255,255,255,0.9); margin: 0 0 40px 0;\">\n                Supported Canvas output formats include Audio Overview (the document becomes a two-host audio summary), quiz, infographic, flashcards, and web app. The format conversion runs through the format selector at the top of the Canvas panel.\n            <\/p>\n<h3 style=\"font-size: 22px; font-weight: 600; margin: 40px 0 24px;\">Tier Availability<\/h3>\n<p style=\"font-size: 18px; line-height: 1.8; color: rgba(255,255,255,0.9); margin: 0 0 24px 0;\">\n                Basic Canvas (documents, code) is available to Free users. Visual and interactive report output from Deep Research into Canvas is Ultra-only as confirmed by independent third-party reporting from late 2025. 
Workspace Enterprise Business edition has a Canvas feature toggle in the enterprise admin interface, allowing organization-level Canvas enablement for business users.\n            <\/p>\n<p style=\"font-size: 18px; line-height: 1.8; color: rgba(255,255,255,0.9); margin: 0 0 0 0;\">\n                The hard limits: Pro tier subscription marketing references up to 1,500 pages of file uploads and up to 30,000 lines of code. App and web-app generation in Canvas relies on the underlying Gemini model&#8217;s context limits rather than separately enumerated Canvas-specific caps.\n            <\/p>\n<\/p><\/div>\n<\/section>\n<p>    <!-- \u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550 --><br \/>\n    <!-- SECTION 5: AUDIO OVERVIEWS AND NOTEBOOKLM --><br \/>\n    <!-- \u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550 --><\/p>\n<section style=\"padding: 100px 48px;\">\n<div style=\"max-width: 1000px; margin: 0 auto;\">\n<div style=\"text-align: center;\">\n<div class=\"section-label\">Audio Overviews and NotebookLM<\/div>\n<h2>Two-host audio synthesis <br \/>integrated into the consumer app.<\/h2>\n<\/p><\/div>\n<p style=\"font-size: 19px; line-height: 1.8; color: rgba(255,255,255,0.9); margin: 40px 0 24px 0;\">\n                Audio Overviews convert source documents, slides, and Deep Research reports into podcast-style 
discussions between two AI hosts. The two-host dialogue pattern was pioneered by NotebookLM, the standalone notebook-first product, and integrated into the Gemini consumer app on 2025-03-17.\n            <\/p>\n<p style=\"font-size: 18px; line-height: 1.8; color: rgba(255,255,255,0.9); margin: 0 0 24px 0;\">\n                In the Gemini app, Audio Overview generation is tied to the Deep Research model selection: a Deep Research report can be converted to Audio Overview format from within the result view. The audio runs in the background, allowing concurrent work in the chat interface during generation. In NotebookLM, Audio Overview generation runs per notebook through the Studio panel, with one audio overview per notebook.\n            <\/p>\n<p style=\"font-size: 18px; line-height: 1.8; color: rgba(255,255,255,0.9); margin: 0 0 40px 0;\">\n                NotebookLM Plus is the paid NotebookLM tier with higher source counts per notebook, longer audio output, and customization controls. 
NotebookLM Enterprise is the Workspace tier with API access via the <code style=\"background: rgba(139,92,246,0.1); padding: 2px 6px; border-radius: 4px; font-size: 0.9em;\">notebooks.audioOverviews.create<\/code> method, integrated into Workspace identity and access controls.\n            <\/p>\n<h3 style=\"font-size: 22px; font-weight: 600; margin: 40px 0 24px;\">Tier Availability<\/h3>\n<ul class=\"feature-list\" style=\"margin-bottom: 24px;\">\n<li><span class=\"check\"><\/span><strong>Free:<\/strong> NotebookLM access included with platform limits.<\/li>\n<li><span class=\"check\"><\/span><strong>Google AI Plus:<\/strong> more Audio Overviews and notebooks.<\/li>\n<li><span class=\"check\"><\/span><strong>Google AI Pro:<\/strong> 5x more Audio Overviews than Free plus expanded notebook limits.<\/li>\n<li><span class=\"check\"><\/span><strong>Google AI Ultra:<\/strong> highest limits and best model capabilities.<\/li>\n<\/ul>\n<p style=\"font-size: 18px; line-height: 1.8; color: rgba(255,255,255,0.9); margin: 0 0 0 0;\">\n                The hard limit: one audio overview per notebook through the API. 
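The `notebooks.audioOverviews.create` method name comes from the documentation; the request shape below is a hypothetical sketch, with the one-overview-per-notebook rule enforced client-side for illustration (the resource path and field names are our placeholders, not the published schema):

```python
# Hypothetical request builder for the Enterprise audio overview method.
# A real client would send this through an authenticated Workspace API
# call; here we only model the payload and the documented constraint.

class OneOverviewPerNotebookError(RuntimeError):
    """Raised when a notebook already has its single allowed overview."""

def build_audio_overview_request(notebook_id: str,
                                 existing_overview_count: int) -> dict:
    if existing_overview_count >= 1:
        # The API allows one audio overview per notebook.
        raise OneOverviewPerNotebookError(
            f"Notebook {notebook_id} already has an audio overview."
        )
    return {
        "method": "notebooks.audioOverviews.create",
        "parent": f"notebooks/{notebook_id}",  # hypothetical resource name
    }

req = build_audio_overview_request("nb-123", existing_overview_count=0)
```

Regenerating an overview therefore means replacing the existing one rather than accumulating several per notebook.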
Specific notebook count limits and source-per-notebook caps are not publicly enumerated for consumer tiers in available documentation.\n            <\/p>\n<\/p><\/div>\n<\/section>\n<p>    <!-- \u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550 --><br \/>\n    <!-- SECTION 6: WORKSPACE INTEGRATION --><br \/>\n    <!-- \u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550 --><\/p>\n<section style=\"padding: 100px 48px;\">\n<div style=\"max-width: 1100px; margin: 0 auto;\">\n<div style=\"text-align: center;\">\n<div class=\"section-label\">Workspace Integration<\/div>\n<h2>Gmail, Docs, Sheets, Slides, Meet. <br \/>The integration depth is the moat.<\/h2>\n<\/p><\/div>\n<p style=\"font-size: 19px; line-height: 1.8; color: rgba(255,255,255,0.9); margin: 40px auto 60px; max-width: 900px;\">\n                Gemini in Workspace surfaces as a side panel or inline assistant inside Google Workspace applications. 
The integration depth differs across applications.\n            <\/p>\n<div style=\"overflow: hidden; margin-bottom: 24px;\">\n<div style=\"float: left; width: 32%; padding: 32px; border: 2px solid rgba(255,255,255,0.08); border-radius: 12px; box-sizing: border-box;\">\n<h4 style=\"font-size: 18px; margin: 0 0 16px 0; font-weight: 600;\">Gmail<\/h4>\n<p style=\"font-size: 15px; line-height: 1.7; margin: 0; color: rgba(255,255,255,0.85);\">\n                        Drafting full replies from short bullet points, summarizing long threads, suggesting calendar invites from email content, Smart Compose extension.\n                    <\/p>\n<\/p><\/div>\n<div style=\"float: left; width: 32%; margin-left: 2%; padding: 32px; border: 2px solid rgba(255,255,255,0.08); border-radius: 12px; box-sizing: border-box;\">\n<h4 style=\"font-size: 18px; margin: 0 0 16px 0; font-weight: 600;\">Docs<\/h4>\n<p style=\"font-size: 15px; line-height: 1.7; margin: 0; color: rgba(255,255,255,0.85);\">\n                        Writing assistance, paragraph rewriting, tone adjustment, format restructuring, section generation from prompts.\n                    <\/p>\n<\/p><\/div>\n<div style=\"float: left; width: 32%; margin-left: 2%; padding: 32px; border: 2px solid rgba(255,255,255,0.08); border-radius: 12px; box-sizing: border-box;\">\n<h4 style=\"font-size: 18px; margin: 0 0 16px 0; font-weight: 600;\">Sheets<\/h4>\n<p style=\"font-size: 15px; line-height: 1.7; margin: 0; color: rgba(255,255,255,0.85);\">\n                        Formula generation from natural language descriptions, data analysis suggestions, chart recommendations.\n                    <\/p>\n<\/p><\/div>\n<\/p><\/div>\n<div style=\"clear: both;\"><\/div>\n<div style=\"overflow: hidden; margin-bottom: 40px;\">\n<div style=\"float: left; width: 49%; padding: 32px; border: 2px solid rgba(255,255,255,0.08); border-radius: 12px; box-sizing: border-box;\">\n<h4 style=\"font-size: 18px; margin: 0 0 16px 0; font-weight: 
600;\">Meet<\/h4>\n<p style=\"font-size: 15px; line-height: 1.7; margin: 0; color: rgba(255,255,255,0.85);\">\n                        Meeting note generation, action item extraction, post-meeting summary delivery.\n                    <\/p>\n<\/p><\/div>\n<div style=\"float: right; width: 49%; padding: 32px; border: 2px solid rgba(255,255,255,0.08); border-radius: 12px; box-sizing: border-box;\">\n<h4 style=\"font-size: 18px; margin: 0 0 16px 0; font-weight: 600;\">Slides and Vids<\/h4>\n<p style=\"font-size: 15px; line-height: 1.7; margin: 0; color: rgba(255,255,255,0.85);\">\n                        Slide generation from outlines, slide rewriting from feedback, image generation through Imagen integration. Vids: AI video creation from prompts and assets.\n                    <\/p>\n<\/p><\/div>\n<\/p><\/div>\n<div style=\"clear: both;\"><\/div>\n<h3 style=\"font-size: 22px; font-weight: 600; margin: 40px 0 24px;\">Tier Availability<\/h3>\n<p style=\"font-size: 18px; line-height: 1.8; color: rgba(255,255,255,0.9); margin: 0 0 24px 0;\">\n                Free tier: Gemini in Gmail only as a basic side panel feature, plus Gemini app chat access. The deep Workspace integration across all five applications requires either Google AI Plus (Gmail, Vids, and more), Google AI Pro (Gmail, Docs, Vids, and more), or Google AI Ultra (highest limits across all apps). The Workspace Business plans bundle the integration with graduated feature access by plan tier.\n            <\/p>\n<p style=\"font-size: 18px; line-height: 1.8; color: rgba(255,255,255,0.9); margin: 0 0 0 0;\">\n                The integration depth is structurally hard to replicate elsewhere. For organizations already standardized on Google Workspace, the in-app integration creates real switching cost relative to a stand-alone external chat interface. 
The relevant procurement question is rarely &#8220;Gemini API cost vs ChatGPT API cost.&#8221; It is whether the Workspace integration depth offsets the calibration deficit per the Suprmind Multi-Model Divergence Index, April 2026 Edition.\n            <\/p>\n<\/p><\/div>\n<\/section>\n<p>    <!-- \u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550 --><br \/>\n    <!-- SECTION 7: IMAGEN 4 --><br \/>\n    <!-- \u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550 --><\/p>\n<section style=\"padding: 100px 48px;\">\n<div style=\"max-width: 1100px; margin: 0 auto;\">\n<div style=\"text-align: center;\">\n<div class=\"section-label\">Imagen 4 &#8211; Image Generation<\/div>\n<h2>Three quality tiers in the dedicated API. <br \/>Nano Banana for native in-chat generation.<\/h2>\n<\/p><\/div>\n<p style=\"font-size: 19px; line-height: 1.8; color: rgba(255,255,255,0.9); margin: 40px auto 24px; max-width: 900px;\">\n                Imagen 4 is the dedicated text-to-image API model family with three speed and quality variants: Fast, Standard, and Ultra. 
All three variants reached general availability on 2025-08-14.\n            <\/p>\n<p style=\"font-size: 18px; line-height: 1.8; color: rgba(255,255,255,0.9); margin: 0 auto 24px; max-width: 900px;\">\n                The native image generation variant in the Gemini model itself is separate. Nano Banana (Gemini 2.5 Flash Image) reached general availability on 2025-10-02, allowing image generation and editing in the same model context as text. Nano Banana 2 (Gemini 3.1 Flash Image Preview) launched 2026-02-26. Nano Banana Pro is in preview as of the research date, positioned as state-of-the-art for highly contextual native image creation.\n            <\/p>\n<p style=\"font-size: 18px; line-height: 1.8; color: rgba(255,255,255,0.9); margin: 0 auto 40px; max-width: 900px;\">\n                The architectural distinction matters for workflow design. The Imagen 4 family is the dedicated image-only API with per-image pricing. The Nano Banana family is image generation integrated inside the conversational Gemini model, allowing iterative image editing within a chat context. For workflows where the image is the deliverable, Imagen 4 is the firmer path. 
For workflows where the image accompanies a longer conversational task, Nano Banana fits the integrated context better.\n            <\/p>\n<h3 style=\"font-size: 22px; font-weight: 600; margin: 40px 0 32px; text-align: center;\">API Pricing (Imagen 4)<\/h3>\n<div class=\"comparison-table comparison-table-3\" style=\"max-width: 800px; margin: 0 auto 40px;\">\n<div class=\"comparison-row\">\n<div class=\"comparison-feature\">Variant<\/div>\n<div class=\"comparison-value\">Per-Image Cost<\/div>\n<div class=\"comparison-value\">Use Case<\/div>\n<\/p><\/div>\n<div class=\"comparison-row\">\n<div class=\"comparison-feature\">Imagen 4 Fast<\/div>\n<div class=\"comparison-value\">$0.02<\/div>\n<div class=\"comparison-value\">High-volume exploration<\/div>\n<\/p><\/div>\n<div class=\"comparison-row\">\n<div class=\"comparison-feature\">Imagen 4 Standard<\/div>\n<div class=\"comparison-value\">$0.04<\/div>\n<div class=\"comparison-value\">Default production tier<\/div>\n<\/p><\/div>\n<div class=\"comparison-row\">\n<div class=\"comparison-feature\">Imagen 4 Ultra<\/div>\n<div class=\"comparison-value\">$0.06<\/div>\n<div class=\"comparison-value\">Highest-quality output<\/div>\n<\/p><\/div>\n<\/p><\/div>\n<p style=\"font-size: 18px; line-height: 1.8; color: rgba(255,255,255,0.9); margin: 0 auto 40px; max-width: 900px;\">\n                The text rendering quality on Imagen 4 was a specific improvement focus. 
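The per-image rates in the table above reduce batch budgeting to simple arithmetic. A minimal sketch; the function name and the draft-versus-final split are our illustration, while the rates are the documented per-image prices:

```python
# Batch-cost arithmetic for the Imagen 4 per-image pricing table.
IMAGEN_4_PRICE_PER_IMAGE = {
    "fast": 0.02,      # high-volume exploration
    "standard": 0.04,  # default production tier
    "ultra": 0.06,     # highest-quality output
}

def imagen_batch_cost(variant: str, image_count: int) -> float:
    """Total USD cost for a batch of images on one Imagen 4 variant."""
    return round(IMAGEN_4_PRICE_PER_IMAGE[variant] * image_count, 2)

# Example budget: 500 exploratory drafts on Fast, then 50 finals on Ultra.
draft_cost = imagen_batch_cost("fast", 500)   # $10.00
final_cost = imagen_batch_cost("ultra", 50)   # $3.00
```

The spread between tiers rewards a two-pass workflow: iterate cheaply on Fast, then rerun only the keepers on Ultra.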
Independent reporting at the launch period flagged better text rendering and overall image quality up to 2K resolution as the headline change versus prior generations.\n            <\/p>\n<div style=\"padding: 32px; border: 2px solid rgba(255,255,255,0.08); border-left: 2px solid #fbbf24; border-radius: 12px; max-width: 900px; margin: 0 auto;\">\n<h4 style=\"font-size: 20px; margin: 0 0 16px 0; font-weight: 600;\">The 2024 Image Generation Controversy<\/h4>\n<p style=\"font-size: 17px; line-height: 1.7; margin: 0; color: rgba(255,255,255,0.9);\">\n                    Worth flagging because it shaped Gemini&#8217;s brand reputation. In February 2024, Google paused human image generation after users demonstrated that Gemini was producing historically inaccurate images that predominantly featured people of color regardless of historical context. The examples included Black Founding Fathers and Nazi soldiers of non-European descent. Google SVP Prabhakar Raghavan acknowledged the company &#8220;missed the mark.&#8221; The feature was paused, recalibrated, and resumed. 
The controversy remains the most prominent public failure associated with the Gemini brand and is referenced in regulatory filings and academic literature on AI safety calibration.\n                <\/p>\n<\/p><\/div>\n<\/p><\/div>\n<\/section>\n<p>    <!-- \u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550 --><br \/>\n    <!-- SECTION 8: VEO 3.1 --><br \/>\n    <!-- \u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550 --><\/p>\n<section style=\"padding: 100px 48px;\">\n<div style=\"max-width: 1100px; margin: 0 auto;\">\n<div style=\"text-align: center;\">\n<div class=\"section-label\">Veo 3.1 &#8211; Video Generation<\/div>\n<h2>Up to 4K with native audio synthesis. <br \/>Reference images, frame control, portrait orientation.<\/h2>\n<\/p><\/div>\n<p style=\"font-size: 19px; line-height: 1.8; color: rgba(255,255,255,0.9); margin: 40px auto 24px; max-width: 900px;\">\n                Veo 3.1 is Google&#8217;s current video generation model, available in the Gemini app (consumer) through the Flow filmmaking platform and via the API. The Veo line launched in May 2024 in preview, with Veo 2 reaching GA on 2025-04-09, Veo 3 on 2025-09-09 (the first model to generate synchronized audio natively), and Veo 3.1 in preview from 2025-10-15. 
Veo 3.1 Lite launched 2026-03-31 as the lower-tier variant.\n            <\/p>\n<p style=\"font-size: 18px; line-height: 1.8; color: rgba(255,255,255,0.9); margin: 0 auto 40px; max-width: 900px;\">\n                Veo 3.1 generates video from text prompts or image inputs at up to 4K resolution. The model supports portrait orientation, video extension (extending an existing clip), reference image inputs (up to 3), and first\/last frame specification (precise control over the opening and closing shots). The audio synthesis runs natively alongside video, producing dialogue, sound effects, and ambient noise synchronized with the visual track.\n            <\/p>\n<h3 style=\"font-size: 22px; font-weight: 600; margin: 40px 0 24px;\">Tier Availability and API Pricing<\/h3>\n<ul class=\"feature-list\" style=\"margin-bottom: 40px;\">\n<li><span class=\"check\"><\/span><strong>Veo 3.1 (full):<\/strong> Ultra subscribers (consumer).<\/li>\n<li><span class=\"check\"><\/span><strong>Veo 3.1 Lite:<\/strong> AI Plus and Pro tiers (limited access).<\/li>\n<li><span class=\"check\"><\/span><strong>Free tier:<\/strong> limited access to Veo 3.1 via Flow.<\/li>\n<\/ul>\n<h3 style=\"font-size: 22px; font-weight: 600; margin: 40px 0 32px; text-align: center;\">API Pricing per Second of Generated Video<\/h3>\n<div class=\"comparison-table comparison-table-4\" style=\"max-width: 1000px; margin: 0 auto 40px;\">\n<div class=\"comparison-row\">\n<div class=\"comparison-feature\">Variant<\/div>\n<div class=\"comparison-value\">720p<\/div>\n<div class=\"comparison-value\">1080p<\/div>\n<div class=\"comparison-value\">4K<\/div>\n<\/p><\/div>\n<div class=\"comparison-row\">\n<div class=\"comparison-feature\">Standard with audio<\/div>\n<div class=\"comparison-value\">$0.40<\/div>\n<div class=\"comparison-value\">$0.40<\/div>\n<div class=\"comparison-value\">$0.60<\/div>\n<\/p><\/div>\n<div class=\"comparison-row\">\n<div class=\"comparison-feature\">Fast with audio<\/div>\n<div 
class=\"comparison-value\">$0.10<\/div>\n<div class=\"comparison-value\">$0.12<\/div>\n<div class=\"comparison-value\">$0.30<\/div>\n<\/p><\/div>\n<div class=\"comparison-row\">\n<div class=\"comparison-feature\">Lite with audio<\/div>\n<div class=\"comparison-value\">$0.05<\/div>\n<div class=\"comparison-value\">$0.08<\/div>\n<div class=\"comparison-value\">n\/a<\/div>\n<\/p><\/div>\n<\/p><\/div>\n<p style=\"font-size: 18px; line-height: 1.8; color: rgba(255,255,255,0.9); margin: 0 auto 0; max-width: 900px;\">\n                The per-second pricing structure means a 30-second 1080p Veo 3.1 Standard clip costs $12 in pure inference. The Lite variant at 720p is $1.50 for the same duration, the cheapest path for low-resolution exploratory generation.\n            <\/p>\n<\/p><\/div>\n<\/section>\n<p>    <!-- \u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550 --><br \/>\n    <!-- SECTION 9: GEMINI LIVE AND PROJECT ASTRA --><br \/>\n    <!-- \u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550 --><\/p>\n<section style=\"padding: 100px 48px;\">\n<div style=\"max-width: 1100px; margin: 0 auto;\">\n<div style=\"text-align: center;\">\n<div class=\"section-label\">Gemini Live and Project Astra<\/div>\n<h2>Real-time voice with low-latency interruption <br \/>and snapshot-based camera input.<\/h2>\n<\/p><\/div>\n<p 
style=\"font-size: 19px; line-height: 1.8; color: rgba(255,255,255,0.9); margin: 40px auto 24px; max-width: 900px;\">\n                Gemini Live is the real-time voice conversation mode in the Gemini app. The mode supports back-and-forth spoken interaction with low latency, interruption handling (the user can talk over Gemini and the response adapts), context retention across the voice session, and integration with the phone&#8217;s camera for visual context during conversation.\n            <\/p>\n<p style=\"font-size: 18px; line-height: 1.8; color: rgba(255,255,255,0.9); margin: 0 auto 24px; max-width: 900px;\">\n                Project Astra is the underlying research initiative. It explores breakthrough capabilities for real-time multimodal AI assistance, including spatial processing, screen sharing, and tool use across Google apps. Project Astra is not a standalone shipping product. Its capabilities are progressively incorporated into the Gemini app and the Live mode.\n            <\/p>\n<p style=\"font-size: 18px; line-height: 1.8; color: rgba(255,255,255,0.9); margin: 0 auto 40px; max-width: 900px;\">\n                The camera integration uses snapshot-based capture rather than a continuous video stream in the current consumer rollout. The user points the phone camera, and Gemini analyzes a snapshot or short sequence. The screen sharing capability allows Gemini to observe what is on the user&#8217;s device screen and provide contextual responses. 
Tool use and Google app integration (Search, Gmail, Calendar, Maps) layer the agentic capability on top of the conversational surface.\n            <\/p>\n<h3 style=\"font-size: 22px; font-weight: 600; margin: 40px 0 24px;\">API Model and Pricing<\/h3>\n<p style=\"font-size: 18px; line-height: 1.8; color: rgba(255,255,255,0.9); margin: 0 0 24px 0;\">\n                The current Live API model is <code style=\"background: rgba(139,92,246,0.1); padding: 2px 6px; border-radius: 4px; font-size: 0.9em;\">gemini-3.1-flash-live-preview<\/code> (launched 2026-03-26).\n            <\/p>\n<ul class=\"feature-list\" style=\"margin-bottom: 40px;\">\n<li><span class=\"check\"><\/span><strong>Text input:<\/strong> $0.75 per million tokens.<\/li>\n<li><span class=\"check\"><\/span><strong>Audio input:<\/strong> $3.00 per million tokens.<\/li>\n<li><span class=\"check\"><\/span><strong>Image and video input:<\/strong> $0.002 per minute.<\/li>\n<li><span class=\"check\"><\/span><strong>Text output:<\/strong> $4.50 per million tokens.<\/li>\n<li><span class=\"check\"><\/span><strong>Audio output:<\/strong> $12.00 per million tokens.<\/li>\n<\/ul>\n<p style=\"font-size: 18px; line-height: 1.8; color: rgba(255,255,255,0.9); margin: 0 auto 40px; max-width: 900px;\">\n                The audio output rate is the highest per-token rate in the Gemini API, reflecting the inference cost of voice synthesis at conversational latency. For workflows where high-volume audio output is the deliverable, the per-million-token output rate is the cost driver.\n            <\/p>\n<h3 style=\"font-size: 22px; font-weight: 600; margin: 40px 0 24px;\">Tier Availability<\/h3>\n<p style=\"font-size: 18px; line-height: 1.8; color: rgba(255,255,255,0.9); margin: 0 0 0 0;\">\n                Gemini Live basic: Free tier and above. Project Astra camera and screen-sharing capabilities: originally required paid tier, with broader rollout to Android 10+ devices through 2025. 
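The per-token rates above translate into session budgets with simple arithmetic. A minimal sketch, where the rate values are copied from the list above and the token counts in the example are illustrative assumptions, not measured figures:

```python
# Sketch: estimate a Gemini Live API session cost from the published rates.
RATES = {
    "text_in_per_m": 0.75,     # USD per million text input tokens
    "audio_in_per_m": 3.00,    # USD per million audio input tokens
    "text_out_per_m": 4.50,    # USD per million text output tokens
    "audio_out_per_m": 12.00,  # USD per million audio output tokens
    "video_in_per_min": 0.002, # USD per minute of image/video input
}

def live_session_cost(audio_in_tokens, audio_out_tokens,
                      text_in_tokens=0, text_out_tokens=0, video_minutes=0.0):
    """Estimated USD cost of one Live session at the listed rates."""
    m = 1_000_000
    return (text_in_tokens / m * RATES["text_in_per_m"]
            + audio_in_tokens / m * RATES["audio_in_per_m"]
            + text_out_tokens / m * RATES["text_out_per_m"]
            + audio_out_tokens / m * RATES["audio_out_per_m"]
            + video_minutes * RATES["video_in_per_min"])

# Example: 50k audio tokens in, 100k audio tokens out, 10 min of camera input.
print(f"${live_session_cost(50_000, 100_000, video_minutes=10):.2f}")  # $1.37
```

In this example the audio output line item ($1.20 of the $1.37) dominates, consistent with the note above that the audio output rate is the cost driver.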
Gemini Agent mode (Ultra tier): US-only, English-only.\n            <\/p>\n<\/p><\/div>\n<\/section>\n<p>    <!-- \u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550 --><br \/>\n    <!-- SECTION 10: COMPUTER USE AND JULES --><br \/>\n    <!-- \u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550 --><\/p>\n<section style=\"padding: 100px 48px;\">\n<div style=\"max-width: 1000px; margin: 0 auto;\">\n<div style=\"text-align: center;\">\n<div class=\"section-label\">Computer Use and Jules<\/div>\n<h2>Agentic browser control. <br \/>Asynchronous coding agent.<\/h2>\n<\/p><\/div>\n<p style=\"font-size: 19px; line-height: 1.8; color: rgba(255,255,255,0.9); margin: 40px 0 24px 0;\">\n                Computer Use is the model capability that allows Gemini to &#8220;see&#8221; a digital screen and perform UI actions like clicking, typing, and navigating. It is exposed through the API as a specialized model and as a tool callable from Gemini 3 Pro and Gemini 3 Flash.\n            <\/p>\n<p style=\"font-size: 18px; line-height: 1.8; color: rgba(255,255,255,0.9); margin: 0 0 24px 0;\">\n                The Gemini 2.5 Computer Use Preview launched 2025-10-07. Computer Use was added as a tool to Gemini 3 Pro and Gemini 3 Flash on 2026-01-29. The model receives screen content as input and emits UI actions as output. 
Workflows can chain perception (read screen) with action (click, type, navigate) to automate browser tasks that previously required manual operation.\n            <\/p>\n<p style=\"font-size: 18px; line-height: 1.8; color: rgba(255,255,255,0.9); margin: 0 0 24px 0;\">\n                Jules is the asynchronous coding agent referenced in the May 2026 subscription page. Jules operates on code repositories and runs in the background, comparable in positioning to coding agent products from other vendors. Jules availability is currently in Beta with English-only and 18+ requirements, plus a capacity caveat that means access is not always guaranteed.\n            <\/p>\n<p style=\"font-size: 18px; line-height: 1.8; color: rgba(255,255,255,0.9); margin: 0 0 40px 0;\">\n                Google Antigravity, referenced in the subscription page, is the agentic development platform separate from core Gemini.\n            <\/p>\n<h3 style=\"font-size: 22px; font-weight: 600; margin: 40px 0 24px;\">API Pricing (Computer Use)<\/h3>\n<p style=\"font-size: 18px; line-height: 1.8; color: rgba(255,255,255,0.9); margin: 0 0 40px 0;\">\n                <code style=\"background: rgba(139,92,246,0.1); padding: 2px 6px; border-radius: 4px; font-size: 0.9em;\">gemini-2.5-computer-use-preview-10-2025<\/code>: $1.25 per million input tokens (for inputs \u2264200,000 tokens), $10.00 per million output tokens.\n            <\/p>\n<h3 style=\"font-size: 22px; font-weight: 600; margin: 40px 0 24px;\">Tier Availability<\/h3>\n<p style=\"font-size: 18px; line-height: 1.8; color: rgba(255,255,255,0.9); margin: 0 0 0 0;\">\n                Computer Use API: paid tier. Jules: Pro tier higher limits, Ultra tier highest limits (Beta with the English-only and 18+ caveats). 
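The token rates above can be folded into a rough budget for the perceive-act loop described earlier in this section. A sketch; the tokens-per-turn figures in the example are illustrative assumptions, not published numbers:

```python
# Sketch: rough budget for an agentic perceive-act loop on
# gemini-2.5-computer-use-preview-10-2025 at the published rates.
INPUT_PER_M = 1.25    # USD per million input tokens (inputs <= 200,000 tokens)
OUTPUT_PER_M = 10.00  # USD per million output tokens

def loop_cost(turns, input_tokens_per_turn, output_tokens_per_turn):
    """Estimated USD cost of `turns` iterations of read-screen / emit-action."""
    m = 1_000_000
    per_turn = (input_tokens_per_turn / m * INPUT_PER_M
                + output_tokens_per_turn / m * OUTPUT_PER_M)
    return turns * per_turn

# Example: 20 turns at ~2,000 screen-content tokens in and ~100 action
# tokens out per turn (assumed figures).
print(f"${loop_cost(20, 2_000, 100):.3f}")  # $0.070
```

The screenshot-sized inputs, not the short action outputs, dominate under these assumptions, so compacting what the model sees per turn is the main cost lever.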
Gemini Agent mode: US-only, English-only, Ultra tier exclusive.\n            <\/p>\n<\/p><\/div>\n<\/section>\n<p>    <!-- \u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550 --><br \/>\n    <!-- SECTION 11: TIER-TO-MODEL TRANSPARENCY AND CITATION MECHANICS --><br \/>\n    <!-- \u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550 --><\/p>\n<section style=\"padding: 100px 48px;\">\n<div style=\"max-width: 1100px; margin: 0 auto;\">\n<div style=\"text-align: center;\">\n<div class=\"section-label\">Tier-to-Model Transparency and Citation Mechanics<\/div>\n<h2>Two cross-feature behaviors that shape <br \/>every workflow on the platform.<\/h2>\n<\/p><\/div>\n<p style=\"font-size: 19px; line-height: 1.8; color: rgba(255,255,255,0.9); margin: 40px auto 60px; max-width: 900px;\">\n                Two cross-feature behaviors warrant separate coverage because they affect every feature in the network.\n            <\/p>\n<h3 style=\"font-size: 24px; font-weight: 600; margin: 40px auto 24px; max-width: 900px;\">Tier-to-Model Transparency<\/h3>\n<p style=\"font-size: 18px; line-height: 1.8; color: rgba(255,255,255,0.9); margin: 0 auto 24px; max-width: 900px;\">\n                The Gemini app&#8217;s model selector shows model names (3.1 Pro, 3 Flash, etc.) in a dropdown when users manually switch. 
The default model delivered per tier is described in subscription marketing language only: Free gets 3 Flash plus varying access to 3.1 Pro, AI Plus gets enhanced access to 3.1 Pro, AI Pro gets higher access to 3.1 Pro, AI Ultra gets the highest limits. No UI element in the default chat surface displays the exact model ID or version being used for any given query.\n            <\/p>\n<p style=\"font-size: 18px; line-height: 1.8; color: rgba(255,255,255,0.9); margin: 0 auto 40px; max-width: 900px;\">\n                The tier-to-model mapping is documented in subscription marketing but not surfaced at inference time. This is a known user pain point in the developer community (GitHub VS Code issue 283194, 2025-04-21). Developers using the API must specify model IDs explicitly to lock model identity, since the <code style=\"background: rgba(139,92,246,0.1); padding: 2px 6px; border-radius: 4px; font-size: 0.9em;\">gemini-pro-latest<\/code> and <code style=\"background: rgba(139,92,246,0.1); padding: 2px 6px; border-radius: 4px; font-size: 0.9em;\">gemini-flash-latest<\/code> aliases were updated in January 2026 to point to Gemini 3 generation models, and Google&#8217;s documentation states aliases are periodically hot-swapped with two-week email notice. End users have no way to confirm which model a specific UI query actually hits.\n            <\/p>\n<h3 style=\"font-size: 24px; font-weight: 600; margin: 60px auto 24px; max-width: 900px;\">Citation Mechanics<\/h3>\n<p style=\"font-size: 18px; line-height: 1.8; color: rgba(255,255,255,0.9); margin: 0 auto 24px; max-width: 900px;\">\n                In the Gemini app, citations appear when Google Search grounding is active. Citations link to web sources. The system does not currently distinguish between claims sourced from the model&#8217;s parametric knowledge versus claims grounded via real-time web search in standard consumer output. 
Users seeing a Gemini response that includes both grounded and parametric content cannot tell which claims have source backing and which do not without manually checking the citation list against each claim.\n            <\/p>\n<p style=\"font-size: 18px; line-height: 1.8; color: rgba(255,255,255,0.9); margin: 0 auto 24px; max-width: 900px;\">\n                In Deep Research, citations are more explicit. Reports include numbered source citations with links to the web pages browsed during the research session. Each numbered citation maps to a specific section of the synthesis. This is the citation pattern most likely to support audit-quality research workflows.\n            <\/p>\n<p style=\"font-size: 18px; line-height: 1.8; color: rgba(255,255,255,0.9); margin: 0 auto 0; max-width: 900px;\">\n                In the API, the Grounding with Google Search tool returns grounding metadata with source URLs. The File Search API (launched 2025-11-06) returns <code style=\"background: rgba(139,92,246,0.1); padding: 2px 6px; border-radius: 4px; font-size: 0.9em;\">media_id<\/code> and <code style=\"background: rgba(139,92,246,0.1); padding: 2px 6px; border-radius: 4px; font-size: 0.9em;\">page_numbers<\/code> for visual citations against uploaded documents. Per Suprmind&#8217;s AI Hallucination Rates and Benchmarks reference (May 2026 update), Gemini 3 Pro scored 76% on the Columbia Journalism Review citation hallucination test, where lower is better. This is significantly worse than Perplexity Sonar Pro at 37%, the best rate of any model tested. 
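The manual claim-by-claim audit described above can be partially mechanized once claims and their citation numbers have been extracted from a report. A minimal sketch with illustrative data; in practice the pairs would be parsed from a Deep Research report and its numbered source list:

```python
# Sketch: flag claims in a report that carry no citation number.
def unsupported_claims(claims):
    """Return the text of every claim with no citations attached."""
    return [c["text"] for c in claims if not c.get("citations")]

report = [
    {"text": "Claim backed by source 3.", "citations": [3]},
    {"text": "Claim with no source attached.", "citations": []},
]
print(unsupported_claims(report))  # ['Claim with no source attached.']
```

This only finds uncited claims; verifying that a cited claim actually matches its source still requires reading the source, which is exactly what the CJR test measures.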
For citation-grounded research workflows where attribution accuracy matters, pair Gemini for breadth with Perplexity for citation grounding.\n            <\/p>\n<\/p><\/div>\n<\/section>\n<p>    <!-- \u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550 --><br \/>\n    <!-- SECTION 12: DOCUMENT HANDLING --><br \/>\n    <!-- \u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550 --><\/p>\n<section style=\"padding: 100px 48px;\">\n<div style=\"max-width: 1000px; margin: 0 auto;\">\n<div style=\"text-align: center;\">\n<div class=\"section-label\">Document Handling<\/div>\n<h2>Formats, file size limits, <br \/>and parser fidelity gaps.<\/h2>\n<\/p><\/div>\n<p style=\"font-size: 19px; line-height: 1.8; color: rgba(255,255,255,0.9); margin: 40px 0 40px 0;\">\n                Gemini handles document upload and analysis through both the consumer app and the API. 
The supported format set covers most everyday workflows.\n            <\/p>\n<h3 style=\"font-size: 22px; font-weight: 600; margin: 40px 0 24px;\">Supported Formats<\/h3>\n<ul class=\"feature-list\" style=\"margin-bottom: 40px;\">\n<li><span class=\"check\"><\/span><strong>Text and code:<\/strong> plain text, Markdown, code files (Python, JavaScript, others), CSV, JSON.<\/li>\n<li><span class=\"check\"><\/span><strong>Document formats:<\/strong> PDF (supported as of 2024-08-09), DOCX.<\/li>\n<li><span class=\"check\"><\/span><strong>Image formats:<\/strong> PNG, JPEG, WebP.<\/li>\n<li><span class=\"check\"><\/span><strong>Audio formats:<\/strong> various standard audio inputs.<\/li>\n<li><span class=\"check\"><\/span><strong>Video formats:<\/strong> MP4, MOV, WebM. Gemini 3 generation supports native video understanding with per-minute pricing on the Live API.<\/li>\n<\/ul>\n<p style=\"font-size: 18px; line-height: 1.8; color: rgba(255,255,255,0.9); margin: 0 0 40px 0;\">\n                The video understanding capability is unique within the Gemini family at the consumer tier. 
The 1M token context window enables analysis of approximately one hour of video at standard resolution.\n            <\/p>\n<h3 style=\"font-size: 22px; font-weight: 600; margin: 40px 0 24px;\">File Size Limits<\/h3>\n<ul class=\"feature-list\" style=\"margin-bottom: 40px;\">\n<li><span class=\"check\"><\/span><strong>Chat UI:<\/strong> subscription marketing references up to 1,500 pages of file uploads in Pro tier.<\/li>\n<li><span class=\"check\"><\/span><strong>API:<\/strong> 100 MB per file (increased from 20 MB on 2026-01-08).<\/li>\n<li><span class=\"check\"><\/span><strong>Code repository upload:<\/strong> up to 30,000 lines mentioned in subscription marketing.<\/li>\n<li><span class=\"check\"><\/span><strong>Cloud Storage bucket URLs:<\/strong> also supported as of 2026-01-08.<\/li>\n<\/ul>\n<p style=\"font-size: 18px; line-height: 1.8; color: rgba(255,255,255,0.9); margin: 0 0 40px 0;\">\n                The 100 MB API limit is meaningfully higher than several competitor APIs and supports workflows that require larger document ingestion. Combined with the 1M context window, the practical ceiling for long-document workflows is the published MRCR v2 accuracy curve rather than the file size cap. Plan workflows to keep retrieval and reasoning inside 128k tokens where accuracy is high.\n            <\/p>\n<h3 style=\"font-size: 22px; font-weight: 600; margin: 40px 0 24px;\">Parser Fidelity<\/h3>\n<p style=\"font-size: 18px; line-height: 1.8; color: rgba(255,255,255,0.9); margin: 0 0 0 0;\">\n                PDF parsing is confirmed for both chat UI and API. The multimodal embedding model <code style=\"background: rgba(139,92,246,0.1); padding: 2px 6px; border-radius: 4px; font-size: 0.9em;\">gemini-embedding-2<\/code> (GA 2026-04-22) added PDF as a native input type, allowing PDF content to be embedded for retrieval without intermediate text extraction. 
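A minimal pre-flight check against the limits above can catch oversized uploads before they reach the API. A sketch: the 100 MB cap and the 128k-token planning target come from this section, while the 4-bytes-per-token estimate is a rough heuristic, not a published conversion:

```python
import os
import tempfile

API_FILE_CAP_BYTES = 100 * 1024 * 1024   # published 100 MB per-file API cap
ACCURACY_BUDGET_TOKENS = 128_000         # keep reasoning inside ~128k tokens

def preflight(path):
    """Check one file against the upload cap and the token planning budget."""
    size = os.path.getsize(path)
    est_tokens = size // 4               # rough assumption: ~4 bytes per token
    return {
        "bytes": size,
        "estimated_tokens": est_tokens,
        "within_file_cap": size <= API_FILE_CAP_BYTES,
        "within_accuracy_budget": est_tokens <= ACCURACY_BUDGET_TOKENS,
    }

# Demo on a small throwaway file.
with tempfile.NamedTemporaryFile(delete=False, suffix=".txt") as f:
    f.write(b"x" * 400)
    demo_path = f.name
result = preflight(demo_path)
os.remove(demo_path)
print(result["estimated_tokens"], result["within_file_cap"])  # 100 True
```

A file can pass the 100 MB cap yet blow the accuracy budget, which is the failure mode the MRCR v2 guidance above warns about.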
What is not formally documented in available sources: DOCX table extraction fidelity, embedded image extraction from documents, footnote handling, and OCR behavior on scanned PDFs. If your workflow depends on these specifics, test empirically rather than relying on documentation.\n            <\/p>\n<\/p><\/div>\n<\/section>\n<p>    <!-- \u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550 --><br \/>\n    <!-- SECTION 13: FEATURE AVAILABILITY MATRIX --><br \/>\n    <!-- \u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550 --><\/p>\n<section style=\"padding: 100px 48px;\">\n<div style=\"max-width: 1300px; margin: 0 auto;\">\n<div style=\"text-align: center;\">\n<div class=\"section-label\">Feature Availability Matrix<\/div>\n<h2>Every feature, every tier, <br \/>at a glance.<\/h2>\n<\/p><\/div>\n<p style=\"font-size: 19px; line-height: 1.8; color: rgba(255,255,255,0.9); margin: 40px auto 40px; max-width: 900px;\">\n                Tier availability for several features is not enumerated in official Google docs as of May 2026. 
Treat tier-specific limits as Volatile and verify at gemini.google.com\/subscriptions before relying on the cap for production planning.\n            <\/p>\n<div class=\"comparison-table comparison-table-6\" style=\"max-width: 1200px; margin: 0 auto;\">\n<div class=\"comparison-row\">\n<div class=\"comparison-feature\">Feature<\/div>\n<div class=\"comparison-value\">Free<\/div>\n<div class=\"comparison-value\">AI Plus<\/div>\n<div class=\"comparison-value\">AI Pro<\/div>\n<div class=\"comparison-value\">AI Ultra<\/div>\n<div class=\"comparison-value\">API<\/div>\n<\/p><\/div>\n<div class=\"comparison-row\">\n<div class=\"comparison-feature\">Deep Research<\/div>\n<div class=\"comparison-value\">5\/month<\/div>\n<div class=\"comparison-value\">More<\/div>\n<div class=\"comparison-value\">5x Free<\/div>\n<div class=\"comparison-value\">Highest + visual<\/div>\n<div class=\"comparison-value\">Yes (preview)<\/div>\n<\/p><\/div>\n<div class=\"comparison-row\">\n<div class=\"comparison-feature\">Deep Research Max<\/div>\n<div class=\"comparison-value\">No<\/div>\n<div class=\"comparison-value\">Limited<\/div>\n<div class=\"comparison-value\">Limited<\/div>\n<div class=\"comparison-value\">Yes<\/div>\n<div class=\"comparison-value\">Yes<\/div>\n<\/p><\/div>\n<div class=\"comparison-row\">\n<div class=\"comparison-feature\">Gems<\/div>\n<div class=\"comparison-value\">Limited<\/div>\n<div class=\"comparison-value\">Yes<\/div>\n<div class=\"comparison-value\">Full<\/div>\n<div class=\"comparison-value\">Full<\/div>\n<div class=\"comparison-value\">Custom<\/div>\n<\/p><\/div>\n<div class=\"comparison-row\">\n<div class=\"comparison-feature\">Canvas<\/div>\n<div class=\"comparison-value\">Basic<\/div>\n<div class=\"comparison-value\">Basic<\/div>\n<div class=\"comparison-value\">Full<\/div>\n<div class=\"comparison-value\">Full + visual<\/div>\n<div class=\"comparison-value\">n\/a<\/div>\n<\/p><\/div>\n<div class=\"comparison-row\">\n<div class=\"comparison-feature\">Audio 
Overviews<\/div>\n<div class=\"comparison-value\">Limited<\/div>\n<div class=\"comparison-value\">More<\/div>\n<div class=\"comparison-value\">5x Free<\/div>\n<div class=\"comparison-value\">Highest<\/div>\n<div class=\"comparison-value\">NotebookLM API<\/div>\n<\/p><\/div>\n<div class=\"comparison-row\">\n<div class=\"comparison-feature\">NotebookLM<\/div>\n<div class=\"comparison-value\">Yes<\/div>\n<div class=\"comparison-value\">More<\/div>\n<div class=\"comparison-value\">More<\/div>\n<div class=\"comparison-value\">Highest<\/div>\n<div class=\"comparison-value\">Workspace API<\/div>\n<\/p><\/div>\n<div class=\"comparison-row\">\n<div class=\"comparison-feature\">Workspace integration<\/div>\n<div class=\"comparison-value\">Gmail only<\/div>\n<div class=\"comparison-value\">Gmail, Vids<\/div>\n<div class=\"comparison-value\">All apps<\/div>\n<div class=\"comparison-value\">Highest<\/div>\n<div class=\"comparison-value\">Bundle<\/div>\n<\/p><\/div>\n<div class=\"comparison-row\">\n<div class=\"comparison-feature\">Imagen 4<\/div>\n<div class=\"comparison-value\">Limited<\/div>\n<div class=\"comparison-value\">Nano Banana Pro<\/div>\n<div class=\"comparison-value\">Nano Banana Pro<\/div>\n<div class=\"comparison-value\">Full + highest<\/div>\n<div class=\"comparison-value\">Per-image<\/div>\n<\/p><\/div>\n<div class=\"comparison-row\">\n<div class=\"comparison-feature\">Veo 3.1<\/div>\n<div class=\"comparison-value\">Via Flow<\/div>\n<div class=\"comparison-value\">Lite<\/div>\n<div class=\"comparison-value\">Lite<\/div>\n<div class=\"comparison-value\">Full<\/div>\n<div class=\"comparison-value\">Per-second<\/div>\n<\/p><\/div>\n<div class=\"comparison-row\">\n<div class=\"comparison-feature\">Gemini Live<\/div>\n<div class=\"comparison-value\">Basic<\/div>\n<div class=\"comparison-value\">Yes<\/div>\n<div class=\"comparison-value\">Yes<\/div>\n<div class=\"comparison-value\">Highest<\/div>\n<div class=\"comparison-value\">Live API<\/div>\n<\/p><\/div>\n<div 
class=\"comparison-row\">\n<div class=\"comparison-feature\">Project Astra<\/div>\n<div class=\"comparison-value\">Limited<\/div>\n<div class=\"comparison-value\">Camera\/screen<\/div>\n<div class=\"comparison-value\">Camera\/screen<\/div>\n<div class=\"comparison-value\">Full agentic<\/div>\n<div class=\"comparison-value\">n\/a<\/div>\n<\/p><\/div>\n<div class=\"comparison-row\">\n<div class=\"comparison-feature\">Computer Use<\/div>\n<div class=\"comparison-value\">No<\/div>\n<div class=\"comparison-value\">No<\/div>\n<div class=\"comparison-value\">Limited<\/div>\n<div class=\"comparison-value\">Agent (US)<\/div>\n<div class=\"comparison-value\">Yes (paid)<\/div>\n<\/p><\/div>\n<div class=\"comparison-row\">\n<div class=\"comparison-feature\">Jules (coding)<\/div>\n<div class=\"comparison-value\">No<\/div>\n<div class=\"comparison-value\">No<\/div>\n<div class=\"comparison-value\">Higher<\/div>\n<div class=\"comparison-value\">Highest (Beta)<\/div>\n<div class=\"comparison-value\">Beta<\/div>\n<\/p><\/div>\n<\/p><\/div>\n<\/p><\/div>\n<\/section>\n<p>    <!-- \u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550 --><br \/>\n    <!-- SECTION 14: FAQ --><br \/>\n    <!-- \u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550 --><\/p>\n<section id=\"faq\" aria-labelledby=\"faq-heading\">\n<p class=\"section-label\">FAQ<\/p>\n<h2 
id=\"faq-heading\">Gemini Features: Frequently Asked Questions<\/h2>\n<div class=\"faq-accordion\">\n<details class=\"faq-item\" open>\n<summary class=\"faq-question\">\n                    <span>What is Gemini Deep Research?<\/span><br \/>\n                    <span class=\"faq-icon\" aria-hidden=\"true\">+<\/span><br \/>\n                <\/summary>\n<div class=\"faq-answer\">\n<p style=\"font-size: 16px;\">Deep Research is an agentic feature in Gemini that autonomously browses up to hundreds of websites, plus a user&#8217;s Gmail, Drive, and Chat if permitted, then synthesizes findings into a multi-page cited report. Mechanically, it runs an iterative search-read-synthesize loop powered by Gemini 3.1 Pro. Deep Research Max (launched 2026-04-20) adds MCP support and native visualizations for long-horizon professional research tasks.<\/p>\n<\/p><\/div>\n<\/details>\n<details class=\"faq-item\">\n<summary class=\"faq-question\">\n                    <span>What are Gems in Gemini?<\/span><br \/>\n                    <span class=\"faq-icon\" aria-hidden=\"true\">+<\/span><br \/>\n                <\/summary>\n<div class=\"faq-answer\">\n<p style=\"font-size: 16px;\">Gems are customizable AI personas within the Gemini consumer application. Users configure a Gem with a name, behavioral instructions, a specific role, and up to 10 reference files. Gems persist across sessions and retain their configured instructions. They are comparable in function to Custom GPTs on the ChatGPT platform. 
Gem creation is available starting from the Free tier with full creation on paid tiers.<\/p>\n<\/p><\/div>\n<\/details>\n<details class=\"faq-item\">\n<summary class=\"faq-question\">\n                    <span>How does Gemini Canvas work?<\/span><br \/>\n                    <span class=\"faq-icon\" aria-hidden=\"true\">+<\/span><br \/>\n                <\/summary>\n<div class=\"faq-answer\">\n<p style=\"font-size: 16px;\">Canvas is a side-by-side workspace within Gemini where the model generates and iteratively edits formatted documents, code, or structured outputs in a separate panel from the chat interface. The user can request revisions targeting specific sections without regenerating the full document. Canvas is comparable in function to Claude&#8217;s Artifacts feature. Available on Free tier (basic) with full visual output on Pro and Ultra.<\/p>\n<\/p><\/div>\n<\/details>\n<details class=\"faq-item\">\n<summary class=\"faq-question\">\n                    <span>What is Gemini Live?<\/span><br \/>\n                    <span class=\"faq-icon\" aria-hidden=\"true\">+<\/span><br \/>\n                <\/summary>\n<div class=\"faq-answer\">\n<p style=\"font-size: 16px;\">Gemini Live is a real-time voice conversation mode in the Gemini app that enables back-and-forth spoken interaction with low latency. It allows interruption, context retention across the voice session, and integration with the phone&#8217;s camera (visual context during conversation). It is available on Android and iOS. Project Astra is the research initiative underlying Live&#8217;s multimodal real-time capabilities.<\/p>\n<\/p><\/div>\n<\/details>\n<details class=\"faq-item\">\n<summary class=\"faq-question\">\n                    <span>Can Gemini analyze videos?<\/span><br \/>\n                    <span class=\"faq-icon\" aria-hidden=\"true\">+<\/span><br \/>\n                <\/summary>\n<div class=\"faq-answer\">\n<p style=\"font-size: 16px;\">Yes. 
Gemini 3.1 Pro and the Gemini 2.5+ generation support native video understanding. The model processes video frames and audio tracks as input and can answer questions about video content, summarize footage, and identify elements within clips. The 1M token context window enables analysis of approximately one hour of video at standard resolution.<\/p>\n<\/p><\/div>\n<\/details>\n<details class=\"faq-item\">\n<summary class=\"faq-question\">\n                    <span>Does Gemini generate images?<\/span><br \/>\n                    <span class=\"faq-icon\" aria-hidden=\"true\">+<\/span><br \/>\n                <\/summary>\n<div class=\"faq-answer\">\n<p style=\"font-size: 16px;\">Yes. Gemini&#8217;s image generation capability uses the Imagen 4 family of models (Fast, Standard, Ultra) and the native Nano Banana variant integrated into the Gemini model itself. The API offers pay-per-image pricing: Fast at $0.02, Standard at $0.04, Ultra at $0.06. Consumer app image generation is available on Free tier (limited) and expanded on Pro and Ultra tiers.<\/p>\n<\/p><\/div>\n<\/details>\n<details class=\"faq-item\">\n<summary class=\"faq-question\">\n                    <span>Does Gemini generate videos?<\/span><br \/>\n                    <span class=\"faq-icon\" aria-hidden=\"true\">+<\/span><br \/>\n                <\/summary>\n<div class=\"faq-answer\">\n<p style=\"font-size: 16px;\">Yes. Veo 3.1 is the current video generation model, available through the Flow filmmaking platform in the Gemini app and via API. Veo 3.1 generates video at up to 4K with native audio synthesis. Tier availability: Ultra subscribers get full Veo 3.1, Plus and Pro tiers get Veo 3.1 Lite, Free tier gets limited access via Flow. 
API per-second pricing ranges from $0.05 (Lite 720p) to $0.60 (Standard 4K).<\/p>\n<\/p><\/div>\n<\/details>\n<details class=\"faq-item\">\n<summary class=\"faq-question\">\n                    <span>What is Project Astra?<\/span><br \/>\n                    <span class=\"faq-icon\" aria-hidden=\"true\">+<\/span><br \/>\n                <\/summary>\n<div class=\"faq-answer\">\n<p style=\"font-size: 16px;\">Project Astra is Google DeepMind&#8217;s research prototype for a universal AI assistant with real-time multimodal understanding. It demonstrated real-time camera-to-speech understanding at Google I\/O 2024 and serves as the research foundation for Gemini Live&#8217;s real-time capabilities. Project Astra is not a separate shipping product. Its capabilities are progressively incorporated into the Gemini app.<\/p>\n<\/p><\/div>\n<\/details>\n<details class=\"faq-item\">\n<summary class=\"faq-question\">\n                    <span>Can Gemini control my computer or browser?<\/span><br \/>\n                    <span class=\"faq-icon\" aria-hidden=\"true\">+<\/span><br \/>\n                <\/summary>\n<div class=\"faq-answer\">\n<p style=\"font-size: 16px;\">Yes, through the Computer Use capability. Gemini can &#8220;see&#8221; a digital screen and perform UI actions like clicking, typing, and navigating to automate browser tasks. Available through the API (paid tier) as a specialized model and as a tool on Gemini 3 Pro and Gemini 3 Flash. 
Gemini Agent mode for full agentic browsing is currently US-only and English-only on the Ultra tier.<\/p>\n<\/p><\/div>\n<\/details>\n<details class=\"faq-item\">\n<summary class=\"faq-question\">\n                    <span>How accurate are Gemini&#8217;s citations in Deep Research?<\/span><br \/>\n                    <span class=\"faq-icon\" aria-hidden=\"true\">+<\/span><br \/>\n                <\/summary>\n<div class=\"faq-answer\">\n<p style=\"font-size: 16px;\">Per Suprmind&#8217;s AI Hallucination Rates and Benchmarks reference (May 2026 update), Gemini 3 Pro scored 76% on the Columbia Journalism Review citation hallucination test. This means citations are generated and link to real sources, but the claimed information often does not match the source content. On the CJR test, lower is better: Gemini&#8217;s 76% hallucination rate beats Grok-3&#8217;s 94% but trails Perplexity Sonar Pro&#8217;s 37%, the best of any model. For citation-grounded research where attribution accuracy is the audit point, pair Gemini for breadth with Perplexity for citation validation.<\/p>\n<\/p><\/div>\n<\/details><\/div>\n<\/section>\n<p>    <!-- \u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550 --><br \/>\n    <!-- SECTION 15: FINAL CTA --><br \/>\n    <!-- \u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550 --><\/p>\n<section style=\"padding: 100px 48px;\">\n<div 
class=\"cta-section\">\n<h2>Gemini&#8217;s features are deep. <br \/>Suprmind orchestrates five model families.<\/h2>\n<p class=\"cta-subtitle\">\n                Use Gemini for multimodal breadth and Workspace integration. Pair with Claude for calibration, Perplexity for citation accuracy, GPT for math reasoning, and Grok for contrarian signal. All in one shared conversation, with cross-model fact-checking before any answer reaches your decision.\n            <\/p>\n<div class=\"hero-cta-group\">\n                <a href=\"\/signup\/spark\" class=\"btn-white\">Start Your Free Trial<\/a><br \/>\n                <a href=\"\/hub?page_id=2571\" class=\"btn-white\">See How Suprmind Works<\/a>\n            <\/div>\n<p style=\"margin-top: 24px; font-size: 14px; opacity: 0.7;\">7-day free trial. All five frontier models. No credit card required.<\/p>\n<\/p><\/div>\n<\/section>\n<p>    <!-- \u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550 --><br \/>\n    <!-- FOOTER NOTE --><br \/>\n    <!-- \u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550 --><\/p>\n<section style=\"padding: 40px 48px; text-align: center;\">\n<p style=\"font-size: 16px; color: #e5e7eb; font-weight: 500; margin-bottom: 8px;\">\n            Disagreement is the feature.\n        <\/p>\n<p style=\"font-size: 14px; color: #e5e7eb; font-style: italic;\">\n          
  Last verified May 10, 2026. Next refresh due August 10, 2026.\n        <\/p>\n<\/section>\n<\/div>\n<style>\r\n.lwrp.link-whisper-related-posts{\r\n            \r\n            margin-top: 40px;\nmargin-bottom: 30px;\r\n        }\r\n        .lwrp .lwrp-title{\r\n            \r\n            \r\n        }.lwrp .lwrp-description{\r\n            \r\n            \r\n\r\n        }\r\n        .lwrp .lwrp-list-container{\r\n        }\r\n        .lwrp .lwrp-list-multi-container{\r\n            display: flex;\r\n        }\r\n        .lwrp .lwrp-list-double{\r\n            width: 48%;\r\n        }\r\n        .lwrp .lwrp-list-triple{\r\n            width: 32%;\r\n        }\r\n        .lwrp .lwrp-list-row-container{\r\n            display: flex;\r\n            justify-content: space-between;\r\n        }\r\n        .lwrp .lwrp-list-row-container .lwrp-list-item{\r\n            width: calc(12% - 20px);\r\n        }\r\n        .lwrp .lwrp-list-item:not(.lwrp-no-posts-message-item){\r\n            \r\n            \r\n        }\r\n        .lwrp .lwrp-list-item img{\r\n            max-width: 100%;\r\n            height: auto;\r\n            object-fit: cover;\r\n            aspect-ratio: 1 \/ 1;\r\n        }\r\n        .lwrp .lwrp-list-item.lwrp-empty-list-item{\r\n            background: initial !important;\r\n        }\r\n        .lwrp .lwrp-list-item .lwrp-list-link .lwrp-list-link-title-text,\r\n        .lwrp .lwrp-list-item .lwrp-list-no-posts-message{\r\n            \r\n            \r\n            \r\n            \r\n        }@media screen and (max-width: 480px) {\r\n            .lwrp.link-whisper-related-posts{\r\n                \r\n                \r\n            }\r\n            .lwrp .lwrp-title{\r\n                \r\n                \r\n            }.lwrp .lwrp-description{\r\n                \r\n                \r\n            }\r\n            .lwrp .lwrp-list-multi-container{\r\n                flex-direction: column;\r\n            }\r\n            .lwrp 
.lwrp-list-multi-container ul.lwrp-list{\r\n                margin-top: 0px;\r\n                margin-bottom: 0px;\r\n                padding-top: 0px;\r\n                padding-bottom: 0px;\r\n            }\r\n            .lwrp .lwrp-list-double,\r\n            .lwrp .lwrp-list-triple{\r\n                width: 100%;\r\n            }\r\n            .lwrp .lwrp-list-row-container{\r\n                justify-content: initial;\r\n                flex-direction: column;\r\n            }\r\n            .lwrp .lwrp-list-row-container .lwrp-list-item{\r\n                width: 100%;\r\n            }\r\n            .lwrp .lwrp-list-item:not(.lwrp-no-posts-message-item){\r\n                \r\n                \r\n            }\r\n            .lwrp .lwrp-list-item .lwrp-list-link .lwrp-list-link-title-text,\r\n            .lwrp .lwrp-list-item .lwrp-list-no-posts-message{\r\n                \r\n                \r\n                \r\n                \r\n            };\r\n        }<\/style>\r\n<div id=\"link-whisper-related-posts-widget\" class=\"link-whisper-related-posts lwrp\">\r\n            <h3 class=\"lwrp-title\">Related Topics and Pages<\/h3>    \r\n        <div class=\"lwrp-list-container\">\r\n                                            <ul class=\"lwrp-list lwrp-list-single\">\r\n                    <li class=\"lwrp-list-item\"><a href=\"https:\/\/suprmind.ai\/hub\/gemini\/pricing\/\" class=\"lwrp-list-link\"><span class=\"lwrp-list-link-title-text\">Gemini Pricing 2026: Free, AI Plus, AI Pro, AI Ultra, and API Costs<\/span><\/a><\/li><li class=\"lwrp-list-item\"><a href=\"https:\/\/suprmind.ai\/hub\/gemini\/vs-other-ai\/\" class=\"lwrp-list-link\"><span class=\"lwrp-list-link-title-text\">Gemini vs ChatGPT, Claude, Grok and Perplexity: A 2026 Honest Comparison<\/span><\/a><\/li><li class=\"lwrp-list-item\"><a href=\"https:\/\/suprmind.ai\/hub\/gemini\/\" class=\"lwrp-list-link\"><span class=\"lwrp-list-link-title-text\">Google Gemini 2026: Models, Features, 
Pricing, and Accuracy<\/span><\/a><\/li>                <\/ul>\r\n                        <\/div>\r\n<\/div>","protected":false},"excerpt":{"rendered":"<p>Every Gemini feature in depth: Deep Research and Deep Research Max, Gems, Canvas, Audio Overviews, NotebookLM, Workspace integration, Imagen 4, Veo 3.1, Live, Project Astra, Computer Use, and the tier-to-model transparency gap.<\/p>\n","protected":false},"author":0,"featured_media":0,"parent":5199,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-5207","page","type-page","status-publish","hentry"],"aioseo_notices":[],"aioseo_head":"\n\t\t<!-- All in One SEO Pro 4.9.0 - aioseo.com -->\n\t<meta name=\"description\" content=\"Every Gemini feature in depth: Deep Research and Deep Research Max, Gems, Canvas, Audio Overviews, NotebookLM, Workspace integration, Imagen 4, Veo 3.1, Live, Project Astra, Computer Use, and the tier-to-model transparency gap.\" \/>\n\t<meta name=\"robots\" content=\"max-image-preview:large\" \/>\n\t<link rel=\"canonical\" href=\"https:\/\/suprmind.ai\/hub\/gemini\/features\/\" \/>\n\t<meta name=\"generator\" content=\"All in One SEO Pro (AIOSEO) 4.9.0\" \/>\n\t\t<meta property=\"og:locale\" content=\"en_US\" \/>\n\t\t<meta property=\"og:site_name\" content=\"Suprmind - Multi-Model AI Decision Intelligence Chat Platform for Professionals for Business: 5 Models, One Thread .\" \/>\n\t\t<meta property=\"og:type\" content=\"website\" \/>\n\t\t<meta property=\"og:title\" content=\"How Gemini Works: Deep Research, Gems, Canvas, Imagen, Veo, and Live - Suprmind\" \/>\n\t\t<meta property=\"og:description\" content=\"Every Gemini feature in depth: Deep Research and Deep Research Max, Gems, Canvas, Audio Overviews, NotebookLM, Workspace integration, Imagen 4, Veo 3.1, Live, Project Astra, Computer Use, and the tier-to-model transparency gap.\" \/>\n\t\t<meta property=\"og:url\" 
content=\"https:\/\/suprmind.ai\/hub\/gemini\/features\/\" \/>\n\t\t<meta property=\"fb:admins\" content=\"567083258\" \/>\n\t\t<meta property=\"og:image\" content=\"https:\/\/suprmind.ai\/hub\/wp-content\/uploads\/2026\/01\/disagreement-is-the-feature-og-scaled.png\" \/>\n\t\t<meta property=\"og:image:secure_url\" content=\"https:\/\/suprmind.ai\/hub\/wp-content\/uploads\/2026\/01\/disagreement-is-the-feature-og-scaled.png\" \/>\n\t\t<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n\t\t<meta name=\"twitter:site\" content=\"@suprmind_ai\" \/>\n\t\t<meta name=\"twitter:title\" content=\"How Gemini Works: Deep Research, Gems, Canvas, Imagen, Veo, and Live - Suprmind\" \/>\n\t\t<meta name=\"twitter:description\" content=\"Every Gemini feature in depth: Deep Research and Deep Research Max, Gems, Canvas, Audio Overviews, NotebookLM, Workspace integration, Imagen 4, Veo 3.1, Live, Project Astra, Computer Use, and the tier-to-model transparency gap.\" \/>\n\t\t<meta name=\"twitter:creator\" content=\"@suprmind_ai\" \/>\n\t\t<meta name=\"twitter:image\" content=\"https:\/\/suprmind.ai\/hub\/wp-content\/uploads\/2026\/01\/disagreement-is-the-feature-og-scaled.png\" \/>\n\t\t<meta name=\"twitter:label1\" content=\"Est. 
reading time\" \/>\n\t\t<meta name=\"twitter:data1\" content=\"21 minutes\" \/>\n\t\t<script type=\"application\/ld+json\" class=\"aioseo-schema\">\n\t\t\t{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/suprmind.ai\\\/hub\\\/gemini\\\/features\\\/#breadcrumblist\",\"itemListElement\":[{\"@type\":\"ListItem\",\"@id\":\"https:\\\/\\\/suprmind.ai\\\/hub\\\/gemini\\\/#listItem\",\"position\":1,\"name\":\"Google Gemini 2026: Models, Features, Pricing, and Accuracy\",\"item\":\"https:\\\/\\\/suprmind.ai\\\/hub\\\/gemini\\\/\",\"nextItem\":{\"@type\":\"ListItem\",\"@id\":\"https:\\\/\\\/suprmind.ai\\\/hub\\\/gemini\\\/features\\\/#listItem\",\"name\":\"How Gemini Works: Deep Research, Gems, Canvas, Imagen, Veo, and Live\"}},{\"@type\":\"ListItem\",\"@id\":\"https:\\\/\\\/suprmind.ai\\\/hub\\\/gemini\\\/features\\\/#listItem\",\"position\":2,\"name\":\"How Gemini Works: Deep Research, Gems, Canvas, Imagen, Veo, and Live\",\"previousItem\":{\"@type\":\"ListItem\",\"@id\":\"https:\\\/\\\/suprmind.ai\\\/hub\\\/gemini\\\/#listItem\",\"name\":\"Google Gemini 2026: Models, Features, Pricing, and Accuracy\"}}]},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/suprmind.ai\\\/hub\\\/#organization\",\"name\":\"Suprmind\",\"description\":\"Decision validation platform for professionals who can't afford to be wrong. Five smartest AIs, in the same conversation. They debate, challenge, and build on each other - you export the verdict as a deliverable. 
Disagreement is the feature.\",\"url\":\"https:\\\/\\\/suprmind.ai\\\/hub\\\/\",\"email\":\"team@suprmind.ai\",\"foundingDate\":\"2025-10-01\",\"numberOfEmployees\":{\"@type\":\"QuantitativeValue\",\"value\":4},\"logo\":{\"@type\":\"ImageObject\",\"url\":\"https:\\\/\\\/suprmind.ai\\\/hub\\\/wp-content\\\/uploads\\\/2026\\\/02\\\/suprmind-slash-new-bold-italic.png\",\"@id\":\"https:\\\/\\\/suprmind.ai\\\/hub\\\/gemini\\\/features\\\/#organizationLogo\",\"width\":1920,\"height\":1822,\"caption\":\"Suprmind\"},\"image\":{\"@id\":\"https:\\\/\\\/suprmind.ai\\\/hub\\\/gemini\\\/features\\\/#organizationLogo\"},\"sameAs\":[\"https:\\\/\\\/www.facebook.com\\\/suprmind.ai.orchestration\",\"https:\\\/\\\/x.com\\\/suprmind_ai\"]},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/suprmind.ai\\\/hub\\\/gemini\\\/features\\\/#webpage\",\"url\":\"https:\\\/\\\/suprmind.ai\\\/hub\\\/gemini\\\/features\\\/\",\"name\":\"How Gemini Works: Deep Research, Gems, Canvas, Imagen, Veo, and Live - Suprmind\",\"description\":\"Every Gemini feature in depth: Deep Research and Deep Research Max, Gems, Canvas, Audio Overviews, NotebookLM, Workspace integration, Imagen 4, Veo 3.1, Live, Project Astra, Computer Use, and the tier-to-model transparency gap.\",\"inLanguage\":\"en-US\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/suprmind.ai\\\/hub\\\/#website\"},\"breadcrumb\":{\"@id\":\"https:\\\/\\\/suprmind.ai\\\/hub\\\/gemini\\\/features\\\/#breadcrumblist\"},\"datePublished\":\"2026-05-12T00:10:29+00:00\",\"dateModified\":\"2026-05-12T02:41:34+00:00\"},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/suprmind.ai\\\/hub\\\/#website\",\"url\":\"https:\\\/\\\/suprmind.ai\\\/hub\\\/\",\"name\":\"Suprmind\",\"alternateName\":\"Suprmind.ai\",\"description\":\"Multi-Model AI Decision Intelligence Chat Platform for Professionals for Business: 5 Models, One Thread .\",\"inLanguage\":\"en-US\",\"publisher\":{\"@id\":\"https:\\\/\\\/suprmind.ai\\\/hub\\\/#organization\"}}]}\n\t\t<\/script>\n\t\t<!-- All in One SEO 
Pro -->\r\n\t\t<title>How Gemini Works: Deep Research, Gems, Canvas, Imagen, Veo, and Live - Suprmind<\/title>\n\n","aioseo_head_json":{"title":"How Gemini Works: Deep Research, Gems, Canvas, Imagen, Veo, and Live - Suprmind","description":"Every Gemini feature in depth: Deep Research and Deep Research Max, Gems, Canvas, Audio Overviews, NotebookLM, Workspace integration, Imagen 4, Veo 3.1, Live, Project Astra, Computer Use, and the tier-to-model transparency gap.","canonical_url":"https:\/\/suprmind.ai\/hub\/gemini\/features\/","robots":"max-image-preview:large","keywords":"","webmasterTools":{"miscellaneous":""},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"BreadcrumbList","@id":"https:\/\/suprmind.ai\/hub\/gemini\/features\/#breadcrumblist","itemListElement":[{"@type":"ListItem","@id":"https:\/\/suprmind.ai\/hub\/gemini\/#listItem","position":1,"name":"Google Gemini 2026: Models, Features, Pricing, and Accuracy","item":"https:\/\/suprmind.ai\/hub\/gemini\/","nextItem":{"@type":"ListItem","@id":"https:\/\/suprmind.ai\/hub\/gemini\/features\/#listItem","name":"How Gemini Works: Deep Research, Gems, Canvas, Imagen, Veo, and Live"}},{"@type":"ListItem","@id":"https:\/\/suprmind.ai\/hub\/gemini\/features\/#listItem","position":2,"name":"How Gemini Works: Deep Research, Gems, Canvas, Imagen, Veo, and Live","previousItem":{"@type":"ListItem","@id":"https:\/\/suprmind.ai\/hub\/gemini\/#listItem","name":"Google Gemini 2026: Models, Features, Pricing, and Accuracy"}}]},{"@type":"Organization","@id":"https:\/\/suprmind.ai\/hub\/#organization","name":"Suprmind","description":"Decision validation platform for professionals who can't afford to be wrong. Five smartest AIs, in the same conversation. They debate, challenge, and build on each other - you export the verdict as a deliverable. 
Disagreement is the feature.","url":"https:\/\/suprmind.ai\/hub\/","email":"team@suprmind.ai","foundingDate":"2025-10-01","numberOfEmployees":{"@type":"QuantitativeValue","value":4},"logo":{"@type":"ImageObject","url":"https:\/\/suprmind.ai\/hub\/wp-content\/uploads\/2026\/02\/suprmind-slash-new-bold-italic.png","@id":"https:\/\/suprmind.ai\/hub\/gemini\/features\/#organizationLogo","width":1920,"height":1822,"caption":"Suprmind"},"image":{"@id":"https:\/\/suprmind.ai\/hub\/gemini\/features\/#organizationLogo"},"sameAs":["https:\/\/www.facebook.com\/suprmind.ai.orchestration","https:\/\/x.com\/suprmind_ai"]},{"@type":"WebPage","@id":"https:\/\/suprmind.ai\/hub\/gemini\/features\/#webpage","url":"https:\/\/suprmind.ai\/hub\/gemini\/features\/","name":"How Gemini Works: Deep Research, Gems, Canvas, Imagen, Veo, and Live - Suprmind","description":"Every Gemini feature in depth: Deep Research and Deep Research Max, Gems, Canvas, Audio Overviews, NotebookLM, Workspace integration, Imagen 4, Veo 3.1, Live, Project Astra, Computer Use, and the tier-to-model transparency gap.","inLanguage":"en-US","isPartOf":{"@id":"https:\/\/suprmind.ai\/hub\/#website"},"breadcrumb":{"@id":"https:\/\/suprmind.ai\/hub\/gemini\/features\/#breadcrumblist"},"datePublished":"2026-05-12T00:10:29+00:00","dateModified":"2026-05-12T02:41:34+00:00"},{"@type":"WebSite","@id":"https:\/\/suprmind.ai\/hub\/#website","url":"https:\/\/suprmind.ai\/hub\/","name":"Suprmind","alternateName":"Suprmind.ai","description":"Multi-Model AI Decision Intelligence Chat Platform for Professionals for Business: 5 Models, One Thread .","inLanguage":"en-US","publisher":{"@id":"https:\/\/suprmind.ai\/hub\/#organization"}}]},"og:locale":"en_US","og:site_name":"Suprmind - Multi-Model AI Decision Intelligence Chat Platform for Professionals for Business: 5 Models, One Thread .","og:type":"website","og:title":"How Gemini Works: Deep Research, Gems, Canvas, Imagen, Veo, and Live - Suprmind","og:description":"Every Gemini 
feature in depth: Deep Research and Deep Research Max, Gems, Canvas, Audio Overviews, NotebookLM, Workspace integration, Imagen 4, Veo 3.1, Live, Project Astra, Computer Use, and the tier-to-model transparency gap.","og:url":"https:\/\/suprmind.ai\/hub\/gemini\/features\/","fb:admins":"567083258","og:image":"https:\/\/suprmind.ai\/hub\/wp-content\/uploads\/2026\/01\/disagreement-is-the-feature-og-scaled.png","og:image:secure_url":"https:\/\/suprmind.ai\/hub\/wp-content\/uploads\/2026\/01\/disagreement-is-the-feature-og-scaled.png","twitter:card":"summary_large_image","twitter:site":"@suprmind_ai","twitter:title":"How Gemini Works: Deep Research, Gems, Canvas, Imagen, Veo, and Live - Suprmind","twitter:description":"Every Gemini feature in depth: Deep Research and Deep Research Max, Gems, Canvas, Audio Overviews, NotebookLM, Workspace integration, Imagen 4, Veo 3.1, Live, Project Astra, Computer Use, and the tier-to-model transparency gap.","twitter:creator":"@suprmind_ai","twitter:image":"https:\/\/suprmind.ai\/hub\/wp-content\/uploads\/2026\/01\/disagreement-is-the-feature-og-scaled.png","twitter:label1":"Est. 
reading time","twitter:data1":"21 minutes"},"aioseo_meta_data":{"post_id":"5207","title":null,"description":"Every Gemini feature in depth: Deep Research and Deep Research Max, Gems, Canvas, Audio Overviews, NotebookLM, Workspace integration, Imagen 4, Veo 3.1, Live, Project Astra, Computer Use, and the tier-to-model transparency gap.","keywords":null,"keyphrases":null,"canonical_url":null,"og_title":null,"og_description":"Every Gemini feature in depth: Deep Research and Deep Research Max, Gems, Canvas, Audio Overviews, NotebookLM, Workspace integration, Imagen 4, Veo 3.1, Live, Project Astra, Computer Use, and the tier-to-model transparency gap.","og_object_type":"default","og_image_type":"default","og_image_custom_url":null,"og_image_custom_fields":null,"og_custom_image_width":null,"og_custom_image_height":null,"og_video":null,"og_custom_url":null,"og_article_section":null,"og_article_tags":null,"twitter_use_og":true,"twitter_card":"default","twitter_image_type":"default","twitter_image_custom_url":null,"twitter_image_custom_fields":null,"twitter_title":null,"twitter_description":"Every Gemini feature in depth: Deep Research and Deep Research Max, Gems, Canvas, Audio Overviews, NotebookLM, Workspace integration, Imagen 4, Veo 3.1, Live, Project Astra, Computer Use, and the tier-to-model transparency gap.","schema_type":null,"schema_type_options":null,"pillar_content":false,"robots_default":true,"robots_noindex":false,"robots_noarchive":false,"robots_nosnippet":false,"robots_nofollow":false,"robots_noimageindex":false,"robots_noodp":false,"robots_notranslate":false,"robots_max_snippet":null,"robots_max_videopreview":null,"robots_max_imagepreview":"none","tabs":null,"priority":null,"frequency":null,"local_seo":null,"seo_analyzer_scan_date":"2026-05-12 00:52:40","created":"2026-05-12 00:10:29","updated":"2026-05-12 00:52:40","og_image_url":null,"twitter_image_url":null},"aioseo_breadcrumb":null,"aioseo_breadcrumb_json":[{"label":"Google Gemini 2026: Models, 
Features, Pricing, and Accuracy","link":"https:\/\/suprmind.ai\/hub\/gemini\/"},{"label":"How Gemini Works: Deep Research, Gems, Canvas, Imagen, Veo, and Live","link":"https:\/\/suprmind.ai\/hub\/gemini\/features\/"}],"_links":{"self":[{"href":"https:\/\/suprmind.ai\/hub\/wp-json\/wp\/v2\/pages\/5207","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/suprmind.ai\/hub\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/suprmind.ai\/hub\/wp-json\/wp\/v2\/types\/page"}],"replies":[{"embeddable":true,"href":"https:\/\/suprmind.ai\/hub\/wp-json\/wp\/v2\/comments?post=5207"}],"version-history":[{"count":0,"href":"https:\/\/suprmind.ai\/hub\/wp-json\/wp\/v2\/pages\/5207\/revisions"}],"up":[{"embeddable":true,"href":"https:\/\/suprmind.ai\/hub\/wp-json\/wp\/v2\/pages\/5199"}],"wp:attachment":[{"href":"https:\/\/suprmind.ai\/hub\/wp-json\/wp\/v2\/media?parent=5207"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}