


Grok Features Deep Dive

How Grok Works:
DeepSearch, Think Mode,
Companions and More

Grok ships with twelve distinct features split across four categories: research and reasoning, content generation, conversational interfaces, and workspace tools.

This guide covers what each feature actually does, how it works mechanically, when to use it, when not to, and the documented limitations and transparency gaps.

For pricing on each feature’s tier requirements, see the Grok Pricing Guide. For comparisons against ChatGPT, Claude, Gemini, and Perplexity equivalents, see Grok vs Other AI Models.



How multi-step research works.

DeepSearch is the feature that turns Grok from a chat model into a research agent. Activated through a UI toggle on grok.com or by prefixing prompts with “Use DeepSearch:”, it fires an iterative retrieval-augmented-generation loop. The agent splits the query into sub-queries, runs parallel searches against the web and X, follows fresh links, summarizes each batch in an internal scratchpad, and repeats until it hits a 10-step limit or a time threshold.
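The loop described above can be sketched in a few lines. This is an illustrative reconstruction, not xAI's implementation: the search and summarize callables are stand-in stubs, and only the 10-step hard limit comes from the documented behavior.

```python
# Illustrative sketch of a DeepSearch-style iterative retrieval loop.
# NOT xAI's implementation: run_searches() and summarize() are caller-
# supplied stubs; only the 10-step cap is a documented limit.
import time

MAX_STEPS = 10  # documented hard limit per prompt

def deepsearch(query, run_searches, summarize, time_budget_s=120):
    scratchpad = []                 # internal summary of each batch
    sub_queries = [query]           # start from the user query
    start = time.monotonic()
    for step in range(MAX_STEPS):
        if time.monotonic() - start > time_budget_s:
            break                   # time threshold reached
        results = run_searches(sub_queries)  # parallel web + X searches
        if not results:
            break
        summary, follow_ups = summarize(results, scratchpad)
        scratchpad.append(summary)
        if not follow_ups:          # nothing fresh left to follow
            break
        sub_queries = follow_ups    # follow fresh links next round
    return scratchpad               # synthesized into the final answer
```

The scratchpad accumulates one summary per iteration; the final synthesis stage (not shown) drafts the answer from it.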

The model cross-checks up to seven consistency layers before drafting the response. Users can toggle a “Thoughts” view to see the intermediate reasoning steps. In the API, DeepSearch maps to enabling the web_search and x_search server-side tools. Citations are only generated when these tools are invoked.
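A request enabling those server-side tools might look like the sketch below. The tool names (web_search, x_search) and the return_citations flag are taken from this guide; the exact request schema is an assumption, so verify the real shape against docs.x.ai before depending on it. The sketch only builds the body rather than sending it.

```python
# Sketch of a request body enabling Grok's server-side search tools.
# Tool names and return_citations come from this guide; the exact
# schema is an assumption -- verify against docs.x.ai.
import json

def build_search_request(prompt, model="grok-4"):
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        # Server-side tools: citations are only generated when these run.
        "tools": [{"type": "web_search"}, {"type": "x_search"}],
        "return_citations": True,  # ask for the accessed-URL list back
    }

body = build_search_request("What changed in the EU AI Act this week?")
print(json.dumps(body, indent=2))
```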

DeeperSearch is the more thorough variant. It runs additional iterations, traverses deeper through linked sources, and produces a longer synthesis stage. xAI employees described it as “an improved version of DeepSearch” that goes “two steps further.” Latency is the trade-off: DeeperSearch takes meaningfully longer.

Tier availability

Free tier: limited DeepSearch with usage caps. SuperGrok and above: full DeepSearch. DeeperSearch likely requires SuperGrok or higher; precise tier mapping is not enumerated in official docs. Treat tier-specific limits as Volatile.

Documented limitations

Source quality varies. DeepSearch surfaces blogs alongside Reuters, viral X posts alongside verified reporting. The standard interface does not consistently distinguish X-sourced from web-sourced citations. Hard limit: 10 search steps per prompt.



The reasoning tax:
Why Think Mode hurts summarization.

Think Mode activates Grok’s reasoning model path. Instead of producing an answer in one shot, the model generates an internal chain-of-thought, visible via the “Thoughts” toggle, before producing output. xAI’s official description: Think “focuses on advanced reasoning and problem-solving… like a human thinking.” Basic Think access is available across tiers; Heavy mode (extended reasoning) requires SuperGrok Heavy. In Grok 4.3, reasoning is always on, with no toggle.

The mechanism produces a documented trade-off that users rarely see flagged: turning Think Mode on for document summarization tasks increases hallucination rates. Per Suprmind’s AI Hallucination Rates and Benchmarks reference (May 2026 update), Grok-4-fast-reasoning scored 20.2% on the Vectara New Dataset for summarization hallucination – the highest of any frontier model tested on that benchmark. Grok-3 on the same benchmark scored 5.8%. The jump from 5.8% to 20.2% – roughly 3.5× – is the largest within-family regression in the Vectara dataset.

Why reasoning increases summarization hallucination

The mechanism is documented. Reasoning models invest compute in generating inferences. When the task is open-ended analysis, those inferences add value. When the task is grounded summarization – compressing a source document into a shorter version – the same inference engine injects conclusions the source does not contain. Vectara measures this as fabrication: the model added facts not in the source.

The practical guidance: turn Think Mode on for analytical tasks where step-by-step logic matters and you want chain-of-thought transparency. Turn it off for document summarization, citation-grounded research, and any task where the deliverable is compressed source material. The reasoning tax is real and quantified.
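That guidance reduces to a simple routing rule. The task labels below are illustrative groupings from this guide's recommendations, not an xAI taxonomy.

```python
# Minimal routing rule for the guidance above: reasoning on for
# open-ended analysis, off for grounded summarization. Task labels
# are this guide's illustrative groupings, not an xAI taxonomy.

THINK_ON = {"analysis", "math", "decomposition", "strategy"}
THINK_OFF = {"summarization", "citation_research", "compression"}

def use_think_mode(task_type: str) -> bool:
    if task_type in THINK_OFF:
        return False  # reasoning inflates fabrication on grounded tasks
    if task_type in THINK_ON:
        return True   # chain-of-thought is signal here
    return False      # default conservative: avoid the reasoning tax
```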

When Think Mode earns its cost

Open-ended technical analysis. Multi-step problem decomposition. Math problems where intermediate steps validate the conclusion. Strategic decisions where you want to inspect the model’s reasoning before accepting the answer. In these cases, the visible chain-of-thought is signal, and the inference-heavy generation produces value rather than fabrication.



The transparency gap:
The most opaque feature in Grok’s lineup.

Expert Mode is a usage mode rather than a tier or a model version. Users select it manually from the consumer app. It forces higher compute and deeper reasoning regardless of query complexity, in contrast to Auto Mode, which routes dynamically.

The gap: no verbatim official xAI definition of Expert Mode appears in xAI’s published documentation as of this guide’s research pass. We searched docs.x.ai, the xAI blog, the Grok user guide, and primary launch announcements. Expert Mode appears in third-party YouTube tutorials, third-party review articles, and the consumer app UI itself, but not in xAI’s own documentation as a defined feature with stated mechanics.

Third-party descriptions place Expert Mode in this hierarchy

  • Auto Mode – dynamic routing based on perceived query complexity
  • Fast Mode – quick response, no thinking step
  • Expert Mode – higher compute, deeper reasoning
  • Thinking Mode (Beta) – full RL reasoning with visible chain-of-thought
  • Heavy Mode – 16-agent parallel architecture (SuperGrok Heavy only)

In this framing, Expert Mode sits between Fast (quick, no thinking) and Thinking (full reasoning), forcing a deeper compute path than Auto Mode would select for the same query.

The honest answer: Expert Mode produces more thorough responses than Auto Mode for the same query, with longer latency. It is available across consumer tiers including the free Grok 4 access window when xAI opened that in August 2025. The mechanics beyond that are not documented by xAI itself. If your workflow depends on Expert Mode behavior, treat its current implementation as Volatile.



Formats, limits,
and parser fidelity.

Grok handles document upload and analysis through both the consumer app and the API. The supported format set covers most everyday workflows.

Supported formats

  • Text and code: plain text, Markdown, Python, JavaScript, CSV, JSON
  • Documents: PDF, DOCX
  • Images: PNG, JPEG, GIF, WebP
  • Archives: ZIP (scanned for security)
  • Video (Grok 4.3 API only): MP4, MOV, WebM up to 5 minutes, 1080p, 1-4 fps

File size and image limits

  • Chat UI: ~25 MB per file
  • API: 48 MB per file
  • Up to 3 images in chat UI
  • Up to 10 images per API request
  • Maximum image size: 20 MiB
  • Video input is unique to Grok 4.3 in the current frontier model lineup

API model restrictions and parser fidelity

API document processing is restricted to Grok 4 and newer models. Grok 3 and Grok 2 do not support file uploads through the API. The chat UI handles document upload across all consumer tiers, with Free tier subject to per-window rate limits.

PDF parsing is confirmed for both surfaces. The model can execute Python code on uploaded files via the code execution tool when enabled. For structured data uploads (CSV, XLSX), keep row counts under 200,000 to avoid timeouts.

What is not formally documented: DOCX table extraction fidelity, embedded image extraction, footnote handling, and OCR behavior on scanned PDFs. If your workflow depends on these specifics, test empirically rather than relying on documentation that has not been published.
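The documented limits above can be enforced client-side before upload. This helper is the guide's own construction (the function name and error strings are not an xAI API); the thresholds come from this section, treating "MB" as decimal megabytes since the docs do not specify.

```python
# Pre-upload check against the limits listed above: ~25 MB per file in
# the chat UI, 48 MB via the API, and <200,000 rows for structured
# data. Helper and error strings are this guide's own construction;
# "MB" is treated as decimal megabytes (unit not specified by xAI).

CHAT_MAX_BYTES = 25 * 10**6
API_MAX_BYTES = 48 * 10**6
MAX_ROWS = 200_000

def check_upload(size_bytes, surface="api", csv_rows=None):
    limit = API_MAX_BYTES if surface == "api" else CHAT_MAX_BYTES
    if size_bytes > limit:
        return f"too large for {surface}: {size_bytes} > {limit} bytes"
    if csv_rows is not None and csv_rows >= MAX_ROWS:
        return f"row count {csv_rows} risks timeouts (keep under {MAX_ROWS})"
    return "ok"
```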

Collections and RAG

Documents can be stored in Collections – a persistent vector store – and queried via the collections_search tool at $2.50 per 1,000 calls. File storage is billed at $0.025/GiB/day; collection storage at $0.10/GiB/day. File storage charges began April 20, 2026. This makes Grok competitive on the document-grounded use case, with the caveat that the reasoning variant’s Vectara New Dataset hallucination rate (20.2%) means Think Mode amplifies summarization fabrication.
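Those rates make monthly costs easy to estimate. The workload numbers below (10 GiB, 50,000 searches) are illustrative, and the sketch assumes the same bytes are billed under both file and collection storage; check console.x.ai for how the two charges actually combine.

```python
# Worked cost example using the rates listed above. Workload numbers
# are illustrative; assumes the same GiB is billed under both file
# and collection storage (verify how these combine at console.x.ai).

FILE_STORAGE = 0.025       # $/GiB/day
COLLECTION_STORAGE = 0.10  # $/GiB/day
SEARCH_RATE = 2.50 / 1000  # $ per collections_search call

def monthly_collections_cost(gib, searches, days=30):
    storage = gib * (FILE_STORAGE + COLLECTION_STORAGE) * days
    return round(storage + searches * SEARCH_RATE, 2)

# 10 GiB for 30 days = $37.50 storage; 50,000 searches = $125.00
print(monthly_collections_cost(10, 50_000))  # 162.5
```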



Image and video generation
via the Aurora model.

Imagine is xAI’s image and video generation surface, accessible separately from the chat API at a dedicated Imagine endpoint. Image generation through the Aurora model has been available since before Grok 4. Video generation rolled out with the Grok 4 launch period in July 2025.

Image generation

The Aurora model handles image generation and editing. It supports both text-to-image and image-to-image (editing) workflows.

Free tier users get Aurora at a basic level with rate limits. SuperGrok and above include full Imagine including the higher-quality variants and editing modes. Image input limit: 20 MiB. Image generation API pricing was shown as a dash on the official docs page; verify at console.x.ai for current pricing.

Video generation

Grok Imagine Video generates text-to-video and image-to-video clips. SuperGrok Lite at $10/month: 15 videos per day at 480p, 6-second max. SuperGrok at $30/month: full Imagine. SuperGrok Heavy: maximum settings.

Imagine Video v1.0 launched February 2026 with native audio support including sound effects and ambient audio. Generation time averages ~30 seconds per clip. Imagine video quality at 480p is widely described as inadequate for professional use.



In-house voice training.
Camera mode launched July 2025.

Voice mode predates Grok 4 and was significantly upgraded with the Grok 4 launch on July 9, 2025. Camera mode launched simultaneously: users point their camera and Grok analyzes the visual scene while speaking.

The voice model is trained in-house using xAI’s RL framework and speech compression techniques. Pre-Grok 4 voice was described in independent reporting as “bolted-on rather than native”; the Grok 4 launch addressed that critique with a redesigned voice path. Basic voice on Free tier and above. Priority voice and camera mode require SuperGrok and above.

API voice pricing

  • Realtime voice – $0.05/min ($3.00/hr)
  • Text-to-Speech – $4.20 per 1M characters
  • Speech-to-Text (REST) – $0.10/hr
  • Speech-to-Text (Streaming) – $0.20/hr

The Text-to-Speech API launched in general availability in early May 2026, expanding the developer voice surface beyond Realtime voice and STT endpoints.
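A quick estimator built from the rates above; the usage profile (minutes, characters, hours) is illustrative, not a benchmark.

```python
# Cost estimate from the voice rates listed above; the usage profile
# passed in is illustrative.

def voice_monthly_cost(realtime_min=0, tts_chars=0, stt_hours=0,
                       streaming=False):
    stt_rate = 0.20 if streaming else 0.10  # $/hr (streaming vs REST)
    return round(
        realtime_min * 0.05             # realtime voice, $0.05/min
        + tts_chars * 4.20 / 1_000_000  # TTS, $4.20 per 1M characters
        + stt_hours * stt_rate,
        2,
    )

# 500 realtime minutes + 2M TTS characters + 40 hrs REST STT:
print(voice_monthly_cost(500, 2_000_000, 40))  # 37.4
```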



3D animated AI characters.
Distinct from the standard chat interface.

Companions are 3D animated AI characters launched alongside Grok 4 on July 14, 2025. The current character lineup includes Ani (anime-style female companion), Rudy (a friendly red panda), Bad Rudy (vulgar variant), and Valentine (male companion, teased by Musk on July 17, 2025). Users access Companions through a dedicated tab in the consumer apps.

Companions use the Grok 4 underlying model with real-time image generation and persistent memory across conversations. Each character has a distinct personality and conversation style. Some Companions support NSFW mode; Ani specifically has been documented appearing in lingerie on command.

Tier and geographic availability

Companions require a SuperGrok subscription minimum at $30/month. The feature is not available on Free tier or X Premium tiers. NSFW mode for Ani is likely geo-restricted in some jurisdictions; specific countries are not disclosed.

Documented reception

The Companions feature received regulatory and public criticism for the explicit capabilities. Multiple outlets (Euronews, NBC News) covered the explicit content. AI Forensics’ January 2026 report on grok.com sexualized content cited the feature in its DSA arbitrage analysis.

For users who want a chat AI with a structured personality and conversation continuity, Companions are a documented feature. For users in professional contexts where the explicit-capable framing is a brand or compliance risk, the standard chat interface (with Memory and Projects) provides equivalent conversation continuity without the Companions framing.



Consumer apps have it.
The API does not.

Grok Memory is a documented consumer app feature launched in beta on April 19, 2025. Memory is stored outside the context window and persists across sessions; users can review, edit, and delete memory entries through settings. Memory is selectively injected at conversation start – only relevant context is retrieved, not loaded wholesale.

The mechanism documented in independent sources: when a user starts a new conversation, Grok queries its memory store for relevant prior facts and preferences, then injects those into the conversation context before generating a response. This reduces context-window pressure for long-running users while maintaining personalization.

The API gap

The standard xAI API does not have native cross-session persistent memory as of the research date. Developers building memory features for Grok-based applications must construct their own external memory layer using vector databases (e.g., Pinecone, Weaviate), specialized memory services (Mem0), or in-house infrastructure. ChatGPT and Claude have offered native API memory for over a year; this is a documented gap for Grok in the developer ecosystem.
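A minimal version of the external memory layer that gap forces you to build might look like this. Keyword overlap stands in for the embedding similarity a real vector store (Pinecone, Weaviate, Mem0, etc.) would provide; the class and method names are this guide's own sketch.

```python
# Minimal external memory layer of the kind the API gap forces you to
# build. Keyword overlap stands in for the embedding similarity a real
# vector store would provide; class and method names are illustrative.

class MemoryStore:
    def __init__(self):
        self.entries = []  # persisted user facts

    def remember(self, fact: str):
        self.entries.append(fact)

    def recall(self, query: str, k: int = 3):
        # Score each stored fact by word overlap with the query.
        q = set(query.lower().split())
        scored = [(len(q & set(e.lower().split())), e)
                  for e in self.entries]
        scored.sort(key=lambda t: t[0], reverse=True)
        return [e for score, e in scored[:k] if score > 0]

    def system_prompt(self, query: str) -> str:
        # Inject only relevant memories at conversation start,
        # mirroring the consumer-app mechanism described above.
        facts = self.recall(query)
        return "Known user context:\n" + "\n".join(f"- {f}" for f in facts)

mem = MemoryStore()
mem.remember("user prefers Python examples")
mem.remember("user works in fintech compliance")
print(mem.system_prompt("show me a Python snippet"))
```

In production you would prepend the `system_prompt` output as a system message on each new conversation and persist `entries` to durable storage.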

User-facing limits

“Asking Grok to forget certain information does not automatically erase it” – manual deletion through settings is required. Memory often breaks across conversations (documented in independent reviews of Grok memory behavior). Memory was not available in certain regions at the April 2025 beta launch; specific regions were not disclosed.

For users who depend on memory continuity, the feature works in the consumer apps. For developers building memory-dependent applications, plan to build the memory layer yourself.



Workspaces are documented.
Tasks mechanics are not.

Projects (also called Workspaces) act as containers for related chats, files, and custom instructions. The official description from grok.com/project: “Supercharge Grok with Projects. Create custom workspaces, upload files for smarter chats, and collaborate securely.” Each workspace holds persistent files, conversation history, and custom prompts. Users train the workspace by uploading documents that subsequent chats can reference. Workspaces launched approximately April 12-15, 2025.

Projects: tier and use cases

Accessible across consumer tiers, with Free users getting a basic level. Grok Business at $30/seat/month adds dedicated team workspaces with sharing controls. File upload limits follow the documented 25 MB chat / 48 MB API split.

Most useful for sustained workflows on a defined topic: ongoing client engagements, multi-document research, code review across a repository, or product analysis. Hard limits per project (file count, total storage, depth) are not officially published.

Tasks: documented but undocumented

Tasks is an automation and scheduling capability accessible at grok.com/tasks. The page renders xAI’s standard marketing copy without a verbatim feature definition. Specific mechanics (trigger types, scheduling syntax, automation depth) are not documented in available official sources.

Independent sources describe Tasks as available on Free tier and above. If you need scheduled or triggered AI workflows comparable to Zapier, Make, or n8n, treat Tasks as a starting point pending xAI documentation updates.



The coding agent:
What’s known, what’s not.

Grok Build is a coding agent still in pre-launch as of May 2026. January 2026 reports offered “early look” coverage, and xAI announced Build and a companion CLI tool for a mid-to-late April 2026 launch (precise date not confirmed). A full public launch had not been confirmed at the research date.

What Build is

Independent reporting describes Build as a “vibe coding” agent meant to take natural-language descriptions and produce deployable applications. The offering is dual-track: coding tasks run locally through a CLI-backed agent or remotely through a web interface, with Grok 4.3 as the underlying model.

Distinguishing capabilities

  • Parallel agent spawning – up to 8 coding agents working concurrently on related tasks
  • Arena Mode – tournament-style evaluation of competing solutions, where multiple attempts are compared and the best wins

The documentation gap

No official xAI Build documentation exists at the time of this guide. Tier availability and pricing are not disclosed, though the feature set strongly suggests SuperGrok or higher will be required at launch. Treat all Build claims as Volatile until xAI publishes official documentation.

Build is a separate feature from Imagine and is not part of the standard chat interface. It is positioned to compete with Claude Code and similar coding agents from other vendors.



When and how citations are generated.

Grok’s citation system is feature-conditional. Citations are generated when server-side search tools (web_search, x_search, attachment_search, collections_search) are invoked. Without these tools enabled, Grok produces no citations even when responding to factual queries that would benefit from source attribution.

When tools are enabled, the agent records every URL it accesses and attaches citation metadata to the relevant portions of the answer. In the UI, citations appear as inline clickable links; in the API, they are returned as structured fields in the response. Setting return_citations: true ensures the list of accessed URLs is returned.

The source mixing problem

Sources are mixed: web URLs and X posts are labeled, but no systematic distinction is visually enforced between X-sourced and web-sourced claims in most UI presentations. This matters because X content quality varies enormously. A peer-reviewed paper and a viral X thread can appear in the same citation list with similar visual treatment.
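API users can at least partition the returned URL list themselves. The helper below is this guide's own sketch; the field that carries citations in the API response is not assumed here, only a list of URLs however you obtain it.

```python
# Because the UI does not visually separate X-sourced from web-sourced
# citations, partition the returned URL list yourself. This helper is
# the guide's own sketch; it takes a plain list of citation URLs.
from urllib.parse import urlparse

X_HOSTS = {"x.com", "twitter.com", "www.x.com", "www.twitter.com"}

def partition_citations(urls):
    x_posts, web = [], []
    for u in urls:
        host = urlparse(u).netloc.lower()
        (x_posts if host in X_HOSTS else web).append(u)
    return {"x": x_posts, "web": web}

cites = [
    "https://x.com/someuser/status/123",
    "https://www.reuters.com/article/abc",
]
print(partition_citations(cites))
```

Flagging the X bucket for manual review is a cheap mitigation for the quality-variance problem described above.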

Per Suprmind’s AI Hallucination Rates and Benchmarks reference (May 2026 update), Grok-3 scored 94% citation hallucination on the Columbia Journalism Review test – the worst of any model tested. Citations were generated, but the claimed information did not match source content in the majority of cases tested. This is the most quoted reliability finding for Grok and the main reason citation-dependent research workflows pair Grok with a citation-grounded model like Perplexity rather than relying on Grok citations alone.

No documented per-query citation count limit exists. The 10-step DeepSearch limit caps the number of sources consulted per multi-step research task.



What you get at each tier.

Tier availability for some features is not enumerated in official xAI docs as of May 2026. Treat tier-specific limits as Volatile and verify at grok.com/plans before relying on the cap for production planning.

Feature             Free          SuperGrok Lite  SuperGrok     Heavy          API
DeepSearch          Limited       Limited         Full          Full priority  web_search, x_search
Think Mode          Yes           Yes             Yes           Extended       Reasoning toggle
Imagine images      Aurora basic  Basic Imagine   Full Imagine  Maximum        Imagine API
Imagine video       No            15/day at 480p  Full          Maximum        Imagine API
Voice               Basic         Basic           Priority      Priority       Voice API
Companions          No            No              Yes           Yes            No
Memory              Yes           Yes             Yes           Yes            Build your own
Projects            Basic         Basic           Full          Full           Custom
Document analysis   25 MB chat    25 MB chat      25 MB chat    25 MB chat     48 MB API
Heavy mode          No            No              No            16-agent       grok-4-heavy
Build (pre-launch)  TBD           TBD             TBD           TBD            TBD



Grok Features: Frequently Asked Questions

What is Grok DeepSearch?

A multi-step research feature that searches the web, X, and news sources, cross-references results, and synthesizes a comprehensive answer. Activate via toggle in the consumer app or “Use DeepSearch:” prefix in a prompt. Hard limit: 10 search steps per query.

What is Think Mode?

Chain-of-thought reasoning with a visible “Thoughts” panel before the answer. Improves complex analytical reasoning. Increases summarization hallucination – reserve for open-ended analysis, turn off for document summary tasks.

What is Expert Mode?

A usage mode that forces higher compute and deeper reasoning than Auto Mode. xAI has not published a formal definition. Third-party descriptions place it between Fast Mode and Thinking Mode in the compute hierarchy. Available across consumer tiers.

Can Grok analyze documents?

Yes. Supported formats: PDF, DOCX, plain text, Markdown, code files (Python, JS), CSV, JSON. Image formats: PNG, JPEG, GIF, WebP. ZIP archives are scanned. Chat UI accepts up to 25 MB per file; API accepts up to 48 MB. API document processing requires Grok 4 or newer. PDF parsing confirmed for both surfaces.

Can Grok generate images?

Yes, through the Aurora model. Available on Free tier in basic form; full Imagine on SuperGrok and above. Both text-to-image and image-to-image (editing) workflows are supported.

Can Grok generate videos?

Yes, through Grok Imagine Video. SuperGrok Lite includes 15 videos per day at 480p / 6 seconds. SuperGrok includes full Imagine. SuperGrok Heavy includes maximum settings. Imagine Video version 1.0 launched February 2026 with native audio support.

Does Grok have voice mode?

Yes. Real-time voice conversation with TTS and STT. Camera mode (visual scene analysis while speaking) launched with Grok 4 in July 2025. Basic voice on Free tier; priority voice on SuperGrok and above.

What are Grok Companions?

3D animated AI characters with persistent memory and distinct personalities (Ani, Rudy, Bad Rudy, Valentine). Require SuperGrok ($30/month) minimum. Some Companions support NSFW mode. Launched July 2025 with Grok 4.

Does Grok have memory across conversations?

Yes in the consumer apps (grok.com, iOS, Android). Launched April 2025. Memory is stored outside the context window and selectively injected at conversation start. The standard API does not have native cross-session memory; developers must build their own memory layer.

What are Grok Projects?

Workspaces that hold persistent files, conversation history, and custom instructions for related chats. Available across consumer tiers; Grok Business adds team sharing at $30/seat/month. Launched April 2025.

What is Grok Build?

A coding agent in pre-launch as of May 2026. Features parallel agent spawning (up to 8 agents) and Arena Mode for tournament-style solution evaluation. Uses Grok 4.3 as underlying model. Tier availability and pricing not yet disclosed. Treat all Build claims as Volatile until xAI publishes formal documentation.

Why does Grok cite wrong sources?

Grok-3 scored 94% citation hallucination on the Columbia Journalism Review test, the worst of any model tested. The mechanism: citations are generated when search tools are enabled, but the claimed information often does not match source content. For citation-grounded research, pair Grok with Perplexity (which scored 37% on the same test, best of any model) rather than relying on Grok citations alone.



Twelve features. Real strengths.
Documented gaps. One catch.

Citation hallucination on Grok-3 was 94% on CJR. Vectara summarization hallucination on the reasoning variant is 20.2%, the worst of any frontier model. Suprmind orchestrates Grok alongside Claude, Perplexity, ChatGPT, and Gemini in one shared conversation – so when one model fabricates, others catch it before it reaches your decision.

7-day free trial. All five frontier models. No credit card required.



Disagreement is the feature.

Last verified May 7, 2026. Next refresh due August 7, 2026.