Claude Features in 2026: What Each One Does
Claude’s feature surface in 2026 spans persistent workspaces (Projects, Artifacts, Memory, Cowork), agentic capabilities (Claude Code, Computer Use, Skills), reasoning controls (Extended Thinking, Adaptive Reasoning), tool integration (MCP), and platform integrations (Microsoft 365, Bedrock, Vertex, GitHub Copilot, Cursor). Anthropic has shipped features at a consistent monthly cadence through 2025-2026, with feature gates focused on usage volume rather than feature exclusivity. The notable structural absence: no native image, audio, or video generation – Claude is a text-and-image-input, text-output model.
This page covers each major feature – what it is, when it launched, how it works, which tiers receive it, and the documented limitations. For pricing and tier comparison, see the [INTERNAL_LINK: anchor “Claude pricing details” → /hub/claude/pricing/]. For comparison against competing AI assistants, see [INTERNAL_LINK: anchor “Claude vs ChatGPT vs Gemini” → /hub/claude/vs-other-ai/].
Adaptive Reasoning vs Extended Thinking
Extended Thinking, introduced with Claude 3.7 Sonnet on 2025-02-24, forces Claude to generate a visible chain-of-thought trace before answering. The developer sets a budget_tokens parameter that controls how much reasoning compute the model can spend. Adaptive Reasoning (also called Adaptive Thinking on the API), introduced with the 4.6 generation in February 2026, replaces this paradigm. The model evaluates problem complexity internally and decides whether and how much internal reasoning to allocate. The developer specifies an effort level (standard, high, xhigh, max) rather than a token budget. At high effort, Claude almost always thinks before responding. At lower effort levels, Claude may skip thinking for simple problems. The xhigh level introduced with Opus 4.7 sits between high and max and provides additional compute for hard tasks without committing to maximum spend.
Adaptive Reasoning automatically enables Interleaved Thinking, allowing reasoning between tool calls. This is the structural reason it is the correct paradigm for agentic workflows: the model can think, call a tool, see the result, think again, and proceed. Manual Extended Thinking via budget_tokens is deprecated for Opus 4.7 and later; attempting it returns a 400 error. Sonnet 4.6 still supports both paradigms during the transition period. On the claude.ai surface, thinking summaries are still presented for transparency – generated by a smaller model for approximately 5% of long thought processes per Anthropic’s documentation.
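The two request shapes can be contrasted side by side. The budget_tokens form below matches Anthropic’s published Extended Thinking API; the effort form is a sketch based on this page’s description of Adaptive Reasoning, and the exact parameter name may differ in the actual API:

```python
# Extended Thinking (Claude 3.7 through Sonnet 4.6): explicit token budget.
extended_thinking_request = {
    "model": "claude-sonnet-4-6",
    "max_tokens": 16000,
    "thinking": {"type": "enabled", "budget_tokens": 8000},
    "messages": [{"role": "user", "content": "Prove that sqrt(2) is irrational."}],
}

# Adaptive Reasoning (Opus 4.7 and later): an effort level instead of a budget.
# The "effort" field and its values (standard/high/xhigh/max) follow this
# page's description; treat the exact name as an assumption.
adaptive_reasoning_request = {
    "model": "claude-opus-4-7",
    "max_tokens": 16000,
    "effort": "xhigh",
    "messages": [{"role": "user", "content": "Prove that sqrt(2) is irrational."}],
}
```

Sending the budget_tokens form to Opus 4.7 or later would, per the deprecation noted above, return a 400 error.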
See also: Suprmind Multi-Model Divergence Index →
Projects and Artifacts
Projects create isolated workspaces where users upload reference documents and system instructions that persist across conversations. Claude performs retrieval-based reasoning over project content – relevant sections are pulled into active context rather than loading the entire project at once. Project content is cached and does not count against per-message usage limits. Per-chat file uploads are capped at 20 files, 30 MB each, regardless of tier. With Opus 4.6 and Sonnet 4.6 on API, individual requests support up to 600 images or PDF pages. Enterprise plans expand chat context to 500K tokens; all other plans use 200K tokens in chat (1M tokens on API for Opus and Sonnet 4.6+). Projects launched September 2024 and expanded context 10x in June 2025.
Artifacts is Claude’s output format for code, documents, diagrams, and interactive content that can be rendered, edited, and exported directly from the conversation interface. When Claude generates substantial standalone content – code, HTML, SVG, Mermaid diagrams, React components, or formatted Markdown – a side panel opens with a live preview. Users can iterate on artifacts, share them publicly, or (on Team and Enterprise) share within organizational boundaries. Artifacts launched in preview June 2024 and reached general availability across all tiers on August 26, 2024. As of April 2026, Artifacts ships on all paid plans and inside Projects.
A documented friction point on Projects: users report that the context window can fill with project files, leaving limited space for the actual conversation. Project knowledge pulls only the most relevant content into active context per query; full retrieval is not guaranteed within a single response.
Claude Code
Claude Code is Anthropic’s terminal-first agentic coding tool, generally available since 2025-05-22 (it shipped as a research preview on 2025-02-24). It runs Claude as an autonomous coding agent that searches code, edits files, runs tests, and commits to GitHub. Native integrations include VS Code and JetBrains extensions (edits appear inline in files), GitHub PR tagging, and a Claude Code SDK for building custom agents. Claude Opus 4.7 raised the default effort level to xhigh for all plans at launch and introduced Task Budgets (public beta) for guiding token spend across longer agentic runs. The April 2026 launch also introduced the /ultrareview command for dedicated review sessions and a multi-session sidebar.
The Pro tier ($20/month) inclusion of Claude Code is volatile and contested as of 2026-05-07. The current anthropic.com/pricing page lists “Includes Claude Code” under Pro; an independent changelog tracker (scriptbyai.com, April 2026) states Anthropic removed Claude Code from Pro in April 2026. Conflict unresolved. Max plans confirmed to include Claude Code; Enterprise confirmed; API access via Claude Code SDK is uniformly available.
Claude Code’s benchmark performance: SWE-bench Verified 87.6%, SWE-bench Pro 64.3% (current industry high), CursorBench 70% (first model crossing 70% on this benchmark). Per Anthropic, Opus 4.7 reduced tool errors by approximately 33% over Opus 4.6 in some evaluations.
A widely covered “Claude got dumber” narrative emerged between March 4 and April 13, 2026. Anthropic published a full engineering postmortem on 2026-04-23 confirming three separate causes: default reasoning effort changed from high to medium on 2026-03-04 (reverted 2026-04-07); cache optimization bug clearing thinking history on every turn for stale sessions from 2026-03-26 (fixed 2026-04-10); system prompt verbosity constraint on 2026-04-16 causing 3% eval drop (reverted 2026-04-20). The “intentional degradation” accusation was unsubstantiated. Separately, a viral BridgeMind benchmark claiming a 15-point performance drop was based on n=6 tasks; an independent retest with n=30 showed negligible movement (87.6% to 85.4%).
See also: Claude Code pricing details →
Computer Use
Computer Use was originally released in beta with Claude 3.5 Sonnet on 2024-10-22, expanded across the Claude 3.7 and Claude 4 generations, and reached general availability on claude.ai in March 2026. Developers provide Claude with computer use tools and a user prompt via the Messages API. Claude assesses the task and constructs tool use requests (with stop_reason: tool_use); the developer runs actions in a sandboxed virtual machine with X11/Xvfb display, lightweight desktop environment (Mutter/Tint2), and pre-installed applications. Results return via tool_result content blocks. The loop continues until task completion or iteration limit. Default maximum is 10 agent loop iterations (developer-adjustable).
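The loop described above can be sketched in a few lines. The stop_reason and tool_result block shapes match the Messages API as documented; the client and VM executor below are stubs, and names like run_in_vm are hypothetical placeholders:

```python
# Illustrative Computer Use agent loop. `fake_client` stands in for a real
# Messages API call; `run_in_vm` stands in for executing an action in the
# sandboxed VM. Both are stubs for demonstration only.

MAX_ITERATIONS = 10  # the documented default loop cap (developer-adjustable)

def fake_client(messages):
    """Stub: returns one tool_use turn, then a final answer."""
    if len(messages) < 3:
        return {"stop_reason": "tool_use",
                "content": [{"type": "tool_use", "id": "tu_1",
                             "name": "computer", "input": {"action": "screenshot"}}]}
    return {"stop_reason": "end_turn", "content": [{"type": "text", "text": "Done."}]}

def run_in_vm(tool_call):
    """Stub: execute the requested action inside the sandboxed VM."""
    return {"type": "tool_result", "tool_use_id": tool_call["id"], "content": "ok"}

def agent_loop(client, user_prompt):
    messages = [{"role": "user", "content": user_prompt}]
    for _ in range(MAX_ITERATIONS):
        response = client(messages)
        if response["stop_reason"] != "tool_use":
            return response  # task complete
        # Echo the assistant turn, run each requested action, return results.
        messages.append({"role": "assistant", "content": response["content"]})
        tool_calls = [b for b in response["content"] if b["type"] == "tool_use"]
        results = [run_in_vm(tc) for tc in tool_calls]
        messages.append({"role": "user", "content": results})
    return {"stop_reason": "max_iterations", "content": []}
```

In production the stubs would be a real Messages API call and a sandboxed VM executor; the control flow is the part this sketch is meant to show.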
Claude Opus 4.7 significantly improved Computer Use reliability via high-resolution image support, achieving 98.5% on XBOW’s visual-acuity benchmark vs 54.5% for Opus 4.6, and 78% on OSWorld – effectively tied with GPT-5.5’s 78.7%. The earlier models’ low-resolution vision was the bottleneck; Opus 4.7 addressed it. Setup complexity remains the documented user pain point: it requires a sandboxed VM environment for safe execution, and Computer Use is not available as a standalone API endpoint – it is embedded in the Messages API.
Skills (Agent Skills)
Skills are file-system folders containing a required SKILL.md plus optional scripts and resources. Claude scans available skills, loads only minimal metadata first, then loads additional files only if the skill is relevant to the active task (progressive disclosure). Skills are composable – Claude coordinates multiple skills automatically. Skills run across claude.ai app, Claude Code, and the API via the /v1/skills endpoint. The initial release was 2025-10-15. An updated version (informally called “Skills 2.0” – full workflow packages with executable scripts) shipped Q1 2026. Anthropic shipped pre-built Skills for Excel, PowerPoint, Word, and PDF workflows in 2026.
Enterprise plans add deployment controls that let organizations manage which Skills are available across users.
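As described above, a Skill is a folder with a required SKILL.md plus optional scripts and resources. A minimal illustrative layout follows; the file and folder names below SKILL.md are hypothetical examples, not a mandated structure:

```
excel-report/
├── SKILL.md            # required: metadata + instructions (scanned first)
├── scripts/
│   └── build_chart.py  # optional executable helper
└── resources/
    └── template.xlsx   # optional resource, loaded only when relevant
```

Progressive disclosure means Claude first reads only the SKILL.md metadata; the scripts and resources load only if the skill matches the active task.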
Memory and Cowork
Memory operates in two modes. Chat memory derives summaries of past conversations and carries them across sessions, viewable and editable at Settings → Capabilities → Memory. File-system memory for agentic use writes to a /memory folder, read at session start, with optional auto-memory mode that lets Claude decide what to store. Opus 4.7 specifically improved file-system memory reliability for long multi-session agentic work. Chat memory shipped to Team and Enterprise plans in September 2025 and to Free in March 2026. The August 2025 data policy change extended conversation data retention to 5 years for users not opted out of training; this is distinct from active memory retention. Memory can be disabled in Settings.
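The file-system memory pattern described above can be sketched with stdlib tools. The folder layout and helper names below are hypothetical illustrations of the pattern, not Anthropic’s implementation:

```python
# Illustrative file-system memory pattern: notes persisted to a memory
# folder, read back at session start. Names here are hypothetical.
from pathlib import Path

def save_note(memory_dir: str, name: str, text: str) -> None:
    """Persist one memory note as a Markdown file."""
    d = Path(memory_dir)
    d.mkdir(parents=True, exist_ok=True)
    (d / f"{name}.md").write_text(text)

def load_memory(memory_dir: str) -> list[str]:
    """At session start, read every stored note (sorted by filename)."""
    d = Path(memory_dir)
    if not d.exists():
        return []
    return [p.read_text() for p in sorted(d.glob("*.md"))]
```

An auto-memory mode, as described above, would call something like save_note at the model’s discretion rather than on explicit user instruction.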
Claude Cowork launched in research preview January 2026 and reached general availability across all paid plans in April 2026. Cowork grants Claude access to a user-specified folder on the local computer; Claude can read, edit, and create files autonomously, supporting multi-step task execution and sub-agent coordination for parallelizable work. Initial launch was macOS-only. Folder access must be explicitly granted; users have reported Claude making file modifications without prior review, so backing up files before use is recommended.
See also: Suprmind’s AI Hallucination Rates and Benchmarks reference →
MCP (Model Context Protocol)
MCP is an open standard Anthropic designed to allow Claude to connect to external tools, data sources, and services via a standardized interface. Anthropic launched MCP on 2024-11-18; one-click local MCP installation on Claude Desktop landed June 2025; remote MCP connectors landed January 2026. MCP servers expose tools that Claude can call – file system access, database queries, API calls – with user approval per action in desktop mode. Third-party MCP servers exist for Notion, Zapier, GitHub, and major IDE tools.
Claude Opus 4.7 scores 77.3% on MCP-Atlas, leading GPT-5.4 by 9.2 points and Gemini 3.1 Pro (73.9%) by 3.4 points – the strongest real-world tool-orchestration performance currently measured. Remote MCP connectors are available on Pro, Max, Team, and Enterprise plans, with some Free tier access per the January 2026 rollout note. Claude Desktop local MCP works on any plan that has the desktop app. No officially published hard limits on MCP server count or tool calls; setup complexity for local servers is the documented friction (requires config file editing).
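The per-action approval flow in desktop mode can be sketched host-side. The tool registry and function names below are hypothetical illustrations of the pattern, not the actual MCP SDK API (see modelcontextprotocol.io for the real specification):

```python
# Illustrative host-side handling of MCP-style tool calls with per-action
# user approval, as in Claude Desktop's local mode. All names are sketches.

TOOLS = {
    # a server might expose file system access, database queries, API calls
    "query_db": lambda sql: f"rows for: {sql}",
}

def handle_tool_call(name, args, approve):
    """Dispatch one tool call, but only if the user approves this action."""
    if name not in TOOLS:
        return {"error": f"unknown tool: {name}"}
    if not approve(name, args):
        return {"error": "user denied the action"}
    return {"result": TOOLS[name](**args)}
```

The key design point is that approval is checked per action, not per server: granting a connector does not pre-authorize every call it might make.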
Microsoft 365 Integration (Excel, Word, Outlook)
Claude in Excel launched as a beta research preview in October 2025, providing workbook understanding with cell-level citations for explanations and the ability to update assumptions while preserving formulas. Excel integration covers nested formulas and multi-tab dependencies. Claude for Word launched in April 2026 (Pro and Max). Claude for Microsoft 365 (Outlook, broader 365 surfaces) is included on Pro, Max, Team, and Enterprise tiers. Free tier does not include Microsoft 365 integration.
File Uploads and Document Handling
Files are attached directly to chat messages for real-time reference within the context window. Project knowledge provides persistent cross-session access via retrieval. Accepted formats: PDF, text files (.txt, .md), code files; images (PNG, JPEG, GIF, WEBP) for vision-enabled models; Office formats (Word, Excel) via Skills integrations; CSV and structured data via the code execution tool. Per-chat hard limits: 20 files maximum, 30 MB per file. With Opus 4.6 and Sonnet 4.6 on API, individual requests support up to 600 images or PDF pages. Enterprise plans: 500K context window. Other plans: 200K context window in chat (1M tokens on API for Opus and Sonnet 4.6+). Claude 3.5 and later process PDFs including embedded images.
Web Search and Research Mode
Web Search is integrated via a proprietary search layer, available as a toggle in claude.ai across all tiers since May 2025 and via the Web Search API at $10 per 1,000 searches. When enabled, Claude queries the web in real time and integrates results into responses with inline URL citations. When off, responses draw from parametric knowledge with a training cutoff of approximately January 2026 for Claude Opus 4.7. Without web search, answers to time-sensitive queries carry a 2-4 week effective lag per Suprmind’s retrieval latency methodology – a structural disadvantage versus search-first models like Perplexity Sonar (24-48 hour lag on average).
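At the listed rate of $10 per 1,000 searches, Web Search API cost scales linearly with volume; a quick sketch:

```python
# Web Search API cost at the listed rate of $10 per 1,000 searches.
PRICE_PER_1000_SEARCHES = 10.00

def web_search_cost(n_searches: int) -> float:
    """Dollar cost for a given number of API searches."""
    return n_searches * PRICE_PER_1000_SEARCHES / 1000
```

A workload of 2,500 searches would cost $25; 100 searches, $1.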
What Claude cannot access via web search: paywalled content, private accounts, deleted content, content blocked by Claude-SearchBot in robots.txt, and content from sanctioned jurisdictions. Citation behavior: with web search enabled, Claude cites URLs inline. When answering from training data only, no URL citations are provided. The system does not surface mixed-mode indicators within a single response (whether a given claim came from web search or parametric knowledge) – a documented UX gap.
Research mode is an agentic research feature combining web search, Google Workspace access, and connected integrations to produce multi-source reports. It launched April 2025 with Google Workspace integration; mobile and advanced mode (web + Workspace + Integrations) launched May 2025. Research is available on Pro, Max, Team, and Enterprise. Free tier does not have Research per the pricing page.
The Tier-to-Model Disambiguation Gap
A documented limitation of the claude.ai consumer interface is the lack of a real-time per-message indicator of which underlying snapshot processed a given query. The model selector dropdown shows the available choice; system prompt probing reveals the dated snapshot ID; the persistent UI does not. The transition from one default model to another – for example, the Sonnet 4.5 to Sonnet 4.6 default switch in February 2026 – is announced via Anthropic newsroom but not via in-product notification for existing users. Developers using API alias IDs (claude-opus-4-7, claude-sonnet-4-6) receive the pinned snapshot associated with that alias at time of call. Note that the 4.6 generation introduced dateless API IDs that look like aliases but are pinned snapshots, not evergreen pointers.
See also: Claude vs ChatGPT comparison →
Sources
- platform.claude.com (API and feature documentation)
- support.claude.com (feature support articles)
- anthropic.com/news (feature launches)
- anthropic.com/engineering/april-23-postmortem (Claude Code regression)
- modelcontextprotocol.io (MCP specification)
- Anthropic changelog via scriptbyai.com (release dates)
- Suprmind Multi-Model Divergence Index (multi-model performance data)
- Suprmind AI Hallucination Rates and Benchmarks (per-feature reliability)
Last verified 2026-05-07.
Frequently Asked Questions
How does Claude extended thinking work?
Extended Thinking, introduced with Claude 3.7 Sonnet, allocates a dedicated pre-response reasoning budget shown as a visible “Thinking” section above Claude’s response. The developer sets budget_tokens to control how much reasoning compute the model spends. Performance on complex tasks improves logarithmically with thinking tokens. Adaptive Reasoning, introduced with the 4.6 generation, replaces this paradigm: the developer sets an effort level (standard, high, xhigh, max) and Claude allocates compute internally. Manual budget_tokens is disabled for Opus 4.7+ (returns 400 error).
What is Claude Projects?
Projects is a workspace feature that groups related conversations, uploaded files, and custom instructions under a persistent context. The project’s instructions, files, and conversation history are accessible across all chats within that project. File uploads are capped at 20 files per chat at 30 MB each. Project content is cached and does not count against per-message usage limits. Available on Free with limits; Pro, Max, Team, and Enterprise have unlimited Projects.
What are Claude Artifacts?
Artifacts are Claude’s output format for code, documents, diagrams, and interactive content that can be rendered, edited, and exported directly from the conversation interface. When Claude generates substantial standalone content – code, HTML, SVG, Mermaid diagrams, React components, formatted Markdown – a side panel opens with a live preview. Available on all paid plans and Free since August 2024.
How does Claude’s memory feature work?
Memory operates in two modes. Chat memory: Claude derives a summary of past conversations and carries it across sessions, viewable and editable at Settings → Capabilities → Memory. File-system memory for agentic and Claude Code use: Claude writes notes to a /memory folder, read at session start. Memory can be disabled in Settings; conversation data retention can be opted out separately at Settings → Privacy → Data Usage. Available on all tiers including Free since 2026-03-02.
What is Model Context Protocol (MCP) in Claude?
MCP is an open standard Anthropic designed to allow Claude to connect to external tools, data sources, and services via a standardized interface. Third-party MCP servers exist for Notion, Zapier, GitHub, and major IDE tools. Claude Opus 4.7 scores 77.3% on MCP-Atlas, leading Gemini 3.1 Pro (73.9%) by 3.4 points and GPT-5.4 by 9.2 points. Remote connectors available on Pro, Max, Team, Enterprise; local MCP works with any plan that has the desktop app.
Does Claude have web search?
Yes. Web search is a toggle in claude.ai across all tiers since May 2025 and is available via the Web Search API at $10 per 1,000 searches. When enabled, Claude queries the web in real time and cites URLs inline. When off, responses draw from parametric knowledge (training cutoff approximately January 2026 for Opus 4.7).
What is Claude Cowork?
Cowork is a desktop tool that grants Claude access to a user-specified folder on the local computer. Claude can read, edit, and create files autonomously, supporting multi-step task execution. Launched in research preview January 2026, generally available across all paid plans April 2026. Initial launch was macOS-only. Backup before use is recommended.
Can Claude generate images, audio, or video?
No. Claude Opus 4.7 and the entire Claude 4 generation do not generate images, audio, or video. Accepted inputs: text and image. Third-party integrations pair Claude with image generation models, but these are not native Claude capabilities. This is a deliberate architectural choice, not a roadmap gap.
What is Claude Code?
Claude Code is Anthropic’s terminal-first agentic coding tool, generally available since 2025-05-22. It delegates coding tasks to Claude from the developer’s terminal, supporting agentic loops including searching and editing code, running tests, and committing to GitHub. Native integrations: VS Code and JetBrains extensions, GitHub PR tagging, Claude Code SDK. Claude Opus 4.7 raised default effort to xhigh and introduced Task Budgets and /ultrareview. SWE-bench Verified 87.6%; SWE-bench Pro 64.3% (industry high). Pro tier inclusion is contested.
Why does Claude lose context in long conversations?
Claude’s context window – 200K tokens in chat (500K on Enterprise), up to 1M on the API for 4.6+ models – is a hard limit. As conversations approach the window, the oldest content is gradually displaced. Symptoms: forgotten formatting rules, re-asked questions, contradictory responses from partial recall. This is mechanical context overflow, not “forgetting.” Mitigation: summarize the session state into a single anchor message before the window fills. Claude Code (Opus 4.7) handles context compaction automatically in agentic sessions; API users have a beta context compaction feature.
Stop guessing. Start cross-checking.
Suprmind runs your prompt across ChatGPT, Claude, Gemini, Grok, and Perplexity in parallel. See where they agree, where they disagree, and which insights only one model surfaced — before you act.