- Documentation MCP: Search and retrieve Confidence documentation
- Flags MCP: Create and manage feature flags
- Experiments MCP: Explore and analyze experiments and their results
Documentation MCP Server
The Documentation MCP provides AI assistants with access to Confidence documentation through semantic search, regular expression search, and SDK integration guides.
Integration Setup
No authentication is required for the Documentation MCP server. The server uses streamable HTTP as the transport, and the URL is https://mcp.confidence.dev/mcp/docs.
- Claude Code
- Cursor
- VS Code
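Beyond IDE integration, you can connect programmatically. Below is a minimal sketch using the official MCP Python SDK (the mcp package); it follows the SDK's standard streamable HTTP pattern, though exact import paths may vary with your SDK version.

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

DOCS_MCP_URL = "https://mcp.confidence.dev/mcp/docs"

async def main() -> None:
    # Open a streamable HTTP transport to the Documentation MCP server
    # (no authentication is needed for this server).
    async with streamablehttp_client(DOCS_MCP_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Discover the tools the server exposes
            # (searchDocumentation, getFullSource, and so on).
            tools = await session.list_tools()
            for tool in tools.tools:
                print(f"{tool.name}: {tool.description}")

asyncio.run(main())
```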
Available Tools
searchDocumentation
Searches the Confidence documentation using semantic search to learn about experiments, A/B tests, rollouts, feature flags, metrics, surfaces, insights, and other experiment-related topics. Returns the 10 most relevant chunks using Maximum Marginal Relevance (MMR) to balance relevance and diversity.
getFullSource
Retrieves the full text content from a specific Confidence documentation source URL.
grepDocumentation
Searches the Confidence documentation content using regular expressions. Returns a list of documentation pages that contain matches for the given regular expression pattern.
getCodeSnippetAndSdkIntegrationTips
Gets complete integration guides with code examples and README documentation for integrating Confidence feature flags. Returns full integration examples including OpenFeature provider setup, best practices, and configuration for the specified SDK.
Example Usage
Once integrated, ask your AI assistant to:
- “Search Confidence documentation for information about A/B testing”
- “Show how to integrate Confidence with Python”
- “Summarize the A/B test quickstart”
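You can also invoke the tools directly from the session in the connection sketch above. The snippet below calls searchDocumentation; note that the query argument name is an assumption inferred from the tool description, so verify it against the tool's declared input schema.

```python
from mcp import ClientSession

async def search_docs(session: ClientSession) -> None:
    # "session" is an initialized ClientSession from the connection
    # sketch above. The "query" argument name is an assumption based on
    # the tool description; check tool.inputSchema for the real names.
    result = await session.call_tool(
        "searchDocumentation",
        arguments={"query": "How do I set up an A/B test?"},
    )
    for block in result.content:
        # Text blocks carry the matched documentation chunks.
        if getattr(block, "text", None):
            print(block.text)
```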
Flag Management MCP Server
The Flag Management MCP provides AI assistants with tools to manage Confidence feature flags, including creation, modification, targeting rules, and testing.
Integration Setup
The Confidence Flags MCP requires authentication. Allow access from the IDE or developer tool where you use the MCP server. The server uses streamable HTTP as the transport, and the URL is https://mcp.confidence.dev/mcp/flags.
- Claude Code
- Cursor
- VS Code
Run /mcp and select confidence-flags to authenticate.
Available Tools
listClients
Lists all available Confidence clients for flag operations.
listFlags
Lists all active feature flags with summary information including names, schemas, and variants.
getFlag
Retrieves detailed information about a specific feature flag, including its complete schema, variants, and targeting rules.
createFlag
Creates a new feature flag with schema and variant definitions.
addFlagVariant
Adds a new variant to an existing feature flag.
updateFlagSchema
Updates the schema definition of an existing feature flag.
createOverrideRule
Creates targeting rules to override flag behavior for specific entities.
testResolveFlag
Tests flag resolution for specific clients and entities and explains why the system selects a particular variant.
analyzeFlagUsage
Finds unused Confidence feature flags or flags fully rolled out, helping you clean up feature flag code.
Example Usage
Once integrated, ask your AI assistant to:
- “List all feature flags in Confidence”
- “Create a new flag called ‘new-checkout-flow’ with a boolean schema”
- “Add a variant to the ‘new-checkout-flow’ flag”
- “Create an individual targeting rule for user ‘test-user’ to see the ‘variant-b’ of ‘experiment-flag’”
- “Test flag resolution for ‘new-checkout-flow’ with user ‘john@example.com’”
- “Analyze what flags you can clean up in this codebase”
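For programmatic access, the sketch below connects to the Flags server and calls listFlags. Because this server requires authentication and the documented flow runs OAuth through your MCP client (for example, /mcp in Claude Code), the bearer-token header shown here is an assumption for illustration, not a confirmed auth recipe.

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

FLAGS_MCP_URL = "https://mcp.confidence.dev/mcp/flags"

async def list_flags(token: str) -> None:
    # ASSUMPTION: passing a bearer token as a request header. The
    # documented flow authenticates through your MCP client via OAuth,
    # not a static token.
    headers = {"Authorization": f"Bearer {token}"}
    async with streamablehttp_client(FLAGS_MCP_URL, headers=headers) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool("listFlags", arguments={})
            for block in result.content:
                if getattr(block, "text", None):
                    print(block.text)

asyncio.run(list_flags("YOUR_TOKEN"))
```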
Experiments MCP Server
The Experiments MCP provides AI assistants with tools to explore and analyze Confidence experiments, including A/B tests and rollouts.
Integration Setup
The Confidence Experiments MCP requires authentication. Allow access from the IDE or developer tool where you use the MCP server. The server uses streamable HTTP as the transport, and the URL is https://mcp.confidence.dev/mcp/experiments.
- Claude Code
- Cursor
- VS Code
Run /mcp and select confidence-experiments to authenticate.
Available Tools
list_experiments
Lists Confidence experiments (A/B tests and rollouts). Returns experiment names, display names, states, owners, and creation times. Supports two modes:
- List mode (default): Use the filter parameter with Lucene query syntax (for example, state:live for running experiments) and orderBy to sort results.
- Search mode: Provide a query parameter for full-text search. In search mode, results are ordered by relevance, and the filter and orderBy parameters have no effect.
Returned experiment names follow the resource name format (workflows/abtest/... or workflows/rollout/...).
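As a sketch, a list-mode call from the Python client used earlier might look like the snippet below. The filter and orderBy parameters are documented above, but the exact orderBy value syntax shown is an assumption.

```python
from mcp import ClientSession

async def list_live_experiments(session: ClientSession) -> None:
    # "session" is an initialized ClientSession connected to the
    # Experiments server. The filter uses Lucene syntax per the
    # description above; the orderBy value format is an assumption.
    result = await session.call_tool(
        "list_experiments",
        arguments={"filter": "state:live", "orderBy": "createTime desc"},
    )
    for block in result.content:
        if getattr(block, "text", None):
            print(block.text)
```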
get_experiment
Retrieves detailed information about a specific experiment. By default, returns a compact summary with key fields:
- Name, state, owner, and creation time
- Metrics with types, minimum detectable effect (MDE), and non-inferiority margin (NIM)
- Decision outcome (for ended A/B tests)
- Available analysis result names
Set summary=false for full details including configuration, checks, stats, and state history.
get_results
Retrieves statistical analysis results for an experiment. By default, returns a summary with:
- Relative effect estimates and confidence intervals
- Significance status and status messages
- Sample sizes per treatment
- Shipping recommendations per treatment
Accepts either an experiment instance name (workflows/abtest/instances/{id}) or a specific analysis result name. When given an instance name, the tool automatically resolves the primary analysis result.
Set summary=false for the full detailed output.
get_resource
Looks up details of Confidence resources referenced in experiment data. Supports:
- metrics/*: Metric definitions
- entities/*: Entity definitions
- surfaces/*: Surface configurations
- segments/*: Segment definitions
- factTables/*: Fact table definitions
- flags/*: Feature flag configurations
- identities/*: User identities
- assignmentTables/*: Assignment table definitions
- clients/*: Client configurations
Example Usage
Once integrated, ask your AI assistant to:
- “List all running experiments”
- “Search for experiments related to checkout”
- “Show me details about the homepage-redesign experiment”
- “Get the results for the checkout-flow A/B test”
- “What’s the status of the metrics in my onboarding rollout?”
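Putting it together, the sketch below fetches a results summary and then the full output. The name argument key is an assumption based on the instance-name format shown above; summary=False maps to the documented summary=false option.

```python
from mcp import ClientSession

async def fetch_results(session: ClientSession, instance: str) -> None:
    # "instance" is an experiment instance name such as
    # workflows/abtest/instances/{id}; the "name" argument key is an
    # assumption inferred from the description above.
    summary = await session.call_tool(
        "get_results", arguments={"name": instance}
    )
    for block in summary.content:
        if getattr(block, "text", None):
            print(block.text)
    # summary=false (documented above) returns the full detailed output.
    full = await session.call_tool(
        "get_results", arguments={"name": instance, "summary": False}
    )
    print(f"full result blocks: {len(full.content)}")
```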

