Feature Flags

What are overrides?

Overrides manually force a specific user or group into a particular feature flag variant, bypassing the normal assignment logic. They're primarily used for QA, debugging, and internal testing: you want to verify that the "new checkout flow" variant works correctly before real users see it, so you override your own user ID into that variant.

Without overrides, testing a feature flag in production means waiting for random assignment to put you in the right group, or redeploying a special build. Neither is practical when you need to verify a specific variant on a specific device in the next five minutes.

How do overrides work?

An override is a rule that takes priority over all other flag evaluation logic. When the flag resolver encounters a user ID (or segment, or device ID) that has an override defined, it returns the specified variant immediately, skipping targeting conditions, percentage-based allocation, and bucket hashing entirely.

In Confidence, overrides sit at the top of the flag rule evaluation chain. A typical setup looks like:

  1. Check overrides. If the user matches, return the overridden variant.
  2. Evaluate targeting rules in order (first match wins).
  3. Fall through to the default variant.

This ordering means overrides always win. That's by design for QA, but it also means you need to clean up overrides when testing is done. A forgotten override that forces a product manager into the treatment group will silently exclude them from the normal assignment logic, and any metrics generated by overridden users should be excluded from experiment analysis.
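The evaluation chain above can be sketched in a few lines of Python. This is an illustrative model, not Confidence's actual resolver: the `Flag` dataclass, the `resolve` function, and the rule representation are all assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Flag:
    name: str
    overrides: dict = field(default_factory=dict)        # user_id -> variant
    targeting_rules: list = field(default_factory=list)  # (predicate, variant) pairs
    default_variant: str = "control"

def resolve(flag, user):
    # 1. Check overrides. If the user matches, return the overridden
    #    variant immediately, skipping all other logic.
    if user["id"] in flag.overrides:
        return flag.overrides[user["id"]]
    # 2. Evaluate targeting rules in order; first match wins.
    for predicate, variant in flag.targeting_rules:
        if predicate(user):
            return variant
    # 3. Fall through to the default variant.
    return flag.default_variant
```

With a hypothetical "new-checkout-flow" flag, an overridden QA account gets the pinned variant even when targeting rules would have assigned it something else:

```python
flag = Flag(
    name="new-checkout-flow",
    overrides={"qa-tester-1": "treatment"},
    targeting_rules=[(lambda u: u.get("country") == "SE", "treatment")],
    default_variant="control",
)
resolve(flag, {"id": "qa-tester-1", "country": "US"})  # "treatment", via override
resolve(flag, {"id": "user-42", "country": "SE"})      # "treatment", via targeting
resolve(flag, {"id": "user-99", "country": "US"})      # "control", the default
```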

When should you use overrides?

QA and verification. Before launching an experiment, you want to confirm that each variant renders correctly, handles edge cases, and doesn't crash. Override a test account into each variant and walk through the experience.

Demos and stakeholder reviews. Product managers and designers often need to see a specific variant on demand. Overrides give them a reliable way to view it without affecting the experiment.

Debugging production issues. If a user reports a problem and you suspect it's related to a specific variant, overriding your own account into that variant lets you reproduce the issue.

Internal dogfooding. Some teams override an internal employee segment into the treatment group for a few days before opening the experiment to external users. This catches obvious issues early without burning experiment traffic.

What should you avoid with overrides?

Overrides are a sharp tool. A few common mistakes:

Overriding large groups defeats the purpose of controlled rollouts. If you override 500 employees into treatment, you've created an uncontrolled exposure that doesn't generate clean experiment data and might skew production metrics.

Forgetting to remove overrides is the most common problem. Stale overrides accumulate over time, creating a growing set of users whose experience is manually pinned rather than dynamically assigned. Confidence's flag management surfaces make overrides visible, but the discipline of cleaning them up is on the team.

Including overridden users in experiment analysis contaminates results. Overridden users weren't randomly assigned, so they violate the randomization assumption that makes causal inference valid. Most analysis systems, including Confidence's, exclude overridden users from the statistical analysis automatically.