Experiment Analysis

What are Exposure Filters?

Exposure filters are criteria applied during experiment analysis to include or exclude users based on their exposure to the treatment. They define which users count in the analysis by specifying conditions like "only users who loaded the changed screen" or "only users who triggered the new code path during the experiment window."

Exposure filters sit between raw assignment data and the final statistical comparison. They're the mechanism that turns a broad experiment population into the focused sample that trigger analysis operates on.
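The step from raw assignment data to the focused analysis sample can be sketched as follows. This is an illustrative sketch, not Confidence's implementation; the data shapes (`assignments`, `exposure_events`) and the experiment window are made up for the example.

```python
from datetime import datetime

# Hypothetical data shapes: assignments map user -> variant, and
# exposure_events record when a user actually hit the changed code path.
assignments = {
    "u1": "treatment", "u2": "control", "u3": "treatment", "u4": "control",
}
exposure_events = [
    {"user": "u1", "ts": datetime(2024, 5, 2)},
    {"user": "u2", "ts": datetime(2024, 5, 3)},
    {"user": "u4", "ts": datetime(2024, 6, 9)},  # outside the window
]

window_start, window_end = datetime(2024, 5, 1), datetime(2024, 5, 31)

# Exposure filter: keep only assigned users with an exposure event
# inside the experiment window.
exposed = {
    e["user"] for e in exposure_events
    if window_start <= e["ts"] <= window_end
}
analysis_population = {u: v for u, v in assignments.items() if u in exposed}
print(analysis_population)  # {'u1': 'treatment', 'u2': 'control'}
```

Users who were assigned but never exposed (`u3`), or exposed outside the window (`u4`), drop out before any statistical comparison runs.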

How do exposure filters differ from segment filters?

Segment filters divide users by pre-existing attributes: country, device type, subscription tier, signup cohort. These attributes exist independently of the experiment. You can apply segment filters to any analysis without risk of bias, because the segments were defined before the treatment could influence them.

Exposure filters are different. They condition on something that happens during the experiment: whether the user encountered the change. This distinction matters because exposure can be influenced by the treatment itself. If the treatment makes a page load faster, more users might reach it in the treatment group than in control. Filtering on "reached the page" now selects a different population in treatment vs. control, which can introduce bias.
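The distinction can be made concrete with a small sketch (the user records and field names here are invented for illustration): a segment filter conditions on an attribute the treatment cannot touch, while an exposure filter conditions on in-experiment behavior that the treatment may have changed.

```python
from collections import Counter

users = [
    {"id": "u1", "country": "SE", "variant": "treatment", "reached_page": True},
    {"id": "u2", "country": "SE", "variant": "control",   "reached_page": False},
    {"id": "u3", "country": "US", "variant": "treatment", "reached_page": True},
    {"id": "u4", "country": "US", "variant": "control",   "reached_page": True},
]

# Segment filter: conditions on a pre-existing attribute (country).
# The treatment cannot influence it, so the filter is safe for any analysis.
swedish_users = [u for u in users if u["country"] == "SE"]

# Exposure filter: conditions on in-experiment behavior (reached_page).
# If the treatment makes more users reach the page, the filtered groups
# are no longer comparable populations.
exposed_users = [u for u in users if u["reached_page"]]
print(Counter(u["variant"] for u in exposed_users))
# -> Counter({'treatment': 2, 'control': 1}): an imbalance the segment
#    filter could never create.
```

In this toy data the exposure filter keeps two treatment users but only one control user, which is exactly the kind of asymmetry that introduces selection bias.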

This is why exposure filters need to be defined carefully. The safest approach is to use exposure conditions that are symmetric between treatment and control: both groups fire the same trigger event, in the same code path, at the same point in the user journey. Confidence logs exposure events for both treatment and control variants to support this.
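A symmetric trigger in code looks roughly like the sketch below. The function and event names are hypothetical (this is not Confidence's SDK API); the point is that the exposure event fires at the same place in the code path for both variants, before any variant-specific logic runs.

```python
def render_checkout(user_id: str, variant: str, log_event) -> str:
    # Symmetric trigger: the exposure event is logged at the same point
    # in the user journey for BOTH variants, before the branch.
    log_event({"event": "checkout_exposure", "user": user_id, "variant": variant})
    if variant == "treatment":
        return "new_checkout"
    return "old_checkout"

events = []
render_checkout("u1", "treatment", events.append)
render_checkout("u2", "control", events.append)
# Both variants logged one exposure event at the identical trigger point.
```

Placing the log call inside the `if variant == "treatment"` branch would make the trigger asymmetric and the resulting filter biased.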

When should you apply exposure filters?

The most common use case is reducing dilution. If only a subset of assigned users can encounter the change, filtering to that subset improves statistical power without inflating false positives, as long as the filter is symmetric between treatment and control.
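Some stylized arithmetic shows why dilution is costly. The numbers below are invented and the scaling argument assumes roughly equal outcome variance across users; it is an intuition sketch, not a power calculation.

```python
# Suppose the true lift among exposed users is 2%, and only 20% of
# assigned users ever reach the changed surface.
exposure_rate = 0.20
lift_among_exposed = 0.02

# Intent-to-treat: unexposed users contribute no effect, so the
# measured lift is diluted by the exposure rate.
itt_lift = exposure_rate * lift_among_exposed  # 0.4% instead of 2%

# Required sample size scales roughly with (1 / effect)^2, so dilution
# by p inflates the required assigned-user count by ~1/p^2. Filtering
# keeps only p*N users but recovers the full 2% effect, for a net
# ~1/p advantage over the unfiltered analysis.
inflation_vs_filtered = (lift_among_exposed / itt_lift) ** 2 * exposure_rate
print(round(inflation_vs_filtered, 2))  # ~5x more users needed unfiltered
```

With a 20% exposure rate, the unfiltered analysis needs roughly five times as many assigned users to detect the same underlying effect.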

Apply exposure filters when:

  • The feature under test sits behind a user action (visiting a page, opening a menu, starting a flow) that not all users take
  • You want to understand the effect on users who actually experienced the change, not just the intent-to-treat effect across all assigned users
  • The trigger condition is symmetric between treatment and control

Be cautious when:

  • The trigger condition itself could be affected by the treatment (for example, if the treatment changes whether users reach the trigger point)
  • The filter selects a post-randomization behavior that correlates with the outcome you're measuring
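One common heuristic for catching the first problem (this is a general diagnostic, not a described Confidence feature) is to compare exposure rates between variants: with a symmetric trigger and equal traffic split, the exposed counts should be statistically indistinguishable. A two-proportion z-test is a minimal sketch of that check.

```python
import math

def exposure_rate_z(exposed_t, assigned_t, exposed_c, assigned_c):
    """Two-proportion z-statistic comparing exposure rates between arms."""
    p_t = exposed_t / assigned_t
    p_c = exposed_c / assigned_c
    pooled = (exposed_t + exposed_c) / (assigned_t + assigned_c)
    se = math.sqrt(pooled * (1 - pooled) * (1 / assigned_t + 1 / assigned_c))
    return (p_t - p_c) / se

# 52% vs 40% exposure on 10,000 assigned users per arm: a large
# imbalance suggesting the treatment itself changes who gets exposed.
z = exposure_rate_z(5200, 10_000, 4000, 10_000)
print(abs(z) > 1.96)  # True -> filtering on exposure is likely biased here
```

A significant imbalance does not say which arm is "wrong", only that the exposed populations differ, so the filtered estimate should be treated with suspicion and the intent-to-treat estimate preferred.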

In Confidence, trigger analysis applies exposure filters automatically using logged exposure events. The platform presents both the filtered and unfiltered results so teams can compare and make informed decisions about which estimate answers their question.