Exposure logging is the practice of recording exactly when and whether each user was actually exposed to a specific experiment variant. Rather than relying solely on assignment (which user was put in which group), exposure logging captures the moment the user's experience was affected by the treatment: the screen rendered, the feature loaded, the changed code path executed.
Without exposure logging, you're left guessing which users actually saw the change. Assignment data tells you who could have been affected. Exposure data tells you who was.
Why does exposure logging matter for experiment analysis?
Assignment and exposure are different events. A user can be assigned to a treatment group at app launch but never visit the screen where the change lives. If you only track assignment, your analysis includes users who were nominally "treated" but experienced nothing different from control. This is the root cause of dilution: unexposed users behave identically in both arms, so including them drags the measured effect toward zero.
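The dilution problem, and the trigger-analysis fix, can be sketched in a few lines. The records and field names below are illustrative toy data, not a real Confidence schema:

```python
# Hypothetical assignment log: user_id -> variant, recorded at app launch.
assigned = {
    "u1": "treatment", "u2": "treatment",
    "u3": "control",   "u4": "control",
}
# Users whose exposure event actually fired on the changed screen.
exposed = {"u1", "u3"}

# Naive analysis: every assigned user counts, including u2 and u4,
# who never saw the change -- this dilutes the measured effect.
naive_treated = [u for u, v in assigned.items() if v == "treatment"]

# Trigger analysis: restrict both arms to users who were exposed.
triggered = {u: v for u, v in assigned.items() if u in exposed}
# triggered contains only u1 (treatment) and u3 (control).
```

The key point is that the filter applies symmetrically to treatment and control, so the comparison stays apples-to-apples.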
Exposure logging enables trigger analysis by providing the data needed to identify which users actually encountered the change. In Confidence, exposure events write directly to your data warehouse alongside assignment data and metric events, giving you a complete picture of what each user experienced.
The quality of your exposure logging determines the quality of your trigger analysis. If exposure is logged too broadly (for example, logging everyone who opens the app when the change only affects one tab), your analysis still includes unexposed users and the dilution problem persists. If it's logged too narrowly, you risk excluding users who were genuinely affected by the change.
What makes a good exposure logging implementation?
Good exposure logging captures the point of actual impact, not just the point of assignment. For a checkout flow change, the exposure event should fire when the user reaches the checkout screen, not when they open the app. For a recommendation algorithm change, exposure should log when the user sees a recommendation, not when the model runs in the background.
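For the checkout example above, the difference comes down to where the logging call sits. A minimal sketch, assuming a generic event sink; the function names and event fields are hypothetical, not a Confidence API:

```python
import time

def log_exposure(user_id, flag, variant, sink):
    """Record an exposure event at the moment the change affects the user.

    `sink` stands in for whatever event pipeline you use; the field
    names here are illustrative placeholders.
    """
    sink.append({
        "user_id": user_id,
        "flag": flag,
        "variant": variant,
        "timestamp": time.time(),
    })

def render_checkout(user_id, variant, sink):
    # Exposure fires here -- when the user reaches the checkout screen --
    # not at app launch, where assignment happened.
    log_exposure(user_id, "new-checkout-flow", variant, sink)
    return f"checkout:{variant}"

events = []
render_checkout("u1", "treatment", events)
```

Placing the call inside the render path, rather than next to the flag evaluation, is what keeps the exposure log aligned with what users actually experienced.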
A few practical principles:
- The exposure event should be as close to the user experience as possible. Server-side flag evaluations are useful for assignment, but a client-side event that fires when the UI renders gives you a more accurate picture of who actually saw the change.
- Exposure events need the same identifiers as your assignment and metric data. If you can't join exposure logs to metric events at the user level, the data doesn't help.
- Log exposure for both treatment and control users. Control exposure data is just as important: it confirms that control users experienced the baseline version, which validates your trigger analysis comparison.
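The shared-identifier principle is easiest to see as a join. A toy sketch with made-up records (in a warehouse this would be a SQL join on the same key):

```python
# Exposure and metric events sharing a user-level identifier.
exposures = [
    {"user_id": "u1", "variant": "treatment"},
    {"user_id": "u3", "variant": "control"},
]
metrics = [
    {"user_id": "u1", "purchases": 2},
    {"user_id": "u2", "purchases": 1},  # never exposed; dropped by the join
    {"user_id": "u3", "purchases": 0},
]

# Inner join on user_id: only exposed users contribute to the analysis,
# and each metric row picks up the variant it should be compared under.
variant_by_user = {e["user_id"]: e["variant"] for e in exposures}
joined = [
    {**m, "variant": variant_by_user[m["user_id"]]}
    for m in metrics
    if m["user_id"] in variant_by_user
]
```

If the exposure log used a different identifier (say, a device ID while metrics use an account ID), this join would be impossible and the exposure data would be useless for analysis.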
Confidence's warehouse-native architecture means exposure logs land in BigQuery, Snowflake, Redshift, or Databricks alongside all other experiment data. There's no separate system to query or reconcile.