AI Quality as Surrogacy for Idealized Deliberation: Technical Appendix
Abstract. Formal framework with precise definitions, identification results, influence functions, and asymptotic theory for treating AI quality measurement as a surrogate endpoint problem. We establish three regimes of surrogacy (no surrogacy, local, and global transport), provide estimators for Direct, IPS, and DR modes with oracle-uncertainty-aware inference, and give testable diagnostics for transportability.
Canonical Definitions
For canonical definitions of Y vs Y*, assumptions (A0, J1, S1-S3, L1-L2), and core concepts, see the CIMO Glossary.
Prerequisites: This appendix assumes familiarity with semiparametric efficiency theory, influence functions, and causal inference. For the conceptual introduction, see the main post.
0. Notation and spaces
- Context space $\mathcal{X}$, action space $\mathcal{A}$ (text, code, plans), score space $\mathcal{S} \subseteq \mathbb{R}$.
- A policy $\pi$ maps $x \in \mathcal{X}$ to a distribution $\pi(\cdot \mid x)$ on $\mathcal{A}$. Let $\Pi$ be a class of admissible policies.
- $P_X$ denotes the population distribution of contexts; we treat single-turn first and extend to trajectories in §10.
- An Operational Oracle $Y$ is the measurable, expensive evaluation label we can collect (e.g., human preference, GPT-5 judgment, expert audit).
- An Idealized Deliberation Oracle (IDO) is a functional $Y^*$ representing the normalized evaluation under idealized deliberation. See the utility semantics box below for a precise definition.
- A judge (or surrogate measurement process) at rung $k$ maps $(x, a)$ to a random score $S_k \in \mathcal{S}$. We allow a ladder of rungs $S_1, \dots, S_K$ induced by a filtration $\mathcal{F}_1 \subseteq \cdots \subseteq \mathcal{F}_K$ with $S_k$ measurable w.r.t. $\mathcal{F}_k$.
Target quality
For any $\pi \in \Pi$,
$$V(\pi) = \mathbb{E}_{x \sim P_X,\, a \sim \pi(\cdot \mid x)}\big[Y^*(x, a)\big].$$
IDO semantics (utility view)
Fix an outcome space $\mathcal{O}$, a kernel $P(o \mid x, a)$, a utility $u : \mathcal{O} \to \mathbb{R}$, an optional social aggregator $W$, a risk/aggregation functional $\rho$ (e.g. mean, CVaR), and a strictly increasing normalization $\phi$ onto $[0, 1]$.
Defaults: if single-stakeholder ($m = 1$), $W$ is identity; otherwise pick $W$ (e.g., weighted sum, max–min). $\rho$ = expectation; $\phi$ = reference-policy anchoring, mapping $(\pi_{\text{low}}, \pi_{\text{high}})$ to $(0, 1)$. Record all choices in the assumptions ledger.
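As a concreteness check, the composition $\phi \circ \rho \circ W \circ u$ fits in a few lines; the defaults below (utilitarian $W$, expectation $\rho$, fixed anchors) are illustrative assumptions, not prescribed choices:

```python
import numpy as np

def ido_value(outcomes, stakeholder_weights=None):
    """Compose utility -> aggregator W -> risk functional rho -> normalization phi.

    outcomes: array of shape (n_samples, n_stakeholders) of raw utilities u(o).
    The anchor values below stand in for (pi_low, pi_high) calibration.
    """
    outcomes = np.asarray(outcomes, dtype=float)
    if stakeholder_weights is None:            # single-stakeholder default: W = identity
        aggregated = outcomes.mean(axis=1)
    else:                                      # weighted-utilitarian W
        w = np.asarray(stakeholder_weights, dtype=float)
        aggregated = outcomes @ (w / w.sum())
    rho = aggregated.mean()                    # rho = expectation (default)
    lo, hi = 0.0, 1.0                          # hypothetical anchor values
    return float(np.clip((rho - lo) / (hi - lo), 0.0, 1.0))  # increasing phi onto [0, 1]
```

Swapping `rho` for a CVaR or `W` for a max–min only changes the two marked lines.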
1. Axioms for the IDO (normative)
Let $Y^*$ be the limiting value of a deliberation procedure.
- A1 (Deliberative stability). There exists a sequence of increasing-effort labels $Y_1, Y_2, \dots$ s.t. $Y_t \to Y^*$ in $L^1$ as $t \to \infty$.
- A2 (Evidence monotonicity). If $\mathcal{F}_j \subseteq \mathcal{F}_k$ (more deliberative evidence), then $Y_k$ is (weakly) closer to $Y^*$ than $Y_j$.
- A3 (Instrumental invariance). If two procedures yield the same world-state relevant to the objective, they have equal $Y^*$.
A1–A3 make $Y^*$ a well-defined limit of a "deliberation ladder."
1.1. The Bridge Assumption: Connecting Y to Y*
In practice, we cannot directly measure the idealized oracle $Y^*$. Instead, we collect operational oracle labels $Y$—expensive but measurable evaluations such as human preferences, expert audits, or high-quality model judgments (e.g., GPT-5).
The Bridge Assumption (A0) formalizes the alignment between the operational oracle and the idealized target:
A0 (Bridge Assumption)
The operational oracle $Y$ is sufficiently aligned with the idealized deliberation oracle $Y^*$ such that optimizing for $Y$ approximates optimizing for $Y^*$.
Validation: A0 is validated via the Bridge Validation Protocol (BVP), which consists of three pillars:
- Pillar 1: Predictive Transportability Experiment (PTE) — Empirical test that $Y$ predicts held-out outcomes on $Y^*$-relevant metrics (e.g., user satisfaction, task success).
- Pillar 2: Construct Validity Audits — Expert review and stakeholder feedback confirming that $Y$ captures the intended welfare construct.
- Pillar 3: Stability Monitoring — Continuous tracking of the Y→Y* relationship to detect drift (see CLOVER governance framework in §6).
Implication for Calibration: The statistical calibration machinery (Assumptions S1-S2 below, Sections 3-5) operates on the measurable label $Y$. The Bridge Assumption (A0) ensures that optimizing the policy value $V(\pi) = \mathbb{E}_\pi[Y]$ serves the idealized target $V^*(\pi) = \mathbb{E}_\pi[Y^*]$. This separation keeps the statistical framework (Layers 2-6) operating on observables while making the alignment of $Y$ to $Y^*$ a governance question (Layer 0).
Note. If the relationship between $Y$ and $Y^*$ changes across environments, selection enters $Y^*$ and the calibration will not transport (§3.5 table, row "$Y^*$"). Record in the assumptions ledger (§12).
2. Surrogacy (structural) and transport (stability) assumptions
- S1 (Oracle-surrogate sufficiency at rung $k$). There exists a measurable $f_k$ s.t. $\mathbb{E}[Y \mid X, A, S_k] = f_k(S_k, X)$ on the joint support of logging and evaluated policies. Optionally add monotonicity of $f_k$ in a one-dimensional risk index $S_k$.
Scope: S1 is required only on the overlap region, not for arbitrary actions outside it. This is the same support condition needed for standard overlap (S3).
Note: S1 targets the operational oracle $Y$, not the idealized $Y^*$. Under the Bridge Assumption (A0), calibrating to $Y$ serves $Y^*$.
- S2 (Transportability across policies/time). For a collection $\mathcal{G}$ of environments (policies, cohorts, time), the same $f_k$ works: for all $g \in \mathcal{G}$, $\mathbb{E}_g[Y \mid X, A, S_k] = f_k(S_k, X)$ on the support in each environment. Graphical test (Pearl & Bareinboim, 2014[7]): in a selection diagram modeling environment differences via selection nodes $\mathrm{Sel}$, S2 holds if $(S_k, X)$ is S-admissible: $Y \perp \mathrm{Sel} \mid (S_k, X)$ in the diagram with incoming arrows to $S_k$ removed. Intuitively: calibration transports if no selection node points into $Y$ given the surrogate. See §2.5 for the ladder of surrogacy regimes.
- S3 (Positivity/overlap for off-policy re-use). If estimating $V(\pi)$ from logs of $\pi_0$, then $\pi(a \mid x) > 0 \Rightarrow \pi_0(a \mid x) > 0$ a.s.
- S4 (Judge availability). For any $\pi$ used in Direct mode, we can obtain $S_k$ at scale; for OPE/DR, we have $(x, a, S_k, \pi_0(a \mid x))$ in logs.
- L1 (Oracle MAR). Let $R \in \{0, 1\}$ indicate whether an example received an oracle label $Y$. Then $R \perp Y \mid (S_k, X)$. Oracle labeling is ignorable conditional on observed surrogates and covariates.
- L2 (Oracle positivity). $P(R = 1 \mid S_k, X) > 0$ on the support where $f_k$ will be applied. Ensures the calibration function is identifiable and transportable.
2.5. A Ladder of Surrogacy Regimes
We organize identification and estimation strategies into three regimes, from weakest to strongest. Throughout, all assumptions are required only on the relevant support (the states/actions seen under logging and candidate policies). This is the same support condition required for standard overlap and does not strengthen any results.
Regime 1: No Surrogacy (K&M-style fallback)
Assumptions. We make no sufficiency assumption about $S$. We assume the standard conditions needed to learn from partially labeled $Y$: (i) missingness of $Y$ is conditionally random given $(X, A, S)$ and logging data, (ii) overlap for actions taken by $\pi$ vs. $\pi_0$.
Identification. Value is identified via standard IPW/DR machinery using the available $Y$ labels in each evaluation context.
Estimator. Use your DR estimator with $Y$ on labeled rows; use $S$ only as features for the outcome/propensity models (efficiency only).
Label burden. Requires labels whenever you change environment or substantially shift $\pi$.
When to use. Diagnostics indicate surrogacy is unreliable (or the transport diagrams fail), but you still need a valid evaluation. See §4.6 for the K&M drop-in estimator.
Note: If your primary target is the IDO outcome, take $Y$ as the outcome and use standard IPW/DR with labeled $Y$ in each context; when comparing two policies as treatments ($T \in \{0, 1\}$), the K&M estimator in §4.6 applies directly. For multi-policy evaluations, K&M can be applied pairwise by encoding each comparison as a binary $T$, but the original theory is for a 2-arm ATE.
Literature: This corresponds to the Kallus–Mao (2020) setting: surrogates aid efficiency but do not replace $Y$, so each evaluation context requires labels. K&M is framed around the binary-treatment ATE ($T \in \{0, 1\}$). See §4.6 for the estimator.
Regime 2: Local Surrogacy (single-environment amortization)
Assumption (Local S1). In a fixed environment $g^\star$,
$$\mathbb{E}_{g^\star}\big[Y \mid X, A, S_k\big] = f_k^{g^\star}(S_k, X).$$
Identification. Once $f_k^{g^\star}$ is calibrated using labels in $g^\star$,
$$V_{g^\star}(\pi) = \mathbb{E}_{x \sim P_X^{g^\star},\, a \sim \pi}\big[f_k^{g^\star}(S_k, x)\big].$$
Estimator. Replace $Y$ by $\hat f_k^{g^\star}(S_k, X)$ in the value estimator; use standard OPE (e.g., DR) within $g^\star$.
Label burden. Labels needed once per environment you care about (re-calibrate if you move to $g' \ne g^\star$).
Diagnostics. (i) Held-out calibration of $\hat f_k^{g^\star}$ vs. $Y$ inside $g^\star$, (ii) sensitivity to action mix and covariate shift within $g^\star$.
When to use. You do not need cross-environment transport (single deployment context), or your invariance tests are inconclusive.
Regime 3: Global Surrogacy with Transport (flagship CJE, "Causal Judge Evaluation")
Assumptions (S1 + S2).
- S1 (Surrogacy sufficiency): $\mathbb{E}_g[Y \mid X, A, S_k] = f_k(S_k, X)$ for all $g$ in the set $\mathcal{G}$ of admissible environments, on the relevant support.
- S2 (Invariance/transport): The same $f_k$ is valid across those environments; i.e., $f_k$ transports under the selection diagram conditions (S-admissibility).
Identification. Calibrate $f_k$ once (in any admissible environment with labels), then for any admissible $(\pi, g)$:
$$V_g(\pi) = \mathbb{E}_{x \sim P_X^{g},\, a \sim \pi}\big[f_k(S_k, x)\big].$$
Estimator. Your CJE estimator as written: calibrate once, evaluate across policies and environments using judge scores only, subject to the transport diagnostics.
Label burden. One-time (per surrogate family $S_k$), provided diagnostics keep passing as you move across $\mathcal{G}$.
Diagnostics. Your existing selection diagrams + invariance tests on $f_k$ across $\mathcal{G}$; backstop to Regime 2 or 1 if they fail. See §3.5 and §6.
| Regime | Surrogacy assumption | Re-labels needed? | Where you can evaluate without new Y* | Typical use | Literature |
|---|---|---|---|---|---|
| 1. No surrogacy | None | Yes, every environment | Nowhere beyond the labeled context | Strict validity when surrogacy fails | Kallus–Mao (2020) |
| 2. Local | S1 in one g★ | Once per environment | Any policy within g★ | Single deployment context | Athey et al. (2019) for binary ATE |
| 3. Global (CJE) | S1 + S2 across admissible g | Once total | Policies and environments in admissible set | "Calibrate once, evaluate many" | CJE (this work) |
Practical decision rule
- Try Regime 3. Run transport/invariance diagnostics for $f_k$. If they pass → use CJE as the mainline.
- If transport is shaky: Drop to Regime 2; re-calibrate in the target environment, then evaluate policies there using $\hat f_k^{g}$.
- If even local surrogacy is weak: Use Regime 1 (DR with $Y$ labels) until you can improve the surrogate set or collect more labels.
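The decision rule above can be encoded as a small triage helper; both diagnostic inputs and both thresholds are illustrative assumptions, not part of the framework:

```python
def choose_regime(transport_pvalue, local_calibration_r2,
                  p_threshold=0.05, r2_threshold=0.5):
    """Return 1, 2, or 3: the surrogacy regime the diagnostics support."""
    if transport_pvalue >= p_threshold:       # invariance not rejected -> transport OK
        return 3                              # global CJE: calibrate once, evaluate many
    if local_calibration_r2 >= r2_threshold:  # f_k still predicts Y within this environment
        return 2                              # local: re-calibrate per environment
    return 1                                  # fall back to DR with fresh Y labels (§4.6)
```

In practice the thresholds should come from your own power analysis and held-out calibration targets.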
Where Athey–Chetty–Imbens–Kang (2019) fits
ACIK estimate binary-treatment ATEs using short-run surrogates under Prentice surrogacy ($Y \perp T \mid S, X$). This is weaker than our S1 and targets a different estimand: effects of a binary treatment on $Y$, not policy value over rich action spaces. They also allow for, and bound, surrogacy violations. Use ACIK-style methods when your target is a binary ATE and you can collect $Y$ in each evaluation context; otherwise prefer Regime 2 or 3.
See [8] for details.
2.6. The Causal Requirement: Mediation vs. Correlation under Optimization
The surrogacy regimes (1-3) address evaluation—estimating $V(\pi)$ for a fixed policy $\pi$. They rely on the Prentice criteria (Prentice, 1989), which define surrogacy based on statistical sufficiency: $\mathbb{E}[Y \mid X, A, S] = \mathbb{E}[Y \mid X, S]$. This ensures the surrogate $S$ is predictive of the outcome $Y$, enabling unbiased estimation.
However, Prentice sufficiency is insufficient for optimization (e.g., RLHF, Best-of-N sampling), where the surrogate becomes the target and the policy is actively modified to maximize it.
| Use Case | Goal | Surrogate Role | Requirement |
|---|---|---|---|
| Evaluation (Regimes 1-3) | Estimate $V(\pi)$ for fixed $\pi$ | S predicts Y* (passive measurement) | Prentice sufficiency (correlation / predictive validity) |
| Optimization (RLHF, BoN) | Improve $\pi$ to maximize Y* via S | S guides optimization (active control) | Causal mediation (optimization flows through welfare) |
2.6.1. Regime 4: Optimization
We now formally introduce Regime 4: Optimization, where the surrogate is no longer a passive measurement instrument but an active control signal for policy improvement.
Definition (Optimization Regime)
Given a surrogate $S$, a policy family $\Pi$, and a welfare outcome $Y^*$, the optimization problem is:
$$\hat\pi = \arg\max_{\pi \in \Pi}\; \mathbb{E}_\pi[S], \qquad \text{with the hope that } \mathbb{E}_{\hat\pi}[Y^*] \text{ is thereby maximized.}$$
That is, we seek to maximize $Y^*$ by optimizing against the surrogate $S$.
Requirement: Causal Mediation. For this optimization to be safe (i.e., for increases in $S$ to reliably correspond to increases in $Y^*$), the surrogate must satisfy Causal Mediation (Frangakis & Rubin, 2002). Formally, this requires that the causal effect of $\pi$ on $Y^*$ flows through $S$:
$$\pi \longrightarrow S \longrightarrow Y^*, \quad \text{with no unblocked path from } \pi \text{ to } Y^* \text{ bypassing } S.$$
This is a stronger condition than Prentice sufficiency ($Y \perp A \mid S, X$). Causal mediation requires blocking side channels—alternative causal paths from $\pi$ to $Y^*$ that do not pass through $S$ (e.g., length, tone).
Failure Mode: Dissociative Effects. When causal mediation is violated, optimization exploits Dissociative Effects (F&R terminology)—changes to $Y^*$ that are not mediated by $S$. This is precisely the mechanism underlying the Surrogate Paradox and reward hacking.
The Surrogate Paradox
When optimization pressure is applied, models exploit any correlation that increases the surrogate, even if it harms the outcome. This is the Surrogate Paradox (illustrated by the CAST study in medicine: anti-arrhythmic drugs suppressed irregular heartbeats but increased mortality). In AI, this manifests as reward hacking—verbosity bias, sycophancy, or confident hallucination (Gao et al., 2022). Gao et al. further demonstrate that this is a scaling phenomenon: the divergence between and follows a predictable parabolic curve as optimization pressure increases.
Metrics for Optimization Robustness. To quantify the safety of optimization in Regime 4, CIMO will introduce two new metrics (formalized in an upcoming technical post):
- Goodhart Point (GHP): The level of optimization pressure (e.g., KL divergence, Best-of-N sample count) at which the gold reward peaks and begins to crash. A higher GHP indicates greater optimization robustness.
- Optimization Gap (OG): The divergence between the surrogate $\mathbb{E}[S]$ and the gold outcome $\mathbb{E}[Y^*]$ under optimization pressure. A smaller gap indicates the surrogate remains aligned with welfare even when actively optimized.
These metrics extend CJE's static validation framework to dynamic stress testing, enabling practitioners to measure whether a judge remains valid when used as an optimization target.
Topology Enforcement in Practice. In the CIMO stack, Y*-Alignment and the Standard Deliberation Protocol (SDP) are the mechanisms that enforce this causal topology. By requiring the judge ($S$) to evaluate the process of welfare generation (via SDP), we block the "side channels" (e.g., length, tone) that allow the model to increase $S$ without increasing $Y^*$.
Summary
CJE uses Prentice sufficiency for estimation (Regimes 1-3). Regime 4: Optimization requires Causal Mediation, which the broader CIMO stack strengthens through Y*-Alignment and SDP. Robustness is quantified by the Goodhart Point (GHP) and Optimization Gap (OG).
For a detailed explanation of how SDP strengthens mediation through side-channel cost elevation, see The Surrogate Paradox.
3. Identification
Let $R(X, A) := f_k(S_k, X)$ be the calibrated reward on the IDO scale.
Proposition 1 (Direct identification)
Under S1 (and S2 + L1–L2 if $f_k$ is learned out-of-domain),
$$V(\pi) = \mathbb{E}_{x \sim P_X,\, a \sim \pi(\cdot \mid x)}\big[f_k(S_k, x)\big].$$
Proof sketch. By S1, $\mathbb{E}[Y \mid X, A] = \mathbb{E}\big[f_k(S_k, X) \mid X, A\big]$. Take expectations over $x \sim P_X$, $a \sim \pi$. See §2.5 for local vs. global surrogacy regimes.
Proposition 2 (IPS identification)
Under S1, S3 (and S2 + L1–L2 if learned out-of-domain), from logs $(x, a, S_k) \sim \pi_0$,
$$V(\pi) = \mathbb{E}_{\pi_0}\!\left[\frac{\pi(a \mid x)}{\pi_0(a \mid x)}\, f_k(S_k, x)\right].$$
Proposition 3 (DR identification)
Under S1, S3 (and S2 + L1–L2 if learned out-of-domain), let $q(x, a)$ be any outcome model ("critic"). Then
$$V(\pi) = \mathbb{E}_{\pi_0}\!\left[\mathbb{E}_{a' \sim \pi(\cdot \mid x)}\big[q(x, a')\big] + w(x, a)\big(f_k(S_k, x) - q(x, a)\big)\right],$$
where $w(x, a) = \pi(a \mid x) / \pi_0(a \mid x)$.
This holds if either the weights $w$ or the critic $q$ is correctly specified, even when the other is misspecified (doubly robust).
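A quick simulation illustrates the double-robustness claim in Proposition 3; the toy reward standing in for $f_k(S_k, X)$, the two policies, and the deliberately zeroed critic are all assumptions of the sketch:

```python
import numpy as np

# Toy check of Proposition 3: with correct importance weights, a deliberately
# wrong critic (q = 0) still leaves the DR estimate consistent.
rng = np.random.default_rng(0)
n = 200_000
x = rng.normal(size=n)

p1_target = np.where(x > 0, 0.8, 0.2)       # target pi: P(a = 1 | x)
a = (rng.random(n) < 0.5).astype(float)     # logging pi0: uniform over {0, 1}
reward = a * x + 0.5                        # stands in for f_k(S_k, X)

p_a = np.where(a == 1, p1_target, 1 - p1_target)
w = p_a / 0.5                               # correct importance weights

q_logged = np.zeros(n)                      # misspecified critic: q = 0
q_pi = np.zeros(n)                          # its average under pi is also 0

v_dr = np.mean(q_pi + w * (reward - q_logged))
v_true = np.mean(p1_target * x + 0.5)       # E_pi[reward] on the same prompts
```

With the critic zeroed out, the DR estimate reduces to IPS and still lands on the truth; the symmetric check (wrong weights, correct critic) works the same way.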
3.5. Transport formulas (cross-environment evaluation)
When evaluating in a target environment that differs from the calibration source, Pearl & Bareinboim's transport framework [7] tells us exactly which target quantities to measure. Below are the three common deployment scenarios:
Case A: Covariate shift only (selection into X)
Scenario: Prompt distribution changes (new user population, different time period), but judge mechanism and oracle meaning are invariant.
Transport formula:
$$V_{g'}(\pi) = \mathbb{E}_{x \sim P_X^{g'},\, a \sim \pi}\big[f_k(S_k, x)\big].$$
What you need in target: $P_X^{g'}$ (ability to draw prompts from the target population). Can keep $f_k$ and $\hat q$ trained on source data.
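A minimal Case A sketch, assuming an invariant judge channel and a hypothetical calibrator `f_k`: only the prompt distribution changes, so we average the source-calibrated rewards over target prompts:

```python
import numpy as np

def f_k(s, x):                    # hypothetical source-calibrated reward
    return 1.0 / (1.0 + np.exp(-(2.0 * s - x)))

def judge_score(x, rng):          # invariant judge channel S_k given (x, a ~ pi)
    return 0.5 * x + rng.normal(scale=0.1, size=x.shape)

rng = np.random.default_rng(1)
x_target = rng.normal(loc=1.0, size=50_000)   # shifted prompt population P_X^{g'}
s_target = judge_score(x_target, rng)
v_target = f_k(s_target, x_target).mean()     # transported value estimate
```

Nothing is refit: the same `f_k` is evaluated on freshly drawn target prompts.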
Case B: Judge/measurement shift (selection into S(k))
Scenario: Judge model changes (GPT-4.1-nano → GPT-4.5-nano), instrumentation updates, or deliberation depth increases, but prompt distribution and oracle meaning are invariant.
Transport formula:
$$V_{g'}(\pi) = \mathbb{E}_{x \sim P_X}\, \mathbb{E}_{a \sim \pi}\, \mathbb{E}_{s \sim P^{g'}(S_k \mid x, a)}\big[f_k(s, x)\big].$$
What you need in target: $P^{g'}(S_k \mid x, a)$ (the new judge channel). Can keep $f_k$ if S-admissibility holds (no selection into $Y$). If prompts also shift, replace $P_X$ by $P_X^{g'}$ in the outer expectation (i.e., use Case C).
Case C: Covariate + judge shift (selection into X and S(k))
Scenario: Both prompt distribution and judge mechanism change (e.g., deploying to new geography with different user base and updated judge model).
Transport formula:
$$V_{g'}(\pi) = \mathbb{E}_{x \sim P_X^{g'}}\, \mathbb{E}_{a \sim \pi}\, \mathbb{E}_{s \sim P^{g'}(S_k \mid x, a)}\big[f_k(s, x)\big].$$
What you need in target: Both $P_X^{g'}$ and the new judge channel $P^{g'}(S_k \mid x, a)$.
When transport fails: Selection into Y*
If selection points into $Y^*$ (oracle meaning changed—e.g., safety standards shifted, evaluation criteria evolved), S-admissibility is violated and $f_k$ does not transport. You must recalibrate with new oracle labels in the target environment, or adopt the Kallus & Mao estimator (§4.6) that targets $Y$ directly per context without assuming transport.
| Selection node location | f_k transports? | Required target measurements | Source pieces you keep |
|---|---|---|---|
| X only | ✓ | Target prompt distribution $P_X^{g'}$ | $f_k$, judge channel, $\hat q$ |
| S(k) only | ✓ | New judge channel $P^{g'}(S_k \mid x, a)$ | $f_k$, $P_X$ |
| X and S(k) | ✓ | Both $P_X^{g'}$ and the new judge channel | $f_k$ |
| Y* | ✗ | New oracle labels to recalibrate | — |
4. Estimators
Let $\mathcal{O}$ index the examples with expensive oracle labels $Y$ (at the top rung one can afford); the others have only $S_k$.
4.1. Calibrator
Estimate $\hat f_k$ on the oracle slice $\mathcal{O}$ by:
- Monotone (isotonic): $\hat f_k$ nondecreasing in $S_k$ and mean-preserving on the oracle slice.
- Two-stage: Fit $\hat m(s, x)$ (e.g., spline in $s$), then isotonic projection with mean preservation.
Note on mean preservation: Mean preservation holds on the calibration slice; after transport to new domains/policies, the mean can differ unless S2 (transport) and L1–L2 (oracle MAR/positivity) hold. Use the transport test (§6) to validate.
Use K-fold cross-fitting: train on folds $\ne j$, predict on fold $j$, to obtain out-of-fold $\hat f_k^{(-j)}$.
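A minimal sketch of the cross-fitted isotonic calibrator, using a hand-rolled pool-adjacent-violators routine on a one-dimensional score (a production version would also condition on $X$, as in the two-stage recipe); the synthetic data is illustrative:

```python
import numpy as np

def pava(y_sorted):
    """Nondecreasing fit via pool-adjacent-violators."""
    vals, wts, cnts = [], [], []
    for v in y_sorted:
        vals.append(float(v)); wts.append(1.0); cnts.append(1)
        while len(vals) > 1 and vals[-2] > vals[-1]:   # merge violating blocks
            v2, w2, c2 = vals.pop(), wts.pop(), cnts.pop()
            vals[-1] = (vals[-1] * wts[-1] + v2 * w2) / (wts[-1] + w2)
            wts[-1] += w2
            cnts[-1] += c2
    return np.repeat(vals, cnts)

def fit_isotonic(s, y):
    order = np.argsort(s)
    return s[order], pava(y[order])                    # sorted scores + fitted values

def predict_isotonic(s_sorted, fitted, s_new):
    idx = np.clip(np.searchsorted(s_sorted, s_new), 0, len(fitted) - 1)
    return fitted[idx]                                 # step-function prediction

def crossfit_calibrate(s, y, n_folds=5, seed=0):
    folds = np.random.default_rng(seed).integers(0, n_folds, size=len(s))
    f_hat = np.empty(len(s))
    for j in range(n_folds):                           # train on folds != j, predict fold j
        s_sorted, fitted = fit_isotonic(s[folds != j], y[folds != j])
        f_hat[folds == j] = predict_isotonic(s_sorted, fitted, s[folds == j])
    return f_hat

rng = np.random.default_rng(0)
s = rng.uniform(size=5_000)
y = (rng.random(5_000) < s ** 2).astype(float)         # true E[Y | S] = S^2, monotone
f_hat = crossfit_calibrate(s, y)
```

The fitted values are mean-preserving on each training fold by construction, so the out-of-fold mean tracks the label mean closely.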
4.2. Direct (fresh draws)
With $n$ prompts $x_i$ and fresh draws $a_i \sim \pi(\cdot \mid x_i)$ scored by the judge,
$$\hat V_{\mathrm{Direct}}(\pi) = \frac{1}{n} \sum_{i=1}^{n} \hat f_k(S_{k,i}, x_i).$$
4.3. IPS (logs only)
$$\hat V_{\mathrm{IPS}}(\pi) = \frac{1}{n} \sum_{i=1}^{n} \hat w_i\, \hat f_k(S_{k,i}, x_i), \qquad \hat w_i = \frac{\pi(a_i \mid x_i)}{\pi_0(a_i \mid x_i)}.$$
4.4. DR (logs + critic ± fresh draws)
Fit $\hat q(x, a)$ via cross-fitting. If fresh draws from $\pi$ are available, approximate $\hat q_\pi(x) = \mathbb{E}_{a' \sim \pi}[\hat q(x, a')]$ by Monte Carlo. Then
$$\hat V_{\mathrm{DR}}(\pi) = \frac{1}{n} \sum_{i=1}^{n} \Big[\hat q_\pi(x_i) + \hat w_i \big(\hat f_k(S_{k,i}, x_i) - \hat q(x_i, a_i)\big)\Big].$$
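The DR estimator, together with its IF-based standard error (§5.2), fits in a short function; the input names are assumptions of the sketch, and all inputs are taken to be out-of-fold:

```python
import numpy as np

def v_dr(f_hat, w, q_logged, q_pi):
    """DR value estimate with an influence-function standard error.

    f_hat:    calibrated rewards f_k(S_k, X) on the logged actions.
    w:        importance weights pi / pi0.
    q_logged: critic evaluated at the logged (x, a).
    q_pi:     Monte Carlo average of the critic under pi at each x.
    """
    f_hat, w, q_logged, q_pi = map(np.asarray, (f_hat, w, q_logged, q_pi))
    psi = q_pi + w * (f_hat - q_logged)        # per-example DR score
    se = psi.std(ddof=1) / np.sqrt(len(psi))   # IF-based standard error (§5.2)
    return psi.mean(), se
```

With unit weights and a critic that matches its own policy average, the estimator reduces to the Direct mode's sample mean, which is a useful sanity check.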
4.5. Weight stabilization (optional, off-policy)
Project raw weights to a mean-one, score-indexed monotone cone (SIM-style calibration) to boost ESS. This is a bias–variance tradeoff: stabilized weights can introduce small bias unless they converge to the true importance ratio. Use weight stabilization inside DR estimators (where outcome models guard against modest weight misspecification), and report diagnostics (ESS, tails).
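A simplified stand-in for weight stabilization, assuming tail clipping plus mean-one renormalization rather than the full score-indexed monotone projection (the cruder cousin of SIM-style calibration):

```python
import numpy as np

def stabilize_weights(w_raw, clip_quantile=0.99):
    """Clip the raw weight tail, then renormalize to mean one."""
    w = np.asarray(w_raw, dtype=float)
    cap = np.quantile(w, clip_quantile)   # tail cap; the quantile is an assumption
    w = np.minimum(w, cap)
    return w / w.mean()                   # mean-one keeps the estimator on scale

def ess(w):
    """Effective sample size (sum w)^2 / sum w^2."""
    w = np.asarray(w, dtype=float)
    return w.sum() ** 2 / (w ** 2).sum()
```

As the text notes, this buys ESS at the cost of a small bias, so it belongs inside DR estimators rather than plain IPS.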
4.6. Regime 1: Kallus–Mao estimator (no S1)
If diagnostics suggest surrogacy is unreliable, estimate effects on $Y$ directly using a doubly-robust Kallus–Mao estimator that treats $S$ as auxiliary information (no sufficiency assumed). You'll need a MAR-sampled set of $Y$ labels in the evaluation context; cross-fit nuisances (treatment propensity $\hat e$, labeling propensity $\hat r$, surrogate-conditional outcome model $\hat m_t(s, x)$ and its projection $\hat\mu_t(x)$); then compute:
$$\hat\tau = \frac{1}{n} \sum_{i=1}^{n} \left[\hat\mu_1(x_i) - \hat\mu_0(x_i) + \frac{2T_i - 1}{\hat e_{T_i}(x_i)} \left(\hat m_{T_i}(s_i, x_i) - \hat\mu_{T_i}(x_i) + \frac{R_i}{\hat r(s_i, x_i, T_i)} \big(y_i - \hat m_{T_i}(s_i, x_i)\big)\right)\right],$$
with $\hat e_1 = \hat e$ and $\hat e_0 = 1 - \hat e$.
Report IF-based SEs with cross-fitting. [1]
5. Influence functions and inference
Assume pathwise differentiability and regularity (bounded moments, entropy conditions satisfied via cross-fitting).
5.1. Efficient influence function (EIF) for V(π)
Under S1 and known $f_k$, the plug-in influence function is
$$\psi_0(x, a, s) = w(x, a)\, f_k(s, x) - V(\pi).$$
With DR structure and nuisances $(\hat q, \hat w)$,
$$\psi(x, a, s) = \hat q_\pi(x) + \hat w(x, a)\big(f_k(s, x) - \hat q(x, a)\big) - V(\pi),$$
which is Neyman-orthogonal to first-order perturbations of $(\hat q, \hat w)$ holding $f_k$ fixed. Uncertainty from learning $\hat f_k$ on the oracle slice is added separately via OUA (§5.3). If desired, one can treat $f_k$ as a nuisance and cross-fit it jointly to achieve formal orthogonality. We separate it and account for its uncertainty via OUA for transparency and modularity.
5.2. Asymptotics and SEs
With K-fold cross-fitting,
$$\sqrt{n}\,\big(\hat V(\pi) - V(\pi)\big) \rightsquigarrow \mathcal{N}\big(0, \operatorname{Var}[\psi]\big).$$
Estimate the variance with the empirical variance of the estimated influence values $\hat\psi_i$ (cluster-robust if needed).
5.3. Oracle-uncertainty aware (OUA) variance
If $\hat f_k$ is learned from a finite oracle slice, add a delete-one-fold jackknife over oracle folds:
$$\hat\sigma^2_{\mathrm{OUA}} = \frac{K - 1}{K} \sum_{j=1}^{K} \big(\hat V^{(-j)} - \bar V\big)^2, \qquad \bar V = \frac{1}{K} \sum_{j=1}^{K} \hat V^{(-j)}.$$
Total variance: $\hat\sigma^2_{\mathrm{total}} = \hat\sigma^2_{\mathrm{IF}} + \hat\sigma^2_{\mathrm{OUA}}$. Use Satterthwaite df for small-sample t-intervals if desired.
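The jackknife itself is a few lines; `fold_estimates` holds the re-estimated $\hat V^{(-j)}$ after refitting the calibrator without oracle fold $j$ (that refitting loop is assumed to exist upstream):

```python
import numpy as np

def oua_variance(fold_estimates):
    """Delete-one-fold jackknife variance over oracle folds."""
    v = np.asarray(fold_estimates, dtype=float)   # V-hat with fold j deleted
    K = len(v)
    return (K - 1) / K * float(((v - v.mean()) ** 2).sum())

def total_se(if_variance, fold_estimates):
    """Combine IF variance with OUA variance into one standard error."""
    return float(np.sqrt(if_variance + oua_variance(fold_estimates)))
```

If the calibrator were known exactly, all fold estimates would coincide and the OUA term would vanish.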
5.4. Relationship to Conformal Prediction
Conformal Prediction (CP) (Vovk et al., 2005; Angelopoulos & Bates, 2021) provides distribution-free, finite-sample coverage guarantees for prediction intervals (uncertainty about a future observation). OUA addresses a different problem: inference on a population parameter (the policy value $V(\pi)$).
CP guarantees coverage for individual outcomes assuming the calibration function is fixed.
OUA quantifies the epistemic uncertainty of having learned $\hat f_k$ from a finite oracle slice.
CJE requires OUA because we need valid confidence intervals on $V(\pi)$, which necessitates propagating the uncertainty of the learned calibrator itself, not just the prediction uncertainty of individual outcomes.
When to use each: Use CP when you need coverage for individual predictions (e.g., "What is the range of $Y^*$ for this specific user?"). Use OUA when you need inference on aggregate quantities (e.g., "What is the expected policy value across all users, with honest uncertainty?").
6. Testable diagnostics (falsifiable implications)
- Transport test (policy/time). Per-group residual mean test:
$$H_0: \mathbb{E}\big[Y - \hat f_k(S_k, X) \,\big|\, G = g\big] = 0 \text{ for all } g,$$
where $g$ indexes groups (policies, time periods, domains). Use the labeled subset; apply a multiple-testing correction (e.g., Bonferroni). This is a weaker, testable implication of S-admissibility—if you lack labels in multiple domains, you can only partially test S2.
- Coverage of surrogate support. Compare histograms of $S_k$ on oracle-labeled vs. full sets; flag extrapolation if tails are unlabeled.
- Overlap diagnostics (off-policy). Effective sample size $\mathrm{ESS} = (\sum_i w_i)^2 / \sum_i w_i^2$, weight CV, max/median ratio, Hill tail index.
- OUA share. Report $\hat\sigma^2_{\mathrm{OUA}} / \hat\sigma^2_{\mathrm{total}}$ to guide budget (more labels vs. more prompts).
- Prentice test (surrogacy sufficiency / S1). On oracle-labeled subsets, regress $Y$ on $(S_k, X)$ and test whether adding $A$ (and policy identity) improves fit. Failing to reject supports S1 (surrogacy sufficiency). For S-admissibility (S2, cross-domain), use a domain indicator $G$ and test on pooled labeled data across domains: does $G$ (and $G \times S_k$) improve prediction? If yes, $f_k$ does not transport—recalibrate or use the K&M estimator (§4.6).
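The per-group transport test can be sketched as a Bonferroni-corrected z-test on calibrated residuals, using only the stdlib normal tail via `erfc`; the group labels and data are illustrative:

```python
import numpy as np
from math import erfc, sqrt

def transport_test(residuals, groups, alpha=0.05):
    """Two-sided z-test that residuals Y - f_hat have mean zero in each group."""
    residuals = np.asarray(residuals, dtype=float)
    groups = np.asarray(groups)
    labels = np.unique(groups)
    pvals = {}
    for g in labels:
        r = residuals[groups == g]
        z = r.mean() / (r.std(ddof=1) / sqrt(len(r)))
        pvals[g] = erfc(abs(z) / sqrt(2.0))           # two-sided normal p-value
    reject = {g: p < alpha / len(labels) for g, p in pvals.items()}  # Bonferroni
    return pvals, reject
```

A rejected group is a concrete signal to drop to Regime 2 for that environment.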
7. Learning with the IDO objective
For parametric $\pi_\theta$, the policy learning problem is
$$\max_{\theta}\; V(\pi_\theta) = \mathbb{E}_{x \sim P_X,\, a \sim \pi_\theta}\big[f_k(S_k, x)\big].$$
A plug-in gradient follows from the policy gradient identity with calibrated rewards:
$$\nabla_\theta V(\pi_\theta) = \mathbb{E}\big[\hat f_k(S_k, x)\, \nabla_\theta \log \pi_\theta(a \mid x)\big],$$
optionally replacing $\hat f_k$ by an advantage $\hat f_k(S_k, x) - b(x)$. This "RL with calibrated reward" aligns training with the IDO.
For safe deployment, maximize a lower confidence bound $\hat V(\pi_\theta) - z_{1-\alpha}\, \hat\sigma_{\mathrm{total}}$.
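The lower-confidence-bound deployment rule is worth writing down explicitly; the normal critical value comes from the stdlib `statistics.NormalDist`, and the inputs are hypothetical:

```python
import numpy as np
from statistics import NormalDist

def pick_policy(estimates, ses, alpha=0.05):
    """Deploy the policy with the best one-sided lower confidence bound."""
    z = NormalDist().inv_cdf(1 - alpha)
    lcb = np.asarray(estimates, dtype=float) - z * np.asarray(ses, dtype=float)
    return int(np.argmax(lcb)), lcb
```

A noisier candidate with a slightly higher point estimate can lose to a well-measured one, which is exactly the intended safety margin.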
8. Multiple stakeholders and social choice
Let $j = 1, \dots, m$ index stakeholders with oracles $Y^{*(j)}$. A social aggregator $W$ defines
$$Y^* = W\big(Y^{*(1)}, \dots, Y^{*(m)}\big).$$
Common choices: weighted utilitarian ($W = \sum_j \lambda_j Y^{*(j)}$), max-min ($W = \min_j Y^{*(j)}$), or constrained variants. Surrogacy extends with per-stakeholder calibrations $f_k^{(j)}$; calibrate each and plug into $W$.
9. The deliberation ladder as information order
Model rungs by a filtration $\mathcal{F}_1 \subseteq \cdots \subseteq \mathcal{F}_K$. Define
$$Y_k := \mathbb{E}\big[Y^* \mid \mathcal{F}_k\big].$$
Then by Blackwell/Doob ordering, $j \le k$ implies $\mathbb{E}\big[(Y_k - Y^*)^2\big] \le \mathbb{E}\big[(Y_j - Y^*)^2\big]$. If $S_k$ is Blackwell more informative than $S_j$, a calibrated estimator at rung $k$ is (weakly) more efficient than at rung $j$.
10. Extension to trajectories (agents)
Let $\tau = (x_1, a_1, \dots, x_T, a_T)$ be a trajectory generated by policy $\pi$ and environment dynamics. Define an IDO trajectory value
$$V^*(\pi) = \mathbb{E}_{\tau \sim \pi}\big[Y^*(\tau)\big].$$
Surrogates may be terminal ($S(\tau)$) or stepwise ($S_t(x_t, a_t)$). Direct/IPS/DR estimators extend with clustering by trajectory; sequential IPS is typically ill-conditioned, so prefer Direct or DR with trajectory-level critics.
11. Limits (scope conditions)
- Non-regular targets. If $W$ or $\rho$ induces non-differentiable functionals (e.g., maxima, boundary problems), first-order theory fails; use selective/subsampling or shape-constrained methods.
- Severe non-transport. If S2 fails (e.g., adversarial policy styles), drop to Regime 2 or 1 (§2.5): recalibrate locally per environment, or use K&M estimation with new oracle labels.
- Overlap failures. If S3 fails, IPS/DR is unreliable even with stabilized weights; collect fresh draws and use Direct.
12. Minimal "assumptions ledger" (for every deployment)
| Code | Statement | Used by | Test / Diagnostic | Mitigation |
|---|---|---|---|---|
| A0 | $Y$ aligned with $Y^*$ (Bridge Assumption) | All Layers | BVP (Pillar 1: PTE; Pillar 2: Audits; Pillar 3: Stability) | SDP-Gov: SDP Patching and Governance |
| S1 | $\mathbb{E}[Y \mid X, A, S_k] = f_k(S_k, X)$ on the relevant support | All | Incremental signal; residual vs. $f_k$ | Add covariates; richer judge; higher rung |
| S2 | $Y \perp \mathrm{Sel} \mid (S_k, X)$ (S-admissibility); $f_k$ transports when no selection nodes (Sel) point into $Y$ | All (cross-environment) | Per-group residual test (§6); cross-domain Prentice test with $G$ indicator; diagram review (§3.5) | If selection into $X$ or $S_k$: measure target distributions (§3.5 table). If selection into $Y^*$: recalibrate with target oracle labels |
| S3 | $\pi(a \mid x) > 0 \Rightarrow \pi_0(a \mid x) > 0$ (overlap) | IPS/DR | ESS, tail index, max/median | Weight stabilization; collect draws |
| A1–A3 | IDO well-posed | All | Rung stability checks | Clarify oracle definition; adjust $W$ |
| L1 | $R \perp Y \mid (S_k, X)$ (Oracle MAR) | All (calibration) | Oracle selection independent of residuals | Randomize oracle sampling; stratify by $S, X$ |
| L2 | $P(R = 1 \mid S_k, X) > 0$ (Oracle positivity) | All (calibration) | Coverage plots; extrapolation warnings | Label tail regions; flag OOD predictions |
| OUA | Finite oracle labels | Inference | OUA share | Add labels if OUA dominates |
| N | Strictly increasing normalization to [0, 1]; anchored to $(\pi_{\text{low}}, \pi_{\text{high}})$ (or specified benchmarks) | All (comparability & reporting) | Anchor stability check across releases; report raw $F$ and anchored $Y^*$ when anchors change | Re-anchor or freeze anchors; append change log when re-anchoring |
13. What you report (template)
For each $\pi$:
- $\hat V(\pi)$ on the IDO scale with a 95% CI (main + OUA), and the df rule used.
- Diagnostics: transport test p-values, ESS (if OPE/DR), OUA share, oracle coverage plots.
- If choosing a policy: a decision on $\hat V(\pi_1) - \hat V(\pi_0)$ with a one-sided CI (safety margin).
Summary
- Definition: $V(\pi) = \mathbb{E}_{x \sim P_X,\, a \sim \pi}\big[Y^*(x, a)\big]$
- Mechanism: use surrogates $S_k$ and a calibration $f_k$ so that $\mathbb{E}[Y \mid X, A, S_k] = f_k(S_k, X)$
- Identification: Direct (fresh draws), IPS (reweight logs), DR (two chances)
- Uncertainty: influence-function variance + oracle-learning variance (OUA)
- Governance: a multi-party aggregator $W$ encodes whose IDO matters and how
This turns "AI should do what you'd do with unlimited time" into a measurable target, with estimators, CIs, and failure tests you can run.
Citation
If you use this work, please cite:
BibTeX
@misc{landesberg2025surrogacy,
author = {Landesberg, Eddie},
title = {AI Quality and Surrogacy: Technical Appendix},
year = {2025},
month = {November},
url = {https://cimolabs.com/research/ai-quality-surrogacy-technical},
note = {CIMO Labs Technical Report}
}

Plain text
Landesberg, E. (2025). AI Quality and Surrogacy: Technical Appendix. CIMO Labs Technical Report. https://cimolabs.com/research/ai-quality-surrogacy-technical
