feat: add `conditional_pt` parameter to survey DGP #295
Conversation
…imulation

Adds a `conditional_pt` parameter to `generate_survey_did_data()` that creates X-dependent time trends, violating unconditional parallel trends while preserving conditional PT. When nonzero, treated units' `x1` is drawn from N(1, 1) instead of N(0, 1), and the outcome includes a `conditional_pt * x1 * (t/T)` term. This unblocks simulation scenario 4 for the survey variance paper: DR/IPW with covariates recovers the truth, while no-covariate estimators are biased.

Also adds `paper/` to `.gitignore` for local manuscript files and marks the conditional PT DGP gap as resolved in the survey roadmap.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
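The mechanism can be sketched in a few lines. This is a toy stand-in, not the real `generate_survey_did_data()`: the function name `sketch_conditional_pt` and its internals are hypothetical, and only the `x1` shift and the X-dependent trend term from the description above are modeled.

```python
import numpy as np

def sketch_conditional_pt(n_units=200, n_periods=4, conditional_pt=0.5, seed=0):
    """Toy sketch of the conditional-PT mechanism (hypothetical stand-in for
    generate_survey_did_data): treated units' x1 shifts to N(1, 1) and the
    outcome gains an X-dependent trend conditional_pt * x1 * (t / T)."""
    rng = np.random.default_rng(seed)
    ever_treated = rng.random(n_units) < 0.5
    # x1 ~ N(1, 1) for ever-treated units, N(0, 1) for controls,
    # but only when conditional_pt is active
    shift = 1.0 if conditional_pt != 0 else 0.0
    x1 = rng.normal(loc=np.where(ever_treated, shift, 0.0), scale=1.0)
    t = np.arange(1, n_periods + 1)
    # Trend depends on x1, so group-mean trends diverge unconditionally
    # (treated units have higher mean x1), but PT holds conditional on x1
    trend = conditional_pt * x1[:, None] * (t[None, :] / n_periods)
    return ever_treated, x1, trend
```

Because treated units have a higher mean `x1`, treated and control group-mean trends diverge unconditionally, while two units with the same `x1` share the same trend regardless of treatment — which is exactly the property scenario 4 needs.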
/ai-review

🔁 AI review rerun (requested by @igerber) Head SHA:

Reviewer note: I could not execute the test suite in this sandbox because the installed Python environment lacks
P1: Add `conditional_pt` construction and the `icc` approximation caveat to the Survey DGP section of REGISTRY.md.

P2: Add tests for the `informative_sampling` + `conditional_pt` interaction and for `dgp_truth` diagnostics with `conditional_pt` + `icc`.

P3: Fix the stale roadmap parameter count (8 -> 9) and the scenario 4 text; update the `return_true_population_att` docstring to include the `conditional_pt_active` key.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
/ai-review

🔁 AI review rerun (requested by @igerber) Head SHA:
The x1 mean shift applies to all ever-treated units. Without a never-treated group, treated and control units share the same x1 distribution, so unconditional PT is not violated and the `conditional_pt` contract is broken. Now raises `ValueError` with an explanation.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
/ai-review

🔁 AI review rerun (requested by @igerber) Head SHA:
Small `never_treated_frac` values (e.g., 0.01 with `n_units=50`) floor to zero never-treated units via `int()`, silently breaking the `conditional_pt` contract. Now checks `int(n_units * never_treated_frac) >= 1` and reports the realized count in the error message.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
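The flooring failure mode is plain arithmetic and easy to see in isolation:

```python
# int() truncates toward zero, so a small fraction silently yields
# zero never-treated units:
n_units = 50
never_treated_frac = 0.01
n_never_treated = int(n_units * never_treated_frac)  # int(0.5) == 0
print(n_never_treated)  # 0 -- no never-treated group, despite a nonzero fraction
```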
/ai-review

🔁 AI review rerun (requested by @igerber) Head SHA:
The guard checked for zero never-treated units but not zero treated units. With `never_treated_frac=1.0`, all units are never-treated and the x1 shift never fires. Now requires both groups to be present.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
/ai-review

🔁 AI review rerun (requested by @igerber) Head SHA:
The unconditional-PT-fails and conditional-PT-holds tests only ran in panel mode. Adds cross-section versions using regression-based checks on period-level data (since units don't persist across periods in repeated cross-sections). Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
/ai-review

🔁 AI review rerun (requested by @igerber) Head SHA:

Code Quality: No findings. Performance: No findings. Maintainability: No findings. Tech Debt: No findings. There is no mitigating entry under TODO.md:L51 for the open repeated-cross-section coverage gap, and this class of P1 would not be downgraded by TODO tracking anyway. Security: No findings.
The prior cross-section tests checked period-2 level gaps, which could pass from the `_beta1 * x1` level effect alone without the `conditional_pt` trend being active. Now uses a group-mean DID across periods 1-2 (which isolates the trend term, since level effects cancel) and a pooled DID regression with an `x1 * post` interaction (which validates that conditional PT holds).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
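The level-vs-trend distinction can be sketched with toy data (this is illustrative, not the actual test code): a time-constant effect like `_beta1 * x1` shifts both periods equally, so it drops out of the within-group difference, while a period-2-only trend term survives.

```python
import numpy as np

def group_mean_did(y, treated, period):
    """DID of group means across periods 1 and 2. Any time-constant level
    effect (e.g. a beta1 * x1 shift) appears in both periods and cancels
    in the within-group difference, isolating the trend term."""
    def gmean(g, t):
        return y[(treated == g) & (period == t)].mean()
    return (gmean(1, 2) - gmean(1, 1)) - (gmean(0, 2) - gmean(0, 1))

# Toy repeated cross-section: level effect 2.0 for treated in both periods,
# trend effect 1.0 for treated only in period 2.
treated = np.array([1, 1, 0, 0, 1, 1, 0, 0])
period = np.array([1, 1, 1, 1, 2, 2, 2, 2])
y = 2.0 * treated + 1.0 * treated * (period == 2)
```

Here `group_mean_did` recovers the 1.0 trend effect, whereas a period-2 level comparison alone would report 3.0 (level plus trend) and could "pass" even with the trend switched off.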
/ai-review

🔁 AI review rerun (requested by @igerber) Head SHA:

Overall Assessment: ✅ Looks good
Summary
- Adds a `conditional_pt` parameter to `generate_survey_did_data()` for simulating conditional parallel trends scenarios
- When nonzero, the outcome gains an X-dependent trend (`conditional_pt * x1 * t/T`), so unconditional PT fails but conditional PT holds after covariate adjustment
- Marks the conditional PT DGP gap as resolved in `docs/survey-roadmap.md`
- Adds `paper/` to `.gitignore` for local manuscript files

Test plan

- `TestSurveyDGPResearchGrade` all pass
- `test_prep.py` pass with 0 regressions
- `conditional_pt=0.0` produces identical output for fixed seeds

🤖 Generated with Claude Code
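The fixed-seed identity item in the test plan can be checked mechanically. A toy stand-in (the function `toy_dgp` is hypothetical, not the real DGP) shows the pattern: the inactive `conditional_pt` branch must not consume extra RNG draws, so output for a fixed seed is bit-identical to the pre-change code path.

```python
import numpy as np

def toy_dgp(seed, conditional_pt=0.0):
    """Hypothetical stand-in for generate_survey_did_data() illustrating the
    fixed-seed identity check: the conditional_pt branch draws no additional
    random numbers, so conditional_pt=0.0 reproduces the old output exactly."""
    rng = np.random.default_rng(seed)
    x1 = rng.normal(size=8)
    y = 1.0 + 0.5 * x1
    if conditional_pt != 0.0:
        y = y + conditional_pt * x1 * 0.5  # deterministic given x1: no rng call
    return y

# Fixed seed, parameter at its default: bit-identical arrays
assert np.array_equal(toy_dgp(123), toy_dgp(123, conditional_pt=0.0))
```

The key design point is that the new term is computed from already-drawn values; had the branch drawn fresh noise even when inactive, every downstream fixed-seed result would silently change.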