
feat: add conditional_pt parameter to survey DGP #295

Merged
igerber merged 7 commits into main from survey-paper on Apr 12, 2026

Conversation


@igerber igerber commented Apr 12, 2026

Summary

  • Adds conditional_pt parameter to generate_survey_did_data() for simulating conditional parallel trends scenarios
  • When nonzero, shifts treated units' x1 distribution by +1 SD and adds X-dependent time trends (conditional_pt * x1 * t/T), so unconditional PT fails but conditional PT holds after covariate adjustment
  • Adds 7 tests covering validation, distribution shift, unconditional PT failure, conditional PT preservation, backward compatibility, and panel/cross-section modes
  • Marks the conditional PT DGP gap as resolved in docs/survey-roadmap.md
  • Adds paper/ to .gitignore for local manuscript files

Test plan

  • 7 new tests in TestSurveyDGPResearchGrade all pass
  • 193 total tests in test_prep.py pass with 0 regressions
  • Backward compatibility: conditional_pt=0.0 produces identical output for fixed seeds
  • Multi-seed scenario verification confirms DR with covariates removes bias (RMSE ratio ~6x across 10 seeds)

🤖 Generated with Claude Code

@github-actions

Overall Assessment
⚠️ Needs changes. Highest unmitigated issue is a P1: the PR bundles an undocumented removal of the shipped dCDH reversible-treatment estimator surface into a survey-DGP feature PR.

Executive Summary

Methodology

Code Quality

Performance

  • No material findings. The reviewed conditional_pt additions are simple vector shifts and scalar additions; nothing in the diff suggests a meaningful hot-path regression.

Maintainability

  • Severity: P2. Impact: The PR mixes a small, reviewable survey-DGP enhancement with a wholesale removal of an unrelated estimator family, docs surface, and metadata. That makes the change difficult to audit and harder to bisect if regressions appear later (diff_diff/prep_dgp.py:L1163, docs/survey-roadmap.md:L167, docs/methodology/REGISTRY.md:L460). Concrete fix: keep this PR scoped to the survey DGP changes and move any estimator deprecation/removal into a separate PR.

Tech Debt

  • No separate deferrable finding. The dCDH removal is not tracked in TODO.md, so it cannot be downgraded as accepted deferred work (TODO.md:L51).

Security

  • No security findings in the reviewed diff.

Documentation/Tests

  • Severity: P2. Impact: The diff deletes the load-bearing dCDH validation suite and parity assets: the worked-example/cohort-recentering methodology tests, the DIDmultiplegtDYN golden-value parity tests, and the reversible-DGP generator tests. Even aside from the estimator deletion, that removes the regression net for known correctness contracts (tests/test_methodology_chaisemartin_dhaultfoeuille.py:L56, tests/test_methodology_chaisemartin_dhaultfoeuille.py:L226, tests/test_chaisemartin_dhaultfoeuille_parity.py:L1, tests/test_prep_dgp_reversible.py:L1). Concrete fix: restore those tests and committed golden fixtures/scripts alongside the dCDH surface.
  • Severity: P3. Impact: CITATION.cff is being reverted from current 3.0.1 / 2026-04-07 metadata to 2.7.1 / 2026-03-18, which does not match current package/changelog state and appears unrelated to the survey-DGP feature (CITATION.cff:L9, CHANGELOG.md:L25). Concrete fix: keep the current citation metadata unless this PR is intentionally part of a coordinated release rollback.
  • Verification note: I could not run the project tests in this environment because pytest and numpy are not installed; assessment is based on the supplied diff and surrounding source.

Path to Approval

  1. Restore the dCDH estimator surface in this PR: diff_diff/chaisemartin_dhaultfoeuille.py, diff_diff/chaisemartin_dhaultfoeuille_bootstrap.py, diff_diff/chaisemartin_dhaultfoeuille_results.py, the dCDH exports/alias in diff_diff/__init__.py, and the public generate_reversible_did_data export in diff_diff/prep.py.
  2. Restore the dCDH source-of-truth and regression net: the ChaisemartinDHaultfoeuille section in docs/methodology/REGISTRY.md, docs/api/chaisemartin_dhaultfoeuille.rst, tests/test_chaisemartin_dhaultfoeuille.py, tests/test_methodology_chaisemartin_dhaultfoeuille.py, tests/test_chaisemartin_dhaultfoeuille_parity.py, tests/test_prep_dgp_reversible.py, and the committed R golden fixtures/script.
  3. Keep the survey DGP enhancement isolated: retain the conditional_pt implementation/tests, and add one short Survey DGP note to docs/methodology/REGISTRY.md documenting the new conditional-PT construction and ICC-approximation caveat.
  4. Restore CITATION.cff to the current release metadata unless this PR is intentionally coordinated with a version rollback.

…imulation

Adds a `conditional_pt` parameter to `generate_survey_did_data()` that creates
X-dependent time trends violating unconditional parallel trends while preserving
conditional PT. When nonzero, treated units' x1 is drawn from N(1,1) instead of
N(0,1), and the outcome includes `conditional_pt * x1 * (t/T)`. This unblocks
simulation scenario 4 for the survey variance paper: DR/IPW with covariates
recovers truth while no-covariate estimators are biased.

Also adds `paper/` to .gitignore for local manuscript files and marks the
conditional PT DGP gap as resolved in the survey roadmap.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
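The two ingredients this commit describes can be sketched in a few lines of NumPy. This is an illustrative stand-alone toy, not the library's `generate_survey_did_data()` implementation; the function name, defaults, and the `beta1 = 1.0` coefficient are assumptions made for the demo:

```python
import numpy as np

def toy_conditional_pt_dgp(n_units=2000, n_periods=4, conditional_pt=0.5, seed=0):
    """Toy DGP: unconditional PT fails, conditional PT holds given x1.

    Reproduces only the two ingredients described in the commit: a +1 SD
    x1 shift for treated units and an X-dependent time trend
    conditional_pt * x1 * (t / n_periods).
    """
    rng = np.random.default_rng(seed)
    treated = rng.random(n_units) < 0.5
    # Treated units draw x1 from N(1, 1); controls from N(0, 1).
    x1 = rng.normal(0.0, 1.0, n_units) + treated.astype(float)
    outcomes = {}
    for t in range(1, n_periods + 1):
        # The trend depends on x1 only, so PT holds conditional on x1,
        # but treated units trend faster on average because E[x1] differs
        # between groups.
        trend = conditional_pt * x1 * (t / n_periods)
        outcomes[t] = 1.0 * x1 + trend + rng.normal(0.0, 0.1, n_units)
    return treated, x1, outcomes
```

Comparing period-1-to-period-2 mean changes by group shows the unconditional pre-trend gap of roughly `conditional_pt * 1 / n_periods`, while regressing the change on `x1` would absorb it.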

igerber commented Apr 12, 2026

/ai-review

@github-actions

🔁 AI review rerun (requested by @igerber)

Head SHA: f499f66029307b43ba655647aa0826873ac305cd


Overall Assessment

⚠️ Needs changes — highest unmitigated issue is a P1 undocumented methodology change in the Survey DGP registry.

Executive Summary

Methodology

  • Severity: P1. Impact: The affected method is generate_survey_did_data. There is no external paper governing this synthetic DGP, so the Methodology Registry plus the function docstring are the source material. The implementation/docstring now add a conditional-PT DGP (x1 mean shift for ever-treated plus conditional_pt * x1 * (t / n_periods) in untreated outcomes) and explicitly note that icc targeting becomes approximate under this regime, but the Survey DGP registry section still documents only the old icc and informative-sampling behavior. That leaves the project’s methodology source of truth inaccurate for a behavior-changing path (docs/methodology/REGISTRY.md:2691, docs/methodology/REGISTRY.md:2693, diff_diff/prep_dgp.py:1305, diff_diff/prep_dgp.py:1315). Concrete fix: add a Survey DGP note in docs/methodology/REGISTRY.md describing the conditional_pt construction, the intended identification story, and the conditional_pt + icc approximation caveat; if exact ICC targeting is required, reject that parameter combination instead of silently approximating.

Code Quality

Performance

  • No material findings. The added work is limited to vector shifts and scalar additions in already linear-time paths.

Maintainability

  • No material findings beyond the documentation and test-coverage gaps below.

Tech Debt

  • No separate finding. I did not find a mitigating TODO.md entry for the registry mismatch, so the P1 above remains unmitigated.

Security

  • No findings. The .gitignore addition of paper/ looks benign (.gitignore:92).

Documentation/Tests

  • Severity: P2. Impact: The new tests cover validation, core pre-trend behavior, backward compatibility, and panel vs repeated cross-section mode, but they do not exercise the two other branches this PR changed: informative-sampling pre-ranking and return_true_population_att / icc diagnostics. Because the implementation duplicates conditional_pt logic across those branches, the missing regression coverage leaves room for future drift in non-default paths (diff_diff/prep_dgp.py:1511, diff_diff/prep_dgp.py:1555, diff_diff/prep_dgp.py:1694, tests/test_prep.py:1971, tests/test_prep.py:2091). Concrete fix: add one informative_sampling=True regression test that checks the treated x1 shift / pre-trend signal survives the ranking path, and one icc + conditional_pt + return_true_population_att=True test that checks the diagnostic contract on df.attrs["dgp_truth"].
  • Severity: P3. Impact: The user-facing docs are internally inconsistent after the feature landed. docs/survey-roadmap.md still says Phase 10b added 8 parameters and later still says conditional PT “Requires DGP extension”, and the return_true_population_att docstring still documents the old dgp_truth key set even though the code now adds conditional_pt_active (docs/survey-roadmap.md:110, docs/survey-roadmap.md:161, docs/survey-roadmap.md:200, diff_diff/prep_dgp.py:1288, diff_diff/prep_dgp.py:1694). Concrete fix: update the roadmap wording and extend the docstring’s diagnostics-key list to include conditional_pt_active.

Reviewer note: I could not execute the test suite in this sandbox because the installed Python environment lacks numpy; this review is based on static inspection of the changed code and tests.

Path to Approval

  1. Update the Survey DGP section of docs/methodology/REGISTRY.md so it explicitly documents the new conditional_pt construction and the conditional_pt + icc approximation caveat. That removes the only blocking P1.

P1: Add conditional_pt construction and icc approximation caveat to
Survey DGP section of REGISTRY.md.

P2: Add tests for informative_sampling + conditional_pt interaction
and dgp_truth diagnostics with conditional_pt + icc.

P3: Fix stale roadmap parameter count (8 -> 9) and scenario 4 text,
update return_true_population_att docstring to include
conditional_pt_active key.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

igerber commented Apr 12, 2026

/ai-review

@github-actions

🔁 AI review rerun (requested by @igerber)

Head SHA: 8b4ac96830ba33cd7817f2c35c396bc53df4e1dc


Overall Assessment

⚠️ Needs changes — highest unmitigated issue is a P1 methodology/assumption gap in the new conditional_pt DGP contract.

Executive Summary

  • The prior blocking registry/docstring issue is resolved: the PR now documents conditional_pt and the conditional_pt + icc approximation caveat in both the Methodology Registry and the function docstring (docs/methodology/REGISTRY.md:L2703-L2714, diff_diff/prep_dgp.py:L1306-L1318).
  • Severity P1 [Newly identified]: the implementation shifts x1 for all ever-treated cohorts equally, so the advertised unconditional-PT failure is not guaranteed for future-treated-only / all-eventually-treated comparison designs.
  • The prior non-default branch coverage concern is partly resolved: the PR now adds direct tests for informative-sampling and dgp_truth paths.
  • A remaining P2 test gap is that the new tests only check pooled ever-treated vs never-treated behavior, not the never_treated_frac=0.0 / not_yet_treated interaction behind the P1 above.
  • Static review only: I could not execute the suite in this sandbox because pytest and numpy are unavailable.

Methodology

Code Quality

  • No material findings. Validation and propagation through ranking, outcome generation, and dgp_truth are internally consistent.

Performance

  • No material findings. The added work is vectorized and linear in existing generation paths.

Maintainability

  • No material findings. The new parameter was threaded through the touched branches consistently.

Tech Debt

  • No separate finding. I did not find a TODO.md entry that mitigates the P1 above, so it remains unmitigated.

Security

  • No findings. The .gitignore addition for paper/ looks benign.

Documentation/Tests

  • Severity: P2. Impact: the new tests validate pooled ever-treated vs never-treated gaps and branch plumbing, but they do not lock the control-group interaction behind the P1. test_conditional_pt_unconditional_pt_fails and test_conditional_pt_conditional_pt_holds use first_treat > 0 vs first_treat == 0, so they would still pass even if an all-eventually-treated / not_yet_treated design were unaffected (tests/test_prep.py:L2009-L2033, tests/test_prep.py:L2035-L2077). Concrete fix: add one estimator-level test, or at minimum a cohort-vs-future-treated control-mask test, with never_treated_frac=0.0 so the intended contract is explicit.

Path to Approval

  1. Narrow or enforce the conditional_pt contract. Either document in diff_diff/prep_dgp.py:L1306-L1318, docs/methodology/REGISTRY.md:L2703-L2714, and docs/survey-roadmap.md:L167-L170 / docs/survey-roadmap.md:L195-L202 that unconditional-PT failure is only guaranteed when the comparison group differs in x1 composition, and add a warning/error for conditional_pt != 0 with never_treated_frac=0.0; or make the x1 shift cohort-specific so future-treated controls are also valid stress cases.
  2. Add a regression test for conditional_pt with never_treated_frac=0.0 and a not_yet_treated control design. If the current implementation is kept, assert the documented warning/error. If the DGP is generalized, assert that the no-covariate comparison is biased and the covariate-adjusted comparison recovers truth.

The x1 mean shift applies to all ever-treated units. Without a
never-treated group, treated and control units share the same x1
distribution and unconditional PT is not violated, breaking the
conditional_pt contract. Now raises ValueError with explanation.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

igerber commented Apr 12, 2026

/ai-review

@github-actions

🔁 AI review rerun (requested by @igerber)

Head SHA: c86c0dc208d1c1894053e5362999587c6e38a074


Overall Assessment

⚠️ Needs changes — the highest unmitigated issue is a P1 [Newly identified] edge-case gap in the new conditional_pt control-group contract for the survey DGP.

Executive Summary

Methodology

  • Severity: P1 [Newly identified]. Impact: the affected method is the conditional_pt regime in generate_survey_did_data(). The new validation only rejects literal never_treated_frac == 0.0 diff_diff/prep_dgp.py:L1438-L1446, but treatment assignment later floors the realized count via n_never = int(n_units * never_treated_frac) diff_diff/prep_dgp.py:L1495-L1504. For inputs like n_units=50, never_treated_frac=0.01, n_never is still 0, so all units become ever-treated and the DGP loses the never-treated comparison group that the new contract relies on. That silently reintroduces the same control-group flaw the re-review was supposed to close, while the docs still describe the requirement as only never_treated_frac > 0 docs/methodology/REGISTRY.md:L2703-L2712, docs/survey-roadmap.md:L167-L170, docs/survey-roadmap.md:L195-L202. Concrete fix: validate the realized count (n_never >= 1) when conditional_pt != 0, or enforce at least one never-treated unit by construction, and update the public contract to describe the finite-sample requirement as “at least one never-treated unit.”
  • No other methodology findings. The exact-zero all-eventually-treated case is now documented and rejected, and the conditional_pt + icc approximation is documented in the Registry.

Code Quality

  • No findings.

Performance

  • No findings.

Maintainability

  • No findings.

Tech Debt

  • No separate finding. I did not find a mitigating entry for this issue under TODO.md:L51-L97, so the P1 remains untracked and unmitigated.

Security

  • No findings.

Documentation/Tests

  • Severity: P2. Impact: the new tests validate never_treated_frac=0.0 rejection and the main panel/cross-section/informative-sampling branches, but they do not exercise the small-positive-fraction case that still floors to zero never-treated units tests/test_prep.py:L1971-L1979, tests/test_prep.py:L2019-L2164. Concrete fix: add a regression test such as n_units=50, never_treated_frac=0.01, add_covariates=True, conditional_pt=0.3 and assert the documented behavior (preferred: ValueError; alternative: enforced minimum of one never-treated unit), then align the docstring/Registry wording with that rule.

Path to Approval

  1. Tighten the conditional_pt guard to reject cases where the realized never-treated count is zero (int(n_units * never_treated_frac) == 0), not just the literal fraction 0.0.
  2. Update the public contract in diff_diff/prep_dgp.py:L1306-L1319, docs/methodology/REGISTRY.md:L2703-L2712, and docs/survey-roadmap.md:L167-L170 / docs/survey-roadmap.md:L195-L202 to describe the requirement as “at least one never-treated unit.”
  3. Add a regression test for a small positive never_treated_frac that floors to zero and assert the intended behavior.

Small never_treated_frac values (e.g., 0.01 with n_units=50) floor to
zero never-treated units via int(), silently breaking the conditional_pt
contract. Now checks int(n_units * never_treated_frac) >= 1 and reports
the realized count in the error message.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
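The floor behavior the commit fixes is plain Python semantics: `int()` truncates toward zero, so a small positive fraction can still realize zero never-treated units.

```python
# int() truncates toward zero, so a positive never_treated_frac can
# still produce a realized never-treated count of zero.
n_units = 50
never_treated_frac = 0.01
n_never = int(n_units * never_treated_frac)  # int(0.5) == 0
print(n_never)  # → 0
```

A guard on the literal fraction (`never_treated_frac == 0.0`) therefore misses this case; the realized count is what must be checked.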

igerber commented Apr 12, 2026

/ai-review

@github-actions

🔁 AI review rerun (requested by @igerber)

Head SHA: b2f4dc4d1e734371debd19f65a4043eba9a47234


Overall Assessment

⚠️ Needs changes — the highest unmitigated issue is a P1 [Newly identified] missing assumption check in the new conditional_pt survey DGP regime.

Executive Summary

Methodology

Code Quality

  • No findings.

Performance

  • No findings.

Maintainability

  • No findings.

Tech Debt

  • No separate findings. I did not find a mitigating TODO.md entry for the P1 above, and in any case TODO tracking would not mitigate a methodology assumption-check gap.

Security

  • No findings.

Documentation/Tests

  • No separate findings beyond the Methodology issue. The new tests do cover the previously reported floor-to-zero case and the main panel/cross-section/informative-sampling branches in tests/test_prep.py:L1971-L2171. I could not execute them in this environment because pytest and numpy were unavailable.

Path to Approval

  1. Reject conditional_pt != 0 when the realized treated count is zero (n_units - int(n_units * never_treated_frac) < 1), not just when the realized never-treated count is zero.
  2. Update the public contract in diff_diff/prep_dgp.py:L1306-L1316, docs/methodology/REGISTRY.md:L2703-L2712, and docs/survey-roadmap.md:L167-L170 to require at least one ever-treated and one never-treated unit for the conditional_pt regime.
  3. Add a regression test for never_treated_frac=1.0, add_covariates=True, conditional_pt>0 asserting the intended behavior. If the all-never-treated case is intentionally kept allowed, the test should instead assert that the docs/diagnostics do not label the conditional-PT regime as active.

The guard checked for zero never-treated units but not zero treated
units. With never_treated_frac=1.0, all units are never-treated and
the x1 shift never fires. Now requires both groups to be present.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
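The cumulative guard described across these commits might look like the following sketch. The function name and error wording are illustrative, not the library's exact code; the logic simply enforces that both comparison groups are realized whenever the regime is active:

```python
def validate_conditional_pt(n_units, never_treated_frac, conditional_pt):
    """Guard sketch for the conditional_pt regime (illustrative only)."""
    if conditional_pt == 0.0:
        return  # regime inactive; no group-composition requirement
    n_never = int(n_units * never_treated_frac)  # realized, floored count
    n_treated = n_units - n_never
    if n_never < 1:
        raise ValueError(
            "conditional_pt != 0 requires at least one never-treated unit; "
            f"realized count is {n_never} (n_units={n_units}, "
            f"never_treated_frac={never_treated_frac})"
        )
    if n_treated < 1:
        raise ValueError(
            "conditional_pt != 0 requires at least one ever-treated unit; "
            f"realized count is {n_treated}"
        )
```

Checking realized counts rather than the literal fraction covers both edge cases at once: `never_treated_frac=0.01` with `n_units=50` (floors to zero never-treated) and `never_treated_frac=1.0` (zero treated).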

igerber commented Apr 12, 2026

/ai-review

@github-actions

🔁 AI review rerun (requested by @igerber)

Head SHA: 60de2fc397960e6273cd376687ed605926e22e63


Overall Assessment
⚠️ Needs changes — the highest unmitigated issue is a P1 [Newly identified] coverage gap on the new conditional_pt repeated-cross-section path.

Executive Summary

Methodology

Code Quality
No findings.

Performance
No findings.

Maintainability
No findings.

Tech Debt
No findings. I did not find a mitigating TODO.md entry for the P1 below, and this kind of new-parameter/code-path gap would not be downgraded by TODO tracking anyway.

Security
No findings. The .gitignore addition for paper/ is benign.

Documentation/Tests

Path to Approval

  1. Add a panel=False regression test for conditional_pt showing unconditional pre-trends differ across pre-treatment periods in repeated cross-sections, rather than only in panel data.
  2. Add a panel=False regression test showing that the treated/control pre-trend gap materially shrinks after controlling for x1; if informative-sampling cross-sections are part of the intended contract, run the same assertion with informative_sampling=True so diff_diff/prep_dgp.py:L1570-L1583 is covered as well.

The unconditional-PT-fails and conditional-PT-holds tests only ran in
panel mode. Adds cross-section versions using regression-based checks
on period-level data (since units don't persist across periods in
repeated cross-sections).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

igerber commented Apr 12, 2026

/ai-review

@github-actions

🔁 AI review rerun (requested by @igerber)

Head SHA: d60cba920210a49884a805013117bde055363f98


Overall Assessment

⚠️ Needs changes: the conditional_pt implementation appears aligned with the documented survey DGP, but the prior P1 on repeated-cross-section validation is only partially addressed.

Executive Summary

Methodology

  • Severity: P3. Impact: No unmitigated methodology defect in the implementation itself. The survey DGP applies the treated-group x1 shift and the conditional_pt * x1 * (t / n_periods) outcome term in the documented branches, and the conditional_pt + icc caveat is explicitly documented in docs/methodology/REGISTRY.md:L2714 and diff_diff/prep_dgp.py:L1318. Concrete fix: none.

Code Quality

No findings.

Performance

No findings.

Maintainability

No findings.

Tech Debt

No findings. There is no mitigating entry under TODO.md:L51 for the open repeated-cross-section coverage gap, and this class of P1 would not be downgraded by TODO tracking anyway.

Security

No findings. The .gitignore addition for paper/ is benign.

Documentation/Tests

  • Severity: P1. Impact: The re-review blocker remains open. The panel tests at tests/test_prep.py:L2032 correctly use period-to-period changes to identify pre-trend differences. The new repeated-cross-section tests at tests/test_prep.py:L2102 and tests/test_prep.py:L2134 instead regress period-2 outcomes on treatment status, with and without x1. That only tests a level gap. Since the PR also shifts treated units’ x1 mean at diff_diff/prep_dgp.py:L1592 and outcomes already include _beta1 * x1 at diff_diff/prep_dgp.py:L1611, both assertions can still succeed even if the repeated-cross-section conditional_pt * x1 * (t / n_periods) term at diff_diff/prep_dgp.py:L1613 is broken. That means the repeated-cross-section path still lacks a direct regression test of the registry claim in docs/methodology/REGISTRY.md:L2703 that unconditional PT fails because untreated trends diverge over time, and then holds conditional on x1. Concrete fix: replace the period-2-only checks with repeated-cross-section pre-period DID-style assertions across periods 1 and 2; for the conditional version, use a pooled model with treated, post, treated:post, x1, and x1:post (or equivalent separate-period adjusted gaps). Add an informative_sampling=True variant to cover the changed cross-section ranking path at diff_diff/prep_dgp.py:L1570.

Path to Approval

  1. Add a panel=False pre-period DID regression, or equivalent mean-gap contrast, using periods 1 and 2 that fails if the repeated-cross-section conditional_pt * x1 * (t / n_periods) term is removed.
  2. Add a panel=False conditional-PT regression using periods 1 and 2 with time-varying x1 control, e.g. outcome ~ treated + post + treated:post + x1 + x1:post, and assert the treated:post term shrinks materially.
  3. Repeat the repeated-cross-section assertion with informative_sampling=True to exercise the changed re-ranking branch at diff_diff/prep_dgp.py:L1570.

The prior cross-section tests checked period-2 level gaps, which could
pass from the _beta1*x1 level effect alone without the conditional_pt
trend being active. Now uses group-mean DID across periods 1-2 (isolates
the trend term since level effects cancel) and a pooled DID regression
with x1*post interaction (validates conditional PT holds).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
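The group-mean DID check this commit introduces can be sketched as below. This is a simplified panel-style illustration (the actual tests operate on repeated cross-sections), and the constants `T`, `cpt`, and `beta1` are chosen for the demo; the point is that within-group changes difference out the `beta1 * x1` level effect, leaving only the trend term:

```python
import numpy as np

def pre_period_did(y1_treat, y2_treat, y1_ctrl, y2_ctrl):
    """Group-mean DID across two pre-treatment periods.

    Level differences (e.g. beta1 * x1) are constant within a unit and
    cancel in the within-group change; only trend differences survive.
    """
    return (np.mean(y2_treat) - np.mean(y1_treat)) - (
        np.mean(y2_ctrl) - np.mean(y1_ctrl)
    )
```

With the toy layout from the PR description, the DID is approximately `conditional_pt * (E[x1|treated] - E[x1|control]) / T`, which is nonzero precisely because of the trend term, so the test fails if that term is removed.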

igerber commented Apr 12, 2026

/ai-review

@github-actions

🔁 AI review rerun (requested by @igerber)

Head SHA: 4304e8778c7ed43eb917361a0cf4b9bc597f56f3


Overall Assessment

✅ Looks good

Executive Summary

Methodology

  • Severity: P3. Impact: No unmitigated methodology defect. The DGP change is internally consistent with the new registry/docstring contract for conditional parallel trends in both panel and repeated-cross-section generation, and the only methodology deviation I found is the conditional_pt + icc approximation, which is explicitly documented at docs/methodology/REGISTRY.md:L2703 and diff_diff/prep_dgp.py:L1306. Concrete fix: None.

Code Quality

  • No findings.

Performance

  • No findings.

Maintainability

  • No findings.

Tech Debt

  • No findings.

Security

  • No findings. The .gitignore addition at .gitignore is benign.

Documentation/Tests

  • Severity: P2. Impact: The prior re-review blocker is closed by the new repeated-cross-section DID tests at tests/test_prep.py:L2102 and tests/test_prep.py:L2142, but the modified informative_sampling ranking branches at diff_diff/prep_dgp.py:L1535 and diff_diff/prep_dgp.py:L1581 are still not directly asserted. tests/test_prep.py:L2202 would still pass if the new conditional_pt * x1 * (t / n_periods) ranking term were removed, so there is a small regression hole around the documented “rank on full Y(0)” behavior. Concrete fix: Add a low-noise informative_sampling=True test, in panel and repeated-cross-section modes, that compares period-specific weights or weighted pre-period DID behavior under conditional_pt=0 versus conditional_pt>0 and would fail if the ranking-term contribution were omitted.

@igerber igerber added the ready-for-ci Triggers CI test workflows label Apr 12, 2026
@igerber igerber merged commit 622b3ab into main Apr 12, 2026
23 of 26 checks passed
@igerber igerber deleted the survey-paper branch April 12, 2026 20:08