Prong 1 of Daubert v. Merrell Dow Pharmaceuticals, 509 U.S. 579 (1993), says expert methodology should be “testable, and ideally tested by someone other than the proponent.” For software-driven forensic schedule analysis, that is a real exposure. If the expert wrote the engine, wrote the reference implementation, and wrote the test fixtures, the expert has tested their own work. Nobody else has.
It is not a fatal exposure. Software-based forensic methods clear Daubert every day. But when opposing counsel is competent, “who else has tested this?” is the first question. The answer cannot be “nobody yet.”
Critical Path Partners has shipped infrastructure that closes this gap in three layers. None of it is theoretical. All three are live, public, and verifiable today by anyone with a laptop. This post walks through what each layer does, what it proves, and what it still does not prove.
A forensic methodology is only as defensible as the audit a stranger can run against it without your help. Everything below is designed to be run by someone who has never spoken with us.
Layer 1 — Public Continuous Integration
The verification suite runs on every commit, on hardware we do not control
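The fan-out the workflow exercises can be sketched as a plain shell loop. The runner and version labels below are illustrative, not the exact matrix keys in verify.yml:

```shell
# Enumerate the 3 x 3 verification matrix: three operating systems,
# each tested against three Node.js majors (nine combinations total).
matrix=$(for os in ubuntu macos windows; do
  for node in 18 20 22; do
    printf '%s / node-%s\n' "$os" "$node"
  done
done)
printf '%s\n' "$matrix"
```

Every one of the nine combinations must pass for the run to go green; a failure on any single OS × Node pair fails the whole commit.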
The verification suite lives in the cpp-cpm-engine repository. The CI matrix covers nine OS × Node combinations: Ubuntu, macOS, and Windows runners, each on Node 18, 20, and 22. The suite includes 43 unit tests and 83 cross-validation cases, plus DCMA-14 and AACE 29R-03 compliance checks. The whole world can watch the runs at github.com/danafitkowski/cpp-cpm-engine/actions/workflows/verify.yml.

Layer 2 — Sigstore Attestation on the Public Transparency Log
The witness file is cryptographically signed by the workflow itself, on a public log nobody can edit
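The verification step can be sketched as a single cosign invocation. The certificate and signature file names below are assumptions; substitute whatever artifact names are actually published alongside the witness, and set the identity regexp to the published workflow path. The command is assembled and printed here rather than executed, since the real run needs cosign installed and the signed artifacts on disk:

```shell
# Sketch only: witness.json.pem and witness.json.sig are assumed file names.
# The identity regexp should match the repository's published workflow path.
cmd="cosign verify-blob \
  --certificate witness.json.pem \
  --signature witness.json.sig \
  --certificate-identity-regexp 'danafitkowski/cpp-cpm-engine' \
  --certificate-oidc-issuer https://token.actions.githubusercontent.com \
  witness.json"
# Printed for inspection; run it directly once cosign and the artifacts are in place.
printf '%s\n' "$cmd"
```

The --certificate-oidc-issuer pin matters: it ties the signing identity to GitHub Actions itself, so a signature minted anywhere else, even with a valid Sigstore certificate, will not verify.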
The workflow signs the witness with Sigstore’s cosign tooling. The signature and certificate are recorded on Rekor, Sigstore’s public transparency log, at rekor.sigstore.dev. Run cosign verify-blob --certificate-identity-regexp ... --certificate-oidc-issuer https://token.actions.githubusercontent.com witness.json. The command returns the workflow path, the commit SHA, the run ID, and a success status. If the witness has been tampered with after signing, verification fails. If the certificate identity does not match the published workflow, verification fails.

Layer 3 — One-Command Local Reproduction
Any third party can clone the repo and produce a bit-identical witness on their own hardware
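Once npm run verify has produced a local witness, the diff against the CI witness can be scripted. A minimal sketch, assuming the witness is JSON with an engineSha256 field; the field name and file names here are assumptions, so match them to the actual witness schema in the repo:

```shell
# compare_witness CI_WITNESS LOCAL_WITNESS
# Extracts the engine hash from each witness JSON and reports MATCH/MISMATCH.
# "engineSha256" is an assumed field name -- check the real witness schema.
compare_witness() {
  ci=$(grep -o '"engineSha256" *: *"[^"]*"' "$1" | cut -d'"' -f4)
  loc=$(grep -o '"engineSha256" *: *"[^"]*"' "$2" | cut -d'"' -f4)
  if [ "$ci" = "$loc" ]; then
    echo "MATCH $ci"
  else
    echo "MISMATCH ci=$ci local=$loc"
  fi
}

# Usage, after `npm run verify` has emitted a local witness:
# compare_witness witness-ci.json witness-local.json
```

A full comparison would also diff the per-test outputs, not just the engine hash, but the shape is the same: extract, compare, report.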
Reproduction takes two commands: git clone https://github.com/danafitkowski/cpp-cpm-engine followed by npm run verify. The script runs the same suite the CI runs and emits a local witness JSON. The third party can then diff their local witness against the CI witness file, comparing the engine SHA-256 and the per-test outputs. If they match, the build is reproducible on independent hardware; matching SHA-256s and matching test counts equal mechanical reproduction. If they do not match, the discrepancy is the story.

Cross-Examination Walk-Through
Picture the exhibit on the screen: the engine version, the topology hash, the CI run URL. Opposing counsel rises.
The expert’s answer closes by pointing to DAUBERT.md §3.1 of the repository. The point of the layered posture is that the expert does not have to ask the court to take their word. The expert points to the public log and steps back.
What This Does Not Close
Mechanical reproduction is not peer review. Daubert Prong 2 wants “subjected to peer review and publication.” That is a different bar, and the three layers above do not meet it on their own.
What the layers establish:
- Reproducibility. A third party can independently confirm that the same input yields the same output. Prong 1.
- Tamper-evidence. A third party can confirm the build that produced the disclosure is the build that is public.
- Auditability. A third party can read the engine, read the tests, read the workflow, and read the verification log without our cooperation.
What still requires work:
- External peer review. CPP is preparing a submission to the AACE International Total Cost Management (TCM) Forum on the engine’s float-burndown and half-step XER methods. That is the formal peer-review track for forensic-scheduling methodology.
- External cross-validation against a second implementation. The roadmap includes running the same XER pairs through MPXJ’s CPM evaluator and publishing the diff. Two independently authored engines reaching the same numbers on the same inputs is a stronger Prong-1 record than one engine plus its own tests.
- Independent academic attestation. A peer-reviewed academic study of the engine, ideally from a construction-management or computer-science program, is a future deliverable.
None of those are vapor. They are scheduled. The three layers shipped today are the floor, not the ceiling.
Layer 1 plus Layer 2 plus Layer 3 satisfy Prong 1 (“testable, and ideally tested”). They do not satisfy Prong 2 (“peer-reviewed and published”) on their own. The expert who cites this posture should distinguish the two prongs explicitly in disclosure and on the stand.
Where to Read the Detail
The canonical write-up lives in the engine repository at DAUBERT.md §3.1. It records the round-7 hardening that produced the public CI, the Sigstore attestation, and the local-reproduction path. The engine itself is MIT-licensed and lives at github.com/danafitkowski/cpp-cpm-engine.
Building this verification posture into your own claim?
If you are a forensic scheduler or a law firm interested in this verification posture — whether you want to use the open-source engine directly or have CPP run the analysis with the disclosure-ready audit trail attached — reach us at hello@criticalpathpartners.ca.
Talk to CPP →