Two weeks ago we shipped cpp-cpm-engine as an MIT-licensed public repository. Today we shipped v2.9.13 to npm and to the Model Context Protocol registry, bit-identical across all four canonical CPP deployment surfaces. The engine that drives every Critical Path Partners forensic deliverable is now reproducible on your own machine in under a minute.

This is not a marketing post. It is a methodology post. Under FRE 702 as amended in December 2023, and the proposed FRE 707 currently moving through the rules committee, an expert's reasoning and methodology must be shown reliable by a preponderance of the evidence. Closed-engine forensic-scheduling tools cannot deliver that. The vendors in this space (SmartPM, Nodes & Links, ALICE, Acumen Fuse) treat the CPM math as proprietary. None of them can hand an opposing expert a one-command reproduction path. We can.

The test (under one minute)

From any terminal with Node 18+:

npm install cpp-cpm-engine

Or, to reproduce the full verification suite:

git clone https://github.com/danafitkowski/cpp-cpm-engine && cd cpp-cpm-engine && npm install && npm run test:all

Measured at five seconds on a 2023-era laptop (Win11, Node 22, NVMe SSD). All 744 unit tests pass. All 346 cross-validation checks across 32 fixtures pass. DCMA-14 and AACE 29R-03 compliance checks pass.

1. The four public deployment surfaces, locked to one SHA-256

One engine. Four canonical install paths. Bit-identical hash across all four.

Why this matters
A forensic exhibit cites the engine version that produced it. The opposing expert must be able to fetch exactly that build from a public source that is not under CPP's control, and confirm by hash that the build is what we say it is. One canonical surface is fragile; four parallel surfaces with the same hash are robust. If three vendors go dark next week, the fourth still serves the same binary.
Surface 1 — npm
npm install cpp-cpm-engine · npmjs.com/package/cpp-cpm-engine. Public registry. Globally mirrored. Versions immutable once published.
Surface 2 — GitHub releases
github.com/danafitkowski/cpp-cpm-engine/releases. Each tag is signed and pinned. Tag v2.9.13 matches today's npm publish, byte for byte.
Surface 3 — Railway-hosted MCP
https://mcp.criticalpathpartners.ca/cpm-engine.js. The same file served live by the CPP MCP server. Anyone can curl it and hash it.
Surface 4 — Local skill bundle
The cpm-engine.js that ships inside the CPP forensic skill suite, packaged with every claim-workbench, windows-analysis, and TIA deliverable.
Registry index — Model Context Protocol
The Model Context Protocol registry entry io.github.danafitkowski/cpp-cpm-engine at registry.modelcontextprotocol.io points to the same canonical repository. AI-driven forensic-analysis workflows discover the server there.

2. What the test suite actually proves

744 unit tests, 346 cross-validation checks across 32 fixtures, DCMA-14 and AACE 29R-03 compliance gates.

Unit tests — 744
Every algorithmic primitive of the engine has unit coverage: forward pass, backward pass, total float, free float, longest-path identification, calendar arithmetic across 66 jurisdictions, constraint propagation (FNLT, SNLT, MSO, MFO, As-Late-As-Possible), driving-predecessor selection under float ties, in-progress activity handling, hammock semantics across SS / FF / SF relationships, retained-logic vs progress-override convergence.
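The forward and backward passes named above are the classical recurrences; the sketch below is a minimal illustration only (finish-to-start links, integer durations, none of the engine's calendar, constraint, or progress handling):

```javascript
// Minimal CPM forward/backward pass over finish-to-start links.
// acts: { id: duration }, links: [pred, succ] pairs. No calendars,
// no constraints, no lags -- just the classical recurrence.
function cpm(acts, links) {
  const ids = Object.keys(acts);
  const preds = {}, succs = {};
  ids.forEach(id => { preds[id] = []; succs[id] = []; });
  links.forEach(([p, s]) => { preds[s].push(p); succs[p].push(s); });

  // Topological order via a zero-in-degree queue (Kahn-style).
  const indeg = {}, order = [], queue = [];
  ids.forEach(id => { indeg[id] = preds[id].length; if (!indeg[id]) queue.push(id); });
  while (queue.length) {
    const id = queue.shift();
    order.push(id);
    succs[id].forEach(s => { if (--indeg[s] === 0) queue.push(s); });
  }
  if (order.length !== ids.length) throw new Error('cycle detected');

  // Forward pass: earliest start/finish.
  const es = {}, ef = {};
  order.forEach(id => {
    es[id] = Math.max(0, ...preds[id].map(p => ef[p]));
    ef[id] = es[id] + acts[id];
  });
  const finish = Math.max(...ids.map(id => ef[id]));

  // Backward pass: latest finish/start, then total float.
  const lf = {}, ls = {}, tf = {};
  [...order].reverse().forEach(id => {
    lf[id] = succs[id].length ? Math.min(...succs[id].map(s => ls[s])) : finish;
    ls[id] = lf[id] - acts[id];
    tf[id] = ls[id] - es[id]; // total float; zero marks the longest path
  });
  return { es, ef, ls, lf, tf, finish };
}
```

On the toy network A(3) and B(5) both feeding C(2), B and C carry zero total float (the longest path) and A carries two days.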
Cross-validation — 346 checks across 32 fixtures
The engine ships in two synchronized implementations: a JavaScript port for browser, MCP, and dashboard surfaces, and a Python sibling (cpm.py) that drives batch claim-preparation pipelines. They are kept bit-identical via a 32-fixture, 346-check cross-validation harness that runs in CI on every commit. If JS and Python disagree on a single Early Start by even one minute, the build fails. This is how an opposing expert can be confident that the dashboard they are reviewing and the DOCX they are reading were computed from the same primitives.
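Conceptually, the zero-tolerance rule reduces to a field-by-field comparison that fails on any difference at all. This is an illustrative sketch, not the repo's actual harness, and the field names are assumed:

```javascript
// Hypothetical zero-tolerance cross-check: fail on ANY date field that
// differs between the two implementations, even by one minute.
// Field names (es/ef/ls/lf) are assumptions for illustration.
function crossValidate(jsOut, pyOut) {
  const mismatches = [];
  for (const id of Object.keys(jsOut)) {
    for (const field of ['es', 'ef', 'ls', 'lf']) {
      const a = jsOut[id][field];
      const b = pyOut[id] && pyOut[id][field];
      if (a !== b) mismatches.push(`${id}.${field}: ${a} != ${b}`);
    }
  }
  if (mismatches.length) {
    throw new Error('cross-validation failed:\n' + mismatches.join('\n'));
  }
  return true;
}
```

There is deliberately no tolerance parameter: strict equality is the whole point.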
DCMA-14 compliance gate
The 14 Defense Contract Management Agency schedule-health metrics are evaluated in CI against fixture schedules with known-good and known-bad results. Any regression in dangling-logic detection, lag-abuse counting, hard-constraint flagging, or float-distribution analysis breaks the build.
AACE 29R-03 compliance gate
Every method-id string the engine emits matches the AACE Recommended Practice 29R-03 canonical labels. MIP 3.3 for Observational/Static/Periodic Windows. MIP 3.6 for Prospective/Modeled/Single-Base TIA. MIP 3.7 for Prospective/Modeled/Multi-Base TIA. MIP 3.8 for Retrospective/Modeled/Single-Base Collapsed-As-Built. Peer reviewers and opposing experts can match these strings against the AACE recommended practice without translation.
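The label discipline can be pictured as a closed lookup table with a validator that refuses anything off-list. The strings below are the ones this post cites; the engine's actual internal table may differ:

```javascript
// Closed set of AACE 29R-03 method-ids as cited in this post. Anything
// outside the set -- including the circulating "MIP 3.7 windows"
// mislabel -- is rejected rather than passed through.
const AACE_METHODS = Object.freeze({
  'MIP 3.3': 'Observational / Static / Periodic (Windows)',
  'MIP 3.6': 'Prospective / Modeled / Single-Base (TIA)',
  'MIP 3.7': 'Prospective / Modeled / Multi-Base (TIA)',
  'MIP 3.8': 'Retrospective / Modeled / Single-Base (Collapsed-As-Built)',
});

function canonicalMethod(methodId) {
  const label = AACE_METHODS[methodId];
  if (!label) throw new Error(`non-canonical method-id: "${methodId}"`);
  return label;
}
```

A reviewer can diff this table against the recommended practice directly, which is the point of restricting the label set.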

3. Sigstore-signed CI runs on a public transparency log

The build is signed by GitHub Actions OIDC and logged to Rekor. CPP cannot edit either.

Why this matters
A test suite that only the vendor can run is not independently verifiable. The CPP test suite runs on Ubuntu, macOS, and Windows runners against Node 18, 20, and 22 — a 9-cell matrix on every push to main. The runs execute on hardware GitHub provides, not on CPP infrastructure. Their outputs are signed by GitHub Actions' OIDC token using the Sigstore cosign protocol and the signatures are appended to the Rekor public transparency log.
What this means for opposing counsel
If CPP claims a particular build of the engine passed the 744 unit tests, the 346 cross-validation checks, and the DCMA-14 and AACE 29R-03 gates, opposing counsel can look up the Sigstore attestation, follow it to the Rekor entry, and read the original GitHub Actions log, produced on hardware CPP does not control and cannot edit. The disclosure is not "trust the expert"; it is "trust the public log."
How to verify
Visit github.com/danafitkowski/cpp-cpm-engine/actions/workflows/verify.yml. Each run links to its Sigstore attestation. The Rekor log entry is reachable from the attestation page. There is no CPP-side step in this chain.

4. AACE-canonical, Daubert-disclosed

Every method-id matches AACE. The Daubert disclosure is built into the engine.

AACE method-id alignment
The engine exposes method_id on every output. The label set is restricted to AACE-canonical strings. There is no "MIP 3.7 windows" mislabel (which circulates in some commercial tooling); windows analysis is MIP 3.3 per AACE 29R-03, and the engine refuses to emit any other label for that method. Same discipline for MIP 3.6, MIP 3.7, MIP 3.8.
Daubert disclosure builder
The engine ships buildDaubertDisclosure(), which produces a four-prong methodology statement aligned with FRE 702 (testability, peer review, error rate, general acceptance). The output names the engine version, the method-id, the topology hash of the input data, the calendar version, and a link to the canonical methodology document. An expert disclosure that uses this output can be cross-examined against the underlying engine code, line for line.
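The post names buildDaubertDisclosure() but does not print its output, so the shape below is purely hypothetical, sketched from the fields the paragraph lists (version, method-id, topology hash, calendar version, four prongs):

```javascript
// Hypothetical sketch of the kind of object a four-prong FRE 702
// disclosure builder might return. Field names are illustrative,
// not the engine's actual output format.
function sketchDaubertDisclosure(run) {
  return {
    engine_version: run.engineVersion,   // e.g. '2.9.13'
    method_id: run.methodId,             // e.g. 'MIP 3.3'
    topology_hash: run.topologyHash,     // SHA-256 of the input network
    calendar_version: run.calendarVersion,
    prongs: {
      testability: 'npm run test:all reproduces the result',
      peer_review: 'public MIT-licensed repository',
      error_rate: 'CI-enforced cross-validation, zero tolerance',
      general_acceptance: 'AACE 29R-03 canonical method labels',
    },
  };
}
```

Whatever the real format, the value is that each field is checkable against the public repository rather than against the expert's say-so.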
Verifiable Provenance Tool
verifyReport() recomputes the topology hash from a disclosed report and confirms engine_version lock-step. A CLI (cli_verify.py) lets opposing experts run the check without installing the entire suite. The CPP MCP server exposes a public /verify endpoint for the same purpose.

5. The math is sixty years old. The discipline is new.

None of this should be commercial confidential in 2026.

Forward and backward pass
Kelley & Walker, "Critical-Path Planning and Scheduling," Proceedings of the Eastern Joint IRE-AIEE-ACM Computer Conference (1959). Public domain. Free.
Topological sort
Kahn, "Topological Sorting of Large Networks," Communications of the ACM 5(11):558–562 (1962). Public domain. Free.
Cycle detection (strongly-connected components)
Tarjan, "Depth-First Search and Linear Graph Algorithms," SIAM Journal on Computing 1(2):146–160 (1972). Public domain. Free.
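Kahn's 1962 algorithm fits in a dozen lines; as a sketch, leftover nodes after the zero-in-degree queue drains reveal a logic loop (Tarjan's SCCs would additionally name the loop's members):

```javascript
// Kahn (1962): repeatedly remove zero-in-degree nodes. If any nodes
// remain unprocessed when the queue drains, they sit on a cycle.
function topoSort(nodes, edges) {
  const indeg = new Map(nodes.map(n => [n, 0]));
  const out = new Map(nodes.map(n => [n, []]));
  for (const [u, v] of edges) {
    out.get(u).push(v);
    indeg.set(v, indeg.get(v) + 1);
  }
  const queue = nodes.filter(n => indeg.get(n) === 0);
  const order = [];
  while (queue.length) {
    const u = queue.shift();
    order.push(u);
    for (const v of out.get(u)) {
      indeg.set(v, indeg.get(v) - 1);
      if (indeg.get(v) === 0) queue.push(v);
    }
  }
  if (order.length !== nodes.length) {
    throw new Error('schedule network contains a logic loop');
  }
  return order;
}
```

Every CPM forward pass depends on some ordering like this; the algorithm is linear in activities plus relationships.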
What is genuinely new
The forensic discipline surrounding the math. CI-enforced data-truncation prohibition. AACE-canonical labeling. Sigstore-signed reproducibility. SHA-256 topology hashing in every output. A 66-jurisdiction holiday calendar covering CA-FED plus 13 provinces and territories, US-FED plus 50 states and DC. None of that is in any closed-engine vendor's product.
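The jurisdiction tables are data; the arithmetic on top of them is simple. A sketch of working-day addition with a weekend calendar and a stand-in holiday set (not the engine's actual 66-jurisdiction tables) looks like:

```javascript
// Working-day addition: skip Saturdays, Sundays, and listed holidays.
// The holiday set is a stand-in; the engine ships full jurisdiction
// tables (CA-FED + provinces/territories, US-FED + states + DC).
function addWorkingDays(startISO, days, holidays = new Set()) {
  const d = new Date(startISO + 'T00:00:00Z');
  let remaining = days;
  while (remaining > 0) {
    d.setUTCDate(d.getUTCDate() + 1);
    const dow = d.getUTCDay(); // 0 = Sunday, 6 = Saturday
    const iso = d.toISOString().slice(0, 10);
    if (dow !== 0 && dow !== 6 && !holidays.has(iso)) remaining--;
  }
  return d.toISOString().slice(0, 10);
}
```

Starting Friday 2026-01-02, three working days lands on Wednesday 2026-01-07; declare the Monday a holiday and it lands on Thursday 2026-01-08.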

6. What CPP is asking the industry to do

If a forensic engine cannot be reproduced by the opposing expert, it should not be admissible.

For forensic schedulers
Fork the engine. Run the test suite. File an issue on any defect. Submit a fixture that broke another engine. The math is the same math everyone is computing; the discipline is what differs. Adopt the discipline.
For counsel building expert disclosures
Ask your expert which forensic engine they are using and whether you can hand the opposing expert a one-command reproduction path. If the answer is no, ask why. The 2023 FRE 702 amendment is real. The proposed FRE 707 is moving. Methodology must be inspectable.
For closed-engine vendors
The bar is rising. If your engine cannot be reproduced, your forensic-grade work will not survive a future Daubert ruling that methodology must be independently reproducible. Open the methodology. Keep the proprietary UI, the proprietary workflow tools, the proprietary services. The math is not the moat.

How to participate

The minimal path:

npm install cpp-cpm-engine

The full reproduction path:

git clone https://github.com/danafitkowski/cpp-cpm-engine
cd cpp-cpm-engine
npm install
npm run test:all

That is it. No license server. No SaaS portal. No phone call with a sales engineer. MIT license. Fork it. Audit it. File an issue if you find a bug. If it stands up to peer review, it stands up to court.

The brand-discipline rules — no truncation in user-facing output, AACE-canonical labels, Daubert disclosure built in — are documented in CONTRIBUTING.md. Pull requests that violate them fail CI.

The companion repositories shipped at the same time.

The whole world can watch the verification runs at github.com/danafitkowski/cpp-cpm-engine/actions/workflows/verify.yml. We would rather correct the record than defend it.