Every schedule consulting firm's website now says "AI-powered." Most of them mean they used ChatGPT to rewrite their marketing copy. Some of them mean they bolted a chatbot onto an existing SaaS dashboard. A few of them mean nothing at all — it's just the word you put on the website in 2026 because it sounds expensive.

Here's what it looks like when AI is actually wired into the forensic schedule analysis workflow — not as a gimmick, not as a chatbot wrapper, but as production tooling built by someone who's been writing delay claims and sitting through arbitration hearings for twenty-five years. And here's where the human still has to show up.

What CPP Actually Built

Critical Path Partners is, as far as I can determine, the only forensic schedule consultancy operating with a fully AI-native analysis stack. Not "we use AI." Not "AI-assisted." The entire forensic workflow — from XER parsing to DCMA assessment to windows analysis to collapsed as-built validation to Monte Carlo risk quantification to claims packaging — runs on purpose-built tools that are exposed as a public server anyone can connect to.

This isn't a SaaS product built by a software company that hired a scheduler as a consultant. This is the other way around: a working forensic analyst who built his own tools because the existing ones didn't do what he needed, then made the entire stack available for AI assistants to call directly.

The numbers are specific because the tools are real:

- 16 production Skills
- 13 MCP tools
- ~90k lines of code
- 734 tests passing

Every one of those sixteen Skills encodes a specific forensic methodology — not a generic prompt template, not a chatbot persona, but executable analysis code with the AACE recommended practice references, the heuristic gates, the conservation-rule attribution math, and the audit-trail discipline built in. You can browse every Skill and what it does at the-suite.html.

Three Layers of the Stack

The system is built on three pieces of Anthropic's AI infrastructure, all in production. If you don't care about the technical architecture, skip to the next section — but if you're a construction lawyer evaluating expert tools or a PM trying to understand what you're looking at, this matters.

Layer 1: Claude Skills

Each Skill is a self-contained forensic capability — code, methodology references, and trigger documentation that the AI loads only when the task requires it. There's a Skill for XER parsing. A Skill for DCMA 14-point assessment. A Skill for windows analysis. A Skill for collapsed as-built. A Skill for Monte Carlo risk simulation. A Skill for claims packaging. Sixteen in total, each one built to produce the specific deliverable that methodology requires.

The key distinction: these aren't prompt templates that ask an AI to "act like a forensic analyst." They're executable analysis engines that perform the actual calculations on real schedule data. When the windows analysis Skill runs, it's computing critical path shifts across every analysis period, applying conservation-rule delay attribution, and generating per-window activity-level results. The AI orchestrates the workflow. The Skill does the math.
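To make the conservation-rule idea concrete, here is a minimal sketch of such a check: the per-party delays attributed inside an analysis window must sum to the total critical-path slip observed in that window. Function and field names here are hypothetical illustrations, not CPP's actual code.

```python
def check_conservation(window_slip_days, attributions, tolerance=0.01):
    """Return True if attributed delays account for the window's total slip.

    attributions: list of (party, days) pairs, e.g. ("owner", 7.0).
    """
    attributed = sum(days for _, days in attributions)
    return abs(attributed - window_slip_days) <= tolerance

# Example: a 12-day window slip attributed across three causes.
attributions = [("owner", 7.0), ("contractor", 3.0), ("concurrent", 2.0)]
balanced = check_conservation(12.0, attributions)  # True: 7 + 3 + 2 = 12
```

If the attributions do not balance, the analysis cannot proceed until the discrepancy is explained — that discipline is what makes the per-window math auditable.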

Layer 2: MCP Server (Model Context Protocol)

Thirteen of those sixteen Skills are published as tools on a public MCP server at mcp.criticalpathpartners.ca. MCP is an open standard from Anthropic that lets any compatible AI assistant call external tools. That means Claude (in the browser, on desktop, or via API), Cursor, Cline, and any other MCP-aware client can connect to CPP's engine and run forensic analyses directly.

No signup. No license key. No sales call. Connect the server, drop your XER, and run a DCMA-14 or a forensic windows analysis in your own AI session. The tools are just there — the same way a construction lawyer can call an expert and say "I need a delay analysis by Friday."

Layer 3: Agent Orchestration

The third layer is where the AI decides which tools to call, in what order, and what to do with the results. A full forensic delay claim doesn't run on one tool — it chains four or five: parse the XER, run the windows analysis, validate with collapsed as-built, quantify risk with Monte Carlo, then package the findings into a claims submission with cover letter, per-event exhibits, and supporting documentation.

The orchestration layer handles that sequencing automatically, with an audit trail of every tool call, every input, and every output. When the final deliverable lands, you can trace every conclusion back through the chain to the specific XER data that produced it.
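The pattern is simple to sketch: each tool call is logged with its input and output before the result feeds the next stage. The stage names and data shapes below are illustrative stubs, not CPP's actual tool API.

```python
audit_trail = []

def call_tool(name, fn, payload):
    """Run one tool and log the call so every result is traceable."""
    result = fn(payload)
    audit_trail.append({"tool": name, "input": payload, "output": result})
    return result

# Stub stages standing in for the real analysis tools.
stages = [
    ("parse_xer",          lambda p: {"activities": 3000, "source": p}),
    ("windows_analysis",   lambda d: {**d, "windows": 20}),
    ("collapsed_as_built", lambda d: {**d, "validated": True}),
    ("monte_carlo",        lambda d: {**d, "p80_finish_slip_days": 41}),
    ("package_claim",      lambda d: {**d, "deliverable": "claim_package.zip"}),
]

payload = "project.xer"
for name, fn in stages:
    payload = call_tool(name, fn, payload)
```

After the chain runs, `audit_trail` holds one entry per tool call — which is what lets a reviewer walk any figure in the final deliverable back to the source data.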

Try It Yourself

The fastest way to understand what this does is to use it. Connect the MCP server from your own Claude session in 30 seconds — here's the JSON snippet. Or drop an XER into the live engine without connecting anything.
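The linked snippet is authoritative; as an illustration of the shape, a remote MCP server is typically registered in Claude Desktop's claude_desktop_config.json via the mcp-remote bridge. The server name and exact endpoint below are assumptions, not CPP's published values:

```json
{
  "mcpServers": {
    "critical-path-partners": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://mcp.criticalpathpartners.ca"]
    }
  }
}
```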

What the AI Actually Does

Parses a 50,000-line XER file in seconds and surfaces the structural problems — missing logic, orphan activities, constraint-driven false criticality, calendar mismatches — that a human analyst would spend half a day identifying manually.

Runs, in ten seconds or less, a complete DCMA 14-point assessment that would take a human analyst four to six hours to produce, including the interactive dashboard, drill-down on every criterion, and specific fix recommendations per activity.

Sequences a full forensic analysis chain — windows analysis into collapsed as-built validation into Monte Carlo risk quantification into claims packaging — in a single orchestrated workflow, with every intermediate result available for review.

Generates the dashboards, CSVs, narratives, and exhibits that make up the deliverable set. Interactive HTML forensic dashboards. Per-window delay attribution tables. Executive summary reports. Claims-ready cover letters and per-event exhibits.

Handles the mechanical precision that humans are bad at. Conservation-rule arithmetic across twenty analysis windows with concurrent delays in fourteen of them. Activity-level slip tracking across 3,000+ activities. Baseline-vs-current variance calculations on every predecessor relationship in the network. The AI doesn't get tired, doesn't transpose numbers, and doesn't skip Window 12 because it's Friday afternoon.
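For readers curious about the mechanics of the parsing step: XER is Primavera P6's tab-delimited export format, where %T lines open a table, %F lines name its columns, and %R lines carry rows of values. A minimal parser looks like this — an illustration of the file format only, not the production parser described above.

```python
def parse_xer_lines(lines):
    """Parse Primavera XER content (%T/%F/%R tab-delimited records) into dicts."""
    tables, current = {}, None
    for line in lines:
        parts = line.rstrip("\r\n").split("\t")
        tag = parts[0]
        if tag == "%T":                      # start of a new table, e.g. TASK
            current = parts[1]
            tables[current] = {"fields": [], "rows": []}
        elif tag == "%F":                    # column names for the current table
            tables[current]["fields"] = parts[1:]
        elif tag == "%R":                    # one row of values
            row = dict(zip(tables[current]["fields"], parts[1:]))
            tables[current]["rows"].append(row)
    return tables

sample = [
    "ERMHDR\t19.12",
    "%T\tTASK",
    "%F\ttask_id\ttask_name\ttotal_float_hr_cnt",
    "%R\t1000\tExcavate footings\t0",
    "%R\t1010\tForm and pour\t-16",
]
tables = parse_xer_lines(sample)  # negative float on task 1010 is a red flag
```

A real 50,000-line file has dozens of tables (TASK, TASKPRED, CALENDAR, PROJWBS, and so on); the forensic work starts once those tables are cross-referenced for missing logic, orphans, and constraint-driven criticality.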

What the AI Does NOT Do

This is the part that matters more than everything above, and it's the part that every "AI-powered" marketing page conveniently leaves out.

It doesn't write deposable expert opinions. An expert opinion is a professional judgment backed by experience, credentials, and the willingness to defend it under oath. The AI produces the analysis. The analyst writes the opinion, signs it, and sits for the deposition. Those are different things and they cannot be substituted.

It doesn't make judgment calls on concurrent delay attribution. The conservation-rule math has gates — decision points where the analyst has to determine whether two concurrent delays are truly independent, whether one is pacing the other, or whether the contractor's delay was caused by an upstream owner delay. Those calls require reading the correspondence, understanding the site conditions, and knowing what "reasonable" looks like in that specific construction context. The AI flags the decision point. The analyst makes the call.
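What a "gate" means in code terms can be sketched roughly like this: the tool detects that two delay events overlap and stops for analyst review instead of attributing them automatically. The data shapes and gate name are illustrative assumptions.

```python
from datetime import date

def concurrency_gate(owner, contractor):
    """Return overlap in days if the two delay events overlap, else 0."""
    start = max(owner["start"], contractor["start"])
    end = min(owner["end"], contractor["end"])
    return max((end - start).days, 0)

owner = {"start": date(2025, 3, 1), "end": date(2025, 3, 20)}
contractor = {"start": date(2025, 3, 10), "end": date(2025, 3, 25)}

overlap = concurrency_gate(owner, contractor)
if overlap:
    # The tool stops here: pacing vs. true concurrency is the analyst's call,
    # made by reading the correspondence and the site record.
    decision = "ANALYST_REVIEW_REQUIRED"
```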

It doesn't survive cross-examination. Nobody is putting an AI on the stand. When opposing counsel challenges a specific delay attribution in Window 7, they're challenging the analyst's interpretation of the evidence, not the software that ran the calculation. The tools produce defensible analysis. The analyst defends it.

It doesn't replace the pattern recognition that takes decades to develop. Knowing that a particular constraint configuration is hiding a float manipulation. Recognizing that an owner's planner is refusing to provide native schedule files because the as-built data would undermine their position. Seeing that three apparently unrelated RFI delays all trace back to the same incomplete design package. That pattern recognition comes from having done the work on dozens of projects across twenty-five years — not from a training dataset.

The AI Is the Easy Part

Building AI tools is a software problem. Writing the methodology in a way that a tool can execute correctly, defensibly, and without fabricating conclusions requires having actually written the claim, sat through the cross-examination, and watched the arbitrator's pen move. The skills carry the disclosures, the heuristic gates, the conservation-rule attribution math, and the audit-trail discipline that comes from twenty-five years of doing the work. That's the part that doesn't get replicated by downloading a framework and prompting an LLM.

The Competitive Landscape

The forensic scheduling industry breaks into three categories right now, and none of them overlap:

| Category | What You Get | What It Costs | What's Missing |
| --- | --- | --- | --- |
| Traditional firms | Static PDF report, narrative, Gantt screenshots | $3,000–$5,000+ per report | No interactivity, no drill-down, no programmatic access, no AI |
| SaaS platforms | Automated dashboard, schedule health scores | $57,000–$135,000+/year | No forensic analyst, no deposable opinions, no claims packaging |
| CPP | AI-native tooling + working forensic analyst | Engine is free to try; analyst engagement priced per project | |

The traditional firms have the analyst but not the technology. The SaaS platforms have the technology but not the analyst. CPP has both — and the tooling is public, testable, and open for anyone to connect to and evaluate before they ever pick up the phone.

That's a deliberate decision. The engine is free because the engine is the easy part. The hard part — the interpretation, the expert opinion, the claims strategy, the cross-examination preparation — that's what you book a consultation for. Letting people run the engine first means they show up to that conversation already understanding what the tools can do and what they need the analyst for.

Why This Matters Right Now

AI assistants are becoming the default research interface for construction lawyers, project managers, and claims consultants. When a construction lawyer asks their AI assistant to "run a DCMA-14 health check on this XER and tell me which contractor's schedule looks defensible," the tools that are wired in get called. The tools that exist as PDFs on a consulting firm's website do not.

CPP's thirteen MCP tools are callable from any MCP-aware AI client right now — Claude in the browser, Claude Desktop, Cursor, Cline, and any future client that supports the protocol. That means the engine is discoverable and usable by every AI assistant that a construction professional might be working with, without CPP needing to be in the room.

This is not a future-state roadmap. Every tool described in this post is in production and publicly accessible today. The XER parser, the DCMA assessment, the forensic windows analysis, the collapsed as-built, the Monte Carlo simulation, the claims workbench — all of them. The sixteen skills, the thirteen MCP tools, the ~90,000 lines of code, the 734 passing tests. All live. All testable.

Twenty-five years of forensic methodology, twenty of them in nuclear and energy programs, encoded into production tools that run on the same AI stack that's becoming the infrastructure layer for professional services. I built these tools because I needed them for my own analysis work. I made them public because the industry needs them too — and because the best way to demonstrate what you can do is to let people try it themselves.

See the engine. Talk to the analyst.

Drop your XER into the live engine and see what comes back. Or connect the MCP server to your own Claude session. When you're ready for the part the AI can't do, book a consultation.

Try It Live → Schedule a Consultation →