You just received a DCMA 14-Point Schedule Health Assessment on your construction project. It's got percentages, pass/fail grades, something called a Baseline Execution Index, and a compliance score that doesn't look great. Your scheduler says it's fine. Your owner says it's not. And you need to figure out who's right before tomorrow's OAC meeting.

This guide breaks down what each metric in a DCMA 14-point report actually means, what the scores tell you about your project's real risk, and what to do when your schedule fails.

What Is a DCMA 14-Point Assessment?

The Defense Contract Management Agency created this framework in 2005 to evaluate the quality of project schedules across thousands of U.S. Department of Defense contracts. The 14 criteria check whether a CPM schedule is logically sound, realistically sequenced, and reliable enough to use for management decisions.

Since then, the construction industry has adopted it as the standard schedule quality benchmark — not just for defense projects, but for commercial, institutional, and infrastructure work across North America. If your owner, CM, or legal team is asking for a "schedule health check," this is what they mean.

The assessment doesn't tell you whether your project will finish on time. It tells you whether your schedule is trustworthy enough to answer that question.

The 14 Criteria — What Each One Checks

Logic Checks (Criteria 1–4)

Missing Predecessors and Missing Successors measure the percentage of activities that have incomplete logic — meaning they're not properly connected to the rest of the schedule network. The DCMA threshold is 5% for each. If your report shows 12% missing predecessors, it means roughly 1 in 8 activities is floating free without anything driving its start date. Those activities aren't being controlled by the schedule's logic — their dates are essentially arbitrary.

Leads (Negative Lags) are relationships where a successor starts before its predecessor finishes, represented as negative lag values. The DCMA threshold is zero. Not 5%, not "a few" — zero. This is one of the strictest criteria because negative lags mask unrealistic assumptions about how work overlaps. If your report shows 84 leads, every one of those is a relationship where the schedule assumes two things can happen simultaneously without proving they actually can. In dispute proceedings, negative lags are one of the first things opposing counsel attacks.

Lags are positive time delays between activities. The threshold is 5% of relationships. Unlike leads, some lag is normal — concrete cure time, procurement lead times, inspection waiting periods. But excessive lag suggests the scheduler used time delays instead of actual activities to model the work, which hides what's really happening in the schedule.
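The arithmetic behind criteria 1–4 is simple to reproduce yourself. Here's a minimal Python sketch against a hypothetical, simplified activity model — a real assessment works from a P6 or MS Project export, and excludes the project start and finish milestones, which are legitimately open-ended:

```python
from dataclasses import dataclass, field

@dataclass
class Activity:
    id: str
    predecessors: list = field(default_factory=list)  # (pred_id, lag_days) pairs
    successors: list = field(default_factory=list)

def logic_check(activities):
    """Criteria 1-4: missing logic %, lead count, and lag % (simplified model)."""
    n = len(activities)
    missing_pred = sum(1 for a in activities if not a.predecessors) / n * 100
    missing_succ = sum(1 for a in activities if not a.successors) / n * 100
    # Flatten every relationship's lag value
    rels = [lag for a in activities for (_, lag) in a.predecessors]
    leads = sum(1 for lag in rels if lag < 0)  # DCMA threshold: zero
    lag_pct = sum(1 for lag in rels if lag > 0) / len(rels) * 100 if rels else 0.0
    return missing_pred, missing_succ, leads, lag_pct
```

Against the 5% thresholds for missing logic and lags, and the zero tolerance for leads, these four numbers are the first things to check in any report.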

Constraint and Float Checks (Criteria 5–7)

Hard Constraints are fixed dates imposed on activities that override the schedule's calculated logic. The threshold is 5% of incomplete activities. A "Must Finish On" constraint will show an activity finishing on a specific date regardless of whether the logic supports it. A schedule loaded with hard constraints isn't logic-driven — it's date-driven, and there's a critical difference when you need to analyze delays.

High Float measures activities with total float exceeding 44 working days. The threshold is 5%. High float usually means an activity isn't properly connected to the rest of the network. If 29% of your activities have high float, nearly a third of your schedule is effectively disconnected from the critical path. Those activities could slip by weeks without the schedule showing any impact.

Negative Float should be zero. Any negative float means the schedule is calculating that the project cannot finish on time based on current logic and constraints. If your report shows negative float, the schedule is already telling you you're late.
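Criteria 6 and 7 are straightforward threshold counts on total float. A sketch, assuming a hypothetical dict-per-activity shape for the incomplete activities:

```python
def float_checks(incomplete, high_float_threshold=44):
    """Criteria 6-7: % of activities with high float, plus any negative-float IDs."""
    n = len(incomplete)
    high = sum(1 for a in incomplete if a["total_float"] > high_float_threshold)
    negative = [a["id"] for a in incomplete if a["total_float"] < 0]
    return high / n * 100, negative
```

The high-float percentage is judged against the 5% threshold; any ID in the negative-float list is a failure on its own.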

Duration and Date Checks (Criteria 8–9)

High Duration flags activities longer than 44 working days (roughly two months). The threshold is 5%. Long activities are hard to manage, hard to measure progress against, and tend to mask delays.

Invalid Dates catches activities with forecast dates in the past or actual dates in the future — dates that are logically impossible based on the status date. Invalid dates indicate the schedule hasn't been properly updated, which undermines every other metric in the report.
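The check itself reduces to two comparisons against the status date. A sketch under the same assumed dict shape as above:

```python
from datetime import date

def invalid_dates(activities, status_date):
    """Criterion 9: forecast dates before the status date, or actuals after it."""
    bad = []
    for a in activities:
        if a.get("forecast_start") and a["forecast_start"] < status_date:
            bad.append(a["id"])  # forecast work scheduled in the past
        elif a.get("actual_finish") and a["actual_finish"] > status_date:
            bad.append(a["id"])  # claims completion that hasn't happened yet
    return bad
```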

Resource and Performance Checks (Criteria 10–14)

Resources checks whether activities have resource or cost assignments. This is the one criterion many construction schedules legitimately skip — resource loading isn't always required by contract. If your report shows this as "Not Scored" or "N/A," that's usually not a concern unless your contract specifically requires a resource-loaded schedule.

Missed Tasks is one of the most telling metrics. It measures the percentage of activities that finished later than their baseline finish date. The threshold is 5%. If your report shows 29% missed tasks, nearly a third of your completed activities finished late compared to the plan. A Baseline Execution Index (BEI) below 0.95 confirms systemic slippage — the project team is consistently falling behind the baseline schedule.

Critical Path Test verifies that the schedule has a single, continuous critical path from the status date to project completion. If this fails, the schedule's critical path is broken — meaning the software can't reliably calculate which activities are actually driving the finish date.

Critical Path Length Index (CPLI) measures the efficiency required to finish on time. A CPLI of 1.00 means the project needs to execute perfectly from here out, with no margin. A CPLI below 0.95 means on-time completion is unlikely without acceleration or scope reduction.

Baseline Execution Index (BEI) measures actual throughput against the baseline plan. A BEI of 1.00 means you're executing exactly as planned. A BEI of 0.23 — which I've seen on real projects — means for every 100 tasks that should have been completed by now, only 23 actually were. That's not a scheduling problem; that's a project in crisis.
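Both indices are one-line formulas: CPLI is (critical path length + total float) divided by critical path length, and BEI is tasks actually completed divided by tasks baselined to complete by the status date. In Python:

```python
def cpli(critical_path_length, total_float):
    """CPLI = (CPL + total float) / CPL; below 0.95 flags an unlikely finish."""
    return (critical_path_length + total_float) / critical_path_length

def bei(completed, baseline_planned):
    """BEI = tasks completed / tasks baselined to finish by the status date."""
    return completed / baseline_planned
```

For example, a project with 100 working days of critical path remaining and 10 days of negative float has a CPLI of 0.90; a project that completed 23 of 100 planned tasks has a BEI of 0.23.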

What Does the Overall Score Mean?

Most DCMA reports roll up the 14 criteria into an overall compliance score or health percentage. Here's how to interpret it:

80–100% (Green): The schedule is well-built and reliable. Failed criteria are minor and can be addressed in the next update cycle. You can use this schedule with confidence for forecasting, resource planning, and delay analysis.

50–79% (Yellow): The schedule has structural issues that affect its reliability. It may still be useful as a management reference, but the failed criteria mean the dates it produces should be treated with caution. If you're headed toward a dispute, this schedule will face challenges under cross-examination.

Below 50% (Red): The schedule has fundamental quality problems. The critical path calculation may be unreliable, float values may be meaningless, and completion forecasts should not be trusted. A schedule remediation effort is needed before this data can support management decisions or delay claims.
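If you want to reproduce the rollup yourself, one common convention is a straight percentage of criteria passed, with not-scored criteria (like Resources on a schedule that isn't resource-loaded) excluded from the denominator. Report formats vary, and some tools weight criteria differently, so treat this as a sketch:

```python
def health_score(results):
    """Roll 14 pass/fail results (None = not scored) into a percentage and band."""
    scored = {k: v for k, v in results.items() if v is not None}
    pct = 100 * sum(scored.values()) / len(scored)
    if pct >= 80:
        band = "Green"
    elif pct >= 50:
        band = "Yellow"
    else:
        band = "Red"
    return round(pct, 1), band
```

A schedule passing 11 of 13 scored criteria, for instance, lands at 84.6% — Green, but with two specific failures to fix before the next update.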

What To Do When Your Schedule Fails

A failed DCMA assessment doesn't mean your project is doomed. It means the tool you're using to manage your project has problems that need fixing.

If you failed 1–3 criteria: These are usually quick fixes — add missing logic, remove a few negative lags, clean up some constraints. Your scheduler should be able to address these in the next update cycle. Ask for a re-assessment after the fixes.

If you failed 4–6 criteria: This requires a focused remediation effort. Prioritize the failures that affect the critical path first (logic gaps, negative lags, high float). Don't try to fix everything at once — address the criteria in order of their impact on the driving path.

If you failed 7+ criteria: The schedule likely needs significant rework. Consider engaging a scheduling specialist to perform a complete resolution — identifying every affected activity, fixing the logic, re-establishing the critical path, and re-baselining if necessary. At this point, the schedule in its current state is a liability, not an asset.

Why This Matters for Delay Claims

If your project ends up in a dispute — and on construction projects, the probability is higher than any of us would like — the schedule is the foundation of any delay analysis. A schedule that fails DCMA criteria gives opposing counsel ammunition to challenge the entire analysis. Missing logic means the critical path can't be trusted. Negative lags mean activity overlaps are assumed, not proven. A low BEI means the baseline was never realistic in the first place.

Getting your schedule healthy isn't just about project management. It's about protecting your position if things go sideways.

Get a Professional Assessment

Critical Path Partners provides DCMA 14-Point Schedule Health Assessments for construction projects across Ontario and Canada.

Schedule Health Check — $499
Complete DCMA 14-Point assessment with interactive dashboard, executive report, and remediation roadmap. 48-hour delivery.

Complete Resolution — $1,999
Full assessment plus activity-level fixes, logic remediation, and re-assessment to confirm compliance.

Book Your Analysis →

Dana Fitkowski has 23 years of experience in construction scheduling and project controls, including 17 years on nuclear refurbishment projects at Bruce Power. He has performed forensic delay analyses for construction disputes and has presented schedule assessments to the Office of the Auditor General of Canada.