Performance Metrics
SECC performance data is derived from structured testing against expert manual review baselines. We publish the methodology, not just the numbers.
Reduction in Manual Review Time
Across multiple program review scenarios, SECC reduced the calendar time required for full document-set review by 50% compared to baseline expert manual review workflows.
Measured from document upload to final issue report delivery. Baseline: 2 senior systems engineers reviewing a 7-document package over 3 days. SECC: automated analysis completed in under 4 hours, followed by 1-hour expert validation.
Issue Detection Accuracy
SECC detected 85% or more of the issues identified by experienced systems engineers in blinded comparison testing, while maintaining a low false-positive rate.
Measured using precision and recall against a manually curated ground-truth issue set. A panel of three systems engineers, each with 10+ years of experience, independently reviewed the same document sets. SECC results were compared against the union of the expert findings.
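The evaluation described above reduces to standard set arithmetic. The sketch below shows how precision and recall fall out of comparing a tool's findings against the union of expert findings; the issue labels and the function name are illustrative assumptions, not SECC's actual evaluation pipeline.

```python
def precision_recall(tool_findings, expert_findings):
    """Score a tool's reported issues against a ground-truth issue set.

    Hypothetical sketch: assumes issues have already been matched and
    normalized to comparable identifiers.
    """
    tool = set(tool_findings)
    truth = set(expert_findings)
    true_positives = tool & truth
    precision = len(true_positives) / len(tool) if tool else 0.0
    recall = len(true_positives) / len(truth) if truth else 0.0
    return precision, recall

# Ground truth is the union of three independent expert reviews
# (illustrative issue labels).
expert_a = {"REQ-003 ambiguous", "ICD-12 missing interface"}
expert_b = {"REQ-003 ambiguous", "VER-07 untraceable"}
expert_c = {"ICD-12 missing interface"}
ground_truth = expert_a | expert_b | expert_c

tool_output = {"REQ-003 ambiguous", "ICD-12 missing interface",
               "REQ-099 spurious finding"}
p, r = precision_recall(tool_output, ground_truth)
```

Using the union of independent reviews as ground truth is deliberately strict: it penalizes the tool for every issue any one expert caught, so recall figures computed this way are conservative.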
Cost Reduction per Review Cycle
When fully integrated into the program review workflow, SECC reduces the total cost of a document set review cycle by 89% compared to equivalent manual review labor.
Cost model inputs: labor rate for senior systems engineers, calendar time, and overhead. SECC cost model includes licensing, setup, and expert validation time. The 89% figure reflects total cost of ownership at steady-state operation — not initial deployment.
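The steady-state comparison described above can be sketched as a simple two-sided cost model. All function names, rates, hours, and the per-cycle license figure below are illustrative assumptions, not SECC's published cost-model values.

```python
def manual_review_cost(labor_rate_hr, engineers, hours, overhead=0.30):
    """Fully loaded cost of one manual review cycle: labor plus an
    overhead multiplier (illustrative 30%)."""
    return labor_rate_hr * engineers * hours * (1 + overhead)

def secc_cycle_cost(license_share, validation_hours, labor_rate_hr,
                    overhead=0.30):
    """Steady-state SECC cycle cost: a per-cycle share of licensing
    (setup amortized into this figure) plus the expert-validation
    labor that stays in the loop."""
    return license_share + labor_rate_hr * validation_hours * (1 + overhead)

# Illustrative inputs: two senior engineers over three 8-hour days,
# versus a licensed cycle plus one hour of expert validation.
manual = manual_review_cost(labor_rate_hr=150, engineers=2, hours=24)
automated = secc_cycle_cost(license_share=600, validation_hours=1,
                            labor_rate_hr=150)
savings_fraction = 1 - automated / manual
```

The savings fraction is sensitive to the license-share and overhead assumptions, which is why the published 89% figure is stated at steady-state operation rather than at initial deployment.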
Multi-Attribute Utility Theory (MAUT)
The SECC System Health Score is not a heuristic — it is a mathematically grounded composite metric derived from MAUT, an established decision-science methodology used in systems engineering trade studies and program risk assessments.
Each of four document quality dimensions is measured independently and converted to a 0–100 score. Those scores are combined using pre-defined weights to produce a single System Health Score — reproducible, auditable, and comparable across review cycles.
Weights are configurable at deployment time to reflect program-specific priorities. The default weights are validated against INCOSE SE Handbook quality criteria.
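The additive MAUT composite described above is a weighted sum of normalized dimension scores. The sketch below shows the shape of that calculation; the dimension names and weights are illustrative assumptions, not SECC's shipped defaults.

```python
def system_health_score(scores, weights):
    """Additive MAUT composite: weighted sum of 0-100 dimension scores.

    `scores` maps each quality dimension to a 0-100 value; `weights`
    must cover the same dimensions and sum to 1, which keeps the
    composite on the same 0-100 scale and makes it reproducible and
    comparable across review cycles.
    """
    if set(scores) != set(weights):
        raise ValueError("scores and weights must cover the same dimensions")
    if abs(sum(weights.values()) - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1")
    return sum(scores[dim] * weights[dim] for dim in scores)

# Illustrative dimensions and program-specific weights.
dimension_scores = {"completeness": 80.0, "consistency": 90.0,
                    "traceability": 70.0, "clarity": 60.0}
default_weights = {"completeness": 0.3, "consistency": 0.3,
                   "traceability": 0.2, "clarity": 0.2}
health = system_health_score(dimension_scores, default_weights)
```

Because the weights are explicit inputs rather than baked-in constants, a program can re-weight (say, traceability-heavy for a verification-focused review) and every resulting score remains auditable back to its per-dimension inputs.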
SECC vs. Manual Review
SECC is designed to augment expert systems engineers, not replace them. Understanding where automation adds value — and where human judgment is irreplaceable — is foundational to how we built it.
Validate These Numbers With Your Own Data
Request a demo with your program document set and see SECC performance metrics against your own baseline.
Request a Demo