Validated Performance

Performance Metrics

SECC performance data is derived from structured testing against expert manual review baselines. We publish the methodology, not just the numbers.

50%

Reduction in Manual Review Time

Across multiple program review scenarios, SECC reduced the calendar time required for full document-set review by 50% compared to baseline expert manual review workflows.

Measured from document upload to final issue report delivery. Baseline: 2 senior systems engineers reviewing a 7-document package over 3 days. SECC: automated analysis completed in under 4 hours, followed by 1-hour expert validation.

85%+

Issue Detection Accuracy

SECC detected 85% or more of the issues identified by experienced systems engineers in blinded comparison testing, while maintaining a low false-positive rate.

Measured using precision and recall against a manually curated ground-truth issue set. An expert panel of three systems engineers, each with 10+ years of experience, independently reviewed the same document sets. SECC results were compared against the union of the experts' findings.
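To make the metric concrete, here is a minimal sketch of the precision/recall computation under the methodology above. The issue identifiers and the exact-match scheme are illustrative assumptions, not SECC's internal representation; in practice findings are matched by document location and issue type.

```python
# Minimal sketch of the accuracy measurement described above.
# Issue identifiers and the matching scheme are illustrative
# assumptions, not SECC's actual internal representation.

def precision_recall(detected: set[str], ground_truth: set[str]) -> tuple[float, float]:
    """Precision/recall of detected issues vs. a ground-truth set."""
    true_positives = detected & ground_truth
    precision = len(true_positives) / len(detected) if detected else 0.0
    recall = len(true_positives) / len(ground_truth) if ground_truth else 0.0
    return precision, recall

# Ground truth: the union of the three expert reviewers' findings.
expert_a = {"REQ-012 numeric conflict", "ICD-3 undefined term"}
expert_b = {"REQ-012 numeric conflict", "SRS-7 broken trace link"}
expert_c = {"ICD-3 undefined term", "SRS-7 broken trace link"}
ground_truth = expert_a | expert_b | expert_c

# Hypothetical tool output for the same document set.
secc_findings = {"REQ-012 numeric conflict", "SRS-7 broken trace link",
                 "ICD-3 undefined term", "CONOPS-2 scope mismatch"}

precision, recall = precision_recall(secc_findings, ground_truth)
print(f"precision={precision:.2f}  recall={recall:.2f}")
# Recall corresponds to the detection-rate figure; the false-positive
# rate tracks (1 - precision).
```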

89%

Cost Reduction per Review Cycle

When fully integrated into the program review workflow, SECC reduces the total cost of a document-set review cycle by 89% compared to the labor cost of an equivalent manual review.

Cost model inputs: labor rate for senior systems engineers, calendar time, and overhead. The SECC cost model includes licensing, setup, and expert validation time. The 89% figure reflects total cost of ownership at steady-state operation — not initial deployment.
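As a shape-of-the-model illustration only, the sketch below shows how a per-cycle figure of this kind is derived. Every input (labor rate, overhead multiplier, licensing) is a hypothetical placeholder, not a published SECC price; the actual calibrated inputs are program-specific.

```python
# Purely illustrative: shows the shape of the cost model, not its
# calibrated inputs. Every number below is a hypothetical placeholder.

HOURLY_RATE = 150.0   # hypothetical fully burdened senior-SE rate (USD/hr)
OVERHEAD = 1.30       # hypothetical overhead multiplier

# Manual baseline from the scenario above: 2 engineers x 3 days x 8 hours.
manual_cost = 2 * 3 * 8 * HOURLY_RATE * OVERHEAD

# SECC steady state: amortized licensing/setup plus the 1-hour
# expert validation step from the same baseline scenario.
license_per_cycle = 800.0  # hypothetical amortized licensing + setup
secc_cost = license_per_cycle + 1 * HOURLY_RATE * OVERHEAD

reduction = 1.0 - secc_cost / manual_cost
print(f"per-cycle cost reduction: {reduction:.0%}")
```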

Methodology

Multi-Attribute Utility Theory (MAUT)

The SECC System Health Score is not a heuristic — it is a mathematically grounded composite metric derived from MAUT, an established decision-science methodology used in systems engineering trade studies and program risk assessments.

Each of the four document quality dimensions is measured independently and converted to a 0–100 score. Those scores are combined using pre-defined weights to produce a single System Health Score — reproducible, auditable, and comparable across review cycles.

Inconsistency Score (weight: 30%)
Proportion of document pairs with detected contradictions, weighted by severity (numerical vs. terminology vs. scope conflicts).

Traceability Score (weight: 25%)
Percentage of requirements with complete trace chains from source through design to V&V, penalized for orphaned or broken links.

Semantic Clarity Score (weight: 25%)
Density of undefined terms, ambiguous language, and terminology inconsistencies normalized against document word count.

Compliance Score (weight: 20%)
Proportion of applicable INCOSE and regulatory standard clauses satisfied, with critical gaps weighted more heavily.

Weights are configurable at deployment time to reflect program-specific priorities. Default weights shown above are validated against INCOSE SE Handbook quality criteria.
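For illustration, here is a minimal sketch of the weighted MAUT aggregation, assuming each dimension pipeline already emits a score on a 0–100 scale where higher is healthier. The dictionary keys and example cycle scores are ours, not SECC output.

```python
# Minimal sketch of the MAUT aggregation described above. Dimension
# keys and example scores are illustrative; default weights match the
# published breakdown.

DEFAULT_WEIGHTS = {
    "inconsistency":    0.30,
    "traceability":     0.25,
    "semantic_clarity": 0.25,
    "compliance":       0.20,
}

def system_health_score(scores: dict[str, float],
                        weights: dict[str, float] = DEFAULT_WEIGHTS) -> float:
    """Weighted additive MAUT composite of 0-100 dimension scores."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[dim] * scores[dim] for dim in weights)

# One review cycle's dimension scores (illustrative values).
cycle = {
    "inconsistency":    72.0,
    "traceability":     88.0,
    "semantic_clarity": 81.0,
    "compliance":       90.0,
}
print(f"System Health Score: {system_health_score(cycle):.2f}")  # 81.85
```

Because the aggregation is a fixed weighted sum, the same document set and weights always yield the same score — which is what makes the metric reproducible, auditable, and comparable across review cycles.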

SECC vs. Manual Review

SECC is designed to augment expert systems engineers, not replace them. Understanding where automation adds value — and where human judgment is irreplaceable — is foundational to how we built it.

Aspect              | Manual Review              | SECC
--------------------|----------------------------|---------------------------
Review Time         | 2–5 business days          | 2–4 hours
Consistency         | Varies by reviewer         | Deterministic per version
Coverage            | Limited by human attention | 100% of document text
Cross-doc Analysis  | Challenging at scale       | Native capability
Audit Trail         | Manual notes / red lines   | Structured, exportable
Contextual Judgment | Domain expert insight      | Requires expert validation

Validate These Numbers With Your Own Data

Request a demo with your program document set and see SECC performance metrics against your own baseline.

Request a Demo