Trust Framework

Evaluation Methodology: Reproducible Verdicts, Not Marketing Claims

This page documents how SaaSVerdict converts test evidence into rankings, comparisons, and procurement guidance, using explicit scoring weights, blocker rules, and correction workflows.

Updated: 2026-04-04 | Scope: lawful QA, account safety, and reliability-led decision support.

Why This Exists

What Makes a Verdict Credible

Credibility rule 1: repeatable tests across more than one run.
Credibility rule 2: clear separation between signal evidence and editorial interpretation.
Credibility rule 3: visible no-buy conditions, not just conversion paths.

This methodology protects readers from price-only decisions and operators from avoidable reliability debt.

Scoring Model

Weighted Criteria Used Across Reviews

Profile integrity and session consistency: 38%
API lifecycle reliability and observability: 26%
Operational cost under sustained usage: 21%
Team governance and handoff resilience: 15%

Weights may be adjusted only through documented methodology updates, never by campaign priorities.
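
As an illustration, the weights above can be combined as a simple weighted average. The sketch below is not published tooling: the key names, the 0-100 score scale, and the function are assumptions made for the example.

# Minimal sketch of the weighted composite described above. The key
# names, 0-100 score scale, and function are illustrative assumptions.

WEIGHTS = {
    "profile_integrity": 0.38,   # profile integrity and session consistency
    "api_reliability": 0.26,     # API lifecycle reliability and observability
    "operational_cost": 0.21,    # operational cost under sustained usage
    "team_governance": 0.15,     # team governance and handoff resilience
}

def composite_score(scores: dict[str, float]) -> float:
    """Combine per-criterion scores (0-100) into one weighted composite."""
    if set(scores) != set(WEIGHTS):
        raise ValueError("each weighted criterion needs exactly one score")
    return sum(WEIGHTS[name] * scores[name] for name in WEIGHTS)

# Example: strong on integrity, weaker on sustained operational cost.
print(composite_score({
    "profile_integrity": 92,
    "api_reliability": 85,
    "operational_cost": 61,
    "team_governance": 78,
}))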

Reproducibility Protocol

Minimum Conditions Before Publication

Category | Requirement | Blocker condition
Run count | At least 3 repeated runs per critical workflow | Critical signals drift between repeated runs
Signal coherence | Main thread, worker, and network narratives stay consistent | Identity contradictions across runtime layers
Connection hygiene | WebRTC, DNS, and proxy checks stay stable | Persistent leak in production-relevant flow
Documentation quality | Assumptions, caveats, and no-buy criteria are visible | Missing caveats on known reliability limitations
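
These gates can be expressed as an automated pre-publication check. The following sketch is illustrative only: the dataclass, field names, and blocker wording mirror the table above but are not a published SaaSVerdict API.

# Sketch of the pre-publication gates listed above. The dataclass and
# field names are illustrative assumptions, not a published schema.
from dataclasses import dataclass

@dataclass
class RunEvidence:
    run_count: int                 # repeated runs per critical workflow
    signals_drift: bool            # critical signals drift between runs
    identity_contradiction: bool   # contradictions across runtime layers
    persistent_leak: bool          # leak in a production-relevant flow
    missing_caveats: bool          # known limitations lack visible caveats

def publication_blockers(e: RunEvidence) -> list[str]:
    """Return every blocker condition that prevents publication."""
    blockers = []
    if e.run_count < 3:
        blockers.append("fewer than 3 repeated runs per critical workflow")
    if e.signals_drift:
        blockers.append("critical signal drift between repeated runs")
    if e.identity_contradiction:
        blockers.append("identity contradiction across runtime layers")
    if e.persistent_leak:
        blockers.append("persistent leak in a production-relevant flow")
    if e.missing_caveats:
        blockers.append("missing caveats on known reliability limitations")
    return blockers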

Evidence Quality Levels

  • Level A: repeated clean runs with no critical contradictions.
  • Level B: minor non-critical drift with explicit caveats.
  • Level C: unstable signals requiring no-buy guidance.

Only Level A and qualified Level B evidence can support procurement recommendations.
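
A rough sketch of how those grades could be assigned programmatically follows. Only the three levels come from the definitions above; the input flags and qualification logic are assumptions made for the example.

# Sketch of the A/B/C grading above. Only the three levels come from the
# text; the input flags and qualification logic are assumptions.

def evidence_level(critical_contradictions: int,
                   noncritical_drift: bool,
                   caveats_documented: bool) -> str:
    """Map run observations to an evidence grade."""
    if critical_contradictions > 0:
        return "C"  # unstable signals, no-buy guidance required
    if noncritical_drift:
        # Level B qualifies only when the drift carries explicit caveats.
        return "B" if caveats_documented else "C"
    return "A"      # repeated clean runs, no critical contradictions

def supports_recommendation(level: str, caveats_documented: bool) -> bool:
    """Only Level A and qualified Level B evidence back a recommendation."""
    return level == "A" or (level == "B" and caveats_documented)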

Decision Output Template

stack_id: candidate-2026-q2
run_count: 3
signal_coherence: pass
connection_hygiene: pass
evidence_level: A
known_limits:
  - elevated variance on weak residential pools
no_buy_conditions:
  - persistent DNS mismatch
  - worker or runtime identity contradiction
recommendation_state: eligible_after_disclosure
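
Before publication, the template can be validated mechanically. The sketch below assumes the template is stored as a YAML file and parsed with PyYAML; the file path, checks, and error messages are illustrative, while the required keys mirror the template fields.

# Sketch of a mechanical check for the decision output template above.
# The file path, use of PyYAML, and error messages are assumptions; the
# required keys mirror the template fields.
import yaml  # pip install pyyaml

REQUIRED_KEYS = {
    "stack_id", "run_count", "signal_coherence", "connection_hygiene",
    "evidence_level", "known_limits", "no_buy_conditions",
    "recommendation_state",
}

def check_decision_output(path: str) -> list[str]:
    """Return problems that should block publication of the verdict."""
    with open(path) as f:
        doc = yaml.safe_load(f) or {}
    problems = [f"missing field: {k}" for k in sorted(REQUIRED_KEYS - doc.keys())]
    if doc.get("run_count", 0) < 3:
        problems.append("run_count is below the 3-run minimum")
    if doc.get("evidence_level") not in {"A", "B"}:
        problems.append("evidence level does not support a recommendation")
    if not doc.get("no_buy_conditions"):
        problems.append("no-buy conditions must be listed, even when eligible")
    return problems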

Corrections and Freshness

Update SLA and Revision Rules

Routine review: monthly methodology and content audit.
Urgent correction: material errors are corrected after verification, with revision notes published.
Major change: scoring logic updates trigger metadata refresh and cross-page alignment.
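
One way to keep these rules auditable is to encode them as a lookup from change type to required follow-up actions. The change-type names below are assumptions for the sketch, not an internal schema.

# Sketch only: the change types and follow-up actions restate the rules
# above; the names are assumptions, not an internal schema.

REVISION_ACTIONS = {
    "routine": ["monthly methodology and content audit"],
    "urgent_correction": ["verify the report", "correct the content",
                          "publish a revision note"],
    "major_change": ["update scoring logic", "refresh metadata",
                     "align cross-page content"],
}

def required_actions(change_type: str) -> list[str]:
    """Look up the follow-up actions a given change type requires."""
    if change_type not in REVISION_ACTIONS:
        raise ValueError(f"unknown change type: {change_type}")
    return REVISION_ACTIONS[change_type]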

Report potential issues at [email protected] with URL, evidence, and impact summary.

Version Traceability

How Methodology Changes Are Published

Rule 1: every scoring or blocker change requires a version bump.
Rule 2: every version bump receives an impact note in the changelog.
Rule 3: monthly benchmark reports declare the active methodology version.
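
A minimal sketch of how this traceability could be recorded follows; the changelog fields and example values are placeholders, not real methodology history.

# Sketch of version traceability. The entry fields and example values
# are placeholders, not real methodology history.

CHANGELOG = [
    # Newest first; every scoring or blocker change adds an entry (Rule 1)
    # and every entry carries an impact note (Rule 2).
    {"version": "example-2026.04", "impact_note": "placeholder impact note"},
]

def active_version() -> str:
    """The newest changelog entry defines the active methodology version."""
    return CHANGELOG[0]["version"]

def report_is_traceable(report_metadata: dict) -> bool:
    """Rule 3: a benchmark report must declare the active version."""
    return report_metadata.get("methodology_version") == active_version()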

Disclosure Controls

How We Prevent Commercial Bias

  • Affiliate and partnership context is disclosed in relevant pages.
  • Commercial relationships cannot alter blocker rules.
  • No-buy criteria remain visible even when promotions exist.
  • Recommendations require evidence grade and caveats.

Reproduce in One Hour

Fast Path for Your Team

Step 1: run detection tests and collect baseline artifacts.
Step 2: run connection leak checks with repeated sessions.
Step 3: apply ops SOP gates and document no-buy criteria.
Step 4: use compare pages and promo verification pages only after evidence stabilizes.
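
The four steps translate into a short script skeleton. Every helper below is a stub standing in for your own tooling; none of these functions are a SaaSVerdict API.

# Skeleton of the one-hour reproduction path. Every helper here is a
# stub standing in for your own tooling, not a SaaSVerdict API.

def run_detection_tests(stack_id: str) -> dict:
    """Step 1 stub: replace with your detection tests and baseline capture."""
    return {"stack_id": stack_id, "signal_coherence": "pass"}

def run_leak_checks(stack_id: str) -> dict:
    """Step 2 stub: replace with WebRTC, DNS, and proxy checks per session."""
    return {"stack_id": stack_id, "connection_hygiene": "pass"}

def apply_sop_gates(artifacts: list[dict]) -> list[str]:
    """Step 3 stub: replace with your ops SOP gates and no-buy criteria."""
    return [key for a in artifacts for key, value in a.items() if value == "fail"]

def reproduce_verdict(stack_id: str, runs: int = 3) -> dict:
    """Run the fast path and report whether evidence has stabilized."""
    artifacts = []
    for _ in range(runs):
        artifacts.append(run_detection_tests(stack_id))
        artifacts.append(run_leak_checks(stack_id))
    blockers = apply_sop_gates(artifacts)
    return {
        "stack_id": stack_id,
        "run_count": runs,
        "blockers": blockers,
        # Step 4: consult compare and promo pages only once evidence is stable.
        "ready_for_comparison": not blockers,
    }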

FAQ

Methodology Questions

How many repeated runs are required before publishing a verdict?

At least three repeated clean runs per critical flow are required, and any critical contradiction blocks publication until resolved.

Do affiliate relationships influence scoring weights?

No. Commercial relationships are disclosed but do not alter scoring weights or blocker criteria.

How are corrections handled after publication?

Verified material errors are corrected with revision notes and reflected in updated metadata and methodology records.

How often is this methodology reviewed?

The methodology is reviewed monthly and whenever browser, proxy, or automation ecosystem changes can affect reliability signals.