METHODOLOGY

PROTOCOL

How the Archive shows its work.

Most win-rate tables are verdicts. Protocol is the part that comes before the verdict — the question of whether the verdict can be trusted in the first place. Sample size, confidence intervals, the difference between a real shift and a coin flip in a small dataset. The Archive’s stance is that methodology should not be hidden behind the answer; it should be the answer’s foreground.

WHAT’S HERE NOW
Nothing yet — this section is a Phase 2 build. What’s coming is below.

COMING IN PHASE 2
CI Explorer
Visualize the confidence interval around an aggregator’s reported win rate. How wide is the uncertainty around that 54%? When is a gap between two factions statistically real?
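The tool doesn’t exist yet, but the statistic it would compute is small. A minimal sketch in Python (function name and example numbers are illustrative, not the Archive’s actual code), using the Wilson score interval, which behaves better than the naive normal interval at small sample sizes:

```python
import math

def wilson_interval(wins: int, games: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a win rate; z=1.96 gives ~95% coverage."""
    if games <= 0:
        raise ValueError("need at least one game")
    p = wins / games
    denom = 1 + z ** 2 / games
    center = (p + z ** 2 / (2 * games)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / games + z ** 2 / (4 * games ** 2))
    return center - half, center + half

# A "54% win rate" from 200 games spans roughly 47% to 61%:
low, high = wilson_interval(108, 200)
```

At 200 games, the interval comfortably contains 50% — the headline number alone can’t tell you the faction is above even.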
Power Analysis
Given a sample size, what differences are even detectable? Given a difference you want to detect, what sample size would you need? A reality check for “the meta has shifted” claims.
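A sketch of the calculation behind such a check, assuming the standard normal-approximation sample-size formula for comparing two proportions (names and numbers are illustrative):

```python
import math
from statistics import NormalDist

def games_per_side(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate games needed per group to detect p1 vs p2 with a two-sided z-test."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for the test
    z_b = NormalDist().inv_cdf(power)           # quantile for the desired power
    p_bar = (p1 + p2) / 2
    numerator = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Detecting a 50% vs 54% gap at 80% power takes roughly 2,450 games per faction:
n = games_per_side(0.50, 0.54)
```

The asymmetry is the reality check: a 10-point swing shows up in a few hundred games, but a 4-point swing needs thousands — far more than most weekend tournament datasets contain.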
Two-Proportion Test
Rigorously compare two faction win rates, or pre-update vs post-update for the same faction. Tells you whether the gap is signal or noise.
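The underlying test is the pooled two-proportion z-test. A minimal sketch (function name and example counts are illustrative):

```python
import math
from statistics import NormalDist

def two_proportion_test(wins_a: int, games_a: int,
                        wins_b: int, games_b: int) -> tuple[float, float]:
    """Two-sided pooled z-test for a gap between two win rates. Returns (z, p-value)."""
    p_a, p_b = wins_a / games_a, wins_b / games_b
    pooled = (wins_a + wins_b) / (games_a + games_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / games_a + 1 / games_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# A 54% vs 50% gap at 500 games each is still consistent with noise (p ≈ 0.21):
z, p = two_proportion_test(270, 500, 250, 500)
```

The same 4-point gap at 5,000 games each is decisive — which is exactly the signal-versus-noise distinction the tool is meant to surface.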
Methodology Library
Short writeups on how to read aggregator data correctly: what tournament datasets can and cannot tell you, when ELO-adjusted rates matter, where the common analysis pitfalls live.

Until these tools land, the section sits empty by design. The Archive would rather show no methodology than fake methodology.