TactiQ’s evaluation logic, principles, and product disciplines.
Most platforms publish every number they compute. TactiQ does not. Before any score or interpretation reaches a public surface, it passes through a quality gate. The check is deterministic — given the same evidence packet, it always produces the same outcome.
Every evidence packet lands in one of three outcomes (a minimal sketch follows this list):

- Published: the evidence is fresh, complete, and supported by sufficient confidence. The score is shown in full with its confidence label and, where available, the AI consensus interpretation.
- Provisional: the evidence passes the basic quality bar but carries a caveat. This happens when AI agents partially agreed, the evidence is thinner than ideal, or the score is based on fewer seasons than the full model requires. A provisional score is a real score; we are explicit about its limits.
- Withheld: the evidence fails a fundamental quality check: too old, too many missing fields, or too few matches. A withheld score does not appear on public surfaces. Users see "score pending" with an explanation, not an empty or misleading number.
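The sketch below models the gate as a pure function over an evidence packet, which is what makes it deterministic. Every name and threshold here (EvidencePacket, evaluate_gate, the 30-day window) is an illustrative assumption, not TactiQ's actual implementation:

```python
from dataclasses import dataclass
from enum import Enum


class GateOutcome(Enum):
    PUBLISHED = "published"      # fresh, complete, confident
    PROVISIONAL = "provisional"  # passes the bar, carries a caveat
    WITHHELD = "withheld"        # fails a fundamental check


@dataclass(frozen=True)
class EvidencePacket:
    age_days: int          # days since the last data update
    field_coverage: float  # fraction of expected statistical fields present
    match_count: int       # accumulated sample size
    season_count: int      # seasons available to the model


def evaluate_gate(e: EvidencePacket) -> GateOutcome:
    """Deterministic: the same packet always yields the same outcome."""
    # Fundamental failures withhold the score outright (thresholds assumed).
    if e.age_days > 30 or e.field_coverage < 0.5 or e.match_count < 5:
        return GateOutcome.WITHHELD
    # Thinner-than-ideal evidence publishes with an explicit caveat.
    if e.match_count < 20 or e.season_count < 3:
        return GateOutcome.PROVISIONAL
    return GateOutcome.PUBLISHED
```

Because the function takes no input other than the packet and consults no external state, rerunning it on the same evidence can never flip an outcome.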
Three deterministic checks produce these outcomes (see the sketch after this list):

- Freshness: has the data been updated recently enough to be meaningful? Evidence older than the acceptable freshness window fails this check regardless of its completeness.
- Completeness: is enough of the expected statistical picture present? A player who has never had a tackle recorded (position miscategorised, or data genuinely absent) may have a misleading sub-score; this check catches that.
- Confidence: does the accumulated evidence justify publication? Small sample sizes and missing historical seasons reduce confidence. Below the threshold, the score is withheld or shown as provisional.
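The same checks, sketched as functions that return a reason on failure rather than a bare boolean, which is what lets a rejected score explain itself later. All constants and names are placeholder assumptions, not TactiQ's published thresholds:

```python
from typing import Optional

# Placeholder thresholds; TactiQ's real windows are not published here.
FRESHNESS_WINDOW_DAYS = 30
MIN_FIELD_COVERAGE = 0.5
MIN_MATCHES = 5


def check_freshness(age_days: int) -> Optional[str]:
    """Fails regardless of completeness once the window is exceeded."""
    if age_days > FRESHNESS_WINDOW_DAYS:
        return f"evidence is {age_days} days old (window: {FRESHNESS_WINDOW_DAYS})"
    return None


def check_completeness(field_coverage: float) -> Optional[str]:
    """Catches cases like a player with no tackles ever recorded."""
    if field_coverage < MIN_FIELD_COVERAGE:
        return f"only {field_coverage:.0%} of expected fields present"
    return None


def check_confidence(match_count: int) -> Optional[str]:
    """Small samples reduce confidence below the publication threshold."""
    if match_count < MIN_MATCHES:
        return f"sample of {match_count} matches is below the minimum of {MIN_MATCHES}"
    return None


def run_checks(age_days: int, coverage: float, matches: int) -> list[str]:
    """Collect every failure reason; an empty list means the evidence passes."""
    results = [check_freshness(age_days),
               check_completeness(coverage),
               check_confidence(matches)]
    return [reason for reason in results if reason is not None]
```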
For scores that also have an AI interpretation, an additional check runs: did the agents reach sufficient agreement to support the interpretation? The score and its AI interpretation can have different publication outcomes.
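One way to model that independence, under assumed semantics (agents emit categorical labels, and the two-thirds agreement threshold is an illustrative guess, not TactiQ's real value):

```python
def interpretation_outcome(agent_labels: list[str],
                           min_agreement: float = 2 / 3) -> str:
    """Publish the AI interpretation only if enough agents agree on one label.

    Independent of the score's own gate outcome: a score can publish while
    its interpretation is withheld, and vice versa.
    """
    if not agent_labels:
        return "withheld"
    top_label = max(set(agent_labels), key=agent_labels.count)
    agreement = agent_labels.count(top_label) / len(agent_labels)
    return "published" if agreement >= min_agreement else "withheld"
```

At the assumed threshold, interpretation_outcome(["improving", "improving", "stagnant"]) publishes, while a three-way split withholds the interpretation even if the score itself passes every check.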
There is no manual override path, and that is intentional. If a score appears to be wrong, the correct response is to fix the underlying data or scoring logic, not to approve the output manually.
Manual overrides would introduce editorial bias into the system and undermine the trust that transparent methodology is designed to build.
A withheld score is diagnostic information. The gate is telling you where to look: either the data pipeline hasn't processed recent matches yet, the data is genuinely thin, or something in the scoring logic needs attention.
The gate does not lie. It surfaces problems rather than hiding them behind confident-looking outputs.
Match forecasts require that match data is present and key probability fields are populated — deterministic checks only, no AI consensus requirement. Forecasts are always displayed with explicit uncertainty framing ("projected" outcomes, not confident predictions).
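A sketch of that simpler path, with hypothetical field names standing in for the real forecast schema:

```python
# Hypothetical field names for a match forecast record.
REQUIRED_PROBABILITY_FIELDS = ("home_win_prob", "draw_prob", "away_win_prob")


def forecast_publishable(forecast: dict) -> bool:
    """Deterministic presence checks only: match data plus populated probabilities."""
    if not forecast.get("match_id"):
        return False
    return all(forecast.get(field) is not None
               for field in REQUIRED_PROBABILITY_FIELDS)
```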
Taken together, the gate guarantees three things:

- TactiQ surfaces are never populated with scores that say nothing.
- No score built on unacceptable data reaches public display.
- No AI interpretation appears when AI itself wasn't confident enough to assert it.