TactiQ’s evaluation logic, principles, and product disciplines.
A single AI reading a dataset can produce confident-sounding analysis that reflects nothing but the biases of that model call. TactiQ addresses this the way science does: multiple independent readers, each with a specific mandate, who must reach agreement before their interpretation is published.
Which specific dimensions are driving the number up or down, and how confident are we in each?
Is this player improving, declining, or showing volatility that the headline figure obscures?
Is a score of 74 in La Liga equivalent to 74 in the Championship? The context agent addresses this question directly.
All three agents receive blinded inputs — no player names, club names, or nationalities. This is a deliberate design choice: name recognition causes even sophisticated models to anchor on reputation rather than data. You will never see a TactiQ AI summary that simply confirms what media narratives already say about a player.
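The blinding step can be sketched as a simple transform applied to the evidence packet before any agent sees it. This is an illustrative sketch: the field names (`player_name`, `club`, `nationality`) and the packet structure are assumptions, not TactiQ's actual schema.

```python
from copy import deepcopy

# Assumed identifying fields; the real field list is TactiQ-internal.
IDENTIFYING_FIELDS = {"player_name", "club", "nationality"}

def blind_evidence(packet: dict) -> dict:
    """Return a copy of an evidence packet with identifying fields removed,
    so agents reason from the statistics alone rather than reputation."""
    blinded = deepcopy(packet)
    for field in IDENTIFYING_FIELDS:
        blinded.pop(field, None)  # ignore fields that are absent
    return blinded

packet = {"player_name": "PLAYER_X", "club": "CLUB_Y", "nationality": "N",
          "minutes": 2430, "xg_per_90": 0.41}
print(blind_evidence(packet))  # → {'minutes': 2430, 'xg_per_90': 0.41}
```

The original packet is left untouched, so the deterministic scoring engine can still use the full record while the AI layer never receives it.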
Agents are called in parallel. Each produces an independent evaluation and confidence level before any consensus is checked.
Evaluates the underlying metrics
Evaluates trajectory and volatility
Evaluates the environment
All three agents are called in parallel. Their outputs are compared. Publication outcome depends on agreement level.
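The parallel, independent calls described above can be sketched as follows. The agent functions here are hypothetical stand-ins (the real agents are model calls), and the `(verdict, confidence)` return shape is an assumption; the point is that each agent receives the same blinded evidence and none sees another's output before consensus is checked.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical agents; each returns (verdict, confidence in [0, 1]).
def metrics_agent(evidence):    return ("solid", 0.8)   # underlying metrics
def trajectory_agent(evidence): return ("solid", 0.7)   # trend and volatility
def context_agent(evidence):    return ("strong", 0.9)  # environment

AGENTS = [metrics_agent, trajectory_agent, context_agent]

def run_agents(evidence: dict) -> list[tuple[str, float]]:
    """Call all three agents concurrently on the same blinded evidence;
    each evaluation is produced in isolation."""
    with ThreadPoolExecutor(max_workers=len(AGENTS)) as pool:
        futures = [pool.submit(agent, evidence) for agent in AGENTS]
        return [f.result() for f in futures]
```

Isolation is the design point: because no agent can condition on another's verdict, agreement between them is genuine evidence rather than an echo.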
The consensus calculation produces one of four outcomes. The outcome determines what reaches users.
AI interpretation published at full confidence alongside the score.
Interpretation published with a provisional flag — informative but marked as partial.
Data may not yet support confident interpretation. Score is shown; AI commentary is not.
Score shown without AI commentary until enough data accumulates to support evaluation.
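The four outcomes above can be sketched as a mapping from agreement level to publication decision. The agreement rule shown (unanimous, majority, split) and the data-sufficiency check are illustrative assumptions, not TactiQ's actual thresholds.

```python
from enum import Enum

class Outcome(Enum):
    FULL_CONSENSUS = "publish interpretation at full confidence"
    PARTIAL_CONSENSUS = "publish with provisional flag"
    NO_CONSENSUS = "show score, withhold AI commentary"
    INSUFFICIENT_DATA = "show score until enough data accumulates"

def consensus_outcome(verdicts: list[str], min_data_met: bool = True) -> Outcome:
    """Map three independent agent verdicts (e.g. 'strong', 'solid')
    to a publication outcome based on their agreement level."""
    if not min_data_met:
        return Outcome.INSUFFICIENT_DATA
    distinct = len(set(verdicts))
    if distinct == 1:
        return Outcome.FULL_CONSENSUS     # all three agents agree
    if distinct == 2:
        return Outcome.PARTIAL_CONSENSUS  # a majority agrees
    return Outcome.NO_CONSENSUS           # all three disagree
```

Note that the score itself is published in every branch; only the AI interpretation is gated by agreement.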
What the score reflects and what is driving it, in plain language
Which aspects of the player's evidence stand out positively
Aspects of the data that deserve attention or suggest caution
Elite / strong / solid / mixed / weak — a quick orientation
Every interpretation is clearly dated and associated with the evidence packet version it was generated from. When the underlying data changes significantly, the interpretation is regenerated.
The TactiQ Score is set by the deterministic scoring engine. The AI layer cannot increase or decrease a player's number.
A score that fails the data quality checks is withheld regardless of what an AI agent might conclude.
Language like "this suggests", "the data indicates", and "worth monitoring" is intentional. It reflects appropriate epistemic humility about what statistics can and cannot show.
TactiQ's AI layer surfaces patterns and reasons about evidence. Football knowledge, context, and human judgement remain essential. We build tools that inform judgement, not replace it.
How we evaluate player and club quality.
The data quality layer that sits between raw data and every score.
When TactiQ shows a score — and when it doesn't.
How league strength adjusts scores to enable fair comparisons.