TactiQ
Football Intelligence
Founding Beta

TactiQ is built around Player & Club Data, Match Intelligence, Predictive Modeling, and Research & Visualization — understand the system, not the surface.

Core: Club football as the permanent base
Launch: World Cup as the launch amplifier
Transparency: Public roadmap and visible system progress
The standard: Methodology →

Every score is deterministic, evidence-gated, and confidence-labelled. Football intelligence should be explainable — not a black box with a number on the front. The methodology is part of the product, not a legal page.

Deterministic scoring · Multi-agent consensus gated · Publication gate active
© 2026 TactiQ. All rights reserved.
Methodology


Interpretation system

Three agents. One conclusion.

A single AI reading a dataset can produce confident-sounding analysis that reflects nothing but the biases of that model call. TactiQ addresses this the way science does: multiple independent readers, each with a specific mandate, who must reach agreement before their interpretation is published.

Why AI interpretation?

A number tells you where. Not why.

What does the score reflect?

Which specific dimensions are driving the number up or down, and how confident are we in each?

Is the score about to change?

Is this player improving, declining, or showing volatility that the headline figure obscures?

What does the context mean?

Is a score of 74 in La Liga equivalent to 74 in the Championship? The AI addresses this directly.

Agents evaluate the evidence, not the reputation

All three agents receive blinded inputs — no player names, club names, or nationalities. This is a deliberate design choice: name recognition causes even sophisticated models to anchor on reputation rather than data. You will never see a TactiQ AI summary that simply confirms what media narratives already say about a player.
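The blinding step can be illustrated with a minimal sketch. The field names below (`player_name`, `club_name`, `nationality`) are assumptions for illustration — TactiQ's actual evidence-packet schema is not public.

```python
from copy import deepcopy

# Hypothetical identifying fields stripped before any agent sees the packet.
IDENTIFYING_FIELDS = {"player_name", "club_name", "nationality"}

def blind_evidence(packet: dict) -> dict:
    """Return a copy of an evidence packet with identifying fields removed,
    so agents must reason from the metrics alone rather than reputation."""
    blinded = deepcopy(packet)
    for field in IDENTIFYING_FIELDS:
        blinded.pop(field, None)
    return blinded
```

The original packet is left untouched; only the blinded copy is passed downstream.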

The three agents

Each agent has a specific mandate.

Agents are called in parallel. Each produces an independent evaluation and confidence level before any consensus is checked.
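The parallel, independent structure described above might look like this sketch, where each agent is any callable that returns a verdict and a confidence level. The names and return shape are illustrative assumptions, not TactiQ's actual interfaces.

```python
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

@dataclass
class Evaluation:
    agent: str
    verdict: str       # e.g. "consistent", "overrated", "volatile"
    confidence: float  # declared before any consensus check

def run_agents(packet: dict, agents: dict) -> list:
    """Call every agent in parallel on the same blinded packet.
    No agent sees another's output before consensus is computed."""
    with ThreadPoolExecutor(max_workers=len(agents)) as pool:
        futures = {name: pool.submit(fn, packet) for name, fn in agents.items()}
        return [Evaluation(name, *f.result()) for name, f in futures.items()]
```

Because each future is submitted before any result is read, the evaluations are genuinely independent.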

Performance Agent

Evaluates the underlying metrics

→ Is this player's output consistent with their score?
→ Are there defensive contributions the headline figure misses?
→ Are there red flags suggesting the score may be misleading?
Risk & Form Agent

Evaluates trajectory and volatility

→ Is the player currently in better or worse form than their TactiQ Score suggests?
→ Do their numbers vary dramatically match to match?
→ Are there sample-size risks that make the score less reliable?
Context Agent

Evaluates the environment

→ Does the league and team context make this score more or less impressive?
→ Are there environmental factors inflating or deflating the metrics?
→ Is the player benefiting from or fighting against their team situation?
Performance Agent + Risk & Form Agent + Context Agent → Consensus

All three agents are called in parallel. Their outputs are compared. Publication outcome depends on agreement level.

Agreement outcomes

Four possible publication states.

The consensus calculation produces one of four outcomes. The outcome determines what reaches users.

Strong agreement → Full publication

AI interpretation published at full confidence alongside the score.

Partial agreement → Provisional

Interpretation published with a provisional flag — informative but marked as partial.

Significant disagreement → Interpretation withheld

Data may not yet support confident interpretation. Score is shown; AI commentary is not.

Insufficient data → No interpretation

Score shown without AI commentary until enough data accumulates to support evaluation.
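The four-way gate above can be sketched as a small function. The thresholds and state names here are illustrative assumptions — TactiQ's actual consensus calculation is not published.

```python
def publication_state(evaluations: list) -> str:
    """Map independent agent evaluations to one of four publication states.
    Each evaluation is a (verdict, confidence) pair; a missing confidence
    stands in for insufficient data."""
    if not evaluations or any(conf is None for _, conf in evaluations):
        return "no_interpretation"        # insufficient data
    verdicts = [v for v, _ in evaluations]
    agreeing = max(verdicts.count(v) for v in set(verdicts))
    if agreeing == len(verdicts):
        return "full_publication"         # strong agreement
    if agreeing >= len(verdicts) - 1:
        return "provisional"              # partial agreement
    return "withheld"                     # significant disagreement
```

With three agents this yields full publication on 3/3 agreement, a provisional flag on 2/3, and a withheld interpretation when all three diverge.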

What you get

What a published AI interpretation contains.

Score summary

What the score reflects and what is driving it in plain language

Identified strengths

Which aspects of the player's evidence stand out positively

Watchpoints

Aspects of the data that deserve attention or suggest caution

Final label

Elite / strong / solid / mixed / weak — a quick orientation

Every interpretation is clearly dated and associated with the evidence packet version it was generated from. When the underlying data changes significantly, the interpretation is regenerated.
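The published payload and its versioning discipline might be modelled like this. All field names are assumptions chosen to mirror the prose above, not TactiQ's actual API.

```python
from dataclasses import dataclass

LABELS = ("elite", "strong", "solid", "mixed", "weak")

@dataclass(frozen=True)
class Interpretation:
    score_summary: str      # what the score reflects, in plain language
    strengths: list         # aspects of the evidence that stand out positively
    watchpoints: list       # aspects that deserve attention or caution
    label: str              # quick orientation, one of LABELS
    generated_at: str       # ISO date the interpretation was produced
    packet_version: str     # evidence packet it was generated from

def is_stale(interp: Interpretation, current_packet_version: str) -> bool:
    """An interpretation is tied to the evidence packet it came from;
    regenerate when the underlying data has moved on."""
    return interp.packet_version != current_packet_version
```

Freezing the dataclass reflects the idea that an interpretation is never edited in place — it is regenerated against the newer packet instead.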

Important limits

What AI does not do.

✕
AI does not determine the score

The TactiQ Score is set by the deterministic scoring engine. The AI layer cannot increase or decrease a player's number.

✕
AI does not override the publication gate

A score that fails the data quality checks is withheld regardless of what an AI agent might conclude.

✕
AI does not claim certainty

Language like "this suggests", "the data indicates", and "worth monitoring" is intentional. It reflects appropriate epistemic humility about what statistics can and cannot show.

✕
AI is not the last word

TactiQ's AI layer surfaces patterns and reasons about evidence. Football knowledge, context, and human judgement remain essential. We build tools that inform judgement, not replace it.

Also in Methodology
Core metric
The TQ Score

How we evaluate player and club quality.

Data quality
Evidence Packets

The data quality layer that sits between raw data and every score.

Quality control
Publication Gate

When TactiQ shows a score — and when it doesn't.

Context adjustment
League Difficulty Index

How league strength adjusts scores to enable fair comparisons.