TactiQ
Football Intelligence
Founding Beta

TactiQ is built around Player & Club Data, Match Intelligence, Predictive Modeling, and Research & Visualization — understand the system, not the surface.

Core: Club football as the permanent base
Launch: World Cup as the launch amplifier
Transparency: Public roadmap and visible system progress
The standard: Methodology →

Every score is deterministic, evidence-gated, and confidence-labelled. Football intelligence should be explainable — not a black box with a number on the front. The methodology is part of the product, not a legal page.

Deterministic scoring · Multi-agent consensus gated · Publication gate active
© 2026 TactiQ. All rights reserved.
Methodology


Quality control

Not every score earns publication.

Most platforms publish every number they compute. TactiQ does not. Before any score or interpretation reaches a public surface, it passes through a quality gate. The check is deterministic — given the same evidence packet, it always produces the same outcome.

Why a gate exists

Reliable, current, and agreed upon — or not shown.

Three matches of data → not a reliable score.
Data three weeks old → not a current score.
AI agents cannot agree → the interpretation may not be trustworthy.
Three outcomes

Every score gets one of these states.

The gate is deterministic. Given the same evidence packet, it always produces the same outcome.

✓ Approved

The evidence is fresh, complete, and supported by sufficient confidence. Shown in full with its confidence label and — where available — the AI consensus interpretation.

~ Provisional

The evidence passes the basic quality bar but carries a caveat. This happens when AI agents partially agreed, the evidence is thinner than ideal, or the score is based on fewer seasons than the full model requires. A provisional score is a real score — we are explicit about its limits.

– Withheld

The evidence fails a fundamental quality check. Too old, too many missing fields, or too few matches. A withheld score does not appear on public surfaces. Users see "score pending" with an explanation — not an empty or misleading number.

The gate checks

Three properties are evaluated.

01 · Freshness

Has the data been updated recently enough to be meaningful? Evidence older than the acceptable freshness window fails this check regardless of its completeness.

02 · Completeness

Is enough of the expected statistical picture present? A player who has never had a tackle recorded (position miscategorised, or data genuinely absent) may have a misleading sub-score — this check catches that.

03 · Confidence

Does the accumulated evidence justify publication? Small sample sizes and missing historical seasons reduce confidence. Below the threshold, the score is withheld or shown as provisional.

For scores that also have an AI interpretation, an additional check runs: did the agents reach sufficient agreement to support the interpretation? The score and its AI interpretation can have different publication outcomes.
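The three checks and three outcomes can be pictured as a single pure function. The sketch below is an illustrative reconstruction, not TactiQ's actual implementation: the field names (`age_days`, `fields_present`, `matches`, `confidence`, `agent_agreement`) and every threshold are assumptions, chosen only to mirror the examples given above (three matches, three weeks).

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class Outcome(Enum):
    APPROVED = "approved"
    PROVISIONAL = "provisional"
    WITHHELD = "withheld"


@dataclass(frozen=True)
class EvidencePacket:
    age_days: int          # freshness: days since the data was last updated
    fields_present: float  # completeness: fraction of expected fields populated
    matches: int           # sample size feeding the confidence check
    confidence: float      # accumulated confidence, 0..1
    agent_agreement: Optional[float] = None  # AI consensus, if an interpretation exists


def gate(packet: EvidencePacket) -> Outcome:
    """Deterministic: the same evidence packet always yields the same outcome."""
    # Fundamental failures -> Withheld: too old, too incomplete, too few matches.
    if packet.age_days > 21 or packet.fields_present < 0.5 or packet.matches <= 3:
        return Outcome.WITHHELD
    # Caveated evidence -> Provisional: below full confidence, or only
    # partial agreement among the AI agents.
    if packet.confidence < 0.8 or (
        packet.agent_agreement is not None and packet.agent_agreement < 1.0
    ):
        return Outcome.PROVISIONAL
    return Outcome.APPROVED
```

Because the function is pure (no clock reads, no randomness, no external state), re-running the gate on a stored packet reproduces the published outcome exactly, which is what makes the gate auditable.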

No exceptions

The gate has no bypass mechanism.

This is intentional. If a score appears to be wrong, the correct response is to fix the underlying data or scoring logic — not to approve the output manually.

Manual overrides would introduce editorial bias into the system and undermine the trust that transparent methodology is designed to build.

If the gate withholds a score you expect to see

That is diagnostic information. The gate is telling you where to look. Either the data pipeline hasn't processed recent matches yet, the data is genuinely thin, or something in the scoring logic needs attention.

The gate does not lie. It surfaces problems rather than hiding them behind confident-looking outputs.

Match forecasts use a simpler gate

Match forecasts require that match data is present and key probability fields are populated — deterministic checks only, no AI consensus requirement. Forecasts are always displayed with explicit uncertainty framing ("projected" outcomes, not confident predictions).
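A presence-only gate of this kind reduces to a few field checks. This is a minimal sketch under assumed field names (`home_win`, `draw`, `away_win`), not TactiQ's actual schema:

```python
# Hypothetical required probability fields for a match forecast.
REQUIRED_FIELDS = ("home_win", "draw", "away_win")


def forecast_publishable(match_data: dict) -> bool:
    """Deterministic check: match data exists and every key probability
    field is populated. No AI consensus requirement is involved."""
    if not match_data:
        return False
    return all(match_data.get(field) is not None for field in REQUIRED_FIELDS)
```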

The practical result

What appears on the platform has earned its place.

Never empty

TactiQ surfaces are never populated with scores that say nothing

Never unreliable

No score built on unacceptable data reaches public display

Never unconfident

No AI interpretation appears when the agents themselves weren't confident enough to assert it

Also in Methodology

- Core metric · The TQ Score: How we evaluate player and club quality.
- Interpretation system · AI Consensus Layer: How three agents must agree before an interpretation is published.
- Data quality · Evidence Packets: The data quality layer that sits between raw data and every score.
- Context adjustment · League Difficulty Index: How league strength adjusts scores to enable fair comparisons.