TactiQ
Football Intelligence

TactiQ is built around Player & Club Data, Match Intelligence, Predictive Modeling, and Research & Visualization — understand the system, not the surface.

Core: Club football as the permanent base
Launch: World Cup as the launch amplifier
Transparency: Public roadmap and visible system progress
The standard: Methodology →

Every score is deterministic, evidence-gated, and confidence-labelled. Football intelligence should be explainable — not a black box with a number on the front. The methodology is part of the product, not a legal page.

Deterministic scoring · Multi-agent consensus gated · Publication gate active
Methodology


Data quality

Every score starts with a question.

Before any score is computed, before any AI agent is called, and before any result reaches a public surface — an evidence packet is constructed and validated. It answers three questions: Is this data fresh? Complete? Sufficient to publish?

The problem

Football data is messy.

Inconsistent appearance counts

A player can show 7 appearances in one table and 3 in another depending on cup competition handling.

Incomplete statistical coverage

A metric can be present for one season and absent for the next due to data source changes.

Confident-looking bad scores

If a system reads directly from raw data, inconsistencies produce confident-looking scores derived from incomplete evidence.

TactiQ solves this with evidence packets — a structured, self-describing data layer that sits between raw data and every system that acts on it.
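
To make that boundary concrete, here is a minimal sketch in TypeScript, assuming a hypothetical packet shape. Every name and field below is an illustration, not TactiQ's actual schema or API; the point is only that downstream systems consume a self-describing packet rather than reading raw tables.

```ts
// Minimal sketch, assuming a hypothetical packet shape; not TactiQ's schema or API.
interface EvidencePacket {
  subjectId: string;                        // player or club identifier
  metrics: Record<string, number | null>;   // normalised metrics, nulls preserved
  fresh: boolean;                           // freshness assessment
  completeness: number;                     // 0..1 share of expected fields present
  publishable: boolean;                     // publication eligibility
}

// A downstream consumer acts on the packet's signals, never on its own raw reads.
function scoreIfEligible(packet: EvidencePacket): number | null {
  if (!packet.publishable) return null;     // respect the packet's signal
  const values = Object.values(packet.metrics).filter((v): v is number => v !== null);
  if (values.length === 0) return null;
  return values.reduce((sum, v) => sum + v, 0) / values.length; // placeholder aggregate, not the real scoring model
}
```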

The three questions

What every evidence packet must answer.

01

Is this data fresh?

Has the evidence been updated recently enough to be trustworthy? Stale data produces stale scores. An evidence packet that hasn't been refreshed within the acceptable window falls below the freshness threshold and is flagged accordingly.

02

Is this data complete enough?

Are enough of the expected statistical fields present to derive a meaningful result? A score built on five metrics is less reliable than one built on fifteen. The completeness score is computed across all expected dimensions and applied as a confidence penalty — not hidden.

03

Is the evidence sufficient to publish?

Even fresh, complete data may not reach the publication threshold if the player's sample size is too small. Three competitive appearances genuinely say less about a player than thirty do. The system treats both honestly.

If the answer to any of these questions is "no", the packet signals that downstream. Scoring engines, AI agents, and the publication gate all act on the packet signal — not on their own raw reads.
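
As a sketch of how the three answers might combine into a packet signal, assuming invented thresholds throughout (none of these numbers are TactiQ's published values):

```ts
// Hypothetical gate values, invented for illustration only.
const FRESHNESS_WINDOW_DAYS = 14;  // assumed acceptable staleness
const MIN_COMPLETENESS = 0.6;      // assumed completeness floor
const MIN_APPEARANCES = 10;        // assumed sample-size floor

interface PacketSignals {
  fresh: boolean;
  completeness: number;       // carried downstream as a confidence penalty, not hidden
  sufficientSample: boolean;
  publishable: boolean;
  reason?: string;            // recorded when any answer is "no"
}

function assessPacket(
  lastUpdated: Date,
  fieldsPresent: number,
  fieldsExpected: number,
  appearances: number,
  now: Date = new Date(),
): PacketSignals {
  const ageDays = (now.getTime() - lastUpdated.getTime()) / 86_400_000;
  const fresh = ageDays <= FRESHNESS_WINDOW_DAYS;            // question 01
  const completeness = fieldsPresent / fieldsExpected;       // question 02
  const sufficientSample = appearances >= MIN_APPEARANCES;   // question 03

  const publishable = fresh && completeness >= MIN_COMPLETENESS && sufficientSample;
  const reason = !fresh ? "stale data"
    : completeness < MIN_COMPLETENESS ? "incomplete coverage"
    : !sufficientSample ? "insufficient sample size"
    : undefined;

  return { fresh, completeness, sufficientSample, publishable, reason };
}
```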

The chain of trust

One auditable path from raw data to public output.

No stage can skip a previous stage. AI agents cannot read raw tables. The publication gate cannot be bypassed.

01
External data

SportMonks: match results, player stats, fixture data

02
Canonical storage

Structured and normalised into TactiQ's schema

03
Evidence packet

Freshness, completeness, confidence assessed

04
Scoring engine

TactiQ Score computed deterministically from the packet

05
AI consensus

Three agents read the packet — never raw tables

06
Publication gate

Reads packet eligibility before approving display

07
Public display

Score, confidence label, and interpretation shown
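
One way to read the chain: each stage's input is the previous stage's output, so skipping a stage is a type error. A compressed sketch follows, with every name and shape assumed for illustration rather than taken from TactiQ's codebase:

```ts
// Illustrative only: stage names follow the chain above, shapes are invented.
type Raw = { rows: unknown[] };                                        // 01 external data
type Canonical = { id: string; stats: Record<string, number | null> }; // 02 canonical storage
type Packet = Canonical & { fresh: boolean; publishable: boolean };    // 03 evidence packet
type Scored = Packet & { score: number };                              // 04 scoring engine
type Interpreted = Scored & { consensus: boolean };                    // 05 AI consensus

const normalise = (raw: Raw): Canonical => ({ id: "stub", stats: {} });        // stub body
const buildPacket = (c: Canonical): Packet => ({ ...c, fresh: true, publishable: true });
const score = (p: Packet): Scored => ({ ...p, score: 0 });                     // packet in, never Raw
const interpret = (s: Scored): Interpreted => ({ ...s, consensus: true });     // agents see the packet only
const gate = (i: Interpreted) => (i.publishable && i.consensus ? i : null);    // 06 gate before 07 display
```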

Player evidence packet

What a player packet contains.

1
Identity

Player, club, league, detected role

2
Normalised metrics

TactiQ Score, form score, sub-scores

3
Context summary

Seasons included, League Difficulty Index (LDI) applied, role weights used

4
Freshness assessment

Last updated timestamp vs. acceptable window

5
Completeness assessment

Which fields are present vs. null — scored

6
Confidence score

Blended: statistical confidence + completeness

7
Publication eligibility

Binary decision: yes/no, with reason recorded
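
Read as a type, the seven sections above might look like the following. The field names are assumptions for illustration, not TactiQ's schema:

```ts
// Hypothetical shape mirroring the seven sections above; not TactiQ's schema.
interface PlayerEvidencePacket {
  identity: { player: string; club: string; league: string; role: string };                 // 1
  metrics: { tactiqScore: number; formScore: number; subScores: Record<string, number> };   // 2
  context: { seasonsIncluded: string[]; ldiApplied: boolean; roleWeights: Record<string, number> }; // 3
  freshness: { lastUpdated: string; acceptableWindowDays: number; withinWindow: boolean };  // 4
  completeness: { presentFields: string[]; nullFields: string[]; score: number };           // 5
  confidence: number;                                   // 6: blend of statistical confidence and completeness
  publication: { eligible: boolean; reason?: string };  // 7: binary decision, reason recorded
}
```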

Club evidence packets

Club packets follow the same structure but draw from match-level aggregates across the eight sub-score dimensions. Clubs typically have higher completeness scores than individual players — they play every match regardless of squad rotation.

Why it matters

Honest uncertainty over false precision.

Evidence packets are the reason TactiQ scores carry explicit confidence labels rather than presenting all scores as equally reliable.

A player with 30 appearances across three seasons and a full statistical profile has a richly evidenced packet. A player who joined a league mid-season and has 8 appearances has a thin one. The system treats both honestly — and tells you which is which.

High confidence label

Underlying evidence packet passed all quality checks with room to spare

Low confidence or provisional

Data is real but limited — we surface that rather than hiding it

Score withheld

Evidence packet below the publication threshold — not shown publicly
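
A plausible mapping from packet state to these three outcomes, with an invented cutoff (the page does not publish TactiQ's actual thresholds):

```ts
// Hypothetical label mapping; the 0.75 cutoff is an assumption, not a published value.
type ConfidenceLabel = "high confidence" | "provisional" | "withheld";

function labelFor(publishable: boolean, confidence: number): ConfidenceLabel {
  if (!publishable) return "withheld";   // below the publication threshold: not shown publicly
  return confidence >= 0.75 ? "high confidence" : "provisional";
}
```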

Also in Methodology
Core metric
The TQ Score

How we evaluate player and club quality.

Interpretation system
AI Consensus Layer

How three agents must agree before an interpretation is published.

Quality control
Publication Gate

When TactiQ shows a score — and when it doesn't.

Context adjustment
League Difficulty Index

How league strength adjusts scores to enable fair comparisons.