Presenting TactiQ’s evaluation logic, principles, and product disciplines.
Before any score is computed, before any AI agent is called, and before any result reaches a public surface — an evidence packet is constructed and validated. It answers three questions: Is this data fresh? Complete? Sufficient to publish?
A player can show 7 appearances in one table and 3 in another depending on cup competition handling.
A metric can be present for one season and absent for the next due to data source changes.
If a system reads directly from raw data, inconsistencies like these produce confident-looking scores derived from incomplete evidence.
TactiQ solves this with evidence packets — a structured, self-describing data layer that sits between raw data and every system that acts on it.
Freshness: Has the evidence been updated recently enough to be trustworthy? Stale data produces stale scores. An evidence packet that hasn't been refreshed falls below the freshness threshold and is flagged accordingly.
Completeness: Are enough of the expected statistical fields present to derive a meaningful result? A score built on five metrics is less reliable than one built on fifteen. The completeness score is computed across all expected dimensions and applied as a confidence penalty — not hidden.
Sufficiency: Even fresh, complete data may not reach the publication threshold if the player's sample size is too small. Three competitive appearances genuinely say less about a player than thirty do. The system treats both honestly.
If the answer to any of these questions is "no", the packet signals that downstream. Scoring engines, AI agents, and the publication gate all act on the packet signal — not on their own raw reads.
No stage can skip a previous stage. AI agents cannot read raw tables. The publication gate cannot be bypassed.
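The three checks and the resulting gate signal can be sketched as follows. This is a minimal illustration, not TactiQ's implementation: the thresholds, field names, and the `EvidencePacket` class are all assumptions.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Illustrative thresholds, not TactiQ's real values.
FRESHNESS_WINDOW = timedelta(days=7)
COMPLETENESS_FLOOR = 0.6
MIN_APPEARANCES = 10

@dataclass
class EvidencePacket:
    last_updated: datetime
    fields_present: int    # statistical fields populated
    fields_expected: int   # statistical fields the schema expects
    appearances: int       # competitive sample size

    @property
    def completeness(self) -> float:
        # Share of expected fields that are actually present.
        return self.fields_present / self.fields_expected

    def is_fresh(self, now: datetime) -> bool:
        return now - self.last_updated <= FRESHNESS_WINDOW

    def eligibility(self, now: datetime) -> tuple[bool, str]:
        """Binary publish decision, with the reason recorded."""
        if not self.is_fresh(now):
            return False, "stale: outside freshness window"
        if self.completeness < COMPLETENESS_FLOOR:
            return False, f"incomplete: {self.completeness:.0%} of expected fields"
        if self.appearances < MIN_APPEARANCES:
            return False, f"insufficient sample: {self.appearances} appearances"
        return True, "eligible"
```

Note that the gate records why a packet failed, not just that it failed; the reason string is what downstream stages act on instead of re-reading raw data.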
1. Ingestion: match results, player stats, and fixture data from SportMonks
2. Normalisation: structured and normalised into TactiQ's schema
3. Evidence packet: freshness, completeness, and confidence assessed
4. Scoring: TactiQ Score computed deterministically from the packet
5. AI interpretation: three agents read the packet, never raw tables
6. Publication gate: reads packet eligibility before approving display
7. Display: score, confidence label, and interpretation shown
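The ordering above can be sketched as a chain in which each stage consumes only the previous stage's output, so later stages never touch raw tables. Every function here is an illustrative stand-in, not TactiQ's code.

```python
def normalise(raw_rows):
    # Stage 2: raw source rows become schema-shaped records.
    return [{"player": r[0], "minutes": r[1]} for r in raw_rows]

def build_packet(rows):
    # Stage 3: the packet is the only artifact later stages may read.
    return {"appearances": len(rows),
            "minutes": sum(r["minutes"] for r in rows)}

def compute_score(packet):
    # Stage 4: deterministic, computed from the packet alone.
    return round(packet["minutes"] / max(packet["appearances"], 1), 1)

def gate(packet):
    # Stage 6: illustrative eligibility floor.
    return packet["appearances"] >= 3

def pipeline(raw_rows):
    packet = build_packet(normalise(raw_rows))
    score = compute_score(packet)
    # Nothing reaches display unless the gate approves the packet.
    return score if gate(packet) else None
```

The design point is that `compute_score` and `gate` take only `packet` as input; there is no code path from raw rows to a published score.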
Identity: player, club, league, detected role
Scores: TactiQ Score, form score, sub-scores
Methodology: seasons included, LDI applied, role weights used
Freshness: last updated timestamp vs. acceptable window
Completeness: which fields are present vs. null — scored
Confidence: blended from statistical confidence and completeness
Eligibility: binary yes/no decision, with reason recorded
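Taken together, those sections suggest a packet shape like the following. All field names and values are assumptions for illustration, not TactiQ's actual schema.

```python
# A minimal sketch of a player evidence packet, mirroring the
# sections listed above. Every key and value is illustrative.
example_packet = {
    "identity": {"player": "J. Doe", "club": "Example FC",
                 "league": "Example League", "role": "CM"},
    "scores": {"tactiq_score": 71.4, "form": 68.0,
               "sub_scores": {"passing": 74.0, "defending": 63.5}},
    "methodology": {"seasons": ["2023/24", "2024/25"],
                    "ldi_applied": True, "role_weights": "CM"},
    "freshness": {"last_updated": "2025-01-14T22:00:00Z",
                  "window_hours": 168},
    "completeness": {"present": 15, "expected": 16, "score": 0.94},
    "confidence": {"statistical": 0.82, "blended": 0.86},
    "eligibility": {"publish": True, "reason": "all checks passed"},
}
```

Because the packet carries its own methodology, freshness, and eligibility sections, any consumer can explain a score without reaching back into raw tables.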
Club packets follow the same structure but draw from match-level aggregates across the eight sub-score dimensions. Clubs typically have higher completeness scores than individual players — they play every match regardless of squad rotation.
Evidence packets are the reason TactiQ scores carry explicit confidence labels rather than presenting all scores as equally reliable.
A player with 30 appearances across three seasons and a full statistical profile has a richly evidenced packet. A player who joined a league mid-season and has 8 appearances has a thin one. The system treats both honestly — and tells you which is which.
High confidence: the underlying evidence packet passed all quality checks with room to spare.
Limited confidence: the data is real but limited — we surface that rather than hiding it.
Not published: the evidence packet fell below the publication threshold and is not shown publicly.
How we evaluate player and club quality.
How three agents must agree before an interpretation is published.
When TactiQ shows a score — and when it doesn't.
How league strength adjusts scores to enable fair comparisons.