Our Approach

How We Analyze Truth

Truth isn't binary. Neither is our analysis. We use Bayesian probability to separate facts from framing.

Facts + Framing + Context

01

Probabilistic, Not Binary

We don't say "TRUE" or "FALSE." We calculate Bayesian probabilities (0-100%) based on evidence strength, source quality, and author credibility.

02

Dual Credibility Scores

Two separate scores: Factual Credibility (are the facts true?) and Presentation Credibility (is the truth being manipulated?).

03

Full Transparency

Every analysis shows our work: evidence sources, Bayesian calculations, detected biases, framing violations. No black box.

The Process

6-Stage Analysis Workflow

Powered by Grok AI • Multimodal Analysis (Text + Images)

1

Author Identification & Research

Extract author from URL (X/Twitter, YouTube). Research historical truthfulness, expertise, bias indicators. Build credibility profile as Bayesian evidence.

Output: Author truthfulness score (0-100%), track record, expertise areas
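One way the author profile could enter the math is as a blended prior. This is a minimal illustrative sketch, not the production code: the function name, the 0.3 blend weight, and the clamping bounds are all assumptions.

```python
# Illustrative sketch: fold an author's truthfulness score into a
# claim's prior probability. Weights and bounds are assumptions,
# not TruthSignal's actual implementation.

def author_adjusted_prior(base_prior: float, truthfulness: float,
                          weight: float = 0.3) -> float:
    """Blend a claim-type base rate with the author's historical
    truthfulness score (both on a 0-1 scale)."""
    prior = (1 - weight) * base_prior + weight * truthfulness
    return min(max(prior, 0.01), 0.99)  # keep priors away from 0 and 1

# An author who has been right 90% of the time nudges a 70% base rate up.
print(round(author_adjusted_prior(0.70, 0.90), 3))  # 0.76
```

Clamping away from 0 and 1 matters for any Bayesian pipeline: a prior of exactly 0 or 1 can never be moved by evidence.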
2

High-Level Analysis (Multimodal)

Analyze text + images. Generate summary, truth assessment, TL;DR. Extract representative interpretation. Detect image manipulation, temporal mismatch, spatial context.

Output: Summary, truth vs. stated, TL;DR, image analysis (manipulation, OCR text)
3

Source Verification

Crawl external links. Compare link content vs. author's claims about links. Detect cherry-picking, misrepresentation, omission framing.

Output: Match status (accurate | cherry-picked | misrepresented | fabricated)
4

Item-by-Item Claims Analysis (Chunked)

Split long content into ~2000 char chunks. Extract every claim (factual, emotive, opinion, prediction). Bayesian analysis: Set priors (base rates) → Gather evidence (web search) → Calculate posteriors (updated probabilities).

P(Claim | Evidence) = P(Evidence | Claim) × P(Claim) / P(Evidence)
Prior (base rate) → updated by Evidence (sources, author credibility) → Posterior (final probability)
Output: Claims with prior/posterior probabilities, evidence, confidence intervals
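The chunking step can be sketched as a whitespace-aware splitter targeting ~2000 characters. This is an illustrative sketch; the real splitter's boundary rules are not documented here.

```python
# Sketch of the ~2000-character chunking step: split near the size
# limit, backing up to the last space so words stay intact.

def chunk_text(text: str, size: int = 2000) -> list[str]:
    chunks, start = [], 0
    while start < len(text):
        end = min(start + size, len(text))
        if end < len(text):
            space = text.rfind(" ", start, end)
            if space > start:
                end = space  # break at the last space inside the window
        chunks.append(text[start:end])
        # Skip the space we broke on; otherwise continue from `end`.
        start = end + 1 if end < len(text) and text[end] == " " else end
    return chunks

doc = "word " * 1000            # ~5000 characters
parts = chunk_text(doc)
print(len(parts), max(len(p) for p in parts))  # 3 chunks, all under 2000 chars
```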
4.5

Predicted Outcomes Analysis (Conditional)

Triggered if predictions detected. Analyze specificity, timeframes, success criteria. Set up future verification to track prediction accuracy.

Output: Prediction tracking (outcome, verification date, measurability)
5

Framing, Biases & Psychological Flags

Detect framing violations (omission, emphasis, causal). Identify logical fallacies (ad hominem, straw man). Flag cognitive biases (confirmation bias, fear appeals).

Output: Violations with severity (LOW | MEDIUM | HIGH | CRITICAL)
6

Dual Credibility Scoring

A) Factual Credibility: Weighted average of Bayesian posteriors from claims.
B) Presentation Credibility: Deviation analysis (truth vs. stated, interpretation spin, framing severity) with exponential decay.
C) Overall: Content-type adaptive weighting (news = 70% factual, 30% presentation).
Output: Factual (0-100%), Presentation (0-100%), Overall (0-100%)
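A sketch of how these three scores could fit together. The weighted average of posteriors and the 70/30 news weighting come from this page; the severity weights and the decay constant `k` are illustrative assumptions.

```python
import math

# Illustrative sketch of the dual scoring stage. Severity weights and
# the decay constant k are assumptions; only the 70/30 news weighting
# is stated on this page.

SEVERITY = {"LOW": 1, "MEDIUM": 2, "HIGH": 4, "CRITICAL": 8}

def factual_score(posteriors, weights=None):
    """Weighted average of Bayesian posteriors (0-100 scale)."""
    weights = weights or [1.0] * len(posteriors)
    return sum(p * w for p, w in zip(posteriors, weights)) / sum(weights)

def presentation_score(violations, k=0.15):
    """Exponential decay: each framing violation multiplies the score down."""
    penalty = sum(SEVERITY[v] for v in violations)
    return 100 * math.exp(-k * penalty)

def overall_score(factual, presentation, content_type="news"):
    """Content-type adaptive weighting (news = 70% factual, 30% presentation)."""
    w_fact = {"news": 0.7}.get(content_type, 0.5)
    return w_fact * factual + (1 - w_fact) * presentation

f = factual_score([92, 88, 95])
p = presentation_score(["HIGH", "LOW"])
print(round(f, 1), round(p, 1), round(overall_score(f, p), 1))
```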
Dual Credibility Model

Facts vs. Framing

Content can be factually accurate but misleadingly framed. That's why we score both separately.

Example: Cherry-Picked Statistics

Claim:
"Crime increased 15% this year"
Factual Credibility:
92%
Why?
FBI data confirms 15% increase (accurate stat)
Presentation Credibility:
45%
Why?
Omits 20-year declining trend, presents spike without context
Overall:
69%
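A quick arithmetic check on this example: the stated 69% overall is consistent with weighting the two scores roughly equally, whereas the 70/30 news weighting mentioned above would give about 78%. The 50/50 split below is an inference from these numbers, not a documented constant.

```python
# Reproducing the arithmetic of the cherry-picked-statistics example.
# The 50/50 weighting is inferred from the published numbers:
factual, presentation = 92, 45
overall = 0.5 * factual + 0.5 * presentation
print(overall)  # 68.5, displayed as 69% after rounding
```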
Bayesian Methodology

How We Calculate Credibility

From prior probability to posterior probability in 3 steps

1

Set Prior (Base Rate)

What's the baseline probability before seeing evidence?

Claim Type: Economic stats (70%), health claims (50%), political claims (40%)
Author Expertise: Domain expert (+10 to +15%), out-of-domain (−5 to −10%)
Specificity: Precise numbers (60%), vague claims (75%)
Example: "Unemployment is 3.7%" → Prior: 70%
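The base rates above can be read as a simple lookup table. A minimal sketch follows; the dictionary values mirror the figures on this page, but the function name and the flat +10% expertise adjustment (the mid-range of the stated +10 to +15% bonus) are illustrative assumptions.

```python
# Base-rate lookup for setting a claim's prior, per the figures above.
BASE_RATES = {"economic": 0.70, "health": 0.50, "political": 0.40}

def set_prior(claim_type: str, domain_expert: bool = False) -> float:
    prior = BASE_RATES.get(claim_type, 0.50)  # default for unlisted types
    if domain_expert:
        prior += 0.10  # mid-range of the +10 to +15% expertise bonus
    return min(prior, 0.99)

print(set_prior("economic"))  # 0.7, matching the unemployment example
```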
2

Gather Evidence (Likelihood)

How strong is the evidence for or against this claim?

🟢
Strong
Official source, expert consensus → +20 to +30%
🟡
Moderate
Credible secondary sources → +10 to +20%
🔴
Contradictory
Evidence refutes claim → −30 to −50%
Example: Bureau of Labor Statistics confirms 3.7% → Likelihood: 0.95
3

Calculate Posterior

Updated belief after considering evidence

Prior × (Likelihood / P(Evidence))
= Posterior
70% × (0.95 / 0.70)
= 95%
Result: "Unemployment is 3.7%" → 95% credible
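Written as code, this update is just Bayes' rule from the formula stated earlier on this page. Note that the denominator must be P(Evidence), the marginal probability of the evidence, which the worked example implicitly sets to 0.70; dividing by the prior itself would cancel the prior out entirely.

```python
# Bayes' rule, matching P(Claim | Evidence) = P(E | C) * P(C) / P(E).
# The p_evidence = 0.70 in the example below is an assumption implied
# by the worked numbers on this page.

def posterior(prior: float, likelihood: float, p_evidence: float) -> float:
    """Update a prior given the likelihood and marginal evidence probability."""
    return min(likelihood * prior / p_evidence, 1.0)  # cap at certainty

print(round(posterior(0.70, 0.95, 0.70), 2))  # 0.95
```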
Why Bayesian?

Because truth is probabilistic

Traditional fact-checkers use binary labels (TRUE/FALSE). We use probabilities (0-100%). Why? Because most claims aren't absolutely certain—they're supported by evidence of varying quality.

Traditional

  • Binary labels (TRUE/FALSE/MISLEADING)
  • No uncertainty shown
  • Ignores base rates and priors
  • Conflates facts with framing
  • Subjective "expert opinion"

TruthSignal

  • Probability scores (0-100%)
  • Confidence intervals shown
  • Uses Bayesian priors and posteriors
  • Separate factual vs. presentation scores
  • Transparent mathematical reasoning
v6.0

Current Algorithm

November 3, 2025

6-stage analysis with dual credibility scoring (factual vs. presentation). Multimodal analysis (text + images), source verification, chunked claims processing, Bayesian probability calculations, and comprehensive framing detection.

Grok-4-fast-reasoning · Grok-4-fast · Gemini · Claude
See it in action →

Try It Yourself

Tag @truthsignal_ai on any X post for instant Bayesian analysis