Core Concepts

The mathematical and epistemic foundations of Abel's live causal world model.

The One Sentence

Abel is building a live causal world model — computing what drives what across 200,000+ financial and macroeconomic variables, with causal structure refreshed daily and predictions updated hourly.

Pearl's Three Layers

Judea Pearl defined the Causal Hierarchy (also called the Ladder of Causation) — three levels of increasingly powerful reasoning, where each level is mathematically irreducible to the one below it.

Layer 1: Association

“When I observe X, what do I expect for Y?”

P(Y | X)

This is what all LLMs do — predict the next token given context. Google Search, ChatGPT, and every ML model operate here. Powerful for pattern matching, useless for decisions.

Layer 2: Intervention

“If I do X, what happens to Y?”

P(Y | do(X=x))

Requires a causal graph + do-calculus. Cannot be answered from observational data alone, no matter how much data you have. This is where Abel begins.
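A minimal simulation (illustrative only, not Abel's code) makes the gap between the two layers concrete: with a hidden confounder, the observational quantity P(Y | X) and the interventional quantity P(Y | do(X)) disagree, no matter how many samples you draw.

```python
import random

random.seed(0)

# Toy world with a hidden confounder Z that drives both X and Y.
# Observationally, X and Y are correlated; intervening on X breaks the link.
def sample(do_x=None):
    z = random.gauss(0, 1)            # hidden confounder
    x = do_x if do_x is not None else z + random.gauss(0, 0.1)
    y = 2 * z + random.gauss(0, 0.1)  # Y depends on Z, not on X
    return x, y

n = 50_000

# Layer 1: E[Y | X > 1] under passive observation -- large, via Z.
obs = [y for x, y in (sample() for _ in range(n)) if x > 1]
print(sum(obs) / len(obs))            # well above zero

# Layer 2: E[Y | do(X = 1)] -- the intervention severs X from Z.
interv = [y for _, y in (sample(do_x=1.0) for _ in range(n))]
print(sum(interv) / len(interv))      # near zero
```

No amount of extra observational data closes the gap; only the causal structure (knowing Z confounds X and Y) does.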

Layer 3: Counterfactual

“If X had been different, would Y have been different?”

P(Y_x' | X=x, Y=y)

Requires a full Structural Causal Model (SCM). The hardest form of reasoning — imagining alternative histories. Abel's counterfactual() primitive operates here.
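A toy SCM shows the standard three-step counterfactual computation (abduction, action, prediction). The linear mechanism and the function name below are purely illustrative, not Abel's counterfactual() primitive.

```python
# Minimal structural causal model: X := U_x, Y := 2*X + U_y.
# Counterfactual query: having observed X=1, Y=3, what would Y have been
# had X been 0?  Three steps: abduction, action, prediction.

def counterfactual_y(x_obs, y_obs, x_cf):
    # 1. Abduction: infer the exogenous noise consistent with the evidence.
    u_y = y_obs - 2 * x_obs        # from Y := 2*X + U_y
    # 2. Action: override the mechanism for X with the counterfactual value.
    x = x_cf
    # 3. Prediction: recompute Y under the original mechanism, same noise.
    return 2 * x + u_y

print(counterfactual_y(x_obs=1, y_obs=3, x_cf=0))  # -> 1
```

The key move is step 1: the observed world pins down the noise term, so the alternative history is computed with the *same* background conditions, not a fresh sample.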

Three Principles

Abel has six internal epistemic principles. Three face outward. These three are sufficient for any external audience.

Principle 1: The World Changes

Reality is non-stationary. Causal structures drift, relationships reverse, regimes shift. A model trained on historical averages is a model waiting to be wrong.

- Daily PCMCI structural refresh: 200K+ variables, 6M nodes
- Hourly causal predictions: latest structural graph + live data
- Regime change detection: graph topology shift alerts
- Edge lifecycle tracking: edges are born, strengthen, weaken, die
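Edge lifecycle tracking reduces to diffing consecutive daily graph snapshots. A sketch with hypothetical variable names and a hypothetical edge encoding (not Abel's schema):

```python
# Each snapshot maps a directed edge (cause, effect, lag) to its strength beta.
yesterday = {("oil_price", "airline_cost", 2): 0.41,
             ("rates", "housing_starts", 6): -0.33,
             ("vix", "credit_spread", 1): 0.22}
today     = {("oil_price", "airline_cost", 2): 0.55,   # strengthened
             ("vix", "credit_spread", 1): 0.09,        # weakened
             ("usd_index", "em_equities", 3): -0.28}   # born

born = set(today) - set(yesterday)                     # new edges
died = set(yesterday) - set(today)                     # vanished edges
changed = {e: (yesterday[e], today[e])                 # surviving edges
           for e in set(today) & set(yesterday)}

print(sorted(born))
print(sorted(died))
```

A large burst of births and deaths in one day is exactly the topology shift that regime change detection alerts on.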

Principle 2: Structure, Not Surface

Correlation describes the surface. Causal structure describes the mechanism. Abel discovers which variables drive which — with direction, magnitude, and timing.

- Directed causal edges: direction, β, τ, p-value
- Markov Blanket extraction: “What do I actually need to watch?”
- do(X) intervention simulation: structural downstream propagation
- Feedback loop detection: bidirectional and circular pathways
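Markov Blanket extraction has a compact graph definition: a node's parents, its children, and its children's other parents. A sketch on a made-up edge list (the variable names are illustrative, not Abel's graph):

```python
# Directed edges as (cause, effect) pairs -- toy graph for illustration.
edges = [("fed_rate", "mortgage_rate"),
         ("mortgage_rate", "housing_starts"),
         ("lumber_price", "housing_starts"),
         ("housing_starts", "construction_jobs"),
         ("gdp", "construction_jobs")]

def markov_blanket(node, edges):
    parents  = {a for a, b in edges if b == node}            # direct causes
    children = {b for a, b in edges if a == node}            # direct effects
    spouses  = {a for a, b in edges if b in children and a != node}
    return parents | children | spouses

print(sorted(markov_blanket("housing_starts", edges)))
```

Conditional on its blanket, a node is independent of the rest of the graph, which is what makes it the answer to "what do I actually need to watch?"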

Principle 3: Claims Must Be Verifiable

Every output Abel produces is testable, falsifiable, and accountable. If a causal edge can't be verified against future data, it's not knowledge — it's opinion.

- p-values on every edge: statistical significance attached
- β with confidence intervals: effect sizes with uncertainty bounds
- Full provenance trail: PCMCI run, data window, hyperparams
- Zero-LLM inference path: computed, not generated
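One way to picture a verifiable edge is as a single record carrying effect size, uncertainty, and provenance together. All field names and values below are hypothetical, chosen only to mirror the list above:

```python
from dataclasses import dataclass

# Hypothetical record for one falsifiable causal edge.
@dataclass(frozen=True)
class CausalEdge:
    cause: str
    effect: str
    beta: float            # effect size
    ci_low: float          # 95% confidence interval on beta
    ci_high: float
    lag_hours: int         # time lag tau
    p_value: float         # statistical significance
    run_id: str            # PCMCI run that produced the edge
    data_window: str       # data window used
    alpha: float           # significance threshold applied

edge = CausalEdge("fed_rate", "mortgage_rate", beta=0.82, ci_low=0.71,
                  ci_high=0.93, lag_hours=48, p_value=0.001,
                  run_id="pcmci-run-0001", data_window="2019-01..2024-05",
                  alpha=0.05)
print(edge.p_value < edge.alpha)   # edge survives the significance test
```

Because every field is concrete, the claim can be re-run against future data and rejected; that is the sense in which the edge is knowledge rather than opinion.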

The Mathematical Impossibility Gap

The Causal Hierarchy Theorem (Bareinboim et al., 2022) proves that higher-layer questions cannot, in general, be answered by lower-layer methods. This is not an engineering limitation — it is a mathematical theorem.

No matter how large GPT becomes, how much data it trains on, or how clever its reasoning chains are — if the architecture operates at Layer 1, it cannot answer Layer 2 questions. It can only produce plausible-sounding guesses based on textual patterns.

How Abel Bridges the Gap

Abel uses two ingredients that LLMs lack:

  1. Causal graph discovery — PCMCI (conditional independence-based causal discovery for high-dimensional time series) and 38 other algorithms learn the directed graph from observational data. This graph encodes who causes whom, with directed edges carrying β coefficient, time lag τ, and p-value.
  2. do-calculus — Pearl's algebraic rules transform Layer 2 queries into computable expressions. Given the graph, Abel can calculate P(Y|do(X)) from observed data.
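A minimal sketch of ingredient 2 in action: once the graph says Z is the only backdoor path between X and Y, the adjustment formula P(Y | do(X)) = Σ_z P(Y | X, z) P(z) turns a Layer 2 query into arithmetic over observed data. Everything below is a toy simulation, not Abel's pipeline:

```python
import random

random.seed(1)

# Toy binary world: confounder Z drives both X and Y; X also drives Y.
def draw():
    z = random.random() < 0.5
    x = random.random() < (0.8 if z else 0.2)
    y = random.random() < (0.3 + 0.4 * x + 0.2 * z)
    return z, x, y

data = [draw() for _ in range(200_000)]

# Naive Layer 1 estimate P(Y=1 | X=1): inflated by the confounder.
naive = (sum(1 for _, x, y in data if x and y)
         / sum(1 for _, x, _ in data if x))

# Backdoor adjustment: P(Y=1 | do(X=1)) = sum_z P(Y=1 | X=1, Z=z) * P(Z=z)
def p_y_do_x(x_val):
    total = 0.0
    for z_val in (False, True):
        subset = [y for z, x, y in data if z == z_val and x == x_val]
        p_y_given_xz = sum(subset) / len(subset)
        p_z = sum(1 for z, _, _ in data if z == z_val) / len(data)
        total += p_y_given_xz * p_z
    return total

print(round(naive, 2))            # observational estimate, biased by Z
print(round(p_y_do_x(True), 2))   # interventional estimate, near 0.8
```

The same observational dataset answers both questions; the graph is what licenses the second computation.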

LLMs handle language. Abel handles causal computation. Complementary, not competitive.

The Architecture

Abel operates on two timescales by design. This is not a limitation — it reflects the physics of causal discovery.

Daily: Structural Discovery

GPU-accelerated PCMCI across 200K+ variables (6M causal spatiotemporal nodes at 30 time steps). Output: directed causal graph stored in Neo4j with full edge metadata.

Hourly: Causal Prediction

Using the latest daily graph as the structural backbone, Abel generates forward predictions every hour by combining the structural graph with real-time data.
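The hourly pass can be sketched as lag-aware weighted propagation over the graph. The edges, variable names, and update rule below are hypothetical simplifications, not Abel's prediction engine:

```python
# Each edge carries a coefficient beta and a lag tau (hours); a variable's
# predicted change is the beta-weighted sum of its causes' observed
# changes tau hours ago.
edges = [  # (cause, effect, beta, tau_hours)
    ("oil_price", "airline_cost", 0.5, 2),
    ("oil_price", "cpi", 0.1, 24),
    ("airline_cost", "airfare_index", 0.8, 1),
]

# Live data: observed % change of each variable, keyed by hours before now.
history = {"oil_price": {2: 4.0, 24: 1.0}, "airline_cost": {1: 2.0}}

def predict_changes(edges, history):
    pred = {}
    for cause, effect, beta, tau in edges:
        observed = history.get(cause, {}).get(tau)
        if observed is not None:
            pred[effect] = pred.get(effect, 0.0) + beta * observed
    return pred

print(predict_changes(edges, history))
# airline_cost: 0.5*4.0 = 2.0; cpi: 0.1*1.0 = 0.1; airfare_index: 0.8*2.0 = 1.6
```

The structure (which edges exist, with which β and τ) changes daily; only the live observations plugged into it change hourly, which is why the two timescales separate cleanly.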

The causal graph is stored in Neo4j — structure as a first-class citizen. Access is gated through a three-zone model (Green / Yellow / Red) with anti-distillation protections, because the graph topology itself is the core IP.

The Social Physical Engine

Abel's causal graph is not limited to one domain. The Social Physical Engine models the interconnected causal structure of human behavior (social), physical constraints (supply chains, resources), and market signals (prices, volumes).

Financial markets serve as the signal layer — a holographic encoder of real-world consensus. Every career choice, policy change, and technology shift eventually collapses into price signals. Abel decodes those signals back into directed causal chains with effect size, time lag, and confidence.