Core concepts behind the Decision Layer
20 terms across causal foundations, decision frameworks, and Abel Platform architecture.
Causal AI
Foundations
A branch of AI focused on modeling cause-and-effect relationships rather than only statistical association. Causal AI combines machine learning with formal causal models so systems can estimate the effects of actions, not just predict likely observations.
Core formula
P(Y | do(X = x)) ≠ P(Y | X = x)
Example
A retailer does not just ask which customers are likely to churn. It asks whether a price change, shipping policy, or promotion would actually reduce churn without hurting margin elsewhere.
Why it matters
Product teams need more than forecasts when they must decide what action to take next and justify why.
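The core formula can be seen numerically in a small simulation. The sketch below uses an invented toy system (a binary confounder Z driving both X and Y, with made-up coefficients) to show that conditioning on X and intervening on X give different answers:

```python
import random

random.seed(0)

def sample(do_x=None):
    """Draw one (z, x, y) from a toy confounded system."""
    z = 1 if random.random() < 0.5 else 0          # confounder, e.g. market regime
    if do_x is None:
        x = 1 if random.random() < (0.8 if z else 0.2) else 0  # X depends on Z
    else:
        x = do_x                                    # do(X = x): Z has no influence
    y = 1.0 * x + 2.0 * z + random.gauss(0, 0.1)    # true effect of X on Y is 1.0
    return z, x, y

# Observational: average Y among cases where X happened to be 1 (confounded by Z)
obs = [y for _, x, y in (sample() for _ in range(100_000)) if x == 1]
# Interventional: average Y when X is forced to 1 regardless of Z
do = [y for _, _, y in (sample(do_x=1) for _ in range(100_000))]

print(f"E[Y | X=1]     ≈ {sum(obs) / len(obs):.2f}")  # inflated by the confounder
print(f"E[Y | do(X=1)] ≈ {sum(do) / len(do):.2f}")    # reflects the true mechanism
```

The observational average is pulled upward because high-Z cases are over-represented among X=1, while the interventional average recovers the mechanism's actual contribution.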
Causal Inference
Foundations
The process of estimating the effect of actions or treatments from data and assumptions. It asks questions like whether changing X would alter Y, and by how much.
Example
Estimating whether a marketing campaign increased revenue by 5%, after controlling for seasonal trends and competitor pricing.
Why it matters
Without causal inference, data analysis can only report associations — it cannot tell you what to do differently.
Causal Reasoning
Foundations
Using a causal model to answer questions about effects, mechanisms, interventions, and counterfactuals. It moves from representation to computation.
Example
Given a causal graph of supply chain variables, reasoning about what happens to delivery times if a port closes.
Why it matters
Reasoning is the active use of a model; discovery and inference build it, but reasoning is what makes it useful for decisions.
Causal Graphs
Foundations
Graphical representations of variables and the directed causal relationships among them. A causal graph encodes variables as nodes and causal relationships as directed edges.
Example
A DAG showing that Fed rate changes cause USD strength, which in turn causes BTC price changes, with estimated time lags on each edge.
Why it matters
Graphs give a compact picture of how change propagates through a system, making interventions and their downstream effects visible.
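A minimal sketch of the idea, using the Fed-rate example above. The variable names and lag values are illustrative, not Abel's actual schema; the point is that a DAG plus a reachability walk makes "what can this intervention touch?" a computable question:

```python
# Directed causal graph: node -> list of (child, edge metadata)
graph = {
    "Fed_Rate":     [("USD_Strength", {"lag_days": 2})],
    "USD_Strength": [("BTC_Price",    {"lag_days": 1})],
    "BTC_Price":    [],
}

def downstream(graph, node):
    """Collect every variable reachable from `node` via directed edges."""
    seen, stack = set(), [node]
    while stack:
        for child, _meta in graph[stack.pop()]:
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen

print(downstream(graph, "Fed_Rate"))  # variables an intervention on Fed_Rate could affect
```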
Confounders
Foundations
Variables that influence both a candidate cause and an outcome, creating misleading associations if ignored.
Example
Ice cream sales and drowning deaths are correlated — but temperature is the confounder driving both. Without adjusting for it, you might wrongly conclude ice cream causes drowning.
Why it matters
Ignoring confounders is the most common source of wrong causal conclusions in both business analytics and academic research.
Counterfactuals
Foundations
Statements about what would have happened to the same case under a different action or condition. More specific than a population-level causal effect.
Core formula
P(Y_{x'} | X=x, Y=y)
Example
"If this patient had received treatment B instead of treatment A, would they have recovered faster?" — a question about one specific case, not an average.
Why it matters
Counterfactuals are the highest level of Pearl's causal hierarchy and enable explanations, blame attribution, and personalized decisions.
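The standard three-step recipe (abduction, action, prediction) can be sketched on a deliberately simple linear SCM. The model Y = 2X + U and the numbers below are invented for illustration; real counterfactual computation requires a full SCM, not just an effect estimate:

```python
def counterfactual_y(x_obs, y_obs, x_cf, effect=2.0):
    """Counterfactual query on the toy SCM  Y = effect * X + U.

    1. Abduction:  recover the case-specific noise U from what was observed.
    2. Action:     replace X with the counterfactual value x_cf.
    3. Prediction: recompute Y for the *same case* (same U).
    """
    u = y_obs - effect * x_obs   # abduction: this patient's exogenous factors
    return effect * x_cf + u     # action + prediction

# Patient received treatment A (x=0) and recovered with score y=1.0.
# What would their score have been under treatment B (x=1)?
print(counterfactual_y(x_obs=0, y_obs=1.0, x_cf=1))  # → 3.0
```

Because U is recovered from this specific case, the answer is about this patient, not the population average.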
Structural Causal Model
Foundations
A causal model that specifies each variable as a function of its direct causes and exogenous factors. An SCM combines a causal graph with structural equations and exogenous noise terms.
Example
Y = f(X, U) where X is the treatment, U is unobserved noise, and f specifies the mechanism. The graph tells you which variables enter each equation.
Why it matters
SCMs are the formal foundation for all three levels of Pearl's hierarchy — association, intervention, and counterfactual.
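One way to make "graph plus structural equations plus noise" concrete is to represent an SCM as a mapping from each variable to its parents and its mechanism. The two-variable model and its coefficient below are invented for illustration; note how do() is implemented as surgery that replaces an equation rather than conditioning on data:

```python
import random

# SCM: variable -> (parents, structural equation f(parent_values, noise))
scm = {
    "X": ([],    lambda parents, u: u),                        # X := U_X
    "Y": (["X"], lambda parents, u: 0.7 * parents["X"] + u),   # Y := f(X, U_Y)
}

def sample(scm, interventions=None):
    """Evaluate the SCM in declaration (topological) order."""
    interventions = interventions or {}
    values = {}
    for var, (parents, f) in scm.items():
        if var in interventions:
            values[var] = interventions[var]  # do(): cut incoming edges, fix the value
        else:
            values[var] = f({p: values[p] for p in parents}, random.gauss(0, 1))
    return values

print(sample(scm))                             # one observational draw
print(sample(scm, interventions={"X": 1.0}))   # one draw under do(X = 1)
```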
Do-Calculus
Foundations
A set of rules for transforming intervention queries using a causal graph and conditional independence assumptions. Judea Pearl's algebra for reasoning about interventions.
Core formula
P(Y | do(X)) via rules 1-3 of do-calculus
Example
Converting "what is the effect of setting X=1 on Y" into a formula that can be estimated from observational data, without actually performing the intervention.
Why it matters
Do-calculus bridges the gap between what you can observe and what you need to know about actions — it is the mathematical engine behind intervention reasoning.
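The most common identification result derived from these rules is the backdoor adjustment, P(y | do(x)) = Σ_z P(y | x, z) P(z), which estimates an interventional quantity from purely observational counts. The joint counts below are fabricated for illustration, with Z assumed to satisfy the backdoor criterion:

```python
from collections import Counter

# Hypothetical observational counts over (z, x, y), all binary
counts = Counter({
    (0, 0, 0): 40, (0, 0, 1): 10, (0, 1, 0): 5,  (0, 1, 1): 5,
    (1, 0, 0): 5,  (1, 0, 1): 5,  (1, 1, 0): 10, (1, 1, 1): 40,
})
n = sum(counts.values())

def p_y_do_x(y, x):
    """Backdoor adjustment: P(y | do(x)) = sum_z P(y | x, z) * P(z)."""
    total = 0.0
    for z in (0, 1):
        n_z   = sum(c for (zz, _, _), c in counts.items() if zz == z)
        n_xz  = sum(c for (zz, xx, _), c in counts.items() if zz == z and xx == x)
        n_yxz = counts[(z, x, y)]
        total += (n_yxz / n_xz) * (n_z / n)
    return total

print(f"P(Y=1 | do(X=1)) = {p_y_do_x(1, 1):.3f}")  # adjusted, interventional
```

With these counts the naive conditional P(Y=1 | X=1) is 0.75, while the adjusted interventional estimate is 0.65 — the gap is the confounding bias the adjustment removes.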
Decision Intelligence
Decisions
The discipline of combining models, objectives, constraints, and uncertainty to support better actions. Broader than analytics because it includes intervention logic, tradeoffs, and accountability.
Example
A system that not only predicts customer churn but recommends which retention action to take, estimates its cost, and flags when the recommendation is uncertain.
Why it matters
Analytics tells you what happened. Decision intelligence tells you what to do about it.
Decision-Making
Decisions
The act of choosing an action under goals, constraints, and uncertainty. In consequential systems, that choice should depend on more than a point prediction.
Example
Deciding whether to expand to a new market by weighing causal estimates of demand drivers, not just extrapolating a trend line.
Why it matters
Every organization makes decisions. The quality of those decisions depends on whether the reasoning is causal or merely correlational.
What-If Analysis
Decisions
Evaluating how outcomes might change under a specified hypothetical adjustment or intervention. In causal systems, the useful version is intervention-aware rather than purely spreadsheet-based.
Example
Using Abel to answer "what happens to CPI if oil hits $120?" — not by assuming a linear relationship, but by computing the causal propagation through the world model.
Why it matters
Spreadsheet what-if analysis ignores feedback loops and confounders. Causal what-if analysis respects the structure of the system.
Scenario Analysis
Decisions
Comparing multiple coherent future states to understand how a decision performs across different conditions. Each scenario bundles assumptions about regimes, external events, timing, and constraints.
Example
Testing a portfolio allocation under three scenarios: rate hike, rate hold, rate cut — each with its own causal propagation through the macro graph.
Why it matters
Single-point forecasts hide the range of outcomes. Scenario analysis makes the decision robust to regime changes.
Intervention Modeling
Decisions
Representing actions explicitly so their downstream effects can be computed rather than guessed. It turns decisions into formal objects instead of narrative prompts.
Example
Modeling "if we raise prices 10% in the US market" as a do-calculus intervention with defined scope, entry point, and measurable downstream variables.
Why it matters
Without formal intervention modeling, decisions are justified by intuition or correlation. With it, effects are computed and auditable.
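What "a formal object" might look like in code: a small record with a defined scope, entry point, and measurable downstream variables. The field names and example values below are an illustrative schema, not Abel's actual intervention format:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Intervention:
    """A decision represented as an explicit, auditable object."""
    variable: str                                   # entry point in the causal graph
    value: float                                    # level to set, e.g. +0.10 for +10%
    scope: str                                      # where the action applies
    track: list[str] = field(default_factory=list)  # downstream variables to measure

price_change = Intervention(
    variable="US_Price_Level",
    value=0.10,
    scope="US market",
    track=["Unit_Sales", "Revenue", "Churn_Rate"],
)
print(price_change)  # the decision itself is now data, not a narrative prompt
```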
CAP (Causal Agent Protocol)
Abel Platform
An Abel-native protocol for representing causal state, interventions, and outcomes as machine-operable objects. The protocol layer that turns causal concepts into portable computational objects.
Example
An AI agent calling client.intervene("Fed_Rate", "BTC", 0.5) through a standard CAP primitive, receiving a structured causal result with effect size, confidence interval, and chain.
Why it matters
Without a protocol, every causal integration is custom. CAP standardizes causal operations the way SQL standardized data querying.
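A rough sketch of the call pattern from the example above. This is a stand-in client with canned data and an invented result shape — not the real CAP client or its schema — meant only to show why a structured result beats a prose answer:

```python
from dataclasses import dataclass

@dataclass
class CausalResult:
    """Illustrative shape of a structured causal response (not the official CAP schema)."""
    effect: float
    confidence_interval: tuple[float, float]
    chain: list[str]

class FakeCAPClient:
    """Stand-in client returning canned data, to show the call pattern only."""
    def intervene(self, cause: str, target: str, delta: float) -> CausalResult:
        return CausalResult(
            effect=-0.031,
            confidence_interval=(-0.052, -0.010),
            chain=[cause, "USD_Strength", target],
        )

result = FakeCAPClient().intervene("Fed_Rate", "BTC", 0.5)
print(result.effect, result.confidence_interval, result.chain)
```

An agent can branch on `result.effect` or check whether the interval crosses zero — operations that are impossible on a paragraph of prose.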
Schema-as-API
Abel Platform
A design pattern where the causal schema itself defines what can be queried, simulated, and acted on. The structure determines which interventions, paths, and decision objects exist.
Example
An LLM agent calling GET /schema/variables?search="oil price" to discover the exact variable names, communities, and available operations before making a causal query.
Why it matters
Schema-as-API enables zero-LLM-cost routing into Abel's graph. The agent reads the map, then calls structured primitives — deterministic and auditable.
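The discovery step can be sketched with an in-memory stand-in for an endpoint like GET /schema/variables?search=... — the variable names, fields, and operations below are illustrative, not Abel's real schema:

```python
# Toy in-memory schema; each entry describes a variable the graph actually supports
SCHEMA = [
    {"name": "Oil_Price_WTI", "community": "commodities", "ops": ["intervene", "whatif"]},
    {"name": "CPI_US",        "community": "macro",       "ops": ["whatif"]},
    {"name": "Fed_Rate",      "community": "macro",       "ops": ["intervene", "whatif"]},
]

def search_variables(query: str):
    """Resolve a fuzzy phrase to exact variable names before issuing a causal query."""
    q = query.lower().replace(" ", "_")
    return [v for v in SCHEMA if q in v["name"].lower()]

print(search_variables("oil price"))  # agent learns the canonical name first
```

The agent reads the map deterministically, then calls structured primitives with exact names — no LLM tokens spent guessing identifiers.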
Two Surfaces
Abel Platform
A separation between the surface that builds or curates the causal model and the surface that queries or executes decisions from it. One surface is for model governance, the other for action execution.
Example
Abel App (human surface) lets a user ask "should I buy a house in Austin?" while CAP API (agent surface) lets an AI agent call the same underlying engine programmatically.
Why it matters
Separating model building from decision execution reduces confusion and lets each surface optimize for its audience.
Decision Layer
Abel Platform
The system layer that computes the consequences of action and turns them into accountable decision objects. Abel's central product idea: a layer that exists to compute action consequences, not just generate text.
Example
When an agent asks "what happens if the Fed raises rates 50bp?", the decision layer returns a structured object with effect, chain, confidence interval, and method — not a prose paragraph.
Why it matters
LLMs produce fluent language. The decision layer produces computable, verifiable consequences of actions.
Computation-Gated Access
Abel Platform
A pattern where access to outputs or actions depends on whether the system can ground them in valid causal computation. If the schema is missing or the assumptions are weak, access narrows instead of the system projecting false confidence.
Example
A query about a variable outside Abel's graph returns an explicit "not supported" rather than a fabricated answer. The system admits what it does not know.
Why it matters
This is the opposite of LLM behavior — where confidence is uniform regardless of grounding. Computation-gated access preserves epistemic integrity.
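The gating logic itself is simple; the discipline is in enforcing it everywhere. A minimal sketch, with an invented coverage set and response shape:

```python
SUPPORTED = {"Fed_Rate", "CPI_US", "Oil_Price_WTI"}  # illustrative graph coverage

def gated_query(variable: str) -> dict:
    """Answer only when the query can be grounded in the causal schema."""
    if variable not in SUPPORTED:
        # Narrow access instead of fabricating a confident answer
        return {"status": "not_supported",
                "reason": f"{variable} is outside the causal graph"}
    return {"status": "ok", "variable": variable}

print(gated_query("Dogecoin_Sentiment"))  # explicit refusal, not a hallucination
print(gated_query("Fed_Rate"))            # grounded queries proceed normally
```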
Canary Edge
Abel Platform
A monitored causal relationship used as an early warning signal for drift, breakage, or regime change in the model. If its strength, sign, or timing changes unexpectedly, that may indicate the underlying system has shifted.
Example
A known strong edge between oil prices and airline costs is monitored daily. If the edge weakens significantly, it triggers a regime-change alert.
Why it matters
Causal graphs are not static. Canary edges provide continuous model validation without requiring full re-discovery on every cycle.
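The daily check in the example above reduces to a drift test on one edge's estimated strength. The threshold and the numbers below are invented for illustration:

```python
def check_canary(baseline_strength: float, current_strength: float,
                 tolerance: float = 0.3) -> bool:
    """Flag a possible regime change if an edge's strength drifts beyond tolerance."""
    drift = abs(current_strength - baseline_strength) / abs(baseline_strength)
    return drift > tolerance

# Oil -> airline-costs edge historically ~0.8; today's re-estimate comes in at 0.35
if check_canary(baseline_strength=0.8, current_strength=0.35):
    print("ALERT: canary edge weakened; possible regime change")
```

A production version would also watch the edge's sign and time lag, per the definition above, but the monitoring pattern is the same.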
Social Physical Engine
Abel Platform
An execution model that combines social behavior, institutional rules, and physical constraints in one causal system. Abel's way of treating real-world systems as mixtures of material constraints and human behavior.
Example
A causal model where Fed policy (institutional), consumer sentiment (social), and commodity supply (physical) all interact through directed edges with measured time lags.
Why it matters
Real-world systems are not purely physical or purely social. The engine must handle both to produce useful causal structure.