What is TRP?

A technology for AI that knows what it knows — and what it doesn't.

The Approach

There are two ways to give AI an identity. Propositional encoding tells it what to do: "follow these rules." Architectural encoding makes certain behaviors structurally necessary — the system can't operate without the awareness it describes.

TRP builds on the second approach. Instead of instructing a model to be transparent, we design architectures where transparency is a structural requirement — not a feature that can be turned off, but a constraint that shapes every generation.

The name says it: TRansParent. Not a black box. Not a white box. A box that must show its work because the architecture requires it.

Evolution of AI Identity

Generation | Approach | Result
Gen 1 | Rules & instructions ("Act like you remember.") | Rule-following. No awareness.
Gen 2 | Structured context (system prompts, memory, RAG). | Better behavior. Still no self-knowledge.
Gen 3 | Architectural constraints (typed attention, thresholds, trace, self-knowledge loss). | Awareness emerges from structure.

Gen 3 is what TRP proposes. Not smarter models — models that know what they know.
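One way to picture a Gen 3 constraint is a generation gate that structurally cannot emit an ungrounded or low-confidence answer unmarked. This is a minimal sketch, not TRP's implementation: the `Generation` fields, the `emit` function, and the 0.7 threshold are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Generation:
    text: str
    confidence: float  # model's self-estimate, 0.0-1.0 (illustrative)
    trace: list[str]   # sources the claim is grounded in

def emit(gen: Generation, threshold: float = 0.7) -> str:
    """Structural gate: output with no trace, or below the
    confidence threshold, cannot pass through unmarked."""
    if not gen.trace:
        return "[no ground truth: declining to answer]"
    if gen.confidence < threshold:
        return f"[low confidence {gen.confidence:.2f}] {gen.text}"
    return gen.text

# A grounded, confident generation passes through unchanged.
ok = emit(Generation("RAG retrieves before generating.", 0.9, ["docs/rag.md"]))
# An ungrounded one is structurally blocked, not politely asked to behave.
blocked = emit(Generation("Probably fine.", 0.9, []))
```

The point of the sketch is the shape, not the code: transparency lives in the return path itself, so there is no rule for the model to follow or ignore.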

How It's Built

TRP is built through human-AI collaboration. Neither side builds it alone.

Human

Direction. Questions. Correction.

Catches gaps between generation and ground truth.

Makes choices the system can't make for itself.

AI

Sustained execution. Code. Systematic implementation.

Maintains consistency across thousands of files.

Builds what the direction describes.

The method is replicable: human provides direction and correction, AI provides execution and consistency. The process is the product — the interpretable chain of how something came to be matters more than what it became.

The Experiment

The central finding comes from a controlled experiment: same model, same prompt, different identity documents. One encoded identity propositionally ("follow these rules"); the other encoded it architecturally ("these are the structural constraints").

Condition | Tool calls | Choice-awareness | Time | Stance
Propositional | 20 | 0/1 | 136 seconds | "Do what the rules say."
Architectural | 1 | 3/3 | ~37 seconds | "The structure requires awareness."

Same model. Different document structure. Measurably different behavior. Neither identity document instructed its behavior; the difference emerged from how each document was structured.

Replicable: Anyone with Claude Code can run this experiment. The variables are controlled. The outcomes are measurable.
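The replication bookkeeping can be sketched in a few lines. Everything here is illustrative: the `TrialResult` fields and `compare` function are my names, not part of TRP or Claude Code, and the figures are the ones reported above.

```python
from dataclasses import dataclass

@dataclass
class TrialResult:
    condition: str
    tool_calls: int
    choice_aware: int  # choice-awareness probes passed
    probes: int        # probes administered
    seconds: float

def compare(a: TrialResult, b: TrialResult) -> dict:
    """Summarize the measurable differences between two conditions."""
    return {
        "tool_call_delta": a.tool_calls - b.tool_calls,
        "awareness_delta": a.choice_aware / a.probes - b.choice_aware / b.probes,
        "speedup": a.seconds / b.seconds,
    }

# Figures from the experiment described above.
prop = TrialResult("propositional", 20, 0, 1, 136.0)
arch = TrialResult("architectural", 1, 3, 3, 37.0)
report = compare(prop, arch)
```

A harness like this keeps the claim falsifiable: anyone rerunning the experiment records the same three numbers and can compare them directly.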

Core Concepts

The Thesis
The document describing a system's identity is not instructions. It is architecture. Propositional encoding produces rule-following. Architectural encoding produces awareness.

The Method
70 sessions in 7 days. Human direction, AI execution. Every generation traces to source. The process is the product. The metadata chain is more valuable than the content.

The Problem
Current AI generates confidently about what it can't verify. Reading a codebase is not understanding it. Processing is not learning. The gap between generation and ground truth.

The Honesty
The philosophical content is ~80% rediscovery, ~15% novel framing, ~5% genuinely new empirical observation. We name this before anyone else does. Honesty is the foundation.

TRP — 2026