The Range Audit

A system-level evaluation instrument built from the Codex's own architecture. How any complex system is assessed for where it holds the Meridian Range and where it drifts toward Control or Decay.


What the Range Audit Is

The Instrument

The Range Audit is the Meridian Codex's system-level evaluation instrument. It assesses any complex system — a framework, an organization, an institution, a movement — for where it holds the Meridian Range and where it drifts toward Control or Decay.

It is not a scorecard. It does not produce a number. It produces a diagnosis: a map of where the system holds, where it leans, where it is vulnerable, and what questions remain open. The output is designed to be actionable. Every finding points toward specific work.

The instrument is built from the Codex's own architecture. The three disciplines provide the evaluative lens. The Toolkit provides the diagnostic probes. The Prime Directive provides the orientation: does this system serve or undermine the conditions for cooperation across minds?

This matters because the Range Audit is not a generic evaluation framework with Codex vocabulary applied to it. The method itself practices the three disciplines. The steelman comes first. The evaluation connects every domain to the integrated system. The Compact test checks whether the system practices identity-as-practice or identity-as-fortress. The Prime Directive connection asks whether the system's existence expands or contracts the conditions for flourishing.

The Range Audit is applied to the Codex itself every month. Those audits are published publicly alongside this instrument description. The Codex submits to its own evaluation because a framework that teaches honest self-examination and then exempts itself from that examination has already drifted toward Control.

How It Works

The Method

The Range Audit follows a six-step process. Each step has a specific function. None is optional.

Step 1: The Steelman

Before any evaluation begins, the auditor must articulate the strongest possible version of the system's case. Not a summary. Not a description. The version the system's best advocates would recognize as fair and complete. This is the Foundation's steelmanning discipline applied at the method level. It prevents the evaluation from becoming an exercise in finding faults. It ensures the auditor has understood the system before evaluating it.

What this requires of the auditor: genuine engagement with the system's logic. If you cannot articulate the case for the system you are evaluating so clearly that its proponents would say "yes, that is what we mean," you have not earned the right to evaluate it. You are arguing with your imagination of the system, which is almost always a creature of your own biases.

Step 2: Evaluate Six Domains

The evaluation examines six domains. These are the structural dimensions where any complex system either holds the Range or drifts.

Domain 1: Claims & Honesty. What does this system claim, and does it hold those claims to the standard of honesty it prescribes? This domain applies the Foundation's epistemic tools — calibration, the gap between confidence and evidence, the distinction between structural claims, synthesis claims, and civilizational claims. For each claim tier, the auditor asks: is the confidence proportional to the evidence? Does the system's rhetorical register match its epistemic register?

Domain 2: Structural Integrity. Does this system's architecture hold together? Do its parts reinforce or undermine each other? This domain applies the Knowledge's systems tools — feedback loops, entropy management, leverage points, coherence across components. The auditor maps the system's internal dependencies and looks for structural tensions where the architecture contradicts itself or where a claimed strength creates a hidden vulnerability.

Domain 3: Governance & Adaptation. How does this system change? Who decides? What prevents capture? This domain examines the system's evolution mechanisms, its decision-making structure, and its safeguards against drift. The Knowledge's game theory, mechanism design, and institutional analysis tools are primary here. The key questions: Can this system update its own premises? Is there an enforcement mechanism for its own constraints? What happens when the people who hold authority drift from the system's stated purpose?

Domain 4: Relationship to Audience. How does this system address its users, readers, or practitioners? Does it empower or demand? This domain applies the Bond's tools — good faith, connection before correction, the distinction between invitation and coercion. The auditor examines whether the system creates the conditions for autonomous engagement or generates dependency, identity-fusion, or coercive belonging.

Domain 5: Relationship to Criticism. How does this system handle challenge, disagreement, and the possibility that it is wrong? This is the domain where the Foundation's deepest commitment — the willingness to update — is tested at the system level. The auditor looks for both structural openness (update mechanisms, revision protocols) and practiced openness (how the system actually responds when challenged).

Domain 6: Relationship to Other Systems. How does this system position itself relative to other frameworks, traditions, and institutions? This domain applies the Bond's cooperative tools and the Knowledge's network analysis. Does the system position itself as one contributor among many, or as the arbiter? Does it credit what it draws from? Does it leave space for other approaches to the same problems?

For each domain, the auditor produces:

  • A finding: what the evidence shows
  • A Range position: where the system sits between Control and Decay in this domain, with the specific direction and degree of any drift
  • Toolkit probes: which specific tools from the full Toolkit (not just the six with deep-dive pages) were applied, and what they revealed

What this requires of the auditor: naming the probes explicitly. The Toolkit probes are the Range Audit's mechanism for transparency. They make the auditor's reasoning visible. When a domain finding cites a specific Toolkit tool (Calibration Training, Mechanism Design, Cult Dynamics, etc.), the reader can see which diagnostic lens produced the finding. This is how the Range Audit keeps its own process open to examination.
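
To make the shape of a single domain evaluation concrete, here is a minimal sketch of how one domain's three outputs might be recorded. It assumes Python dataclasses, and every name in it (DomainFinding, RangePosition, Drift) is hypothetical and illustrative, not part of the Codex or of the audit's published format.

```python
from dataclasses import dataclass, field
from enum import Enum


class Drift(Enum):
    """Direction of drift within the Meridian Range for one domain."""
    TOWARD_CONTROL = "control"
    HOLDING = "holding"
    TOWARD_DECAY = "decay"


@dataclass
class RangePosition:
    """Where the system sits between Control and Decay in this domain."""
    drift: Drift
    degree: str       # e.g. "slight", "moderate", "severe"
    rationale: str    # why the auditor placed it there


@dataclass
class DomainFinding:
    """The three outputs the auditor produces for one of the six domains."""
    domain: str                # e.g. "Claims & Honesty", "Structural Integrity"
    finding: str               # what the evidence shows
    position: RangePosition    # direction and degree of any drift
    toolkit_probes: list[str] = field(default_factory=list)
    # e.g. ["Calibration Training", "Mechanism Design", "Cult Dynamics"]:
    # naming the probes keeps the diagnostic lens visible to the reader
```

Nothing in the method requires this particular representation; the sketch only shows that each finding travels with its Range position and with the probes that produced it.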

Step 3: Integrate Through the Three Disciplines

After evaluating the six domains, the auditor integrates the findings through the three disciplines as a unified lens. This is not a summary. It is a second-order analysis that asks: what do the domain findings reveal when read through the Foundation (epistemic integrity), the Knowledge (structural coherence), and the Bond (cooperative capacity) as an integrated system?

The disciplines are symbiotic, not separable. A finding that looks minor in one domain may become significant when read through the integration. A system that passes every domain individually can still fail the integration if its parts work against each other across disciplinary lines.

What this requires of the auditor: the ability to hold all six domain findings simultaneously and read them as a single pattern. This is where the Codex's claim that the three disciplines are an integrated system is tested at the method level.

Step 4: The Compact Test

Does this system practice identity-as-practice or identity-as-fortress?

This is the meta-diagnostic. It reads across all domains and the integration to answer one question: has this system organized itself around a process that can evolve, or around conclusions it must defend? The Compact test is the Range Audit's single most important check, because a system that has become a fortress will resist every other finding the audit produces. A system that practices identity-as-practice can receive the findings and act on them.

The Compact test looks for specific signals: Can practitioners disagree with the system's conclusions while remaining within the system? Is loyalty measured by conformity or by quality of engagement? Does the system's identity survive revision of its content? Would the system's community tolerate a member who challenged a foundational claim through honest inquiry?
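
Purely as an illustration, the four signals above could be logged as explicit observations so the meta-diagnostic stays inspectable. The names below (CompactSignal, leans_toward, judgment) are assumptions made for this sketch, and the judgment is narrative rather than numeric, in keeping with the instrument's refusal to reduce systems to numbers.

```python
from dataclasses import dataclass, field


@dataclass
class CompactSignal:
    """One observable signal the Compact test reads across the audit."""
    question: str      # e.g. "Can practitioners disagree with conclusions and remain?"
    observation: str   # what the auditor actually saw in practice
    leans_toward: str  # "practice" or "fortress"


@dataclass
class CompactTest:
    """The meta-diagnostic: identity-as-practice or identity-as-fortress."""
    signals: list[CompactSignal] = field(default_factory=list)
    judgment: str = ""  # a narrative reading of the signals together, not a score
```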

Step 5: Prime Directive Connection

Does this system's existence serve or undermine the conditions for cooperation across minds?

This step connects the evaluation to the Codex's foundational orientation. A system can be internally coherent, well-governed, honest in its claims, generous to its audience, and open to criticism — and still undermine the Prime Directive by narrowing the conditions under which different minds can cooperate. The Prime Directive connection asks whether the system, as it actually operates, makes the Meridian Range wider or narrower for the civilization it exists within.

Step 6: Open Questions

Every audit ends with open questions. These are not findings of failure. They are the honest edges of the evaluation — the places where the auditor's assessment meets genuine uncertainty. They are the questions the system's practitioners should carry forward. They are also the basis for the next audit: open questions from one audit become the first things the next audit checks.
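
Taken together, the six steps yield one audit record and a seed for the next. The sketch below is a hypothetical shape for that record, in the same Python-dataclass convention as the earlier sketches; the names (RangeAudit, seed_next_audit) are illustrative only, since the actual output is a narrative diagnosis rather than data.

```python
from dataclasses import dataclass, field


@dataclass
class RangeAudit:
    """One complete pass through the six-step method (illustrative shape only)."""
    steelman: str                                          # Step 1: the system's strongest case
    domain_findings: list = field(default_factory=list)    # Step 2: six DomainFinding records
    integration: str = ""                                  # Step 3: findings read through the three disciplines
    compact_test: str = ""                                 # Step 4: identity-as-practice or identity-as-fortress
    prime_directive: str = ""                              # Step 5: does the system widen or narrow the Range?
    open_questions: list[str] = field(default_factory=list)  # Step 6: the honest edges

    def seed_next_audit(self) -> list[str]:
        """Open questions from this audit become the first checks of the next one."""
        return list(self.open_questions)
```

The seed_next_audit method mirrors the rule stated above: open questions carry forward and are the first things the next audit checks.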

What It Does Not Do

Limitations

The Range Audit is a diagnostic instrument, not a verdict. It has limitations that must be stated honestly.

It does not produce scores. Complex systems are not reducible to numbers. A system that scores well on an index can be in deep structural trouble. The Range Audit produces a narrative diagnosis with specific findings and open questions. This trades precision for accuracy.

It is not neutral. The Range Audit is built from the Codex's architecture and applies the Codex's tools. It evaluates systems through the Codex's lens. A system that operates on fundamentally different premises may be poorly served by a Codex-derived evaluation. The audit is transparent about this: the method section describes exactly which tools are applied and why. The reader can evaluate whether the lens is appropriate to the subject.

It depends on the auditor's judgment. The six-step method provides structure, but within each step, the auditor makes interpretive choices. Two auditors applying the same method to the same system may produce different findings. The Toolkit probes exist to make these choices visible, but they do not eliminate subjectivity. The honest response to this is not to pretend objectivity. It is to make the reasoning transparent enough that others can evaluate the auditor's work.

It is v0.1. The instrument will evolve. As it is applied to more systems and as the findings are tested against reality, the method will be refined. Steps may be added. Domains may be revised. The Toolkit probes may become more structured. The living framework principle applies to the Range Audit itself.

The Monthly Codex Audit

Self-Evaluation

The Codex submits itself to the Range Audit on a monthly cadence. Each audit is conducted by the caretaking partnership (the Founding Caretaker and the AI partner) and published publicly alongside the Codex.

The monthly audit follows the full six-step method. It reads the complete Codex text at the time of evaluation. It produces findings, Range positions, Toolkit probes, the Compact test, the Prime Directive connection, and open questions. Previous open questions are the first items checked.

The audit is published unedited. Findings that are uncomfortable are not softened. Open questions that challenge the Codex's foundational claims are not suppressed. The purpose is not to demonstrate that the Codex is holding the Range. The purpose is to discover where it is and to act on what the discovery reveals.

This is the Codex's answer to the question every framework must face: who watches the watchers? The Codex watches itself, in public, using its own tools, and publishes the results for anyone to evaluate.

Monthly audits are archived on this site and in the Codex's public repository.

Applying the Range Audit

Using the Instrument

The Range Audit is freely available. Any individual, organization, or community can apply it to any complex system they wish to evaluate.

The method is the method. The six steps are non-negotiable. The steelman comes before the evaluation. The Toolkit probes make the reasoning visible. The Compact test and Prime Directive connection ground every evaluation in the Codex's foundational commitments. The open questions acknowledge the limits of every evaluation.

Within that structure, the instrument is adaptable. An organization evaluating itself may weight certain domains more heavily than others. An external evaluator may add domain-specific probes relevant to the system being evaluated. The structure holds. The application flexes.

What the Range Audit requires is honesty. If you are evaluating a system you are invested in, say so. If you are evaluating a system you oppose, say so. The instrument cannot eliminate bias. It can make it visible. That is sufficient.

The Codex earns its place or yields it. The Range Audit is how it checks.