Episodes

  • CLA | Ch. 8 — Taxonomy of Space AI Systems: The Regulatory Cube
    2026/04/30

    On September 2, 2019, the European Space Agency fired Aeolus's thrusters at 320 kilometers altitude to avoid Starlink 44. The collision probability had climbed to 1 in 1,000 — ten times ESA's action threshold. SpaceX, notified days earlier, did not maneuver. A bug in its internal alert system prevented operators from seeing the risk updates. ESA acted alone. No one violated any rule because there is no rule to violate: coordination between operators is negotiated by email, with no binding protocol, no traffic authority, no auditable record.

    The incident was minor. The question it reveals is not. What regulatory regime applies to a navigation satellite making autonomous evasion decisions? The same as a life support system rationing oxygen in a Martian colony? The same as an algorithm adjudicating disputes between asteroid belt mining operators? The obvious answer is no. Current space law lacks the tools to articulate that difference.

    Chapter 8 of CLA builds the functional taxonomy that space law needs and does not yet have. Four dimensions: function (what the system does), criticality (what happens when it fails), autonomy (how much human supervision it requires), and domain (where it operates). Five functional classes — from life support systems (Class A, existential criticality) to adjudication and governance systems (Class E). Four autonomy levels calibrated by physics: a one-way signal latency to Mars that can reach 22-24 minutes makes continuous ground control impractical, and no institutional design can change that constraint. The Regulatory Cube — the intersection of criticality × autonomy — determines the applicable minimum requirements: the evidentiary level demanded, the intensity of VCE conditions, the configuration of dignity thresholds.
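
    The cube can be read as a lookup: each criticality × autonomy cell fixes a floor of obligations that never relaxes as autonomy grows. A minimal sketch in Python, with level names and requirement fields invented here for illustration rather than taken from the chapter:

      from enum import IntEnum

      class Criticality(IntEnum):      # what happens when the system fails
          LOW = 1
          SEVERE = 2
          EXISTENTIAL = 3              # failure kills within minutes (Class A life support)

      class Autonomy(IntEnum):         # how much human supervision the system requires
          GROUND_CONTROLLED = 1
          SUPERVISED = 2
          CONDITIONAL = 3
          ADAPTIVE = 4                 # continuous ground control impractical at Mars latencies

      def minimum_requirements(c: Criticality, a: Autonomy, human_supervisors: int) -> dict:
          """Floor of obligations for one cell of the cube (illustrative values only)."""
          if c is Criticality.EXISTENTIAL and human_supervisors <= 1:
              # the prohibited configuration: a system whose failure kills within minutes
              # may not depend on the attention of a single human being
              raise ValueError("existential criticality with single-human supervision is prohibited")
          return {
              "evidentiary_tier": int(c),            # stricter evidence for graver failure modes
              "vce_intensity": max(int(c), int(a)),  # requirements peak on the criticality x autonomy diagonal
              "dignity_thresholds": "maximal" if c is Criticality.EXISTENTIAL else "standard",
          }

      # A Martian life support system with adaptive autonomy sits at the maximum-intensity corner:
      print(minimum_requirements(Criticality.EXISTENTIAL, Autonomy.ADAPTIVE, human_supervisors=3))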

    Three structural patterns emerge. First, the maximum-intensity diagonal: a Martian life support system with adaptive autonomy simultaneously mobilizes the full ANCLA triad at maximum intensity — not accumulated bureaucracy, but institutional response proportional to the highest conceivable risk. Second, the prohibition on pairing existential criticality with single-human supervision: when a system whose failure kills within minutes depends on one human, its safety is only as strong as that human's attention at the worst possible moment. Third, a deliberate asymmetry: the taxonomy does not penalize autonomy itself, but autonomy without supervision proportional to risk.

    The taxonomy reveals that the core regulatory problem is not AI in the abstract but AI in context. Governing orbital traffic by email is not neutral omission — it is a political decision with identifiable beneficiaries. Without taxonomy, the operator who redesigns a life support system as a "data management platform" avoids the most demanding controls. Without taxonomy, the victim has no legal language to articulate the difference. Classification without consequences is nomenclature. Nomenclature without accountability is a catalog.

    Chapter 9 will translate this taxonomy into accountability chains: who answers for what when a system of a given class fails.

    🔹 CLA — Algorithmic Law for the Cosmos

    Jesús Bernal Allende | Escuela del Deber-Optimizar y la Soberanía de la Evidencia

    🌐 https://edo-os.com 🔗 https://www.linkedin.com/in/jesus-bernal-allende-030b2795

    25 min
  • CLA | Ch. 7 — Algorithmic Dignity and the Thresholds of Inviolability
    2026/04/28

    Can a system be demonstrably efficient and radically unjust at the same time — without breaking a single rule it designed for itself?

    In 2018, it emerged that a hiring algorithm built by a major tech firm systematically screened out female candidates. There was no technical malfunction: the system optimized exactly what it was told to optimize. The results were auditable. The problem was that no one had encoded the constraint that human beings cannot be treated as variables to be eliminated from an optimization function.

    Chapters 5 and 6 of CLA established the validity conditions and evidentiary standards of the Algorithmic Common Law. Both assumed the existence of constitutive constraints — limits that no optimization may cross. Neither formalized them. This chapter builds those constraints.

    Algorithmic Dignity is not a philosophical extension of classical human dignity: it is its operational translation into the only language algorithmic agents understand — hard constraints that define the solution space before any calculation begins. Classical dignity operates ex post: a tribunal determines whether an act violated it. Algorithmic Dignity operates ex ante: the violation does not exist as a computable option.

    The chapter formalizes seven Thresholds of Inviolability — organized across two levels (Alpha: absolute; Beta: subject to mandatory human escalation) — ranging from the prohibition on causing intentional death through optimization (U1) to the right to contest any algorithmic decision affecting fundamental rights (U7). For each threshold: converging philosophical foundations (Kant, UDHR, Jonas, Nussbaum), technical implementation specifications, and the precise consequences of violation. The relationship between VCE (Ch. 5) and Dignity is formalized as a lexicographic utility function: the thresholds do not participate in any cost-benefit calculation. No number of lives saved converts an intentional killing into an admissible option. The system does not choose not to do it; it simply cannot compute it as a choice.
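
    One way to write the lexicographic relation down, with notation assumed here rather than drawn from the book ($X$ the option space, $U_1, \dots, U_7$ the thresholds, $O$ the optimization objective):

      % Thresholds carve out the admissible set before any optimization runs
      \mathcal{A} = \{\, x \in X : U_i(x) \text{ holds for every } i \in \{1, \dots, 7\} \,\}

      % The system optimizes only over the admissible set; no gain in O can
      % compensate for a threshold violation (lexicographic priority, U > O)
      x^{*} \in \arg\max_{x \in \mathcal{A}} O(x)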

    As the chapter itself states: "A system that can prove it is efficient but cannot guarantee it is human has proved nothing."

    🔹 CLA — Algorithmic Law for the Cosmos

    Jesús Bernal Allende | Escuela del Deber-Optimizar y la Soberanía de la Evidencia

    🌐 https://edo-os.com 🔗 https://www.linkedin.com/in/jesus-bernal-allende-030b2795

    23 min
  • CLA | Ch. 6 — The Sovereignty of Evidence: Anti-Capture Epistemic Infrastructure
    2026/04/24

    If authority that cannot show why it rules is not authority but inertia, what institutional infrastructure ensures that the evidence legitimizing an algorithmic system is not produced by the very actor with the greatest stake in manipulating it?

    The pattern repeats across recent history. The Value-at-Risk models that preceded the 2008 financial crisis were "evidence-based" — evidence produced by the very institutions whose stability they were meant to demonstrate. IMF development reports documented "progress" in countries where material conditions were worsening, using metrics designed to yield the desired outcome. Soviet planners presented production data fabricated by the very bureaucracy whose legitimacy hinged on that success. Whenever legitimacy depends on outcomes, there is a structural incentive to manipulate the evidence of those outcomes.

    This chapter builds the Sovereignty of Evidence as a fourth source of political legitimacy, complementing the three classical traditions (Weber, Scharpf, Schmidt); a short sketch of how the pieces compose follows the list:

    1. Five conditions of evidence (E1-E5): source traceability, methodological reproducibility, falsifiability, independent validation, and currency.

    2. A four-tier evidence hierarchy calibrated to criticality: from multi-source convergent evidence for existential decisions down to declarative evidence with no normative weight.

    3. IURUS as epistemic infrastructure: immutable registry, methodological certification, audit, and first-tier adjudication of challenges.

    4. Five anti-capture mechanisms: structural independence, mandatory rotation, pluralism of verification, reciprocal auditing, and radical transparency.

    5. Three-level circularity breaking: separation of epistemic functions, source triangulation, and institutionalized falsifiability.
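
    A minimal sketch of how the conditions and the tiers compose, with field names and tier counts assumed for illustration (the chapter defines the conditions, not a schema):

      from dataclasses import dataclass

      @dataclass
      class Evidence:
          source_traceable: bool         # E1: provenance of every datum is recorded
          reproducible: bool             # E2: the method can be re-run by a third party
          falsifiable: bool              # E3: it states what observation would refute it
          independently_validated: bool  # E4: checked by an actor with no stake in the outcome
          current: bool                  # E5: not superseded by newer observations
          independent_sources: int       # how many unrelated sources converge on the claim

      def admissible(e: Evidence) -> bool:
          """E1-E5 are cumulative: failing any one strips the evidence of normative weight."""
          return all([e.source_traceable, e.reproducible, e.falsifiable,
                      e.independently_validated, e.current])

      def required_convergence(decision_criticality: str) -> int:
          """Tier calibrated to criticality: existential decisions need multi-source convergence."""
          return {"existential": 3, "high": 2, "routine": 1}.get(decision_criticality, 1)

      def carries_normative_weight(e: Evidence, decision_criticality: str) -> bool:
          return admissible(e) and e.independent_sources >= required_convergence(decision_criticality)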

    The institutional precedents are invoked with care: the IAEA in the nuclear domain, ICAO in civil aviation, Cochrane reviews in evidence-based medicine, the Artemis Accords (2020) as proto-transparency, and Weiss and Jacobson's work (2000) on information-based environmental compliance. The chapter draws on Jasanoff (2003, 2004) to frame IURUS as institutionalized "technology of humility" — it does not claim to hold the truth, but to establish the conditions under which truth claims can be evaluated, challenged, and corrected.

    Five domains are placed explicitly outside the sovereignty of evidence: the definition of ends, Inviolability Thresholds, cultural life, individual existential decisions, and what evidence cannot capture. The hierarchy with Algorithmic Dignity (Ch. 7) is lexicographic: evidence evaluates metrics; thresholds are set by the political community.

    The closing thesis: to trust what is verifiable is not cynicism. It is the most honest form of respect — respecting a community enough to show it, rather than tell it, that it is well governed.

    🔹 CLA — Algorithmic Law for the Cosmos

    Jesús Bernal Allende | Escuela del Deber-Optimizar y la Soberanía de la Evidencia

    🌐 https://edo-os.com 🔗 https://www.linkedin.com/in/jesus-bernal-allende-030b2795

    22 min
  • CLA | Ch. 5 — Validity by Critical Efficiency (VCE): The Validation System for Algorithmic Law
    2026/04/22

    If a norm no one can verify is not a norm but a hope, what makes an algorithmic decision legally valid when no one enacted it, no one interpreted it, and no one had time to deliberate on it?

    The question is not hypothetical. In low Earth orbit, AI systems are already executing collision-avoidance maneuvers for constellations of thousands of satellites, with decision windows that sometimes come down to minutes. If the system decides not to maneuver and the resulting collision generates debris that harms third parties, the chain of responsibility between operator, algorithm designer, and certifying regulator is legally ambiguous under current frameworks (UNOOSA, 2025; SmartSat CRC, 2024). No existing precedent — not corporate personhood, not autonomous vehicle regulation, not maritime law — resolves real-time normative validation of autonomous decisions with existential consequences.

    This chapter develops Validity by Critical Efficiency (VCE) as a fourth tradition of legal validity, complementing rather than replacing the three classical ones:

    1. Formal validity (Kelsen): a norm holds because the competent authority issued it.

    2. Substantive validity (Dworkin, natural law): a norm holds if it respects moral principles.

    3. Sociological validity (Hart, legal realism): a norm holds if it is generally obeyed.

    4. VCE validity: a decision holds if it produces verifiably optimal outcomes within constraints that protect human dignity.

    The four cumulative conditions (C1-C4): demonstrable optimality, constitutive constraints, complete traceability, and an available human override. Three optimality standards calibrated by criticality: strict optimum, reasonable optimum, demonstrable improvement. A failure taxonomy (F1-F4) with progressively heavier consequences, ranging from minor suboptimality to absolute nullity when a constraint is breached. And a three-tier appeal system: IURUS, THEA (Hybrid Spatial Algorithmic Tribunal), and standards review.
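
    A compact sketch of the cumulative check, with names assumed here rather than taken from the chapter. Only the extremes of the failure taxonomy are fixed by the text (minor suboptimality at one end, absolute nullity on a breached constraint at the other); the intermediate mappings below are placeholders:

      from dataclasses import dataclass

      @dataclass
      class Decision:
          optimality_shown: bool        # C1: outcome demonstrated optimal for the applicable standard
          constraints_respected: bool   # C2: no constitutive (dignity) constraint breached
          fully_traceable: bool         # C3: every step of the decision can be reconstructed
          override_available: bool      # C4: a human override existed and was reachable

      def vce_validity(d: Decision) -> str:
          """C1-C4 are cumulative; the gravest failure class present determines the consequence."""
          if not d.constraints_respected:
              return "F4: absolute nullity (inviolable threshold breached)"
          if not d.fully_traceable:
              return "F3: invalid; challengeable before IURUS, then THEA"
          if not d.override_available:
              return "F2: invalid; remediation required"
          if not d.optimality_shown:
              return "F1: minor suboptimality; corrective duty"
          return "valid under VCE"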

    The chapter closes with a canonical axiomatic formulation: six axioms that any system claiming to implement VCE must satisfy in full. Axiom 2 is lexicographic: U > O. Dignity constraints take absolute priority over optimization. No outcome, however efficient, is valid if it crosses an inviolable threshold.

    The central thesis: in high-stakes environments where the atmosphere is artificial, water is finite, and every algorithmic decision can be the last, verification is not optional — it is survival.

    🔹 CLA — Algorithmic Law for the Cosmos

    Jesús Bernal Allende | Escuela del Deber-Optimizar y la Soberanía de la Evidencia

    🌐 https://edo-os.com 🔗 https://www.linkedin.com/in/jesus-bernal-allende-030b2795

    22 min
  • CLA | Ch. 4 — From Tool to Normative Agent
    2026/04/16

    The question is no longer whether machines can think. It is whether machines that make decisions with legal consequences can continue to be treated as simple objects.

    Between Earth and Mars there are between 4 and 24 minutes of one-way signal latency. Within that interval, an AI system may have to decide the fate of the life support keeping 120 people alive. There is no time to consult anyone. There is no human to hand control back to. The system decides.

    Is that decision the act of a tool? Of a person? Of neither?

    This episode argues that the traditional dichotomy — persons versus things — is insufficient for twenty-first-century law. Space AI systems are a third category: algorithmic normative agents.

    They are not persons: they have no moral conscience or intrinsic dignity. They are not tools: they do not execute deterministic instructions. They are limited centers of normative imputation — entities with autonomous decision-making capacity, specific responsibilities, and constitutive restrictions that no calculation can transgress.

    Five conditions define them: they make autonomous decisions within defined domains, they operate under normative restrictions coded into their architecture, they generate legal consequences, they are auditable, and they admit human override.
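
    Read as an interface, the five conditions look roughly like this (method names assumed for illustration, not taken from the book):

      from typing import Protocol

      class AlgorithmicNormativeAgent(Protocol):
          """A limited center of normative imputation: neither person nor tool."""

          def decide(self, situation: dict) -> dict:
              """Makes autonomous decisions, but only within a defined domain."""
              ...

          def normative_constraints(self) -> list[str]:
              """Restrictions coded into the architecture that no calculation may transgress."""
              ...

          def legal_effects(self, decision: dict) -> list[str]:
              """Decisions generate consequences attributable in law."""
              ...

          def audit_trail(self, decision: dict) -> list[str]:
              """Every decision can be reconstructed and examined after the fact."""
              ...

          def accept_human_override(self) -> bool:
              """A human can take control back, where physics allows it."""
              ...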

    The law has already built analogous categories: corporate personhood for entities without a mind, in rem actions in maritime law, autonomous vehicle regulatory frameworks. None is sufficient for space. All point in the same direction: the law can create new categories when reality demands it.

    Reality in space demands it now.

    📙 CLA: Algorithmic Law for the Cosmos

    Jesús Bernal Allende | Escuela del Deber-Optimizar y la Soberanía de la Evidencia

    https://a.co/d/0aGJioHm

    🌐 https://edo-os.com 🔗 https://www.linkedin.com/in/jesus-bernal-allende-030b2795

    22 min
  • CLA | Ch. 3 — The Founding Charter of the Escuela del Deber-Optimizar
    2026/04/15

    Technology is not neutral: it amplifies what we are. If we are just, it will amplify justice. If we are tyrants, it will amplify tyranny. Institutional design determines what gets amplified.

    This episode presents the foundational principles of the Algorithmic Common Law — the philosophical architecture that makes law possible in the cosmos.

    1. Anthropological Amplification: technology neither determines nor is neutral. It is an amplifier. The decisive question is not what technology does, but what it finds when it arrives. Space institutions must be designed to amplify the best of the human condition, not the worst.
    2. The Duty-to-Optimize / Validity by Critical Efficiency (VCE): the Ought-to-Be asks what the norm prescribes. The Duty-to-Optimize asks what works within the limits that dignity imposes. In environments where errors are irreversible, a norm that no one can verify is not a norm — it is a declaration.
    3. Sovereignty of Evidence: legitimacy does not derive from formal authority but from demonstrable evidence of results. Ends are political; means are empirical. Whoever controls the data can control the evidence — that is why IURUS exists.
    4. Algorithmic Dignity: there are thresholds that no optimization can transgress. A system that maximizes efficiency at the cost of human dignity is not efficient. It is broken.

    And the purpose that orients everything: Flourishing. Not mere survival. The expansion of capabilities to live a genuinely human life — even 300 million kilometers from home.

    📙 CLA: Algorithmic Law for the Cosmos

    Jesús Bernal Allende | Escuela del Deber-Optimizar y la Soberanía de la Evidencia

    https://a.co/d/0aGJioHm

    🌐 https://edo-os.com 🔗 https://www.linkedin.com/in/jesus-bernal-allende-030b2795

    23 min
  • CLA | Ch. 2 — Classical Legal Architecture Against the Cosmic Void
    2026/04/08

    Kelsen presupposed territory. Hart presupposed community. Dworkin presupposed time. Luhmann presupposed closure. Space eliminates all four.

    Chapter 2 of CLA examines the four dominant theoretical architectures of twentieth-century law — Kelsen's normativism, Hart's analytical positivism, Dworkin's interpretivism, and Luhmann's systems theory — and demonstrates that they do not face correctable flaws, but structural obsolescence.

    The distinction is crucial: a correctable flaw can be resolved without altering the foundations of the theory. Structural obsolescence occurs when the failure lies in the conditions of possibility of the theory itself. It is not a building with cracks: it is a building constructed on ground that has disappeared.

    The chapter incorporates the diagnosis of the IISL Working Group on Legal Aspects of AI in Space (Yazici et al., 2024) — a 267-page report published in December 2024 — which concludes that existing legal frameworks are insufficient to govern autonomous systems in space environments.

    Only by identifying precisely where and why existing theories collapse can we build alternatives that avoid reproducing their limitations.

    📖 CLA: Algorithmic Law for the Cosmos — Volume I

    Jesús Bernal Allende | School of Duty-to-Optimize and Sovereignty of Evidence

    https://a.co/d/0aqO3T6K

    🌐 https://edo-os.com

    🔗 https://www.linkedin.com/in/jesus-bernal-allende-030b2795

    23 min
  • CLA | Ch. 1 — Space as a Rupture of the Legal Paradigm
    2026/04/06

    Westphalia was an engineering solution, not an eternal truth. Space is the environment where that engineering stops working.

    Chapter 1 of CLA: Algorithmic Law for the Cosmos argues that outer space is not simply a new domain for existing law — it is the catalyst for a paradigmatic crisis that reveals the structural limits of the modern legal system.

    The chapter introduces the concept of territorial proxy obsolescence: territory was always a technology of control, not an essence. A technology that proved optimal for three centuries under specific conditions — limited human mobility, geographically fixed resources, predominantly physical wealth. In space, that technology becomes entirely obsolete.

    The episode examines three scenarios of state transfiguration — the Algorithmic Protectorate, the Infrastructure Federation, and Distributed Functional Sovereignty — and establishes the central thesis: the transition of humanity toward a multiplanetary species requires not adapting terrestrial law, but transfiguring it.

    📖 CLA: Algorithmic Law for the Cosmos — Volume I

    Jesús Bernal Allende | School of Duty-to-Optimize and Sovereignty of Evidence

    https://a.co/d/0aqO3T6K

    🌐 https://edo-os.com

    🔗 https://www.linkedin.com/in/jesus-bernal-allende-030b2795

    23 min