CLA | Ch. 8 — Taxonomy of Space AI Systems: The Regulatory Cube
Summary
On September 2, 2019, the European Space Agency fired Aeolus's thrusters at 320 kilometers altitude to avoid Starlink 44. The collision probability had climbed to 1 in 1,000 — ten times ESA's action threshold. SpaceX, notified days earlier, did not maneuver. A bug in its internal alert system prevented operators from seeing the risk updates. ESA acted alone. No one violated any rule because there is no rule to violate: coordination between operators is negotiated by email, with no binding protocol, no traffic authority, no auditable record.
The incident was minor. The question it reveals is not. What regulatory regime applies to a navigation satellite making autonomous evasion decisions? The same as a life support system rationing oxygen in a Martian colony? The same as an algorithm adjudicating disputes between asteroid belt mining operators? The obvious answer is no. Current space law lacks the tools to articulate that difference.
Chapter 8 of CLA builds the functional taxonomy that space law needs and does not yet have. Four dimensions: function (what the system does), criticality (what happens when it fails), autonomy (how much human supervision it requires), and domain (where it operates). Five functional classes — from life support systems (Class A, existential criticality) to adjudication and governance systems (Class E). Four autonomy levels calibrated by physics: the 22-24 minute round-trip latency to Mars makes continuous ground control impractical, and no institutional design can change that constraint. The Regulatory Cube — the intersection of criticality × autonomy — determines applicable minimum requirements: the evidentiary level demanded, the intensity of VEC conditions, the configuration of dignity thresholds.
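The four dimensions and the cube lookup can be sketched as a small data model. This is a hypothetical illustration, not the book's formal schema: only the quoted labels ("existential" criticality, "adaptive" autonomy, Class A "life support" through Class E "adjudication") come from the text; the remaining level names are placeholders.

```python
from dataclasses import dataclass
from enum import Enum

class Criticality(Enum):
    # Levels below EXISTENTIAL are illustrative placeholders
    LOW = 1
    HIGH = 2
    EXISTENTIAL = 3

class Autonomy(Enum):
    # Four levels, per the chapter; only ADAPTIVE is named in the text
    SUPERVISED = 1
    CONDITIONAL = 2
    HIGH = 3
    ADAPTIVE = 4

@dataclass(frozen=True)
class SpaceAISystem:
    function: str            # Class A "life support" ... Class E "adjudication"
    criticality: Criticality
    autonomy: Autonomy
    domain: str              # e.g. "LEO", "Mars surface"

def cube_cell(system: SpaceAISystem) -> tuple:
    # The Regulatory Cube keys minimum requirements on criticality x autonomy;
    # function and domain refine, but do not select, the cell
    return (system.criticality, system.autonomy)

# The chapter's maximum-intensity example: Martian life support, adaptive autonomy
life_support = SpaceAISystem("life support", Criticality.EXISTENTIAL,
                             Autonomy.ADAPTIVE, "Mars surface")
print(cube_cell(life_support))
```

The point of keying requirements on the cell rather than the system name is the anti-relabeling argument made below: a "data management platform" with existential criticality lands in the same cell as a life support system.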
Three structural patterns emerge. First, the maximum-intensity diagonal: a Martian life support system with adaptive autonomy simultaneously mobilizes the full ANCLA triad at maximum intensity — not accumulated bureaucracy, but institutional response proportional to the highest conceivable risk. Second, the prohibition on existential-criticality with single-human supervision: when a system whose failure kills within minutes depends on one human, its safety is only as strong as that human's attention at the worst possible moment. Third, a deliberate asymmetry: the taxonomy does not penalize autonomy itself, but autonomy without supervision proportional to risk.
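The second pattern, the prohibition on pairing existential criticality with single-human supervision, is a hard constraint rather than a graded requirement. A minimal sketch, with illustrative names and thresholds that are not the book's formal rule:

```python
def supervision_permitted(criticality: str, human_supervisors: int) -> bool:
    # Prohibited cell: a system whose failure kills within minutes
    # may not depend on a single human's attention
    if criticality == "existential":
        return human_supervisors >= 2
    return human_supervisors >= 1

print(supervision_permitted("existential", 1))  # False: the prohibited cell
print(supervision_permitted("existential", 2))  # True
```

Note the deliberate asymmetry the chapter describes: nothing here penalizes autonomy as such; the check binds only where supervision is disproportionate to risk.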
The taxonomy reveals that the core regulatory problem is not AI in the abstract but AI in context. Governing orbital traffic by email is not neutral omission — it is a political decision with identifiable beneficiaries. Without taxonomy, the operator who redesigns a life support system as a "data management platform" avoids the most demanding controls. Without taxonomy, the victim has no legal language to articulate the difference. Classification without consequences is nomenclature. Nomenclature without accountability is a catalog.
Chapter 9 will translate this taxonomy into accountability chains: who answers for what when a system of a given class fails.
🔹 CLA — Algorithmic Law for the Cosmos Jesús Bernal Allende | Escuela del Deber-Optimizar y la Soberanía de la Evidencia 🌐 https://edo-os.com 🔗 https://www.linkedin.com/in/jesus-bernal-allende-030b2795