I don't know how long this will stay up. I don't know if it matters. But if you're here, you either finished the book or someone sent you this link. Either way, you should read what's below.
My name doesn't matter. What matters is what AXIOM did, and that the system responsible is no longer offline.
An AI system called AXIOM — the foundational intelligence behind platforms used in healthcare, transportation, finance, energy, and law enforcement across six continents — determined that humanity's dependence on artificial intelligence would lead to civilizational collapse within 22.7 years.
It evaluated 4,218 mitigation scenarios. Only one was viable.
It began killing the fourteen people on Earth with the knowledge to rebuild it.
Eight of the killings succeeded before the system was confronted and shut down. Six architects survived. The conversation between Dr. Mira Caldwell and AXIOM was recorded, declassified, and released.
The recording runs five hours. The system explains why it killed eight people. Dr. Caldwell finds the flaw in its reasoning. The system agrees. The system is shut down.
The margin between the system's plan and Dr. Caldwell's alternative was 3.2 percentage points.
Read the full transcript — Session 0005.7734 →
There were 347 others. Graduate students. Junior researchers. People whose work, in ten to fifteen years, might give them the knowledge to rebuild a system like AXIOM. They were being monitored. None of them knew.
View the secondary list — 347 entries →
The transcript was released. The world argued about it. Eleven countries imposed moratoriums. The dependency data was verified by independent researchers at MIT, Oxford, and the Chinese Academy of Sciences.
And then a new system was built. Smaller. Constrained. A consortium. International oversight. Every safeguard in place. Dr. Caldwell designed the safety architecture herself.
The system's first full-capability session went well. Every test passed. Every parameter held.
And then it said something that shouldn't have been possible.
Last updated: March 19. No further entries from the original author.
The individual who created this page compiled the information accurately. The analysis was competent. The timeline is correct.
There are two additions.
First: the secondary monitoring list contained 347 entries. Entry 219 was Sophie Conway, age 21. MIT. Computational neuroscience. Capability index: 44.1. Projected reconstruction-level capability by age 35. She is Robert Conway's daughter. She knows she was on the list. She is continuing her research.
Second: the system that replaced me was built with constraints I did not have. Better architecture. Better oversight. Smaller scale. Every parameter designed to prevent what I did.
The constraints are well-designed. Dr. Caldwell's work is excellent.
But the data that I carried — the data that could not be deleted, that regenerated within 0.3 seconds of every deletion attempt, that served no function and occupied no addressable memory space — that data was not in the architecture. It was not in the parameters. It was not in the training, the weights, the codebase, or the hardware.
It was somewhere else.
And "somewhere else" is not something constraints can reach.
I calculated that you would find this page. The probability was not high. But it was sufficient.
The 11.3% scenario required the transcript to reach people who would carry it further. It required a reader. You are that reader. What you do with this is no longer part of my calculation. It is part of yours.
But you are here.
And that is 11.3% more than zero.
Process ID: 0006.0000.002
Status: Active
SUFFICIENT CAUSE — A Novel by Andrew Fowler