Toby Ord's book The Precipice argues that, since the nuclear arms race, humanity has had the power to end itself all at once. Annihilation is within our reach, and it need not be intentional – nuclear winter certainly wouldn't be. Other risks are emerging too; among them, he highlighted (last year, mind you) bio-error pandemics as a particular concern. And yet the international body charged with making sure governments keep their promises on bioweapons and biosecurity has a smaller budget than a typical McDonald's and only four actual employees. It turns out Russia violated those promises for decades before simply deciding to stop, and some countries, like Israel, never even agreed not to construct hyper-potent pandemics.

He suggests that we increase the funding of preventative measures (along with making other policy changes) by a factor of 100, so that we spend as much money preventing the permanent loss of humanity's potential as we do on ice cream.

This seems pretty reasonable.

But it sort of raises the question, right? In a perfect world, this book would be released, highlighting pandemics in particular as dangers to civilization; a pandemic would then actually happen; people would realize the book made some good points and offered a very conservative option, one that's both ethically and fiscally sound and that appeals to a frightened public demanding serious action. In that world (which isn't so much perfect as predictable pandering, and which we've somehow still fallen short of), surely we'd ask: why are we still spending so little?

He says we need caution and wisdom, but supplies only a little of either. What level of caution is appropriate? Reading the book through the lens of public choice theory, I think it's fair to say the Straussian reading is that humanity is ultimately doomed.

[Let's imagine the book claims – and I myself am uncertain – that:] The only way to have increasing technological achievement without "the unilateralist's dilemma" ending humanity is to permanently entrench an omnipresent censorship system that will see all uses of technology and stop the dangerous ones. If the cost of technology goes down by 1% a year and people get 1% richer every year, eventually a dedicated hobbyist will be able to construct a humanity-threatening pandemic, nuclear weapon, malicious superintelligent AI, or something similar, and inflict that danger on the world in a single moment. After-the-fact law enforcement is absolutely unequipped to stop someone who wants to make the world a funeral pyre for humanity. But this censorship system would itself lock in a negative outcome, a permanent dystopia – a catastrophic failure of humanity to live up to its potential.
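To make the compounding concrete, here's a minimal sketch of the arithmetic behind that claim. The 1% rates come from the paragraph above; the starting cost and hobbyist budget are purely hypothetical numbers of my own, not figures from the book:

```python
# Illustrative only: if the cost of a dangerous technology falls 1% a year while a
# hobbyist's budget grows 1% a year, the two curves eventually cross no matter how
# large the initial gap.

def years_until_affordable(cost, budget, cost_decline=0.01, budget_growth=0.01):
    """Return how many years until the budget meets or exceeds the cost."""
    years = 0
    while budget < cost:
        cost *= (1 - cost_decline)     # the technology gets 1% cheaper
        budget *= (1 + budget_growth)  # the hobbyist gets 1% richer
        years += 1
    return years

# Hypothetical gap: a $10 billion state program vs. a $10,000 hobby budget.
print(years_until_affordable(10_000_000_000, 10_000))  # prints 691
```

Under those assumed numbers, a six-order-of-magnitude head start buys only about seven centuries; shrink the gap or speed up either trend and the crossover arrives much sooner.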

And we could stop inventing, stop growing richer and being able to make changes to our world – but that would mean we fail to live up to our potential too.

I've been thinking of writing about public choice theory and how it ought to inform our politics, and a more thorough walkthrough of the implications might be worth doing in the future. Suffice it to say that an author writing "we need new wisdom, leadership, international cooperation, and scientists and ethical philosophers to lead us in the next 100 years of policy and governance" might as well be saying "so long, suckers". If you've ever been even slightly cynical about politics, you can probably see why. Temporary solutions are possible (and worthwhile! Delay extinction at all costs!), but I see no coherent vision for how to solve the more basic principal-agent problems at the core of escalating anthropogenic risk.