So, I recently spoke about what it might look like if you were trying to avert a nearly inevitable apocalypse. Suffice it to say, this might be a real concern – our capacity for destruction (both humanity's generally and that of every sub-group of people) has grown a lot, and it does seem tied to general wealth. The classic formulation of this idea is "Moore's Law of Mad Science": "every 18 months, the minimum IQ necessary to destroy the world drops by one point". This is a ridiculous phrasing – how much destruction has truly been wrought by the smartest people, rather than by those with control over nation-state resources? – but if you rephrase it in terms of an inflation-adjusted dollar amount, I think the general idea holds.

Destroying the world certainly wasn't possible at all once, even for the wealthiest countries, and now we have a funding crisis because we're too... bored? to keep our world-ending weapons properly secured. Soon private individuals will be able to steer asteroids onto collision courses with Earth, and although that can be defended against, I don't think it has a passive defense. The price, in other words, has kept going down, and it now approaches the range where you don't need nation-state levels of funding.
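The dollar reformulation above can be made concrete as a toy model: assume the inflation-adjusted cost of world-ending capability halves on some fixed schedule (I'm borrowing the 18-month cadence from the original IQ phrasing; the starting cost and the halving period are both invented for illustration, not estimates).

```python
def destruction_cost(years, initial_cost=1e12, halving_months=18):
    """Toy model: inflation-adjusted cost of apocalypse-grade capability,
    halving every `halving_months` months. All numbers are made up."""
    return initial_cost * 0.5 ** (years * 12 / halving_months)

# After 18 months the hypothetical cost has halved; after 30 years
# of sustained halving it's fallen by a factor of ~2^20, i.e. from
# nation-state budgets into private-fortune territory.
print(destruction_cost(0))    # 1e12
print(destruction_cost(1.5))  # 5e11
```

The point isn't the specific numbers – it's that any steady exponential decline eventually crosses the line from "only superpowers" to "any sufficiently rich individual", which is exactly the trend the asteroid example gestures at.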

The more terrifying concept is the threat of micro-machines – either nanotechnology or engineered viruses – doing the destruction while being too minuscule for modern surveillance to track, or perhaps even to quarantine. If that technology gets cheaper and cheaper, at some point we will need to develop a security system that can handle it – a plausible but universally-agreed-to-be-terrible solution is a provably-nice AI-run totalitarian dictator. This appears to be the defense mechanism supported by the source of Moore's Law of Mad Science. [Note, and a minor spoiler for Travelers: this is also part of the dystopian nightmare of the future depicted there – their lack of additional answers to the security question might be why their attempts to secure the past don't work.]

Anders Sandberg (I had no idea who he was, but he's a published author on what would happen if the Earth turned into blueberries, so I think I'd like him if we met in person) has a decent introductory exploration of how, precisely, we might get even an approximation of that. His conclusion – that we need better ways for groups of people to make decisions – is not usually treated as an existential risk question, but I think he's correct. Right now our security is essentially a joke: police are great for reporting already-occurred crimes to, but calling them security is delusional, and our military is probably not organizationally capable of even replicating its own feats of 100 years ago, which is not great considering everyone was less organizationally sophisticated 100 years ago. If you want protection from desktop nanofactories, I recommend immediately pissing your pants. It won't help, but it might be more helpful than the current military influence (see the beginning of Sandberg's analysis above). The modern military subcontractor system has very helpfully made sure civilians and private organizations are equipped with all sensitive or cutting-edge designs.

So yes, making better organizational decisions is perhaps wildly under-invested in. Classic agency problems are squared: individuals making decisions in organizations face only a small fraction of the upside of good decisions and of the downside of bad ones, and the research itself combines a tragedy of the commons (in who pays for it) with a prisoner's dilemma (in who first throws out an adequate-but-probably-non-optimal system to try an untested-but-perhaps-better one). Whatever happens to the first mover, the benefits accrue to everyone while the costs fall on them alone. Fixing those incentives may well be the difference between humans conquering the galaxy and being destroyed.
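The first-mover problem above can be sketched as a toy payoff calculation. All the numbers here are invented purely to illustrate the incentive structure: switching to an untested system costs the mover alone, while any benefit is shared by everyone.

```python
# Toy first-mover payoffs; every number below is invented for illustration.
C = 10          # private cost borne by whoever tries the untested system
B = 4           # benefit each party receives if the new system works
P_WORKS = 0.5   # chance the untested system is actually better
N = 8           # number of organizations sharing the benefit

def mover_payoff():
    """Expected payoff to the organization that switches first."""
    return P_WORKS * B - C

def group_payoff():
    """Expected payoff summed across all N organizations."""
    return P_WORKS * B * N - C

print(mover_payoff())  # negative: individually a bad bet
print(group_payoff())  # positive: collectively worth doing
```

With these (made-up) parameters the move is collectively worthwhile but individually irrational, which is exactly the incentive gap the paragraph describes – no one volunteers to be the guinea pig, so the research never happens.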