Ross Douthat believes that religious agnosticism has settled into a torpor across many fields of inquiry and devotion; questions about the fate of humankind, specifically, are now more than ever met with a shrug. "Who am I to say?" he imagines people responding when asked about the big picture.
But as Paul Graham has pointed out, when you hear a rhetorical question, consider whether it has an answer, because sometimes the answer is interesting. And I think some people, roundly mocked by the decadent institutions he rightly criticizes for intellectual vapidity (notably, The New York Times), do have an answer. Their answer is: I have an idea of what the future will look like, because I will build it. And many people will try, so either things will go pretty well or our creations will destroy us. The only other option is that no one builds anything new, and I won't let that happen.
To be fair, this is (sort of) mentioned in the book, but without meaningful elaboration. He says building new things is a promising path out of decadence, particularly things that are ethically questionable or that challenge our humanity. Rapid gene editing, for instance – and he adds that it matters whether this technology comes from China, for obvious reasons related to its ongoing genocide. I don't disagree, but he seems dissatisfied with this line of thought, and I see why. "More people making things" sounds like what's already happening now, and while he does say there's something to the ambition of the techno-optimists, the idea never gets developed.
But, of course, that's not precisely what I want to talk about, as a careful reading of this post's title should suggest: there is no eschatology of optimism. He presents pictures, pulled from sci-fi and speculation, of wild ways the future might be, but there is no fate of man in this worldview. The end state of humanity is that it keeps going. That's part of what techno-optimism entails – maybe we go to Mars and beyond, but optimism is "we solve global warming" and pessimism is "we don't". That's the range of opinions he was considering.
A shocking percentage of expert technologists believe that, within our lifetimes, we will build a non-domain-specific reasoning and optimization engine. It will understand the world in particulars too numerous for the human mind to comprehend, and, if we can tell it what we want from the world (an uncertain task), it will discover how that can be delivered, possibly without our knowledge, consent, or assistance. It isn't a consensus that this will happen. It's not even a consensus that it's possible to do well, or within the current bounds of human wisdom and skill. But it's been clear from the beginning of the computer era that highly effective machines are possible in principle. Alan Turing essentially created the formalism answering "what does it mean to compute?", and even he simply assumed machine intelligence would be developed in time, speculating about how it might be built (a question we still don't know the answer to).
There is a long-standing technical finding, among these experts, that the default outcome of building such a highly capable system is that the Artificial Intelligence takes control of the planet and all humans die shortly thereafter, likely without any malicious goals or programming. It is the concern of nuclear proliferation writ large, where the falling cost and complexity of a dangerous technology only escalate our fear. Nuclear material won't be getting more expensive as mining procedures get more sophisticated. Our knowledge of how to build bombs hundreds of times more powerful than those that destroyed Hiroshima and Nagasaki won't disappear. Every year, the risk goes up as costs go down. When will the first non-nation-state actor have a nuclear weapon, and what use is a nuclear deterrent in stopping them? Even with test bans, the knowledge is out there.
But AI tech is advancing absurdly rapidly, instead of merely stagnating – far, far faster than Moore's law, and becoming not just more accurate but more general. Prediction software and text generators are producing compelling fiction, and prediction is literally isomorphic to understanding the world. The optimization engines don't just outclass humans at specific tasks like Chess or Go; they outclass humans even when they start without knowing the rules. The systems are getting more general, and more integrated between understanding and optimization – and those are the only two things needed to be dangerously effective.
Imagine a world where nuclear weapons cost 10x less each year, per TNT-kilogram equivalent. How long until New York is gone? And how sure are you that this isn't happening?
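The arithmetic behind that thought experiment is worth making concrete: a 10x annual cost decline is exponential, so any fixed budget, however small, is reached in logarithmic time. A toy sketch (the dollar figures are hypothetical, chosen purely for illustration):

```python
def years_until_affordable(start_cost, budget, decline_factor=10):
    """Count the years until an exponentially cheapening capability
    fits within a fixed budget, given a cost decline of
    `decline_factor` per year."""
    years = 0
    cost = start_cost
    while cost > budget:
        cost /= decline_factor
        years += 1
    return years

# Hypothetical numbers: a $10B capability vs. a $1M non-state budget.
print(years_until_affordable(10_000_000_000, 1_000_000))  # prints 4
```

Four orders of magnitude of cost advantage evaporate in four years; doubling the starting cost buys only a fraction of one additional year. That is the structure of the worry, whatever the real numbers are.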
If you write a book about people who are overly concerned with safety and weirdly complacent in their wealth, it's worth saying (as Douthat does) that technology is the exception. But just as tech hasn't been stagnating like other fields, it also hasn't adopted a safety-forward mindset. It's becoming more and more clear that something will happen. Hopefully it will be good. But the only thing we know is that things can't stay the same. The world as we know it will likely end – that's the claim of the techno-volatile builders of this technology. Hopefully because it's replaced with something much better.
I'm not the first person to refer to this as an eschatology. I won't be the last. But I truly have no idea what will happen in the future. We are plausibly two or three breakthroughs away from this all being trivial to do, and maybe as few as one away from the end of the philosophical research phase.
So what if some experts think the world will end at the hands of a system we create? Maybe it goes well and maybe it goes badly – we don't yet have the expertise to ensure a positive outcome, though people are working on that. If the future is dominated by the technology we build, then Douthat's understanding of the tech zeitgeist is tragically limited. Who cares who makes this system? Some researchers have said it would count as a success condition to have it under the control of any human at all, even a Bond villain.
Why would Douthat be blind to the, frankly, insanely dangerous ambitions of a number of organizations, including one with financial backing from Google? If he begins the book by establishing his yearning for great projects, why does he yawn when someone says: if this goes badly, it might end humanity, and we don't yet know how to avoid that, even though we're working on making it happen as soon as possible anyway? He imagines a future where people are shaken out of their stupor by technology so ethically fraught or dangerous as to demand a better response. Perhaps he should have checked whether anyone was already afraid of technology so powerful that, once created, there is no opportunity for public feedback.
Like his rejection of post-Malthusian human existence in favor of moon travel, in gauging the high water mark for human achievement, there are some things he simply doesn't care about. Maybe he's decided computers are boring, so there's no need to look into what he himself claims is the exception to the rule of decadence. It's a strange omission. But, of course, life is not just about grand things. The small things matter too, and that's what we'll be looking at in the next part.