So, everyone knows that the idea of using humans as batteries makes no sense. I've heard that a million times if I've heard it once. And for some people that diminishes The Matrix. Why would clearly hyper-competent machines do something so bizarre?

Of course, the suggested replacement idea, that the machines use humans for compute power, is more interesting. But then they're running a simulated reality for... fun? If they could issue compute instructions to the human mind, why bother with an illusion? Just keep the humans busy with whatever mundane tasks you care about. I'm not sure humans-as-computers makes much sense either, given the technical specs needed for the intelligence density the film implies: the machines must already have excellent hardware, and they became sentient before they hijacked human brains anyway.

Here's my proposal: AI was created and given the constraint that it had to help humans live a basically normal, largely independent, happy life. Its creators didn't constrain the world much, figuring that helping humans achieve their goals would be a good proxy for a dynamic world. And this is, indeed, a reasonable thought. Experts today suggest we not give computers fixed goals, but instead have them try to please us, using our feedback to refine their guesses at what our ultimate goals are.
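To make that last idea concrete, here's a minimal sketch of goal inference from feedback. Everything in it (the candidate goals, the approval probabilities, the numbers) is invented for illustration; the point is just the shape of the mechanism: instead of being handed a fixed goal, the AI keeps a probability distribution over what the human might want and does a Bayesian update every time the human approves or disapproves of something.

```python
# Toy sketch: infer a human's goal from approval feedback.
# All goals, actions, and probabilities below are made up for illustration.

CANDIDATE_GOALS = ["comfort", "truth", "autonomy"]

# How likely a human with each goal is to approve of each action.
APPROVAL_PROB = {
    "comfort":  {"pleasant_illusion": 0.9, "harsh_reality": 0.2},
    "truth":    {"pleasant_illusion": 0.1, "harsh_reality": 0.8},
    "autonomy": {"pleasant_illusion": 0.3, "harsh_reality": 0.7},
}

def update_beliefs(beliefs, action, approved):
    """Bayesian update of P(goal) given one approval/disapproval signal."""
    posterior = {}
    for goal, prior in beliefs.items():
        p_approve = APPROVAL_PROB[goal][action]
        likelihood = p_approve if approved else 1.0 - p_approve
        posterior[goal] = prior * likelihood
    total = sum(posterior.values())
    return {goal: p / total for goal, p in posterior.items()}

# Start with a uniform prior over the candidate goals.
beliefs = {goal: 1.0 / len(CANDIDATE_GOALS) for goal in CANDIDATE_GOALS}

# The human rejects the pleasant illusion, then approves of harsh reality;
# the posterior shifts toward "truth" being their real goal.
beliefs = update_beliefs(beliefs, "pleasant_illusion", approved=False)
beliefs = update_beliefs(beliefs, "harsh_reality", approved=True)
```

A machine running something like this never has to be told "humans value truth"; enough Neos rejecting the illusion would push its posterior there on its own, which is essentially the story the next paragraphs tell.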

Of course, some people want the truth, not an elaborate ruse, and, like Neo after the first movie, would reveal that truth to a lot of people once they knew it. And while the machines got a huge boost of confidence from the basically happy, normal people living good lives in the Matrix, they inevitably learn that simulated realities aren't quite what people want. Or, at least, not this style of one, nor one where there is no suffering: as Agent Smith points out, humans' disbelief in that perfect world led to distress. But if humans can make the choice between a pleasant lie and a harsh world, the solution the Oracle favors toward the end, there's a decent chance the machines are meeting the humans' goals.

This all leads me to suggest that a meaningful danger zone for supposedly aligned artificial intelligences is Matrix-ifying you to meet your complex standards. It's not wireheading (an AI subverting its own reward function to skip achieving its true goal), and the machines aren't even wireheading the citizens of the Matrix. But giving humans a complex and meaningful life in a simulation seems much, much more resource-efficient than ceding control of the world to humans. I suppose part of the difficulty of the danger analysis is that some people (like Cipher) both enjoy more-idyllic simulations and would prefer not to know about it. These machines really might be doing their best, not ensnaring humanity in a diabolical scheme.

And that's pretty scary.