Hannah Fry is an intelligent mathematician. She's articulate, thoughtful, and a successful mathematics communicator. She's even worked on applied mathematics projects meant to help prevent street crime, work she's quite proud of.

It's vitally important that Hannah Fry not be entrusted to build such things again, because I fear there is no warning sign that could give her appropriate pause. No disaster so obvious as to discourage her from causing it. And while her scholarship within mathematics seems quite excellent, her profound lack of curiosity marks the type of academic who may very well trap the entire planet in a never-ending dystopian nightmare.

The context for this is provided, helpfully enough, by Hannah Fry herself, in her talk "Should Computers Run the World?"

Computers or no, Hannah Fry must not run the world, nor take any part in running it.

[As an aside: I know this isn't important, but she mentions the study claiming that judges make different decisions depending on the time of day, and that study's findings are so absurd that you ought to pause and consider whether there's something about courtrooms you might not know; wisdom is frequently about confronting something surprising and finding the fault in our own understanding. The study's authors failed to do this, Hannah Fry failed to do this, but it was done in the article "Impossibly Hungry Judges", which I recommend, although for the full answer you'll probably have to read this academic paper. And it turns out this isn't a sign of bias, not in any way the study would ever have been able to detect. That analysis was published two years before the talk was recorded, so her failure of wisdom escalated into a failure of scholarship, I guess.]

About ten minutes in she tells a story about her work immediately after getting her doctorate: software designed to help London better deal with potential riots. She excitedly told a crowd in Berlin how this technique could, eventually, help the police control an entire city's worth of people, perhaps acting well before crimes occurred. The people of Berlin, it turns out, had a good sense of how police states operate, and expressed that during the Q&A, at which point she was a bit shaken. Monitoring and controlling people who have not committed a crime is not the simple good she was imagining.

Like a good academic, she learned from it.

But she did not grow wiser.

The whole rest of the talk is worth listening to (the first ten minutes is basically filler). She wants us to ask about the people who use software, in addition to the software's own potential for failure. But the wiser move, noticing the cause of her mistake, involves asking directly about power.

It's hard to imagine a path she could walk, of truly seeking wisdom in response to her failure to understand this police-state issue, where she does not bump into at least a little game theory. She is a mathematician, after all, and has discussed game theory in the past. There, she might find models describing how people ought to play the game of chicken, by pre-committing to being unable to swerve away, and the important role those insights might play in politics.
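To make that concrete, here's a minimal sketch of the commitment logic, with payoff numbers I've invented for illustration (the standard chicken structure: losing face is bad, crashing is far worse):

```python
# A toy chicken game: PAYOFFS[(row_move, col_move)] = (row payoff, col payoff).
# The numbers are hypothetical; only their ordering matters.
MOVES = ["swerve", "straight"]
PAYOFFS = {
    ("swerve", "swerve"): (0, 0),
    ("swerve", "straight"): (-1, 1),       # swerving alone loses face
    ("straight", "swerve"): (1, -1),
    ("straight", "straight"): (-10, -10),  # crashing is catastrophic for both
}

def column_best_reply(row_move):
    """The column player's best response once the row player's move is fixed."""
    return max(MOVES, key=lambda m: PAYOFFS[(row_move, m)][1])

# Without commitment the game has two pure equilibria, (straight, swerve) and
# (swerve, straight), and neither driver knows which one they're in. But if
# the row player visibly pre-commits to "straight" (tears off the steering
# wheel), the column player's best reply is forced:
print(column_best_reply("straight"))  # -> "swerve": the committed player wins
```

The unsettling political reading is the same: whoever can credibly remove their own ability to back down wins the standoff, which is exactly why the insight matters well beyond stunt driving.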

She might have studied more about government, and learned about the American model, where the Bill of Rights consists entirely of restrictions on government. That limiting government capability is the sole mechanism for safeguarding human rights in America would surely spark some insight.

She might have studied management theory, where successful divisions within an organization expand to fill larger roles until their effectiveness at the margin is the same as every other division's. So even handing tools to the most responsible people wouldn't ensure their responsible use.
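A toy allocation model (my own construction, with made-up division names and numbers, not anything from her talk) shows the mechanism: hand each new unit of budget to whichever division currently returns the most, and every division's marginal effectiveness converges, including the one you trusted with the most sensitive tools:

```python
import math

# Hypothetical diminishing-returns curves for three divisions.
returns = {
    "patrol":    lambda x: 10 * math.log1p(x),
    "pre_crime": lambda x: 14 * math.log1p(x),  # initially the most effective
    "paperwork": lambda x: 4 * math.log1p(x),
}

allocation = {name: 0 for name in returns}

def marginal(name):
    """Extra output from giving this division one more unit of budget."""
    x = allocation[name]
    return returns[name](x + 1) - returns[name](x)

# Greedily allocate 300 budget units, one at a time, to the best performer.
for _ in range(300):
    best = max(returns, key=marginal)
    allocation[best] += 1

for name in returns:
    print(f"{name:10s} budget={allocation[name]:3d} marginal={marginal(name):.3f}")
# The marginal values come out nearly equal: success expands a division
# until its edge at the margin disappears.
```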

She could have turned to literature, using empathy to understand why police investigating pre-crime seems so obviously dystopian to people. It turns out it isn't because they dislike algorithms, or don't understand what they are. It isn't because we want humans in control, as she seems to suggest in her talk "Should Computers Run the World?" (although yes, we'd prefer people in charge). But people in charge of pre-crime is the original premise of Minority Report. Certainly, if there are errors, the system is untenable, but errors aren't the moment when most people feel this is a bad world to be in.

Or she might, noticing her blind spot, simply ask someone else for help. A friend, perhaps. This is what most people do, but alas, it isn't quite the path of the academic to recognize your own ignorance and operate in the world relying on someone else for insight. It is a garden-variety wisdom, but I understand it asks a lot of a certain personality type, so perhaps it's unfair to suggest.

There are many ways to build statistical tools that assist police work ethically. But none are easy. When acting in a professional capacity, we take responsibility for difficult work, and as humans we are responsible for being good. Everyone ought to take their responsibilities seriously enough to notice when they need to act. It's also worth saying that, because the US military isn't deployed against citizens, supporting its work is much, much easier to navigate, and in many cases boils down to "Would the world be better with America or China as the dominant superpower?" When we talk about police control of civilian life, things aren't so simple.

When discussing the desirability of these tools to monitor and crack down on citizens, shaping how people engage in civic life, we ought to talk about public choice theory, about the dangers of majoritarianism and how it shapes criminal justice policy (as most people are not criminals), about the potential for abuse, the difficulty of detecting that abuse, systems with incentives that actually stop that abuse, and the future we create one step at a time with tools people will later use as the basis of an analogy. Please don't build the thing that unseals the horrors of the pre-Enlightenment era, with people saying "this is similar to a pre-existing program, but more effective."

At the very least, when people stand up and tell you, in person, that they've lived through a police state and they'd prefer you not build one in a fit of excitement, you ought to be a bit curious about how you might avoid doing that. It means being more honest about what you don't know, what's outside your expertise, and deciding not to engage until you're reasonably sure you won't cause a disaster. And that means digging a little deeper than abstract platitudes about how we might not want to use AI systems because AIs can make mistakes. Hannah Fry is like a storybook character we will use to teach kids how terrifying AI systems can be when they don't make mistakes. Because even children need to know to be careful what they wish for.