I feel like the stories of Philip Tetlock's Superforecasters have saturated all the social circles I've found myself in, despite me personally not quite thinking the results are there in a meaningful sense. If you imagine all human traits as falling on a bell curve, and all measurements of those traits as introducing error, you'd anticipate some subset of high performers not reverting to the mean on remeasurement. You'd also anticipate that e.g. probability training, or any rudimentary preparation, would give a discontinuous jump in skill to everyone who gets it. Although I'm not sure Superforecasters show a discontinuous skill jump (i.e. something beyond a bell curve's tail), you'd still anticipate at least some of that.
But that doesn't point to any innate characteristics of note – just, bell curves have tails, and you can prepare people for tasks like prediction. For research meant to be useful to its intended audience, it seems to stumble over itself to avoid this pretty basic model and to insist there's an innate 'superforecasting' thing in some people – instead of just recommending the CIA give its analysts a probability and calibration seminar. Their own research suggests those things account for a big portion of forecasting success, but it doesn't quite cross the finish line of saying: if you've hired the people you want, here's what to do.
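The bell-curve-plus-measurement-error model above is easy to make concrete. Here's a minimal sketch of it – all numbers (population size, noise level, cutoff) are illustrative assumptions of mine, not estimates from Tetlock's data. The point is that selecting a top group by noisy measurement yields people whose true skill regresses toward the mean but stays well above it, with no innate 'super' trait required to explain either fact.

```python
# Toy model: true skill on a bell curve, measurement adds independent noise.
# All parameters are illustrative assumptions, not fitted to any real study.
import random

random.seed(0)
N = 100_000

# True forecasting skill: standard normal across the population.
skill = [random.gauss(0, 1) for _ in range(N)]
# Observed performance: true skill plus measurement noise of equal spread.
observed = [s + random.gauss(0, 1) for s in skill]

# Select the top 2% by observed performance (the "superforecasters").
cutoff = sorted(observed, reverse=True)[N // 50]
top = [s for s, o in zip(skill, observed) if o > cutoff]

mean_top_observed = sum(o for o in observed if o > cutoff) / len(top)
mean_top_skill = sum(top) / len(top)

# The group's true-skill mean sits between the population mean (0) and its
# observed mean: partial regression to the mean, but still a genuine tail.
print(f"observed mean of top group:   {mean_top_observed:.2f}")
print(f"true-skill mean of top group: {mean_top_skill:.2f}")
```

With equal skill and noise variance, the selected group's expected true skill is about half its observed score – visibly regressed, yet far above average, which is all the 'some don't revert' observation needs.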
I'd be intensely fascinated if the CIA wanted to hire analysts by proof-of-work geopolitical predictions – if only because it'd be fun to have that skill explicitly get a dollar value – but it's pretty clear they're looking for other things.
But, setting aside my personal distaste for over-narrativized psych-research hype, if I'm allowed the same liberties, I'd like to discuss Supermodelers. This is a different but more useful skill. In geopolitics, things are too complicated to understand; guesses are all we have, and accurate guesses are useful. But in actual real life, we don't just want accurate predictions – we want to understand the stuff going on around us.
Imagine someone trying to do their job by cargo cult, or replacing empathy for their partner with some kind of Skinner-box-esque superstition. It might even be hard to imagine, but your partner would be terrifying if you didn't understand them or have true sympathy with them – way beyond the obvious jokes of "sometimes I don't get my partner, hahaha." This would be a deeply upsetting way to live, and I imagine very few people need to deal with this problem at all.
In Michael Stevens' most recent season of Mind Field, he runs a human Skinner box scenario where someone actually identifies the rules correctly, despite there being many high-quality, more obvious guesses (avoid hitting the obvious button; hit the obvious button a lot; entertain the prominently placed cameras; leave the enclosed area (and return for the payout); etc.). The fact that he was just messing with them doesn't make those guesses bad – they're actually pretty solid places to start, and I'm not sure I'd be able to run through all my guesses before time ran out in their experiment. I'd say it's even odds I'd solve the implicit puzzle in their crafted mechanism, and only because the true secret obscured by the obvious guesses is so simple.
But that skill set – not just making predictions, but making the models that make predictions – is perfect for the day-to-day of real life. Way more interesting and rewarding than geopolitics, I think it's safe to say.