Ah, image macros. They were the dominant memes for so long that people thought the word 'meme' meant image macros. But I think that misconception is clearing up. The days of general, non-poster-specific short ideas and jokes are fading. Memes, lately, have been more about getting attention than about the ideas themselves. They involve the real world, but treat it as if it were beneath the fame that's possible online. Likes, shares, and views are a powerful drug, and people will put more effort into winning an audience than ever before. God knows this blog would enjoy an audience, but that can't be what motivates what you post.
But now things are both more social – person-to-person in a way image macros never were – and more destructive, dangerous to the people involved and to the world. It could get worse, but it's surprising how far we've gone already.
The most recent example at the time of writing is arguably the ice cream licking thing. I refuse to call it a challenge: it's actually easier than the classically relaxing activity of eating ice cream, since you don't even need a spoon. It's just repellent and mean-spirited. Don't get me wrong – the reason most youngsters aren't doing this is that they have a conscience. They don't want to ruin other people's day. As I'd prefer to keep your exposure to negative memes optional, I'll avoid describing the meme, but the Know Your Meme link above explains it quite well – it's a wonderful source.
Of course, that's not the only meme in them there hills. The Tide Pod Challenge is worthy of the name, because you'd have to be mentally challenged not to notice the flaws in the plan, and your life afterwards, including the trip to the emergency room, will indeed be more challenging. It is the dumbest possible way to be self-destructive, and it probably improved, on net, youngsters' ability to self-moderate – they don't need an adult to tell them to resist social pressure. They've built healthier social groups, on average, in response to the stupidest possible threat. It's like building a social immune system – a chicken pox party for wisdom, organized by the children themselves.
It'd be better if these ideas were too stupid to spread at all, but given that's not the world we live in, I'm glad people are more and more resistant to them.
There are also prank videos, which have made a comeback recently. All you need to know about this is that some of those videos are being deleted because they are probably evidence of actual crimes. And then there's the guy who got 5 months in jail over the dog Nazi salute stuff – that was pretty tame and didn't deserve jail time, but man, did people need to put a stop to pranks. People who definitely did deserve their jail time were these people, for some of the most destructive-to-the-social-order stuff I've ever heard of. I think the charges that got them the sentence – the bomb hoaxes – aren't even the worst of what they'd uploaded for the public to see.
People want attention, and there's not much they won't do to get it.
And conversely, Facebook has been doing more and more about "fake news". They seem mostly to be acting against things that are legitimately whole-cloth fabrications, but they throw in some deplatforming of political figures as well, just to make sure you know they aren't being viewpoint-neutral.
The part that should bother you the most is that they may start algorithmically adjusting what gets seen based on how reliable the person is, and determining that by what they say and share on the platform.
Now, it matters how much the platform likes you. It's not just Logan Paul getting high click-through rates on garbage material. It's the algorithm deciding whether what you say is true or false, and hiding you from public view if it thinks you're not a good influence.
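To make the worry concrete, here is a toy sketch of what reliability-weighted visibility could look like. To be clear, nothing here reflects Facebook's actual system – the function, the score names, and the weights are all invented for illustration.

```python
# Hypothetical sketch of reliability-weighted feed ranking.
# The "reliability" score and the floor value are invented for
# illustration; no real platform is documented here.

def visibility(engagement: float, reliability: float, floor: float = 0.2) -> float:
    """Scale a post's potential reach by its author's inferred reliability.

    A post from a low-reliability author is quietly shown to fewer
    people: no notification, no appeal, just a smaller audience.
    """
    return engagement * max(reliability, floor)

# Two identical posts, two different authors:
trusted = visibility(engagement=1000, reliability=0.9)  # full reach
doubted = visibility(engagement=1000, reliability=0.1)  # clamped to the floor
```

The point of the sketch is that the penalty is invisible to both author and audience: the post still exists, it just reaches a fraction of the people it otherwise would.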
So now, if you have something important to say, but it sounds unbelievable, you might be tempted to ask: is it important enough to burn through the social capital I have built up – not with my audience, that valuable thing I've cultivated and sacrificed for – but with the computer?
Does the algorithm have a good opinion of me?
It would be disingenuous to claim this is a new problem. The invention of bureaucracy was largely to prevent corruption and fraud. When there are hard rules for decision making, and a distance between the decision makers and the people whose lives are impacted by those decisions, you might get better decisions overall. This has all the problems you might guess, but it's worth noting, in the same breath, some of the things the system does right.
Social credit scores, for instance, are largely a mechanism to make the creepy totalitarian surveillance that some countries have come to rely on more consistent. It probably won't change much, but it will make the system a bit cheaper to maintain, and apply that strange state-run vengeance to even more people.
But real credit scores are essentially how we got rid of racism in housing – or largely ameliorated the problem, however much is left. It was a big step in the right direction. You could say: these numbers go into the system, and since we can simply not tell the system your race, it definitely can't be a factor in who we give loans to. It can help defeat racism, but it also has that strange tinge of creepy, patronizing something – of course I wouldn't use race as a factor, but there's plenty I might use as a factor that the system doesn't allow. People with successful businesses, for instance, often can't get mortgages without paying themselves a big salary for two years. They can have ten times the cost of the house in cash, but it doesn't matter – that's not part of the input to the system, so it doesn't affect the loan decision. The decision isn't really made by people anymore, and the system doesn't really operate to help people either. It's a robot, but with human minds making the decisions instead of transistors.
Creepy, inefficient in some ways – but definitely better than what came before. Maybe in the future we can do better still – not in terms of optimizing return on the bank's capital. The machine can do that. But in terms of optimizing for the customer surplus – doing the most good with the money. It's hard to pin down, but that doesn't mean it matters any less. In a world where measurable things are already largely optimized for, it's arguably the only thing left that does matter.
Algorithms Doing Opinion Journalism
I can't tell you how irrelevant I find opinion polling. Obviously that's an influence I almost completely regret – the sophistication is lost on its audience, and the insights you can get from opinion polls are almost totally irrelevant to the public. If no news organization ever used data that way again, I'd be pretty happy.
But when this is probably the most fair review of the facts, you get mainstream coverage largely picking the worst people a group has ever associated with. Of course, if you're interested in honest discussion, you'd want to spend a meaningful amount of your time trying to convince people like Stefan Molyneux that he's terribly wrong about the way he views the world. He has an audience, and you ought to try to convince them too. The people you spend time with will sometimes be the people you disagree with the most. If someone has a civil conversation with them, I don't know whether they're a terrible racist or someone who wants there to be fewer racists. The association alone isn't enough information to know anything – but that's where most people draw the line.
Do we want a world where we have to be careful about who we talk with? Or even who we talk about – it turns out any lateral reference to Alex Jones without immediately condemning him was grounds to be de-platformed by Facebook. That means that this extremely important piece on Slate Star Codex requires the free and open internet to survive. He's a very careful thinker, and worth reading – can you imagine the algorithm punishing him so hard that he was driven underground? Can you imagine him being so afraid of that that it stopped him from speaking?
Facebook ought to be ashamed of itself.
But we also ought to be ashamed that we use platforms like this. In many ways, CNN is just another massive algorithm providing a roughly homogeneous, truth-adjacent slop. It's not good, and it's produced by a system that wants your attention more than it wants to speak the truth when it's important – their non-stop coverage of MH370 for months and months after the terrible crash is evidence enough of that, even before they went so far as to discuss on-air whether it was swallowed by a black hole, which for some reason didn't lead to the news director being fired. Their expert said a small black hole would suck in the entire universe (which raises the question: do they think the galaxy has no black holes? How did we survive this long if it does?). These are absolute morons, and their ratings are highest when you become a moron too.
They make decisions in that same manipulative way that technologists are just barely getting computers to do. They are even more adept at manipulation, and should be treated as a hazard to your brain, the same way you'd treat drinking mercury or a lead popsicle.
Why Is There An Algorithm At All?
Why do we try to use algorithms for recommendations and trust? Perhaps my RSS-tinted glasses are influencing me too much here, but I don't understand why anyone would think an algorithm is more effective than just listening to people you trust, when you want to see what they have to say.
Keep a folder of bookmarks in your browser. It'll make you happier, even competing against these insanely optimized-for-manipulation algorithms. What else could it do? There can only be one top priority, and it's pretty clear that the top priority of advertiser-driven systems is, in almost all cases, extremely unhealthy for the human mind.
Or at least make room, in an algorithmic space, for a more person-to-person organization. YouTube runs almost everyone through recommendations, which steer you away from strange art and from taking risks with your time, and through those incentives it builds a platform of people who make bland and terrible things just barely interesting enough to get people to watch. They are The Big Bang Theory, Everybody Loves Raymond, and George Lopez's bland sitcom, but with people tasing dead rats and pretending to rob museums as pranks.
But if you don't want to end up accidentally avoiding strange art, you can just visit your subscriptions. The platform has developed a culture of personal recommendations and collaborations, which lets you see new things without ever really asking the algorithm for help.
It could be better. But it's hard to imagine any large organization resisting the fundamental incentives, which is why we've got to abandon the algorithmic interface now that they've introduced modern real-time click-through-rate tracking and optimization. They can treat you like a lab rat, and they will need to do so to make more money. The world is clickbait, when all you want is an audience.
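The lab-rat loop is not exotic machinery. In its simplest textbook form it's a bandit algorithm: show things, count clicks, show more of whatever got clicked, and occasionally experiment on you to find new material. The sketch below is that minimal version – an epsilon-greedy bandit – with invented item names; real platforms layer far more on top, but the shape of the loop is the point.

```python
import random

# Minimal epsilon-greedy sketch of real-time click-through-rate
# optimization. Item names and numbers are invented for illustration.

def pick_item(stats, epsilon=0.1):
    """stats maps item -> (clicks, impressions).

    Usually exploit the best-performing item; occasionally explore
    at random -- i.e., run a small experiment on the user.
    """
    if random.random() < epsilon:
        return random.choice(list(stats))  # explore: the lab-rat step
    # exploit: highest observed click-through rate wins
    return max(stats, key=lambda item: stats[item][0] / max(stats[item][1], 1))

def record(stats, item, clicked):
    """Feed the outcome back in, updating the loop in real time."""
    clicks, impressions = stats[item]
    stats[item] = (clicks + int(clicked), impressions + 1)

# Outrage out-clicks the essay, so the loop serves ever more outrage:
stats = {"outrage-clip": (40, 100), "thoughtful-essay": (5, 100)}
```

Nothing in the loop asks whether the item is true, healthy, or worth a person's time; click-through rate is the only signal it can see, so it is the only thing that gets optimized.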
So we shouldn't just be an audience. We should look for community. A place we visit because we want to be there, not because we were channel surfing through for a distraction.
I Want To Be On Top
Of course, personal exhortations aren't enough. We like systems that optimize for something, because we think we can do better than others. The trick is to find a flattering mechanism, one that makes status decisions for others, and to put ourselves into that system.
You can't trust enough people to make Google irrelevant. When you search for something, there needs to be an algorithmic ranking. There is a whole industry around Search Engine Optimization, the fundamental premise of which is that there is something – aside from having the best answer to a question someone might have, aside from being the thing they would want to find, when they search – that you need to do to appease the system. Even without the black-hat SEO techniques, this should be a deeply saddening thing to hear.
But Google wants their job to be easier, so if you could tag your stuff, and work hard at making sure the implicit trust others might have in you is made explicit in links to your website, that would be great.
In extremis, this leads to things like Credit Score Dating, a microscopic dating service that was still specific enough in focus to land appearances in major news media, like CBS and (somewhat bizarrely) Consumer Affairs. This is an app with fewer than 100 downloads of its Android version. And yet there's this weird status component. It offers to judge people a different way, which is fascinating, even if nobody is using it.
Everyone wants this contextual status, and everybody enjoys being marginally more valued. That's good. And something like credit scores really is an algorithmic proxy for virtues like trustworthiness. But this is the maximally creepy version of the blind-banker-and-businessman problem from earlier. There's information they can't look at – so it feels weird to pay attention to such a deliberately impoverished view of someone.
Why have an algorithm here at all, when you want to build a relationship?
Why wouldn't you want to build a relationship, even a parasocial one?
Winners and Losers
It's not just advertising dollars that push us toward algorithmic decisions about what to pay attention to. Plenty of other systems have winners and losers, people benefiting from a system while (incidentally) silencing those who are crushed by it. That's how attention economies must work – the people at the bottom will never have their concerns – or anything else – heard.
It's worth saying a short prayer for activist liberals online here. In a totally secular expression of my most vulnerable hope for them: it's not healthy to have people able to search through every single thing you've ever said, take it out of context, and use it to ruin your life. It's even harder to stomach when you limit all your thoughts to very short expressions, and when you want to use that platform for jokes. Nobody can hear your audience laughing on Twitter, which makes it a perfect context-destroying, life-ruining machine for everyone who spends time there.
But the likes and retweets can be so addictive.
Looking for likes instead of cultivating your art in a more community-driven space is a dangerous choice. Twitter is perhaps the perfect example of making users into expressions of the same sad algorithm that's driving Facebook. People share things not knowing if they are true, and are thereby made complicit in the destruction of trust.
Counter-optimizing for things that are specifically undervalued can help. Mostly, though, we should stop retweeting – stop tweeting at all. Send someone a text, if it's a silly thought. God knows I do that often enough, little one-liners to friends and family. Create the archive you want to make, and make the jokes you want to make, but do them separately. When you see your work described with a number, maybe you should run. Too many numbers will make you an optimizing algorithm as well. Not everyone has the luxury of doing something without getting money from it, but enough people do that they have Twitter accounts and Facebook accounts, and some people even still have blogs. So if you're already applying all your effort to something without expectations of return, do so with your best foot forward.