The truth about content policing at any scale
You can absolutely trust and believe everything you read in this post. Honest.
Here is something I find troubling about reports of how Google, Facebook, Twitter, and other online advertising platforms have been famously used to manipulate federal elections.
Having used all three platforms over the years for personal projects and client campaigns in my agency work, I have had ads rejected for technical reasons related to content. Google AdWords once disallowed an ad for a gluten-free cookbook because the ad copy contained “free”. As in “gluten-free”.
This happened because a computer matched a string in my ad copy against a set of disallowed terms. I’m sure Google has a planetary-scale version of this in place, but essentially, the check is done by software, not human oversight.
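Something like the sketch below is probably all it takes to produce that kind of rejection. The blocklist and ad copy are hypothetical, not Google’s actual rules, but the false positive is exactly the one I ran into:

```python
# A naive disallowed-term check, the kind that rejects "gluten-free"
# because the blocklist happens to contain "free".
# The terms here are hypothetical, not Google's actual rules.
DISALLOWED_TERMS = {"free", "guaranteed", "miracle"}

def flag_ad(copy: str) -> list[str]:
    """Return every disallowed term found anywhere in the ad copy."""
    text = copy.lower()
    return [term for term in DISALLOWED_TERMS if term in text]

print(flag_ad("The best gluten-free cookbook you will ever own"))
# ['free'] -- rejected, even though nothing is being given away
```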
I dabbled in writing a “dirty word” filter years ago. It was an abject failure. Pattern matching, Levenshtein distance, and Soundex were fun to try. In reality, some clever friends of mine (affectionately, word nerds) were able to get all kinds of garbage through my filter, and the false positive rate was unacceptably high too. My decision was to not actively police the forum, but rather to deal with abuse complaints if and when they came in. This was a forum of approximately 1,000 active users. Not a large fraction of Earth’s population.
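For the curious, here is a rough reconstruction of that kind of fuzzy matching. It is a sketch from memory, not my original code, and the word list and threshold are made up, but it shows all the failure modes at once: obvious obfuscation gets caught, an innocent word gets flagged, and slightly clever input slips straight through.

```python
# Levenshtein distance catches trivial obfuscation ("fr3e" is one edit
# away from "free") but also flags innocent words that happen to be close.
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

BAD_WORDS = {"free"}  # stand-in for the real "dirty word" list

def looks_bad(word: str, max_distance: int = 1) -> bool:
    return any(levenshtein(word.lower(), bad) <= max_distance for bad in BAD_WORDS)

print(looks_bad("fr3e"))     # True  -- obfuscation caught
print(looks_bad("flee"))     # True  -- false positive on an innocent word
print(looks_bad("f r e e"))  # False -- clever spacing slips right through
```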
I don’t mean to say my experience gives me any real understanding of what Google, Facebook, and Twitter are up against. After all, scale brings complexity, and attracts really smart people (read: much smarter than me) to solve complex problems. However, I can’t help thinking they address content policing the same way I did when I dabbled with it 15 years ago. Catching terms on a watch list is easy, but produces false positives. Catching smart, motivated content creators who know there is a filter is really hard.
When I hear about the reach of disinformation on Facebook, Google, and Twitter (estimates ranging from 10 million Americans to 126 million), it’s staggering.
Some of this content could definitely be stopped by motivated enforcement groups within these companies. All of it? No way.
What’s interesting is at what scale the activity is “worth it” for both the disinformation producers and the platform providers. How much political influence is enough? How much lost ad revenue (and reputation) is too much?
It’s like that line from Fight Club:
“A new car built by my company leaves somewhere traveling at 60 mph. The rear differential locks up. The car crashes and burns with everyone trapped inside. Now, should we initiate a recall? Take the number of vehicles in the field, A, multiply by the probable rate of failure, B, multiply by the average out-of-court settlement, C. A times B times C equals X. If X is less than the cost of a recall, we don’t do one.”
Assuming this cold, calculated approach is used by both sides, it is hard not to feel helpless. Their motivations will never align with those of individuals, and the consequences impact everyone. “Victories” for the platforms over bad actors will only ever be partial and temporary – benefitting public relations more than the public.
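Stated as code, the break-even logic is trivial. Every number below is invented for illustration; the point is only how simple the calculation becomes once you accept the framing:

```python
# The recall formula from the quote, restated for a platform weighing
# enforcement costs. All figures are hypothetical.
def expected_cost_of_doing_nothing(items_in_field: int,
                                   probability_of_scandal: float,
                                   cost_per_scandal: float) -> float:
    """A times B times C equals X."""
    return items_in_field * probability_of_scandal * cost_per_scandal

x = expected_cost_of_doing_nothing(50_000, 0.001, 2_000_000)  # 100 million
cost_of_real_enforcement = 250_000_000                        # hypothetical

if x < cost_of_real_enforcement:
    print("We don't do one.")
```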
By enabling unlimited sources and signals of information, the platforms have not reduced corporate media control. They did expand the field beyond the corporate media oligarchy, but they mashed it up into some kind of media feudalism/tribalism instead of democracy. Editorial control was the only effective form of content policing for large audiences. No doubt it could be bought and paid for – with suspect truthfulness – but at least it gave us a known set of signals to rely on. There were even potential consequences, because you could point to the editor and call them out. Now, everyone is a signal source, and the concept of an editor makes no sense. The role of discussion moderator is economically infeasible to implement at any scale beyond thousands.
Corporations influencing and manipulating us with television, radio, and print advertising – while owning news media outlets – was just the warm-up for what we face now. Everything you see in social media streams and online advertising is potentially untrustworthy. Potentially designed to trick you. Even (or especially) inside your echo chamber or bubble of like-minded connections.
The best tools to counter disinformation are a critical mind and personal responsibility. Those, and the commitment of individual people to engage with others in real life, where we can be (mostly) sure to avoid bots and trolls. Content platforms and governments can police content, but bad actors will continue to change tactics and find a way to deliver disinformation to our devices.
I never liked the idea that repeating a lie enough times makes it true. However, I believe enough people repeating enough lies makes us doubt many facts, and believe many lies. The best metaphor I have for this is information pollution. Yuck.