There was a sickening sort of predictability to it. As England fell in the final of the Euro Cup recently, social media lit up with racist abuse of the players who had missed the deciding penalties.
It was perhaps silly to have expected better. Racism among English soccer fans is about as surprising as snow in a Canadian winter, and social media has a knack for making the worst among us outdo themselves.
But if we are not to expect any more from the dregs of society, then perhaps we can expect better from social media companies themselves. They are, after all, the ones who provide the spaces for reaction and speech where this abuse occurs. In the face of the awful bigotry that pervades their platforms, should they not be doing more?
The simple answer to that question is yes. It’s true that Facebook or Twitter can’t fix bigotry by themselves, but at times it feels like they’re barely even trying.
Yet the demand that social media companies fix our broken public sphere is itself misguided, and may, in fact, end up perpetuating their power. And pretending that the issue of content moderation online is simple isn’t merely wrong; it also hinders us from making any progress.
That Facebook in particular has been slow to react to the nefarious use of its platform now seems a matter of public record. The most recent example is a New York Times piece this week that details how European soccer leagues tried for years to get Facebook to react to the racist abuse hurled at players of colour.
The trouble seems to be that Facebook, in particular, has at times appeared to have thrown up its hands.
“The unfortunate reality is that tackling racism on social media, much like tackling racism in society, is complex,” was the statement from Facebook made to the Times — as if to say, “Look, we really can’t do very much.”
That, however, isn’t true. As scholars and critics have said for years now, there are numerous steps social media companies can take to mitigate abuse, from giving users greater control of what they see, to far more aggressive moderation.
But the soccer league situation highlights both how tricky the problem is, and how the reaction to the issue can be wildly over-simplistic.
One example: the Times details how internal reaction at Facebook to the post-match racism got stuck on what to actually do. When it came to the idea of banning certain words or symbols, “they argued that terms or symbols used for racist abuse, such as a monkey emoji, could have different meanings depending on the context and should not be banned completely.”
Reaction on Twitter, even among New York Times tech writers themselves, was that this sentiment betrayed how out to lunch Facebook was — that such a clear example of racism should have been immediately banned.
Here’s the trouble, though: Facebook is right.
The issue with what can and cannot stay up online is not that it’s obvious what should and should not be allowed, and that social media companies simply aren’t enforcing the rules. It’s that, first, human meaning is contextual, and second, that at the scale of social media with its billions of posts, parsing out the specific meaning of every statement becomes impossible.
Imagine if, for example, in response to a torrent of homophobic abuse, Facebook simply banned the word “gay.” It would likely harm as many people as it would help, or more.
So you either blanket ban things and suffer the downsides of a constrained public arena, or you leave things up and try to play Whack-a-mole with various aberrations.
Social media companies have an economic incentive to do the latter — their business models are predicated on keeping users engaged and on their platforms — and thus have leaned toward letting abuse slide.
Naturally, there are thus calls for those companies to do more. But perhaps that is just another part of the problem we are facing.
There is a frustratingly American character to the conversations about social media. U.S. notions of free speech, rights and, most of all, relying on companies over and above the state have stymied productive conversations on how to solve the deep problem of how social media both reflects and produces bigotry in society.
For one: Perhaps the body that should determine rules of speech in the public arena is not a multibillion-dollar corporation but the democratically elected governments that at least purport to represent citizens.
And, maybe, asking Facebook or Twitter to get better and better at dealing with bigotry isn’t actually challenging their power, but helping to entrench it.
It is quite true that neither Facebook nor Twitter alone is going to solve racism. Neither is the government, at least not by itself.
But the least the state could do is try — that is, enact legislation making clear rules about speech online, and also form a regulatory framework that would punish social media companies for failing to sufficiently react.
The other option is to simply let Facebook and Twitter determine the character of our public sphere. And given their record so far, that is far too dismal and depressing a choice.
Thus far, they have let the worst of society have far too much say; indeed, when it comes to the ills of the 21st century, it increasingly feels like an unregulated social media is among the most odious and offensive.