
Tuesday 24 April 2018

How The Internet Can Help Or Hinder Hate Speech

[Cartoon of Wendy Cockcroft for On t'Internet]
I've been called all sorts of names over my stances on freedom of speech and on the issues that are important to me. Basically, I believe in personal freedom. However, this freedom ends where other people's freedoms begin, which is why we're forever debating it. Some news items have come up which help shed light on the complexity of this debate, and I'm going to discuss them tonight.

Let's begin with a discussion on the limits of freedom of speech.

Freedom for whom?


The internet helps or hinders hate speech by providing a platform for it — and for dissenting voices. The error on the part of the maximalists is to assume that because we all have access, we all have the same reach. We do not. The most popular people have a fire hose while the rest of us are lucky if we have a water pistol, which is why the maximalist go-to, counter-speech, isn't always effective at engaging with hate speech purveyors.

They also appear to believe that hate speech and abusive actions are different things. Not so; the lines blur very easily when you're on the receiving end of a continuous torrent of abuse. Think about it: your everyday internet use is disrupted because you're having to dig through death threats, spam, and rude messages for hours to get to your correspondence, all the while wondering whether some of the people making those threats might seriously consider carrying them out. Which threat is just a joke and which should be reported to the authorities? This can and does result in stress-related illnesses, mostly because you're spending a good chunk of your day dealing with it, and that's before it spills over into real life (RL) and your boss gets involved.

When the best response you get from the maximalists when you complain is "If you can't stand the heat, get out of the kitchen" or "Toughen up, Buttercup!", you realise at once that the freedom they speak of is intended for the noisiest, most obnoxious speakers while the rest of us are obliged to shut up and put up with it. No. Speech can be used to censor speech.

The limit of my freedom, then, is whether or not I'm willing to risk my job (and reputation!) for the right to post my opinions online as myself. And it's a very real threat if my employers believe I'm responsible for provoking my accuser into posting these allegations against me. Thank God I don't do web design for money any more. I might have ended up losing my livelihood over this. - When Speech Is Used To Censor Speech, How Free Is Speech? - On t'Internet

Okay, I survived that, as did my speech. I still post the occasional controversial blog post as myself, but that scared me. The point is, freedom of speech is not a cut-and-dried issue; it's complex and nuanced, and nobody likes being treated like crap. This raises some questions which I'm going to address tonight:

  • Whose responsibility is it to control hate speech, etc.?
  • Where does opinion end and hate speech begin?
  • How can we effectively counter hate speech?
  • Is legislation the answer?

Okay, let's get to work.

Whose responsibility is it to control hate speech, etc.?


Until I saw this article, I'd have said it's up to the individual, both the speaker and the audience, to deal with hate speech. It's a demand-side issue, after all:

But where institutions are weak or undeveloped, Facebook’s newsfeed can inadvertently amplify dangerous tendencies. Designed to maximize user time on site, it promotes whatever wins the most attention. Posts that tap into negative, primal emotions like anger or fear, studies have found, produce the highest engagement, and so proliferate.

In the Western countries for which Facebook was designed, this leads to online arguments, angry identity politics and polarization. But in developing countries, Facebook is often perceived as synonymous with the internet and reputable sources are scarce, allowing emotionally charged rumors to run rampant. Shared among trusted friends and family members, they can become conventional wisdom. - Where Countries Are Tinderboxes and Facebook Is a Match, by Amanda Taub and Max Fisher for the New York Times

The argument you may be cooking up, that they could simply get their news from somewhere else, is a non-starter.

It's a demand-side issue


Yes, those tensions have always been there, and yes, those incidents would have happened absent a social media platform to stir them up, but the trouble with social media is that it's instantaneous. Rumours used to take time to spread. These days, the minute someone posts one, everyone who follows them knows about it. It's the speed at which they travel, and the fact that Facebook's algorithm amplifies popular posts, that's the problem; I'll sketch that mechanic below.

Needless to say, Something Must Be Done, but what? People want to spread horrible, hateful, and often false stories about people they don't like in the first place, on the grounds that those people are horrible and hateful. I remember a viral post going round about Chinese people eating babies. My usually calm and dignified friend swung into action, determined to stamp out this horrible trend, and attempted to co-opt me into it. She stopped when I pointed out it was racist blood libel and told her to knock it off.

The trouble in the hotspots the Times reporters speak of is that the "knock it off" counter-speech is either non-existent or siloed by the fact that you can choose whom you pay attention to. People who choose to pay attention to posts that confirm existing biases will naturally share anything nasty about "those people." Okay, that raises two more questions:

  • How do you limit access to posts and news articles that paint individuals and groups in a negative light, thereby inflaming extant tensions?
  • How do you stop people from uploading and sharing these posts, often with their own opinions added on?

Serious questions. Are we going to nanny people in ethnic tension hotspots in an effort to get them all holding hands around a campfire singing Kum Ba Yah? Good luck with that.
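About that amplification mechanic: here's a toy sketch in Python (my own illustration, with made-up weights and a made-up Post structure, not Facebook's actual ranking code) of a feed sorted purely by engagement. Nothing in the scoring asks whether a post is true, kind, or inflammatory; popularity is all that counts.

```python
# A toy feed ranker sorted purely by engagement. The Post fields and the
# weights are invented for this illustration; they are not Facebook's.
from dataclasses import dataclass


@dataclass
class Post:
    text: str
    shares: int
    comments: int
    reactions: int


def engagement_score(post):
    # Hypothetical weights: shares and comments count for more than passive
    # reactions because they keep people on the site longer.
    return 3 * post.shares + 2 * post.comments + post.reactions


def rank_feed(posts):
    # No check for truth, context, or harm anywhere in here.
    return sorted(posts, key=engagement_score, reverse=True)


feed = rank_feed([
    Post("Local library extends its opening hours", shares=4, comments=2, reactions=30),
    Post("THEY are coming for YOUR children. Share before it gets deleted!",
         shares=900, comments=450, reactions=2000),
])
for post in feed:
    print(post.text)  # the lurid rumour tops the feed; the news item sinks
```

Run that and the rumour wins every time; multiply it by a few million users sharing among trusted friends and family and you get the proliferation the Times describes.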

Reporting TOS violations


I remember my own situation, where I was being mobbed by trolls; I reported them to the administrators of the platforms where they abused me, but the administrators did nothing. Terms of service are only as good as the enforcement policy, and where enforcement is either patchy or non-existent due to reliance on automation, good luck with that.

One post declared, “Kill all Muslims, don’t even save an infant.” A prominent extremist urged his followers to descend on the city of Kandy to “reap without leaving an iota behind.”
Desperate, the researchers flagged the video and subsequent posts using Facebook’s on-site reporting tool.
 
Though they and government officials had repeatedly asked Facebook to establish direct lines, the company had insisted this tool would be sufficient, they said. But nearly every report got the same response: the content did not violate Facebook’s standards. - Where Countries Are Tinderboxes and Facebook Is a Match, by Amanda Taub and Max Fisher for the New York Times

Effective enforcement would go a long way towards pulling down the posts that even the most maximalist of free speech advocates would gladly see gone, but until Facebook puts its money where its mouth is, forget it. The trouble with — let's not beat about the bush — censorship is that human eyeballs are required to discern the difference between a reliable, verifiable news story and a deliberately inflammatory made-up one. Even then, subjectivism gets in the way; what you or I might consider a public interest story might result in some poor geezer being set on fire for being in the wrong place at the wrong time, because "those people" have been "at it again" and Something Must Be Done. And those two questions I asked earlier pop up again:

  • How do you limit access to posts and news articles that paint individuals and groups in a negative light, thereby inflaming extant tensions?
  • How do you stop people from uploading and sharing these posts, often with their own opinions added on?

Where does opinion end and hate speech begin?


Facebook is trying to address these problems. However, they're doing it remotely and as automatically as possible, despite the promises to hire more local staff in the countries afflicted with ethnic tensions.

From October to March, Facebook presented users in six countries, including Sri Lanka, with a separate newsfeed prioritizing content from friends and family. Posts by professional media were hidden away on another tab.

“While this experiment lasted, many of us missed out on the bigger picture, on more credible news,” said Nalaka Gunawardene, a Sri Lankan media analyst. “It’s possible that this experiment inadvertently spread hate views in these six countries.” - Where Countries Are Tinderboxes and Facebook Is a Match, by Amanda Taub and Max Fisher for the New York Times

As I said, it's a demand-side issue. Both bad actors and people acting in good faith have access to the platform, and people from both groups add to the problem by spreading horrible stories that rile people up. It's also a structural issue; these tensions have been bubbling away for years — for centuries in some cases.

When all you know about Muslims is that they're easily offended and liable to Bomb All The Things! if they get upset about something, and you see a post about one doing something nasty, of course you'll share it. I see this all the time on Twitter, where one chap I follow has a habit of demonising Muslims. He cut back after reading about unprovoked attacks on them, which I pointed out were the result of his constantly sharing horrible stories and memes about them.

What I'm saying (and failed to impress upon Twitter user Liz Webster) is that it's not the egregious hate-mongering that makes ordinary, normal people turn against individuals and groups on the grounds of religion, ethnicity, etc., it's the drip, drip, drip of negative stories about them that comes from allegedly reputable sources — more often than not, from people we know.

Rivers of blood


Accurately defining hate speech is hard. Egregious statements along the lines of "Irish people are idiots" are easy enough to describe as hate speech because they invite condemnation of Irish people, qualified with the assertion that we're stupid. Yet (Liz either can't or won't accept this) it's not in-yer-face racism like this that entrenches prejudice; it's the dog-whistle, everyday, respectable racism that does the job.
 
Think about it: when Enoch Powell made his Rivers of Blood speech he lost his job and his credibility. This is the part that got him into trouble:

In this country in 15 or 20 years' time the black man will have the whip hand over the white man.

That statement is bad enough on its own, but in context it's worse; Powell was repeating what a constituent allegedly said to him about life in Britain. Basically, the man was considering emigrating to "Forn parts" because England's green and pleasant land was filling up with immigrants. What, did he think he'd be on top in a country in which he was a minority? I can't even... d'oh!

To be fair to Powell, I've heard the same kind of things from white British people myself. He continues:

What he is saying, thousands and hundreds of thousands are saying and thinking - not throughout Great Britain, perhaps, but in the areas that are already undergoing the total transformation to which there is no parallel in a thousand years of English history. 

Confirmed correct. People are saying and thinking that today. Why do you think Brexit is a thing? It's their chance to kick the foreigners out and get their country back, style of fing. Foreigners like me. I'm Irish. Powell continues:

But while, to the immigrant, entry to this country was admission to privileges and opportunities eagerly sought, the impact upon the existing population was very different. For reasons which they could not comprehend, and in pursuance of a decision by default, on which they were never consulted, they found themselves made strangers in their own country. 
 
They found their wives unable to obtain hospital beds in childbirth, their children unable to obtain school places, their homes and neighbourhoods changed beyond recognition...

Yeah, about that, Enoch... while it's true that the number of people from other countries has increased significantly, this is more to our benefit than to our detriment. If there's a shortage of hospital beds, etc., that's down to Tory austerity policies. They don't like us having nice things. You can see the scapegoating of "the alien" here. Result:

The sense of being a persecuted minority which is growing among ordinary English people...

Austerity is responsible for this, people! It's the natural consequence of reducing the number of crumbs from the table and making us compete for them. Powell then attacks the Race Relations Bill on the grounds that it would put immigrants at an advantage over the native-born, at which point he exclaims:

Here is the means of showing that the immigrant communities can organise to consolidate their members, to agitate and campaign against their fellow citizens, and to overawe and dominate the rest with the legal weapons which the ignorant and the ill-informed have provided. As I look ahead, I am filled with foreboding; like the Roman, I seem to see "the River Tiber foaming with much blood." 

He then cites the "phenomenon" of the civil rights movement in America (and the violence that went with it) as a reason to be worried and to be prepared to act. What he fails to comprehend is that Britain didn't have the structural, embedded, legislated racism that America had. We never ended up with a civil rights movement with millions marching on Parliament and fire hoses turned on kids, so the resistance to racism was quieter and more orderly. One presumes that "respectable" racists felt validated in their beliefs when the Tottenham riots erupted in the Eighties, but those were short-lived; the riots were localised and didn't engulf the whole country.

What Liz can't see (for reasons I do not understand) is that it's not the egregious, thuggish, obvious racists that are the problem, it's the nice, tidy, respectable ones. Why? They find ways to express their racism that bypass hate speech laws and feed into the subconscious prejudices of ordinary, everyday people, who don't perceive their negative attitudes as racist but as common sense. That's what makes institutional racism so damn hard to root out. It'd be easier if they all wore swastikas and burnt crosses on the front lawn.

How can we effectively counter hate speech?


Challenging offensive speech of any kind is hard work, but it's ultimately worth it. I rely on scraptivism — using argument to influence one's opponent's audience by showing them up. The hard part is being consistent — and effective — since social media tends to silo its audience. You've got to keep the effort up, as some people need to be countered over and over again. You also need to provide positive messages, since "You're wrong" just gets a back-and-forth antagonistic situation going, which tends to play to tribalism. It's hard to maintain a tribal point of view when you're presented with kind behaviour from a member of a demonised group. Given that you're up against an "If it bleeds, it leads" mentality even in the most stable environments, you may feel like giving up, but in a sea of negativity you may find that your story of kindness is the one that gets attention, if only for novelty value.

Write to them


I was very pleased when, after emailing the Metro to complain about their referring to whistleblower Ed Snowden as a traitor, they changed the way they wrote about him. It does work. The trick is to back up your assertions with facts; I was careful to reference the laws affecting Ed's opportunity to defend himself (basically, the whistleblower defence was nuked, so he had none). There's also the letters to the editor page, where you can post counter-speech (if they'll host it).

In real life


It's harder to stand up to your friends than to your enemies, but it has to be done. I've called friends out a few times when I thought they were wrong; that baby-eating story being but one example. I've told my neighbours to flat-out stop reading the Daily Mail and the Sun on the grounds that they're trash and make up stories. They still like me. If I hear comments demonising individuals or groups, I always ask where they got it from, then ask why they believe such crap. Being on the receiving end of that kind of thing has made me very sensitive to it, and I naturally side with the underdog. I have to watch that tendency, since it does make me open to manipulation. Sometimes you have to win people over one at a time.

Is legislation the answer?


Our Glorious Leaders just love to seize the opportunity, whenever they can, to establish control over the rest of us. Hacked-off people aren't much better; I agree that bad actors shouldn't be allowed to get away with behaving badly, but going after Facebook over dodgy adverts is about as effective as going after Ford for making getaway cars.

The point is, when politicians make laws or people launch lawsuits on the grounds that Something Must Be Done, ridiculously stupid things happen as the platform companies scramble to be seen to be doing something as profitably as possible. This usually means automating a solution using algorithms that filter by keyword. Result: policies that end up curtailing speech on the grounds that somebody might be offended, even if it's the poster himself.
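To show what I mean, here's a minimal, hypothetical keyword filter in Python (my own sketch, not any platform's real code; the banned words are invented for the example). It has no notion of context, so the person quoting abuse in order to report it gets flagged right along with the abuser:

```python
# A deliberately crude keyword filter: banned-word list, substring matching,
# zero context. Everything here is invented for illustration.
BANNED_WORDS = {"vermin", "scum"}


def is_blocked(post):
    text = post.lower()
    return any(word in text for word in BANNED_WORDS)


print(is_blocked("Drive those vermin out of our town"))             # True: the abuse is caught
print(is_blocked("Someone called my family 'vermin'. Reporting."))  # True: so is the victim reporting it
print(is_blocked("The scummy weather has ruined my allotment"))     # True: so is a moan about the rain
```

That's the curtailing-speech-because-somebody-might-be-offended problem, poster included, in a dozen lines.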

As I've stated earlier, censorship is problematic in and of itself because it's subjective; basically, you're at the mercy of someone's opinion as to whether a given article constitutes hate speech or acceptable opinion, and that's when there are human eyeballs on the item in question. Don't get me started on bots.

It's the people's problem


The reason that hate speech exists at all is that human beings are messy, complicated, and often downright horrible. More often than not it's not even personal; that's just how it is. The internet provides an instantaneous means of broadcasting funny cat videos, porn, beheadings, the news, and pictures of your dinner. It's the participatory nature of the beast that gives it its power; the only way to effectively control it is to limit who can or can't write blog posts or upload images and videos. Aye, there's the rub.

Keyword sniffers can easily be fooled; you have only to change the way a word is spelled or use slang terms to get around them, hence lolspeak, etc. There's also the fact that internet platforms' business models tend to revolve around serving ads, for which they need eyeballs, for which they need to get and hold your attention. That which we make popular by sharing tends to get promoted in order to get more eyeballs on the ads. You can only blame so much on the platform: if you're in the habit of sharing negative posts about individuals and groups, you're part of the problem.
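About those keyword sniffers: here's the other half of the toy filter sketch from earlier (again, my own invented example, not anyone's production system), showing just how little it takes to slip past substring matching.

```python
# The same crude substring filter, versus some very old evasion tricks.
BANNED_WORDS = {"vermin"}


def is_blocked(post):
    text = post.lower()
    return any(word in text for word in BANNED_WORDS)


for post in [
    "Those people are vermin",         # caught
    "Those people are v3rm1n",         # leetspeak: missed
    "Those people are v.e.r.m.i.n",    # punctuation padding: missed
    "Those people are total rodents",  # slang and euphemism: missed
]:
    print(is_blocked(post), "-", post)
```

Tighten the word list and you multiply the false positives from the earlier sketch; loosen it and the evasions walk straight through. That's the bind automated filtering puts you in.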

Okay, now what?


Going after the individuals who post horrible content generally just annoys them. They can easily set up a new account and continue their activities. If you jail them to make an example of them, the people who might share their views don't go away; they go underground. That's why everyone seems so surprised by all the racism we're seeing these days: they thought it had been legislated away. Wrong. Then there's the structural problem of siloing, but that's a side effect of enabling user filtering. Heck, I do it. If I weren't so aware of the largely left-wing bubble in which I live online, theirs would be the only voices I'd hear. I have to seek out conservative voices, since my friends don't tend to follow them and many people who call themselves conservative are basically swivel-eyed loons.

With great power comes responsibility


So... expecting people to take personal responsibility for their online activity is where the platforms started off. Unfortunately, many people are irresponsible and downright lazy, which is why we're where we are now. The solutions on the table appear to boil down to paternalistic nannying and centralised control. I don't like that idea, as it means that someone who doesn't share my views will have the job of deciding what I can or can't see online based on their personal prejudices — or a set of keywords. I certainly don't like the idea of having to run my blog posts by a censor to ensure I'm not breaking any laws or offending any sensibilities.

That said, it's reasonable to set limits on how badly a person can behave before we call time on their activities. I'd certainly recommend more local moderators trained to tell the difference between hate speech and actual news reports, and I would totally shut down the accounts of abusive people, particularly if they called on others to join in mobbing someone. Terms of service should certainly be enforced. There also needs to be greater cooperation in terms of helping to unmask bad actors, e.g. the ones running scams.

The internet and the platforms on it can only ever do as much good or evil as the people using them allow. That is both the problem and the solution.
