Thursday, October 5, 2017

Las Vegas Story: Google Finds Itself Gamed Again by Organized Hate Groups (Back to the Case of Father Martin and Church Militant)

On 21 September, I presented a series of screenshots showing that on that day, at about 2 P.M. CST (in the U.S.), the three "top news" stories Google was returning to those who googled the name "James Martin" were all personal attacks on Father Martin from the Church Militant website, which is not a bona fide news site at all.

Church Militant is, not to put too fine a point on it, an organized right-wing Catholic anti-LGBT hate group with a personal vendetta against Father Martin over his book about building bridges between the LGBT and Catholic communities. (Church Militant and its supporters also have it in for the current pope, and they consider Father Martin low-hanging fruit as they pursue their anti-Francis campaign.)

The point of my posting on 21 September: something's seriously awry when a search for bona fide news stories on Google returns, as its three top recommendations, three vile, lie-filled personal attacks on the person whose name is the subject of one's Google search — all from a non-news site representing a hate group.

When the mass murder took place in Las Vegas several days ago, I woke to find my Twitter feed full of warnings about fake news stories that had swamped Twitter, Google, and Facebook as soon as the Las Vegas news broke. As Alexis Madrigal reports for The Atlantic:

In the crucial early hours after the Las Vegas mass shooting, it happened again: Hoaxes, completely unverified rumors, failed witch hunts, and blatant falsehoods spread across the internet. 
But they did not do so by themselves: They used the infrastructure that Google and Facebook and YouTube have built to achieve wide distribution. These companies are the most powerful information gatekeepers that the world has ever known, and yet they refuse to take responsibility for their active role in damaging the quality of information reaching the public.
BuzzFeed's Ryan Broderick found that Google's "top stories" results surfaced 4chan forum posts about a man that right-wing amateur sleuths had incorrectly identified as the Las Vegas shooter. 
4chan is a known source not just of racism, but hoaxes and deliberate misinformation. In any list a human might make of sites to exclude from being labeled as “news,” 4chan would be near the very top. 
Yet, there Google was surfacing 4chan as people desperately searched for information about this wrongly accused man, adding fuel to the fire, amplifying the rumor. This is playing an active role in the spread of bad information, poisoning the news ecosystem. 

Madrigal reports that he contacted Google about what happened in this case and received a runaround (that's my word summarizing what he says, not one he himself uses). In response to Madrigal's inquiries, Google talked algorithm, as if its much-vaunted news algorithm were some kind of magical sorting tool akin to Adam Smith's invisible hand, one that allows us to prescind from hard moral questions: who is producing news and spreading false information, why they are doing so, and why news-search companies like Google, which claim to be respectable, are aiding and abetting hate groups in spreading disinformation as "news."

Madrigal points out that, given the tremendous importance of Google, Facebook, and YouTube (I'd add Twitter to this list) as news sources right now, the onus is on these companies to monitor what passes as news on their sites, and to do so with far more vigilance than they are clearly exercising at present. He concludes,

As news consumers, we can say this: It does not have to be like this. Imagine a newspaper posting unverified rumors about a shooter from a bunch of readers who had been known to perpetuate hoaxes. There would be hell to pay—and for good reason. The standards of journalism are a set of tools for helping to make sense of chaotic situations, in which bad and good information about an event coexist. These technology companies need to borrow our tools—and hire the people to execute on the principles—or stop saying that they care about the quality of information that they deliver to people. 
There's no hiding behind algorithms anymore. The problems cannot be minimized. The machines have shown they are not up to the task of dealing with rare, breaking news events, and it is unlikely that they will be in the near future. More humans must be added to the decision-making process, and the sooner the better.

Regarding Father Martin and what Google permitted an organized anti-gay right-wing Catholic hate group, Church Militant, to do to him in its news search on 21 September, I rest my case. Hate-filled lies are not news, unless we're living in the kind of society Orwell predicted.

P.S. What Madrigal says about Google's magical algorithm was also very much my point several days ago, when I spoke about the magical-mystical way that centrist Catholic blog and news sites censor comments while hiding behind fictive screens that allow them to pretend the "system" itself is doing the censoring. Google pretends in just the same way that its "algorithm" sorts out the messy moral questions as organized hate groups spread fake news like wildfire around the internet.

People stand behind these magical-mystical tools. And people need to take responsibility for the decisions that allow hate groups to spread lies online while, at some centrist Catholic blog sites, those standing up to the hate and speaking out against it can expect to find themselves censored by the "system."

The graphic is an IFLA (International Federation of Library Associations and Institutions) infographic based on FactCheck.org’s 2016 article "How to Spot Fake News" and uploaded to Wikimedia Commons for sharing online.
