The war on fake news: From clickbait to political misinformation, internet awash with fictional stories masquerading as fact
But can it be stopped, asks Rosamund Urwin
The earth will be plunged into a 15-day blackout, warns Nasa! Morgue employee cremated while taking a nap! Denzel Washington says Donald Trump’s election saved us from an ‘Orwellian police state’!
You can stop fretting about the darkocalypse, slapdash crematoriums and Denzel’s sanity. These headlines are, of course, ‘fake news’ — fabrications from the warped world of hoaxes-for-hits and viral videos with a veracity void.
In the past two years, tall tales have flooded Facebook and Twitter, and climbed to the top of Google searches. A whole industry has sprouted up; more than 100 US political ‘fake news’ sites have been registered in the small Macedonian town of Veles. Now politicians and tech firms promise a crackdown. This month, MPs in the Digital, Culture, Media and Sport select committee — led by Damian Collins — travelled to Washington to grill Facebook, Google and Twitter about Russian interference in the EU referendum and election, including the use of phoney news. At the Web Summit in Lisbon in November, the usually self-congratulatory sector turned self-reflective about how to tackle the mendacity in its midst.
Tech executives admit that fake news endangers their brands’ credibility and thus bottom lines (the consumer goods giant Unilever threatened to pull investment from sites that failed to address the problem), so their efforts to combat fake news are rooted in self-interest. “If people lose trust, they’ll stop using our services,” says Richard Gingras, Google’s vice-president of news.
Gingras adds that those sharing a link don’t necessarily believe the content. But a Buzzfeed study found that three quarters of US adults who see untrue headlines do believe them. And why wouldn’t they? These sites often look like polished products. There are two main motivations for spawning these scams: ideological and financial. The political falsehood machine — Hillary Clinton sold weapons to Isis; the Pope endorsed Trump — is dominated by right-wingers, especially in the US. Oxford University researchers found that 96% of Trump supporters who use Twitter shared fake news ahead of the State of the Union address.
Then there are the even more common cash-for-clicks stories, of the ‘FBI finds 3,000 penises during a raid on a morgue worker’s home’ ilk. “They spread like wildfire and have copycats,” says Jane Lytvynenko, a Buzzfeed journalist who specialises in fake news. “Some scams have only a headline and photo, and they’re still shared.”
So how are the tech firms responding? Last month at Davos, Google execs privately floated an idea to fellow attendees: that the company could advise users on the trustworthiness of articles, potentially with an extension for the Chrome browser. However, no product is believed to be in the pipeline yet.
Both Facebook and Google have toughened their stances on ‘fake news’ recently. Tessa Lyons-Laing, product manager on the news team at Facebook in the US, explains: “Anything which violates our standards — such as hate speech — we remove entirely from the platform. Then there is content that doesn’t violate our standards, but isn’t consistent with our news feed values (like ‘fake news’). We do not remove that content; we work to reduce it.”
The first line of defence against fake news involves chipping away at the financial incentives (advertising sold based on the number of hits web pages receive) to produce it. Google now prevents websites that run hoaxes from using its AdSense advertising network. Meanwhile, Facebook is trying to prevent those who keep sharing false stories from advertising on the site.
Then there’s the technological fightback. Facebook has changed the way it recognises inauthentic accounts, identifying them by patterns of behaviour such as repeated posting of identical content, and has removed tens of thousands of fake accounts in the UK alone. The site has also ensured that users now see fewer news-feed posts linking to poor-quality pages. Algorithms alone cannot identify all nefarious content, so Google has also introduced a ‘fact check’ label, applied by authoritative publishers and fact-checkers, which shows readers whether a claim has been vetted and what the verdict was. When a major story breaks, YouTube marks authoritative sources ‘top news’ in search results.
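Facebook has not published the signals it uses, but the behavioural pattern described above, an account repeatedly posting identical content, is simple to illustrate. The sketch below is a hypothetical toy version, not Facebook’s system: the function name, thresholds and data shape are all assumptions for illustration.

```python
from collections import Counter, defaultdict

def flag_suspect_accounts(posts, min_posts=20, dup_ratio=0.8):
    """Toy illustration of pattern-based bot flagging.

    posts: iterable of (account_id, text) pairs.
    An account is flagged when it has made at least `min_posts` posts
    and a large share of them (`dup_ratio`) are exact duplicates —
    the kind of repetitive behaviour a human poster rarely shows.
    """
    by_account = defaultdict(list)
    for account, text in posts:
        by_account[account].append(text)

    flagged = []
    for account, texts in by_account.items():
        if len(texts) < min_posts:
            continue  # too little activity to judge
        # Count how often the single most-repeated post appears
        top_count = Counter(texts).most_common(1)[0][1]
        if top_count / len(texts) >= dup_ratio:
            flagged.append(account)
    return flagged
```

A real system would of course weigh many more signals (posting cadence, account age, co-ordination across accounts), which is why, as the article notes, algorithms alone cannot catch everything.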
In the UK, YouTube runs ‘Internet Citizens’ workshops to warn teenagers about hoaxes and scams, and last April Facebook posted a temporary notice at the top of users’ feeds with tips for spotting falsehoods, such as treating sensational headlines with scepticism.
Finally, Google and Facebook have tried to build closer ties with reputable news organisations. This charm offensive involves stumping up cash (Google has a £133m fund for European publishers’ digital news projects), product development (as part of its Journalism Project, Facebook collaborates with media firms to create new advances, while the Google News Lab works to adapt tools and YouTube provides free video hosting for 50 European news publishers under Player for Publishers) and offering training for journalists. Google is also a founding member of the First Draft Coalition, which trains journalists to determine whether eyewitness accounts are real, and both sites have worked with independent fact-checkers like PolitiFact, Snopes and Full Fact to scrutinise content that users flag as suspicious.
Twitter’s response lags behind the other two. When you speak to staff off the record, they often say that Twitter mirrors society, seemingly abdicating responsibility for the content published on the platform. James Ball, the author of Post-Truth, says this attitude extends to politicians: “Twitter’s political response seems designed to p*** off world governments.”
While working at Buzzfeed, Ball found a network of 13,000 Russian bots on Twitter that pumped out pro-Brexit propaganda in the run-up to the referendum. “We shared details with Twitter and the company — characteristically — refused to comment but deleted the bot accounts.”
Ball believes, however, that Facebook has been “let off too lightly on this — it’s easier to find bot activity on Twitter because the platform is more open than Facebook, but Facebook’s audience is much bigger.”
There are other concerns too. According to research by Lytvynenko and her colleagues Craig Silverman and Scott Pham, despite Facebook’s efforts, the 50 most viral fake stories of 2017 were shared and commented on more than the top 50 of the year before. Efforts to address fake news are also patchy, with many advances available only in a smattering of countries (even Facebook’s mini guide to spotting fake news appeared in just 14 of them). And many of the fixes are imperfect. Back in 2016, the Guardian journalist Carole Cadwalladr found that when she typed ‘did the Hol...’ into Google search and clicked on the autocomplete suggestion, ‘Did the Holocaust happen’, the top result was a link to the neo-Nazi site Stormfront’s article, ‘Top 10 reasons why the Holocaust didn’t happen’. Moreover, do we really want tech firms to be the arbiters of truth?
There’s also a bigger tension here between tech firms and the media, which feels short-changed. Earlier this year, Rupert Murdoch argued Facebook should pay news publishers a ‘carriage fee’ for using their stories. Last year, I attended an off-the-record dinner for journalists hosted by one of the tech titans. The journalists were angry that tech firms threaten their business models, but they were also frustrated that these firms wouldn’t admit to being publishers and take responsibility for their content.
Ball explains that their position is a hangover from when internet providers didn’t want to be responsible for everything users found online. “This was then extended to websites, where the case is much weaker,” he says. “People already accept that internet companies should have responsibilities, such as taking down child porn from their sites — so I think journalists are fighting the last fight here.” He believes Google, Facebook and Twitter should talk more to newsrooms and experts on misinformation.
There is a role for government here, too. In January, a new law came into force in Germany giving social media firms 24 hours after a complaint to take down fake news and hate speech, with fines of up to €50m (£44m) for failing to remove the latter. Tech bosses are worried other countries could follow. The age of the internet Wild West is coming to a close, and the sheriffs (regulators) may finally have some bullets in their guns.
‘Fake news’ is one of the growing pains of the tech titans. As companies, they are slowly waking up to the idea that far from being the world’s saviours, the geeks may have unleashed dark forces. But as Ball points out, that’s not entirely on them: “Let’s not forget — people have an appetite for this stuff. Perhaps ‘fake news’ shows that we end up with the social media we deserve.”