As social media take on an ever more central role in information sharing, organising and news gathering, it matters all the more that so much of what circulates there is nonsense – be it honest misinformation or active deception.
Researchers at Britain’s University of Sheffield are attempting to tackle this problem by building what they’re calling a “social media lie detector”.
Social networks have been used to spread accusations of vote-rigging in Kenyan elections, to allege that Barack Obama is a Muslim, and to claim that animals were set free from London Zoo during the 2011 riots. In all of these cases – and many more – an ability to quickly verify information and track its provenance would enable journalists, governments, emergency services, health agencies and the private sector to respond more effectively.
The project, funded by the European Union, involves designing an algorithm that would “classify online rumours into four types”: speculation (“such as whether interest rates might rise”); controversy (“as over the MMR vaccine”); misinformation (“where something untrue is spread unwittingly”); and disinformation (“malicious intent”). The assessed credibility of a post – mostly tweets, in practice – would be calculated from a range of factors.
The system will … automatically categorise sources to assess their authority, such as news outlets, individual journalists, experts, potential eye witnesses, members of the public or automated “bots”. It will also look for a history and background, to help spot where Twitter accounts have been created purely to spread false information.
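The article only sketches the system in outline, but the idea can be illustrated with a toy scorer. Everything below is invented for illustration – the source categories come from the quote above, while the weights, feature names and formula are assumptions, not anything published by the Pheme project:

```python
# Illustrative sketch only: the categories echo the article, but the
# weights and the scoring formula are made up, not Pheme's actual method.

RUMOUR_TYPES = ("speculation", "controversy", "misinformation", "disinformation")

# Source categories named in the article, with invented base-authority weights.
SOURCE_AUTHORITY = {
    "news_outlet": 0.8,
    "journalist": 0.7,
    "expert": 0.7,
    "eyewitness": 0.6,
    "member_of_public": 0.4,
    "bot": 0.1,
}

def credibility_score(source_type: str, account_age_days: int,
                      corroborating_posts: int) -> float:
    """Combine a few of the signals the article mentions into one
    illustrative score in [0, 1]. The weighting is purely notional."""
    authority = SOURCE_AUTHORITY.get(source_type, 0.4)
    # A brand-new account is treated as suspect, per the article's point
    # about accounts created purely to spread false information.
    history = min(account_age_days / 365, 1.0)
    corroboration = min(corroborating_posts / 10, 1.0)
    return round(0.5 * authority + 0.2 * history + 0.3 * corroboration, 3)

print(credibility_score("news_outlet", account_age_days=3650, corroborating_posts=12))
print(credibility_score("bot", account_age_days=2, corroborating_posts=0))
```

The point of the toy is only that several independent signals – who is speaking, how long they have been around, whether anyone corroborates them – get folded into a single judgment, which is exactly the move Poole questions below.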
(Asked whether Pheme is limited to tweets or will apply to a range of social networks, the response is: “The project aims to be applied to other social networks too.”) Here’s Dr Kalina Bontcheva, the project’s lead researcher, based in the University of Sheffield’s Department of Computer Science:
We can already handle many of the challenges involved, such as the sheer volume of information in social networks, the speed at which it appears and the variety of forms, from tweets, to videos, pictures and blog posts. But it’s currently not possible to automatically analyse, in real time, whether a piece of information is true or false and this is what we’ve now set out to achieve.
Writing in the Guardian, however, Steven Poole has reservations about Pheme as arbiter of veracity.
On the face of it, these look like good ideas. But one can immediately think of examples where they would have resulted in misleading judgments. Authoritative news outlets, for example, have sometimes been complicit in spreading state disinformation (see the New York Times’s sorry record with the pre-Iraq war weapons of mass destruction claims). And sometimes, of course, what one lone rebel says is correct even when the institutional authorities disagree (see Galileo).
The risk, then, is that such systems will encourage their users to place more faith in mainstream sources simply because they are official. In the future, if such automated systems of truth grading are taken seriously by powerful institutions or the state itself, then the people designing the algorithms will essentially be an unelected cadre of cyber thought police.
It’s not the first such tool to weigh up the nonsense-level of a given tweet. This, from an October 2013 report in Wired:
[A] fascinating recent study from Imperial College London suggests a new approach. Borrowing some tricks from computational neuroscience, coauthors Gabriela Tavares and Aldo Faisal have come up with an algorithm that can tell — with 85% accuracy — whether a Twitter account is home to a bot or (worse) a corporate shill instead of a regular person.
See also: Mandela death tweet a hoax
Toby Manhire is in London on the British High Commission / Financial Times Scholarship sponsored by British Airways