We are facing a global crisis of widespread unverified information
Technology brings information to our fingertips. But at what cost?
Coronavirus is not just a public health emergency
The World Health Organisation warned in February that the crisis had been accompanied by "a massive ‘infodemic’".
This infodemic is an unprecedented overabundance of information—both accurate and false—that prevents people from accessing authoritative, reliable guidance about the virus.
Months on from that warning, research from Ofcom has shown that a significant number of people still see misinformation about COVID-19 online each week, causing confusion, fear and mistrust.
Causes of the infodemic
The causes of the infodemic are multifaceted. Evidence we received consistently emphasised that loss of trust in institutions is both an aim of and an opportunity for hostile actors.
Foreign sources
Campaigns by both state actors (Russia, China and Iran) and non-state actors (such as Daesh and the UK and US far right) have spread false news and malicious content.
Professor Philip Howard, Director of the Oxford Internet Institute, told us that these actors aim "to degrade our trust in public institutions, collective leadership or public health officials".
Financial gain
Several witnesses claimed that they had observed people attempting to exploit the crisis for financial gain, either through scams or quack cures.
Dr. Claire Wardle of First Draft News told us there was "a huge increase in scams and hoaxes and people motivated by financial gain", including products such as elderberry supplements and testing kits falsely advertised as FDA- or CDC-approved.
Good intentions
Finally, many people have shared misleading or false information with well-meaning intentions.
Dr. Wardle provided insight into the psychological and social reasons why people may share misinformation, saying that "larger proportions of the population are losing trust in institutions, feel hard done by, and conspiracies basically say, 'you don’t know the truth. I’m telling you the truth'".
The impact of misinformation
Public health impact
Early examples of misinformation during the pandemic often misled people about cures or preventative measures to tackle infection. Some people have mistakenly turned to unproven home remedies, stopped taking prescribed medicine, or ingested harmful chemicals such as disinfectant. Others have avoided hospital altogether.
This impact has been especially drastic amongst specific British communities. A UK GP told us that this type of misinformation has caused particularly acute panic and confusion amongst British Asian communities, some of whom "feel adamant that doctors are actively trying to harm them or discharging them without treating them".
5G conspiracies
Conspiracy theories falsely linking 5G networks to the spread of the virus have prompted attacks on telecoms infrastructure and staff. Written evidence from BT stated that, between 23 March and 23 April alone, there were 30 separate attempts to sabotage the UK's digital infrastructure, while EE said its personnel and subcontractors had faced 70 separate incidents, including "threats to kill and vehicles driven directly at staff".
Mobile UK, the trade association for the UK's four mobile network operators, was forced to issue a statement in April warning that the "resilience and operational capacity of the networks to support mass home working and critical connectivity to the emergency services, vulnerable consumers and hospitals" was being seriously impacted.
Impact on frontline health workers
Misinformation has also directly and indirectly impacted health workers themselves. As one doctor wrote, medical staff are "battling two challenges: trying to save the lives of ICU patients succumbing to the virus and tackling the infodemic".
Conspiracy theories have also helped fuel targeted abuse and harassment online. Worryingly, the belief that 'Asians carry the virus' has led to real-life attacks as well as online trolling.
UK police statistics show a 20% increase in anti-Asian hate crimes, with more than 260 offences recorded since lockdown began.
What part do tech companies play in the infodemic?
Social media giants hold great power and have been left largely unaccountable for their inaction
Misinformation sells
The prevalence of misinformation online must be understood within the business context of tech companies. Social media companies generate revenue primarily through advertising targeted at users based on observed or perceived tastes and preferences.
The more people engage with conspiracy theories and false news online, the more platforms are incentivised to continue surfacing similar content. This theoretically encourages users to continue using the platform so that more data can be collected and more adverts can be displayed.
Platform policies against misinformation
The Government states that legislation will simply hold platforms to their own policies and community standards.
However, we found that these policies were not fit for purpose, a fact seemingly acknowledged by the companies themselves. Facebook, for example, conceded that its enforcement of its own policies, terms and conditions, guidelines and community standards on hate speech and misinformation was not perfect.
The Government must empower any new regulator to go beyond ensuring that tech companies enforce their own policies, community standards and terms of service.
Identifying and reporting disinformation
Currently, tech companies emphasise the effectiveness of Artificial Intelligence (AI) content moderation over user reporting and human content moderation.
However, the evidence has shown that overreliance on AI moderation has limitations, particularly with speech, but often with images and video too.
Easy-to-use, transparent user reporting systems, as well as robust proactive systems that combine AI moderation with human review, are needed to identify and respond to misinformation and other instances of harm.
What tech companies are doing to stop the spread of misinformation
We recognise tech companies' innovations in tackling misinformation, such as ‘correct the record’ tools and warning labels. We also applaud the role of independent fact-checking organisations, which have provided the basis for these tools.
These contributions have shown what is possible in technological responses to misinformation, but often these responses do not go far enough. There is little to no explanation from tech companies as to why such shortcomings cannot be addressed.
Twitter's labelling, for instance, has been inconsistent, and we are concerned that Facebook's new corrective tool misses many people who may have been exposed to misinformation.
The Government's response and proposed legislation
What has the Digital, Culture, Media and Sport (DCMS) Department done to tackle the issues?
In its Online Harms White Paper, the Government stated its aim "to make Britain the safest place in the world to be online".
Legislation would take a "proportionate, risk-based response" by introducing "a new duty of care on companies and an independent regulator responsible for overseeing this framework".
Counter Disinformation Unit
In March, the Secretary of State announced his intention to re-establish the DCMS-led Counter Disinformation Unit, bringing together existing capability and capacity across government. The goal was to "help provide a comprehensive picture on the potential extent, scope and impact of disinformation".
We did, however, feel that this duplicated existing work, as many independent fact-checking organisations are already up and running. Experts we spoke to said that the Department would be better placed to help provide independent researchers with more data, to help them understand the scope and scale of the problem.
Engagement with social media companies
The DCMS Department has also led on engagement with the social media companies themselves, and tech companies have reciprocated.
Facebook, Twitter and TikTok told us that they had provided the Government with pro bono advertising credit on their platforms to counter misinformation and provide verified sources.
They all also said that they had amplified Government messaging on their platforms through various information hubs, adjusted search results and other platform-specific features.
Offline solutions to an online problem
As technology continues to reach into all areas of our lives, there is a need for comprehensive digital literacy, community engagement and school education programmes.
The Government had committed to publishing a media literacy strategy this summer. We understand the pressures caused by the crisis, but believe such a strategy would be a key step in mitigating the impact of misinformation, including on the current pandemic.
An Online Harms regulator
The Government has emphasised that decisions about the scope of regulation for so-called ‘harmful but legal’ content should fall to the regulator.
Though a regulator has yet to be confirmed, Ofcom (the UK's official communications watchdog) has been the front runner to take on this role. In its favour, we note Ofcom's track record of research and its expedited work on misinformation in other areas of its remit during this time of crisis.
The regulator must be named immediately to give it enough time to take on this critical remit.
Our key recommendations
1. The Government must make a final decision on the online harms regulator now and bring forward online harms legislation this autumn.
2. Give Parliament a role in establishing which harms (including disinformation) are in scope, rather than allowing tech companies to decide what is and isn’t acceptable. Merely holding tech companies to their own (often inadequate) policies is not enough to protect freedom of expression or to ensure regulation is robust.
3. Give legislation real teeth where companies are failing in their duty of care, including powers to impose significant fines, disrupt business activities and, ultimately, pursue custodial sentences where there is evidence of wrongdoing.
4. The Government should publish a media literacy strategy by September at the latest, and report on the adoption of the ‘Teaching online safety in school’ guidance by the end of the next academic year.
We have made these recommendations to the Government.
The Government now has two months to respond to our report.
Our report, 'Misinformation in the COVID-19 Infodemic', was published on 21 July 2020.
Detailed information from our inquiry can be found on our website.
If you’re interested in our work, you can find out more on the House of Commons Digital, Culture, Media and Sport Committee website. You can also follow our work on Twitter.
The Sub-Committee on Online Harms and Disinformation was set up in March 2020 to consider a broad range of issues in this area, including forthcoming legislation on Online Harms.
The main Digital, Culture, Media and Sport Committee scrutinises the work of the Department for Digital, Culture, Media and Sport and its associated public bodies, including the BBC.
Cover image credit: Daria Shevtsova via Pexels