
Technology companies in the pandemic: falling short when protecting us from fake science

Updated: May 26, 2020

By Juan Santiago and Gary Smith

Social media and new communication platforms have played an important role in our lives in recent weeks. During the pandemic, platforms such as Zoom and Teams have seemingly arrived from nowhere to dominate our everyday business (and social) lives. Everything from chats with grandparents to key business meetings has shifted to these new platforms. Online retail delivers food to your door, and YouTube offers videos to teach you how to cook that dish you’ve been craving.

But there have also been negatives. The uncertainty surrounding the origins of Covid-19 created perfect conditions for the spread of conspiracy theories. Many of these were born of pseudoscience, cutting corners on accuracy in order to serve a pre-existing, biased narrative. The internet encourages the incubation and hyping of such theories, some of which have harmful, or even malicious, consequences.

One example can be traced to Thomas Cowan, an American physician whose YouTube video led to people setting fire to 5G towers in the UK. In the video, he claimed that Covid-19 is caused by the frequencies required to deploy 5G internet, citing similarities to the 1918 Spanish flu pandemic, which coincided with the development of radio. Cowan had his medical license revoked in 2017 and is perhaps not the best person to dispense medical advice.

President Trump suggested that disinfectants and UV light might be used inside the human body as cures for Covid-19. The New York Times reported that, following his comments, 700 Facebook posts promoting unproven treatments attracted over 50,000 comments and likes. As a sign of how these stories mutate and multiply, note that a research paper by David Brenner of the Center for Radiological Research at Columbia University was being cited as evidence supporting Trump’s comment. For those who bothered to read it, the actual study looked at using UV lighting to kill the coronavirus in public spaces, not in human bodies.

Research undertaken by the Network Contagion Research Institute finds that online hate speech towards Asians has surged alongside theories that the Chinese engineered the virus. The researchers examined how content posted on 4chan, an online bulletin board hosting some of the most offensive material on the internet, eventually makes its way to mainstream platforms such as Twitter and Reddit. Online hate leads to offline hate. The UK reported 267 hate crimes against people of Chinese origin during the first three months of the year, putting 2020 on course for three times as many offences as were recorded in 2019.

A recent video called “Plandemic” attracted more than 7 million views before being taken down by Facebook. It promotes misinformation about Covid-19, vaccines, and big pharma collusion, suggesting that Bill Gates and Anthony Fauci are hiding the truth in order to profit from sales of existing patented drugs. The video features Judy Mikovits, a scientist and anti-vaccine advocate who has previously been discredited for fraud and scientific misconduct. Although YouTube and Facebook have taken the video down, those who have debunked “Plandemic” have been accused of censorship and of attacking free speech.

Social media platforms gain value by keeping their users engaged, and so employ algorithms that push controversial content. In the current laissez-faire environment, they are unlikely to take voluntary action to stop the spread of misinformation that generates likes, comments, and shares. This should change.

Moreover, the algorithms often direct content so that people see material that appeals to their pre-existing biases. This phenomenon was illustrated in “Blue Feed, Red Feed”, a Wall Street Journal interactive article published in 2016 ahead of the US presidential election, which highlighted how each side was able to nudge media exposure to its respective voter base.

All of this leaves us with a dilemma. Ignoring disinformation allows it to survive, but engaging with it and attempting to correct it can sometimes add fuel to the fire. If the tech companies will not self-police, then governments might feel obliged to get involved. That might not be an optimal outcome.

China is an example of where we might be headed. Beijing successfully controls internet content by censoring what can appear on platforms that operate inside China. The downside of such power is that governments themselves can become part of the problem. Suppressing the truth can have negative consequences, as we saw when the virus was first discovered: Li Wenliang, a physician in Wuhan, took to the internet in late 2019 to warn of a possible new virus, only for the authorities to reprimand him for “spreading rumors”.

Is there a middle way? Asking the tech companies to manage their algorithms under a “do no harm” rule could be one solution, especially in matters relating to health. The current approach in western nations places free speech above the need to speak the truth and above the risk of doing harm. A better balance is required, and the tech companies would be wise to take the lead before governments are compelled to act.
