
Editor’s note: Here’s an excerpt from an article in The BMJ. To read the piece in its entirety, click here.

In a move likened to the way governments have assumed emergency powers in response to the COVID pandemic, Facebook has removed 16 million pieces of its content and added warnings to around 167 million. YouTube has removed more than 850,000 videos related to “dangerous or misleading COVID-19 medical information.”

While a portion of that content is likely to be willfully wrongheaded or vindictively misleading, the pandemic is littered with examples of legitimate scientific opinion caught in the dragnet, resulting in its removal or de-prioritization depending on the platform and context. This underscores the difficulty of defining scientific truth, prompting the bigger question of whether social media platforms such as Facebook, Twitter, Instagram and YouTube should be tasked with defining it at all.

“I think it’s quite dangerous for scientific content to be labeled as misinformation, just because of the way people might perceive that,” says Sander van der Linden, professor of social psychology in society at Cambridge University, UK. “Even though it might fit under a definition [of misinformation] in a very technical sense, I’m not sure if that’s the right way to describe it more generally because it could lead to greater politicization of science, which is undesirable.”

Read The BMJ article in its entirety here.