Big Brother News Watch
COVID Masking Hysteria Was Never About Following the Science + More
COVID Masking Hysteria Was Never About Following the Science
By the spring of 2020, when I returned from my COVID sabbatical, all previous knowledge of immunity seemed to have been discarded. This contagion of “know-nothingism” could not be missed. On my return to the Capitol after recovering from COVID, I was met by a gaggle of young journalists, the ones who occupy a spot between the Capitol subway and the escalator to the Capitol. They barreled up to me with multiple masks on their twenty-something-year-old faces and demanded to know why I wasn’t wearing a mask.
I calmly explained to them that the benefit of having survived COVID-19 was that I now had immunity. They challenged me, saying that I didn’t know how long my immunity would last. I responded in kind, replying that they didn’t know my immunity wouldn’t last.
The week before I returned from quarantine, I had donated my blood to researchers at the University of Louisville for analysis. They found that I had a robust antibody response to three different sites on the COVID-19 surface.
The reporters, none of whom had a science degree (nor had any of them likely even passed an advanced science course), angrily and self-righteously excoriated me for my “ignorance” and my “dangerous non-compliance.” What they did not do was challenge my position in any meaningful way by citing scientific studies based on randomized controlled trials showing any efficacy of masking for viral infection. No. The ignorance of today’s “journalists” is staggering. They only know how to repeat the dogma fed to them.
Missouri Health Officials Defend Posts Telling Vaccine Skeptics to ‘Just Keep Scrolling’
St. Louis Post-Dispatch reported:
Missouri health officials recently defended social media posts instructing COVID-19 vaccine skeptics to “just keep scrolling” after the posts generated heavy criticism.
The Sept. 13 posts on X and Facebook promoted the updated COVID-19 vaccine, which authorities began rolling out last month. The U.S. Centers for Disease Control and Prevention is recommending the vaccine for everyone 6 months and older.
Missouri health officials said, “COVID vaccines will be available in Missouri soon if you’re into that sort of thing. If not, just keep scrolling!”
Social media users tore into the health department, and the post on X had been viewed more than 633,000 times as of Friday. But officials stood by their post in an email sent the next day to local public health agency administrators, which acknowledged people had “mixed feelings” about the messaging.
More Than 200,000 College Students May Get Tuition Refunds
Over 200,000 college students in Michigan could have part of their tuition refunded following several lawsuits brought by students after they were forced to attend school online due to the COVID-19 pandemic.
Last week, the Michigan Supreme Court heard arguments from students who attended Central Michigan University, Eastern Michigan University and Lake Superior State University who claim that they should receive a refund for payments they made for living expenses and other aspects of their tuition after attending school virtually, instead of in-person, during the COVID-19 pandemic.
The Fink Bressack law firm in Michigan, which is representing the students from the universities, said in a press release that Central Michigan University “has unfairly refused to issue satisfactory refunds for housing, meals, tuition and other fees which students prepaid for services the University is not currently providing.”
The Detroit News reported that Fink said during recent Michigan Supreme Court arguments that a decision in these cases could impact around 220,000 college students.
23andMe Resets User Passwords After Genetic Data Posted Online
Days after user personal data surfaced online, the genetic testing company 23andMe said it’s requiring all users to reset their passwords “out of caution.”
On Friday, 23andMe confirmed that hackers had obtained some users’ data, but stopped short of calling the incident a data breach. The company said that the hackers had accessed “certain accounts” of 23andMe users who used passwords that were not unique to the service — a common technique, known as credential stuffing, in which hackers try to break into victims’ accounts using passwords that have already been made public in previous data breaches.
According to 23andMe, the data was “compiled” from users who had opted into the DNA Relatives feature, which allows users who choose to switch on the feature to automatically share their data with others. In theory, this would allow a hacker to get more than one victim’s data just by breaking into the account of someone who opted into the feature.
AI, Social Media Drive Democracies to a Tipping Point
Experts are blaming AI and misinformation on social media for pushing embattled democracies around the world toward a tipping point of distrust.
Why it matters: The rise of cheap and easy-to-use generative AI tools, the lack of legal guardrails for their deployment and relaxed content moderation policies and layoffs at tech companies are creating the conditions for a perfect misinformation storm.
Snapchat’s AI Chatbot May Pose Privacy Risk to Children, Says U.K. Watchdog
Snapchat may have failed to properly assess privacy risks to children from its artificial intelligence chatbot, Britain’s data watchdog said on Friday, adding it would consider the company’s response before making any final enforcement decision.
The Information Commissioner’s Office (ICO) said if the U.S. company fails to adequately address the regulator’s concerns, “My AI”, launched in April, could be banned in the UK.
“The provisional findings of our investigation suggest a worrying failure by Snap to adequately identify and assess the privacy risks to children and other users before launching ‘My AI,'” Information Commissioner John Edwards said.
The ICO is investigating how “My AI” processes the personal data of Snapchat’s roughly 21 million U.K. users, including children aged 13-17.
AI Firms Working on ‘Constitutions’ to Keep AI From Spewing Toxic Content
Two of the world’s biggest artificial intelligence companies announced major advances in consumer AI products last week.
Microsoft-backed OpenAI said that its ChatGPT software could now “see, hear, and speak,” conversing using voice alone and responding to user queries in both pictures and words. Meanwhile, Facebook owner Meta announced that an AI assistant and multiple celebrity chatbot personalities would be available for billions of WhatsApp and Instagram users to talk with.
But as these groups race to commercialize AI, the so-called “guardrails” that prevent these systems from going awry — such as generating toxic speech and misinformation, or helping commit crimes — are struggling to evolve in tandem, according to AI leaders and researchers.
In response, leading companies including Anthropic and Google DeepMind are creating “AI constitutions” — a set of values and principles that their models can adhere to, in an effort to prevent abuses. The goal is for AI to learn from these fundamental principles and keep itself in check, without extensive human intervention.
Report: Feds Need Rules for Using Facial Recognition Tech
Several federal law enforcement agencies haven’t properly trained their staffs on how to use facial recognition technology or imposed policies to protect the public’s civil rights when it’s used, a report by a government watchdog says.
Why it matters: Facial recognition technology is being used increasingly by federal, state and local law enforcement agencies and has led to several false arrests nationwide — largely of Black men and women, according to advocates, research and news reports.
Even so, police, retail stores, airports and sports arenas are rapidly turning to the technology. Some local governments that initially restricted its use are weighing whether to ease those limits because of jumps in crime.
Zoom in: The recent report by the U.S. Government Accountability Office found that from October 2019 to March 2022, the FBI, DEA, Customs and Border Protection, Homeland Security Investigations and three other agencies used facial recognition systems for criminal probes without requiring staff training.
Remember That Letter Calling for a Pause on AI? It Didn’t Work
On March 29, 2023, more than 500 top technologists and business leaders signed onto an eye-catching open letter begging artificial intelligence labs to immediately pause all training on any AI systems more powerful than OpenAI’s GPT-4 for at least six months.
Plowing ahead without taking a breather, they warned, would pose “profound risks to society and humanity.” Luckily, the world listened and the word “AI” has vanished into our collective memory holes.
Just kidding. Advances in AI most certainly have not stopped, paused, hiccuped, or done anything other than abruptly accelerate in the six months since. As of October 2023, just about any startup or business even remotely connected to technology has tried to figure out ways to add ChatGPT-style chatbots or AI image generators into their pitch to consumers.
AI companies like OpenAI have plowed ahead with newer models and greater capabilities while others, like Meta and Amazon, have shifted their priorities to pour resources into the brewing AI tech race. The so-called pause was more like a starting gun.
School Surveillance Leading to ‘Digital Dystopia’ for Students, ACLU Says + More
School Surveillance Leading to ‘Digital Dystopia’ for Students, ACLU Says
Technology surveillance companies that market themselves to schools as ways for educators to ensure student safety are creating a “digital dystopia” that harms children’s trust and mental health, according to a new report from the American Civil Liberties Union (ACLU).
The $3.1 billion industry has marketed tools such as student online-activity tracking, facial recognition and cameras as ways to prevent bullying, self-harm and school shootings, but it has provided no evidence its technologies lead to these outcomes, the ACLU says.
The ACLU conducted its own research and reviewed findings collected by the Department of Justice, concluding there is a “lack of clear evidence” that the products advertised by educational technology (EdTech) firms keep students safe as the firms claim.
In addition, a survey conducted by the ACLU shows a third of 14- to 18-year-olds say they “always feel like I’m being watched” with the surveillance tech. Fifteen percent felt “exposed” from the monitoring, 14% say it makes them anxious and 13% say they are paranoid about it.
Huntington Beach Proclaims Itself a ‘No Mask and No Vaccine Mandate City’
The Huntington Beach City Council proclaimed the city a “No Mask and No Vaccine Mandate City” on Tuesday night, passing a resolution by a 4-3 vote. Mayor Pro Tem Gracey Van Der Mark, who introduced the item, said it prevents potential government overreach. Opponents of the proclamation labeled it as nothing more than a political stunt.
The resolution declares that mask and vaccine mandates are banned within city jurisdiction, with exceptions for those who test positive for COVID-19. It also states that residents retain the right to mask and vaccinate, and businesses retain the right to impose mask and/or vaccine requirements.
Van Der Mark said the state of California denied citizens individual liberties with the way it handled the coronavirus pandemic in 2020.
“They did deny the citizens of their individual liberties, including how to take care of yourself,” Van Der Mark said. “Business owners were not allowed to open unless they asked for vaccine cards or forced masks onto people. That’s not the country that we live in, and I believe as a city we need to stand up for our residents and our businesses.”
Microsoft Teams Is Getting a Pretty Creepy Facial Recognition Tool — but It’s Totally Fine
Your workplace meetings and calls could soon be able to detect exactly who is in the room thanks to a new (and slightly concerning) update from Microsoft Teams.
The video conferencing service has announced a new “desktop client face enrollment process” which it says can speed up identifying participants as they join a call on its Teams Rooms platform.
However, as the name suggests, users will have to “enroll their face” to sign up, at which point the company’s slightly concerningly named “People Recognition” platform will create a “face profile” for them.
First announced back in June 2023, People Recognition uses “advanced facial recognition algorithms” to allow Microsoft Teams Rooms kit to identify users and provide personalized experiences during video conferences and meetings.
Youngkin Takes $2 Million From TikTok Investor Despite App Ban, China Warnings
Gov. Glenn Youngkin accepted a $2 million political contribution this week from a donor with a multibillion-dollar stake in TikTok, a Chinese-owned app that the Republican governor banned from state devices late last year amid his broader campaign against Chinese influence in Virginia.
Jeff Yass, a billionaire financier whose personal stake in TikTok’s parent company is worth a reported $21 billion, donated $2 million to Youngkin’s Spirit of Virginia political action committee Tuesday. With hefty political donations, Yass has been helping TikTok rally conservatives in Washington against banning the app in the United States, the Wall Street Journal reported last month.
ECDC, Sweden Detail Good Practices, Lessons Learned From COVID School-Closure Policies
The European Centre for Disease Prevention and Control (ECDC) and the Public Health Agency of Sweden today released a 36-page after-action report on how decisions were made on whether to keep schools open during the earlier pandemic months. According to an ECDC press release, the reviewers looked at how the policies were made, gauged their impact, and assembled detailed lessons on good practices and areas for improvement in future health emergencies.
The report focuses on the period of November 2020 to January 2021, a time when Sweden experienced a second wave that was much larger than its first. Sweden’s policy was distance learning for secondary school students, while in-person learning, alongside infection control measures, remained in place for younger students, except when local outbreaks occurred.
Google Agrees to Reform Its Data Terms After German Antitrust Intervention
Following preliminary objections over Google’s data terms, set out back in January by Germany’s antitrust watchdog, the tech giant has agreed to make changes that will give users a better choice over its use of their information, the country’s Federal Cartel Office (FCO) said today.
The commitments cover situations where Google would like to combine personal data from one Google service with personal data from other Google or non-Google sources or cross-use these data in Google services that are provided separately, per the authority.
“In the future, Google will have to provide its users with the possibility to give free, specific, informed and unambiguous consent to the processing of their data across services. For this purpose Google has to offer corresponding choice options for the combination of data,” the FCO said, adding that the design of the new “selection dialogues” must not seek to manipulate users towards cross-service data processing (aka, no dark patterns).
Generative AI Is the Newest Tool in the Dictator’s Handbook + More
Generative AI Is the Newest Tool in the Dictator’s Handbook
A new report from Freedom House shared with Gizmodo found political leaders in at least 16 countries over the past year have deployed deep fakes to “sow doubt, smear opponents, or influence public debate.” Though a handful of those examples occurred in less developed countries in Sub-Saharan Africa and Southwest Asia, at least two originated in the United States.
“AI can serve as an amplifier of digital repression, making censorship, surveillance, and the creation and spread of disinformation easier, faster, cheaper, and more effective,” Freedom House noted in its “Freedom on the Net” report.
The report details numerous troubling ways advancing AI tools are being used to amplify political repression around the globe. Governments in at least 22 of the 70 countries analyzed in the report had legal frameworks mandating social media companies deploy AI to hunt down and remove disfavored political, social, and religious speech.
Those frameworks go beyond the standard content moderation policies at major tech platforms. In these countries, Freedom House argues the laws in place compel companies to remove political, social, or religious content that “should be protected under free expression standards within international human rights laws.” Aside from increasing censorship efficiency, the use of AI to remove political content also gives the state more cover to conceal itself.
Federal Appeals Court Extends Limits on Biden Administration Communications With Social Media Companies to Top U.S. Cybersecurity Agency
A federal appeals court has expanded the scope of a ruling that limits the Biden administration’s communications with social media companies, saying it now also applies to a top U.S. cybersecurity agency.
The ruling last month from the conservative 5th Circuit US Court of Appeals severely limits the ability of the White House, the surgeon general, the Centers for Disease Control and Prevention and the FBI to communicate with social media companies about content related to COVID-19 and elections that the government views as misinformation.
The preliminary injunction had been on pause and a recent procedural snafu over a request from the plaintiffs in the case to broaden its scope led the court on Tuesday to withdraw its earlier opinion and issue a new one that now includes the U.S. Cybersecurity and Infrastructure Security Agency. That agency is charged with protecting non-military networks from hacking and other homeland security threats.
Similar to the ruling last month, in which the appeals court said the federal government had “likely violated the First Amendment” when it leaned on platforms to moderate some content, the new ruling says CISA likely violated the Constitution as well.
The Founder Who Sold the Startup That Would Become Amazon’s Alexa Called Big Tech ‘Apex Predators’ and Says That’s Why Everyone Is Scared of AI
The reason so many people have been so afraid of AI isn’t because of the technology itself, but because of who is developing it: Big Tech. “Do you know why you’re all afraid?” AI pioneer Igor Jablokov asked the audience at Fortune’s CEO Initiative in Washington D.C., during a discussion about responsible development of AI. “It’s because Big Tech are apex predators.”
These companies, Jablokov elaborated in a phone call with Fortune, are using their positions of strength, whether financial or political through lobbying, to further cement their positions in the marketplace and keep new entrants out. The fear among some of these companies, he says, is that AI will upend the industry and demote some of them to the next batch of yesterday’s tech companies, like AOL, Motorola, or Yahoo.
All of this concentrated power ends up harming consumers too. “Eventually there’s only one source of technology,” he told Fortune. “There’s no control over how many ads they see, product quality ends up faltering. It’s all the monopolistic things, and monopolies are unhealthy because there’s no competition that drives prices up and your choices down.”
On Tuesday, the day of Jablokov’s comments, the Federal Trade Commission released a report outlining the many anxieties consumers felt about AI. “The bottom line?” the report reads. “Consumers are voicing concerns about harms related to AI—and their concerns span the technology’s lifecycle, from how it’s built to how it’s applied by third parties in the real world.”
Lawsuit: Man Claims He Was Improperly Arrested Because of Misuse of Facial Recognition Technology
A Black man was wrongfully arrested and held for nearly a week in jail because of the alleged misuse of facial recognition technology, according to a civil lawsuit filed against the arresting police officers.
Randal Quran Reid, 29, was driving to his mother’s home outside of Atlanta the day after Thanksgiving when police pulled him over, according to Reid.
Officers of the Jefferson Parish Sheriff’s Office used facial recognition technology to identify Reid as a suspect who was wanted for using stolen credit cards to buy approximately $15,000 worth of designer purses in Jefferson and East Baton Rouge Parishes, according to the complaint filed by Reid.
“[The facial recognition technology] spit out three names: Quran plus two individuals,” Gary Andrews, Reid’s lawyer and senior attorney at The Cochran Firm in Atlanta, told ABC News. “It is our belief that the detective in this case took those names … and just sought arrest warrants without doing any other investigation, without doing anything else to determine whether or not Quran was actually the individual that was in the store video.”
Amazon Allegedly Used Secret Algorithm to Raise Prices on Consumers, FTC Lawsuit Reveals
Amazon, the behemoth online retailer, used a secret algorithm called “Project Nessie” to determine how much to raise prices in a manner that competitors would follow, according to a lawsuit filed by the Federal Trade Commission.
The algorithm was able to track how much Amazon’s power in the e-commerce field would get competitors to move their prices, and in instances in which competitors didn’t move their prices, the algorithm would return Amazon’s prices back down, according to The Wall Street Journal.
The algorithm, which is no longer in use, brought the company $1 billion in revenue, sources told the Journal.
What Happened When Toxic Social Media Came for My Daughter
Eating disorders predate the internet, and our culture is saturated by body shaming and unhelpful images. But as one specialist explained to me, because of social media, the degree and depth of warped information that increasingly young children are consuming these days is unparalleled.
The Center for Countering Digital Hate conducted research showing that when its accounts on TikTok paused briefly on and “liked” content about mental health or body image, within 2.6 minutes they were fed content about suicide, and within 8 minutes they were fed content about eating disorders. The Tech Transparency Project’s “Thinstagram” research found that Instagram’s algorithm amplifies and recommends images of dangerously thin women and accounts of “thinfluencers” and anorexia “coaches.”
Worse, the platforms are aware of and profiting from this, as the whistleblower Frances Haugen, formerly of Facebook, helped expose. The platform’s own analysis shows it harms children.
We Know How to Regulate New Drugs and Medical Devices — but We’re About to Let Healthcare AI Run Amok
There’s a great deal of buzz around artificial intelligence and its potential to transform industries. Healthcare ranks high in this regard. If it’s applied properly, AI will dramatically improve patient outcomes by improving early detection and diagnosis of cancer, accelerating the discovery of more efficient targeted therapies, predicting disease progression, and creating ideal personalized treatment plans.
Alongside this exciting potential lies an inconvenient truth: The data used to train medical AI models reflects built-in biases and inequities that have long plagued the U.S. health system and often lacks critical information from underrepresented communities. Left unchecked, these biases will magnify inequities and lead to lives lost due to socioeconomic status, race, ethnicity, religion, gender, disability, or sexual orientation.
The consequences of flawed algorithms can be deadly. A recent study focused on an AI-based tool to promote early detection of sepsis, an illness that kills about 270,000 people each year. The tool, deployed in more than 170 hospitals and health systems, failed to predict sepsis in 67% of patients. It generated false sepsis alerts for thousands of others. The source of the flawed detection, researchers found, was that the tool was being used in new geographies with different patient demographics than those it had been trained on. Conclusion: AI tools do not perform the same across different geographies and demographics, where patient lifestyles, incidence of disease, and access to diagnostics and treatments vary.
Particularly worrisome is the fact that AI-powered chatbots may use LLMs that rely on data not screened for the accuracy of information. False information, bad advice to patients, and harmful medical outcomes can result.
AI Chatbots Are Learning to Spout Authoritarian Propaganda
When OpenAI, Meta, Google, and Anthropic made their chatbots available around the world last year, millions of people initially used them to evade government censorship. For the 70% of the world’s internet users who live in places where the state has blocked major social media platforms, independent news sites, or content about human rights and the LGBTQ community, these bots provided access to unfiltered information that can shape a person’s view of their identity, community, and government.
This has not been lost on the world’s authoritarian regimes, which are rapidly figuring out how to use chatbots as a new frontier for online censorship.
The hope that chatbots can help people evade online censorship echoes early promises that social media platforms would help people circumvent state-controlled offline media. Though few governments were able to clamp down on social media at first, some quickly adapted by blocking platforms, mandating that they filter out critical speech, or propping up state-aligned alternatives.
We can expect more of the same as chatbots become increasingly ubiquitous. People will need to be clear-eyed about how these emerging tools can be harnessed to reinforce censorship and work together to find an effective response if they hope to turn the tide against declining internet freedom.
Children Were Failed by Pandemic Policies, COVID Inquiry Told
Children were disproportionately affected by pandemic policies, with their voices not listened to and no one made responsible by the government for ensuring their legal rights were met, the COVID inquiry has heard.
Questions about how lockdown policies affected young people “weren’t even asked”, said the barrister Jennifer Twite, giving evidence on behalf of Save the Children U.K., Just for Kids Law and the Children’s Rights Alliance.
Children were at the back of the queue when the government made its biggest decisions about lockdown and reopening the economy, said Twite.
Prioritization of venues meant that pubs, restaurants and sports clubs were allowed to reopen before schools, nurseries and other places for children’s activities.
Only 43 of More Than 8,000 Discharged From U.S. Military for Refusing COVID Vaccine Have Rejoined + More
Only 43 of More Than 8,000 Discharged From U.S. Military for Refusing COVID Vaccine Have Rejoined
Only 43 of the more than 8,000 U.S. service members who were discharged from the military for refusing to be vaccinated against COVID-19 have sought to rejoin eight months after the vaccine mandate was officially repealed, according to data provided by the military branches.
Many Republicans argued that the vaccine mandate hurt military recruiting and retention efforts, which was part of the rationale for forcing the Defense Department to cancel the vaccine requirement. The military mandated the vaccine for roughly 17 months, from August 2021 through January 2023, when it was rescinded by law as part of the National Defense Authorization Act. It marked perhaps the first time in U.S. military history that a vaccine requirement was reversed.
But since the repeal, only 19 soldiers have rejoined the Army, while 12 have returned to the Marines, according to service spokespeople. The numbers are even smaller for the Air Force and Navy, where only one and two have rejoined, respectively, the services said.
Pretty Soon, Your VR Headset Will Know Exactly What Your Bedroom Looks Like
Imagine a universe where Meta, and every third-party application it does business with, knows the placement and size of your furniture, whether you have a wheelchair or crib in your living room or the precise layout of your bedroom or bathroom. Analyzing this environment could reveal all sorts of things. Furnishings could indicate whether you are rich or poor, and artwork could give away your religion. A captured marijuana plant might suggest an interest in recreational drugs.
When critics suggest that the metaverse is a giant data grab, they often focus on the risks of sophisticated sensors that track and analyze body-based data. Far less attention has focused on how our new “mixed reality” future — prominently hyped at last week’s Meta Connect conference — may bring us closer to a “total surveillance state.”
The risks of this spatial information have not received as much attention as they deserve. Part of this is because few people understand this technology, and even if they do, it does not seem as scary as tech that is developed to monitor our eyes or surreptitiously record someone at a distance. Concepts like “point clouds,” “scene models,” “geometric meshes,” and “depth data” can be explained away as technical jargon. But allowing wearables to understand their surroundings and report back that information is a big deal.
We should anticipate that companies, governments, and bad actors will find ways to use this information to harm people. We have already seen how location data can be used by bounty hunters to harass people, target women seeking reproductive healthcare, and do an end-run around the Fourth Amendment. Now imagine a spatial data positioning system that is far more precise, down to the centimeter. Whether wearing a headset or interacting with AR holograms on a phone, the real-time location and real-world behaviors and interests of people can be monitored to a degree not currently imaginable.
California Misinfo Law Is Dead — Repeal Bill Also Strengthens Consumer Protections and Raises Doctors’ License Fees
With little fanfare, California Governor Gavin Newsom late last week signed into law a bill that repealed its controversial doctor misinformation statute just a year after it was signed.
Critics, including several physician plaintiffs who had sued the state, argued that it went against the constitutionally guaranteed right to free speech, and a judge had granted a restraining order on its implementation.
The original intent of the bill was to give the Medical Board of California specific language granting it the power to discipline providers found to have conveyed misinformation about COVID vaccines and treatments, including statements they might make on social media or in other public forums such as public protests.
Meet the Four Men Being Held as Political Prisoners in Canada
The Freedom Convoy erupted in January of 2022 after tens of thousands of Canadians, sick of Trudeau’s authoritarian approach to COVID-19, took to the streets of Ottawa in a mass act of civil protest led by truck drivers. For this, the Trudeau administration labeled them as racists and fascists — and then invoked the Emergency Measures Act for the first time in Canada’s history, suspending the civil liberties of Canada’s citizenry.
“Freedom of expression, assembly and association are cornerstones of democracy, but Nazi symbolism, racist imagery and desecration of war memorials are not,” Trudeau infamously said of the largest peaceful protest in Canada’s history, before accusing a Jewish Member of Parliament of “standing with those who wave Nazi flags” for her support of the protest.
Trudeau appeared to be referencing a single swastika flag in a protest of over 10,000 souls — the masked waver of which was never identified. The Canadian government also became convinced that a veteran named Jeremy MacKenzie was using the Freedom Convoy to lead a violent overthrow of the Trudeau government. After the Convoy, MacKenzie was charged with assault, pointing a firearm, using a restricted weapon in a careless manner, and mischief. Yet none of the charges against him were related to the Convoy, and most have since been dropped.
Four men caught in the government’s dragnet have not been as lucky. In February 2022, Anthony Olienick, Chris Carbert, Christopher Lysak, and Jerry Morin were arrested in separate locations throughout Alberta on allegations that they had conspired to murder Royal Canadian Mounted Police officers in Coutts, Alberta, a second protest site, as part of MacKenzie’s group. And though three of the men had no criminal records, they were all denied bail and have been languishing in prison for nearly 600 days. (Crown Prosecutor Steven Johnston declined a request via email for an interview. The RCMP did not reply to a request for comment.)
Up to 200,000 People to Be Monitored for COVID This Winter to Track Infection Rates
Up to 200,000 people will be monitored for COVID-19 this winter in a scaled-down version of the three-year infection survey for the virus.
The new study will run from November 2023 to March 2024, with as many as 32,000 lateral flow tests being used every week.
Scientists will be able to identify any changes in the rate of people infected with COVID-19 being admitted to hospital and assess the potential for increased demand on the NHS.
It is being co-ordinated by the Office for National Statistics (ONS) and the U.K. Health Security Agency (UKHSA).
Britain’s COVID Response Inquiry Enters Second Phase With Political Decisions in the Spotlight
Britain’s inquiry into the response to the coronavirus pandemic and its impact on the nation entered its second phase Tuesday, with political decision-making around major developments, such as the timing of lockdowns, set to take center stage.
Much criticism preceded the start of the so-called Module 2, the second of four planned phases of the inquiry, as it was set to hear in person from only one bereaved family member. Representatives of the bereaved have said that the lack of more live testimonies is “deeply concerning.”
This stage of the inquiry will focus on the British government’s actions during the crisis between Jan. 2020, when it first became evident that the virus was spreading around the world, and June 2022, when the inquiry was set up. The first phase, which concluded in July, looked at the country’s preparedness for the pandemic.
Tech Giants Slam ‘Draconian’ New Sri Lanka Online Safety Bill
Just a few days after politicians in the U.K. signed off on the highly debated Online Safety Bill, a similarly named proposed law is now sparking debate more than 5,000 miles away.
Though it is billed as a means to halt online harm and fake news, tech giants have deemed the new Sri Lanka Online Safety Bill a “draconian system to stifle dissent.” Other experts have warned that its new executive powers and vague provisions could ultimately lead to increased online censorship and abuses of free speech and privacy.
The Sri Lanka Online Safety Bill aims to create a legal framework that reduces online harm, especially to children, by halting the spread of harmful content and fake news online.
Concerns about the bill include vague definitions of harmful content, which could lead to censorship of legitimate material, and a lack of safeguards for citizens’ freedom of expression.