COVID Lockdowns Were a Giant Experiment. It Was a Failure. A Key Lesson of the Pandemic.
On April 8, 2020, the Chinese government lifted its lockdown in Wuhan. It had lasted 76 days — two and a half months during which no one was allowed to leave this industrial city of 11 million people, or even leave their homes. Until the Chinese government deployed it, such a strict batten-down-the-hatches tactic had never been used to combat a pandemic.
Yes, for centuries infected people had been quarantined in their homes, where they would either recover or die. But that was very different from locking down an entire city; the World Health Organization called it “unprecedented in public health history.” The word the citizens of Wuhan used to describe their situation was fengcheng — “sealed city.” But the English-language media was soon using the word lockdown instead — and reacting with horror.
“That the Chinese government can lock millions of people into cities with almost no advance notice should not be considered anything other than terrifying,” a China human rights expert told The Guardian. Lawrence O. Gostin, a professor of global health law at Georgetown University, told the Washington Post that “these kinds of lockdowns are very rare and never effective.”
One of the great mysteries of the pandemic is why so many countries followed China’s example. In the U.S. and the U.K. especially, lockdowns went from being regarded as something that only an authoritarian government would attempt to an example of “following the science.” But there was never any science behind lockdowns — not a single study had ever been undertaken to measure their efficacy in stopping a pandemic. When you got right down to it, lockdowns were little more than a giant experiment.
Biden Issues Sweeping Executive Order That Touches AI Risk, Deepfakes, Privacy
On Monday, President Joe Biden issued an executive order on AI that outlines the federal government’s first comprehensive regulations on generative AI systems. The order includes testing mandates for advanced AI models to ensure they can’t be used for creating weapons, suggestions for watermarking AI-generated media, and provisions addressing privacy and job displacement.
In the United States, an executive order allows the president to direct the operations of the federal government. Using his authority to set the terms of government contracts, Biden aims to shape AI standards by stipulating that federal agencies may only contract with companies that comply with the newly outlined AI regulations, in effect using the federal government’s purchasing power to drive compliance.
Amid fears of existential AI harms that made big news earlier this year, the executive order includes a notable focus on AI safety and security. For the first time, developers of powerful AI systems that pose risks to national security, economic stability, or public health will be required to notify the federal government when training such a model. Under the Defense Production Act, they will also have to share safety test results and other critical information with the U.S. government before making those models public.
While the order calls for internal guidelines to protect consumer data, it stops short of mandating robust privacy protections. According to the Fact Sheet, the administration recognizes the need for comprehensive privacy legislation to fully protect Americans’ data. The order also touches on the possible consequences of data collection and sharing by AI systems, signaling that privacy concerns are on the federal radar, even if they’re not extensively covered by the order.
Privacy Will Die to Deliver Us the Thinking and Knowing Computer
We’re getting a first proper look at Humane’s much-hyped “AI pin” (whatever that is) on November 9, and personalized AI memory startup Rewind is launching a pendant to track not only your digital life but your physical one, sometime in the foreseeable future.
Buzz abounds about OpenAI’s Sam Altman meeting with Apple’s longtime design deity Jony Ive regarding building an AI hardware gadget of some kind, and murmurs in the halls of VC offices everywhere herald the coming of an iPhone moment for AI in breathless tones.
Of course, the potential is immense: a device that extends what ChatGPT has been able to do with generative AI into many other aspects of our lives — hopefully with a bit more smarts and practicality. But the cost is considerable, and not the financial cost, which is just more wealth transfer from the coal reserves of rich family offices and high-net-worth individuals to the insatiable fires of startup burn rates. No, I’m talking about the price we pay in privacy.
The death of privacy has been called, called off, countered and repeated many times over the years (just Google the phrase) in response to any number of technological advances, including live location sharing on mobile devices; the advent and eventual ubiquity of social networks and their resulting social graphs; satellite mapping and high-resolution imagery; massive leaks of credentials and personally identifiable information (PII); and much, much more.
Time to Enhance Digital Privacy Protections for Minors
Few issues strike at the core of parents’ concerns more than protecting children’s privacy on the internet. A quarter century ago, Congress enacted the Children’s Online Privacy Protection Act (COPPA) to help give parents more control over the personal information websites collect on their children. Congress needs to revise the act to fully protect children and families from misuse of confidential information on the internet.
The Federal Trade Commission (FTC) enforces COPPA through regulations that restrict how websites collect data on children, to help ensure the confidentiality, security, and integrity of that information. While COPPA was historic when Congress and the FTC first implemented it, the measure, which hasn’t been updated in 10 years, is clearly no longer strong enough to meet the challenges of the modern digital environment.
In many cases, there are no clear ways to hold bad actors liable under COPPA for failing to protect our children’s data. In other cases, COPPA has simply proven not to have gone far enough in protecting children from digital abuses.
The rule only protects the data of minors younger than 13, even though teenagers (traditionally more active on social media and the internet more generally) face a disproportionate risk of having their information collected. For this reason, California recently passed the Age-Appropriate Design Code Act.
The bill, which takes effect next year, defines children as anyone under 18 and will require apps and websites to enact stronger privacy protections for these vulnerable individuals. Other states are seeking to do the same. But this is a federal issue, and it will take a federal solution to address the problem adequately.
Why Congress Keeps Failing to Protect Kids Online
Roughly a decade has passed since experts began to appreciate that social media may be truly hazardous for children, and especially for teenagers. As with teenage smoking, the evidence has accumulated slowly but leads in clear directions. The heightened rates of depression, anxiety, and suicide among young people are measurable and disheartening. When I worked for the White House on technology policy, I would hear from the parents of children who had suffered exploitation or who died by suicide after terrible experiences online. They were asking us to do something.
The severity and novelty of the problem suggest the need for a federal legislative response, and Congress can’t be said to have ignored the issue. In fact, by my count, since 2017 it has held 39 hearings that have addressed children and social media, and nine wholly devoted to just that topic.
Congress gave Frances Haugen, the Facebook whistleblower, a hero’s welcome. Executives from Facebook, YouTube and other firms have been duly summoned and blasted by angry representatives. But just what has Congress actually done? The answer is nothing.
This German Airport Could Be the First to Offer Face-Scanning Technology for All Passengers
Queuing to check in and board a flight is a notoriously tedious experience. But one airport in Germany wants to significantly speed up the process for passengers.
Frankfurt airport says it will begin offering biometric check-in services for all travelers in the next few months. It already offers the facial recognition system for flyers on Lufthansa and its affiliated Star Alliance routes (including United, Air China and Air India).
Instead of queuing at a desk to have their ID and documents checked, passengers will use their face as their boarding pass: they’ll have their faces scanned as they pass checkpoints rather than presenting documents.
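For context on what happens at such a gate, face-recognition boarding generally reduces to comparing a stored enrollment template against a live scan and opening the gate only when the two match. The Python sketch below is a toy illustration of that single step; the embedding size, the 0.8 threshold, and every function name are invented for the example, and real deployments add trained recognition models, liveness detection and certified hardware.

```python
import numpy as np

# Toy sketch only: the matching step behind biometric boarding.
# Embedding dimension, threshold and names are illustrative assumptions.

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def gate_decision(enrolled: np.ndarray, live_scan: np.ndarray,
                  threshold: float = 0.8) -> bool:
    """Open the gate only if the live scan matches the enrolled template."""
    return cosine_similarity(enrolled, live_scan) >= threshold

# Stand-in 128-dimensional embeddings in place of a real model's output.
rng = np.random.default_rng(0)
enrolled = rng.normal(size=128)
same_person = enrolled + rng.normal(scale=0.1, size=128)  # slight drift
stranger = rng.normal(size=128)

print(gate_decision(enrolled, same_person))  # True: similarity near 1
print(gate_decision(enrolled, stranger))     # False: similarity near 0
```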
Meta Is Asking Users for Handouts Amid New Regulations in Europe
Being waterboarded with advertisements sort of feels like second nature on the likes of Facebook and Instagram, but for some in the EU willing to pay, that will change.
Meta revealed its subscription service for an ad-free version of Facebook and Instagram in a blog post on Monday. Users in several European countries will now have the option to pay €9.99 on desktop or €12.99 on mobile — that’s about $10.60 and $13.78 USD, respectively — for an ad-free version of Instagram and Facebook. For now, the fee will cover all accounts linked to the account that purchases the subscription, but starting on March 1, 2024, users will have to fork over €6 ($6.36) on desktop or €8 ($8.48) on mobile for each additional account.
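To make that fee structure concrete, here is a minimal sketch of the pricing rules as described above; the function name, the string-keyed platform lookup, and the example household are my own illustrative choices, not anything Meta publishes.

```python
# Illustrative model of Meta's EU ad-free pricing as reported above.
# Monthly prices in euros; the article's dollar figures use a ~1.06 rate.

BASE = {"desktop": 9.99, "mobile": 12.99}   # first account
EXTRA = {"desktop": 6.00, "mobile": 8.00}   # each additional account

def monthly_cost_eur(platform: str, extra_accounts: int,
                     after_march_1_2024: bool) -> float:
    # Until March 1, 2024, one subscription covers all linked accounts;
    # after that, each additional linked account is billed separately.
    extra = EXTRA[platform] * extra_accounts if after_march_1_2024 else 0.0
    return BASE[platform] + extra

# A mobile subscriber with two extra linked accounts:
print(monthly_cost_eur("mobile", 2, after_march_1_2024=False))  # 12.99
print(monthly_cost_eur("mobile", 2, after_march_1_2024=True))   # 28.99
```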
Earlier this month, news broke that Meta was toying with having users pay for an ad-free experience on its platforms. The new subscription system is Meta’s attempt to navigate the EU’s recent regulations, which clamp down on Big Tech’s abuse of data privacy and targeted ads. Meta previously cited the EU’s General Data Protection Regulation and the Digital Markets Act, which aims to tear down the “walled gardens” of Big Tech “gatekeepers” like Meta, Amazon, and Apple.
Britain Is ‘Omni-Surveillance’ Society, Watchdog Warns
Britain is an “omni-surveillance” society with police forces in the “extraordinary” position of holding more than 3 million custody photographs of innocent people more than a decade after being told to destroy them, the independent surveillance watchdog has said.
Fraser Sampson, who will end his term as the Home Office’s biometrics and surveillance commissioner this month, said there “isn’t much not being watched by somebody” in the U.K. and that the regulatory framework was “inconsistent, incomplete and in some areas incoherent.”
He spoke of his concerns that the law was not keeping up with technological advances in artificial intelligence (AI) that allow millions of images to be sorted through within moments and that there were insufficient checks and balances on the police.
The Mystery of the British Government’s Vanishing WhatsApps
Downing Street’s former top officials face grillings from Britain’s public inquiry into COVID-19 this week. But the headline news may come not from their testimony but from the WhatsApp messages they were sending at the time.
As the nation awaits evidence sessions likely to reveal pandemic-era chaos at the heart of Downing Street, nerves in Whitehall are on edge. The inquiry has demanded the mass disclosure of messages from the encrypted app, despite the government’s unsuccessful attempt to block their release.
The furor over those messages and the anticipation of more to come have reopened big questions about government transparency in the digital age — and in particular, the increasing use of the “disappearing messages” function on WhatsApp by senior officials, political advisors and ministers.
Some of those involved argue they should be allowed the same in-person privacy they enjoy in Westminster’s corridors and canteens — and that WhatsApp messages are no different to quiet “water-cooler conversations” in any office environment.