Big Brother News Watch
The New Era of Social Media Looks as Bad for Privacy as the Last One + More
The New Era of Social Media Looks as Bad for Privacy as the Last One
When Elon Musk took over Twitter in October 2022, experts warned that his proposed changes — including less content moderation and a subscription-based verification system — would lead to an exodus of users and advertisers. A year later, those predictions have largely been borne out. Advertising revenue on the platform has declined 55% since Musk’s takeover, and the number of daily active users fell from 140 million to 121 million in the same time period, according to third-party analyses.
As users moved to other online spaces, the past year could have marked a moment for other social platforms to change the way they collect and protect user data. “Unfortunately, it just feels like no matter what their interest or cultural tone is from the outset of founding their company, it’s just not enough to move an entire field further from a maximalist, voracious approach to our data,” says Jenna Ruddock, policy counsel at Free Press, a nonprofit media watchdog organization, and a lead author on a new report examining Bluesky, Mastodon, and Meta’s Threads, all of which have jockeyed to fill the void left by Twitter, which is now named X.
Companies like Google, X, and Meta collect vast amounts of user data, in part to better understand and improve their platforms but largely to be able to sell targeted advertising. But the collection of sensitive information around users’ race, ethnicity, sexuality, or other identifiers can put people at risk.
Even for users who want to opt out of ravenous data collection, privacy policies remain complicated and vague, and many users don’t have the time or knowledge of legalese to parse through them. At best, says Nora Benavidez, director of digital justice and civil rights at Free Press, users can figure out what data won’t be collected, “but either way, the onus is really on the users to sift through policies, trying to make sense of what’s really happening with their data,” she says. “I worry these corporate practices and policies are nefarious enough and befuddling enough that people really don’t understand the stakes.”
Trust Us With AI, Say the Big Tech Titans. That’s What the Banks Said Before the 2008 Crisis
When the great and the good of Silicon Valley pitched up in Buckinghamshire for Rishi Sunak’s AI safety summit, they came with a simple message: trust us with this new technology. Don’t be tempted to stifle innovation by heavy-handed rules and restrictions. Self-regulation works just fine.
To which the simple response should be: remember 2008, when light-touch supervision allowed banks to indulge in an orgy of speculation that took the global financial system to the brink of collapse.
In the years leading up to the crisis, banks had developed products that were both lucrative and — as it turned out — highly toxic. The drive for profits trumped prudence. Only in retrospect were the dangers recognized of allowing the banks to mark their own homework. Financial regulation was subsequently tightened, but only after a deep recession from which the global economy has never fully recovered.
Sunak should learn from that experience. The focus at the Bletchley Park summit has been on the existential threat posed by AI: the risk that, if left unchecked, the machines could lead to human extinction. That’s a worthy discussion point, especially given the rapid advances in creating super-intelligent machines. Elon Musk might be right when he says AI poses a “civilizational risk.”
But, as the TUC and others pointed out earlier this week, the focus on the longer-term challenges should not come at the expense of responding to a number of more immediate issues. These include the likely impact of AI on jobs, the increasing market dominance of big tech, and the use of AI to spread disinformation.
YouTube Limits Harmful Repetitive Content for Teens
In a move designed to prevent teenagers from repetitively watching potentially harmful videos on YouTube, the streaming platform announced Thursday that it will limit repeated recommendations of videos featuring certain themes to U.S. teens.
Currently, YouTube is limiting repetitive exposure to videos that compare physical features and favor some types over others, idealize specific fitness levels or body weights, or depict social aggression in the form of non-contact fights and intimidation. While these videos don’t violate the platform’s policies, repeated viewings could be harmful to some youth. YouTube already prohibits videos of fights between minors.
James Beser, director of product management for YouTube Kids and Youth, said that the company’s youth and family advisory committee, which comprises independent experts in child development and digital learning and media, helped YouTube identify categories of content that “may be innocuous as a single video, but could be problematic for some teens if viewed in repetition.”
The new policy comes amidst heavy scrutiny and criticism of the way social media platforms can influence youth mental health and well-being.
Conservative Nebraska Lawmakers Push Study to Question Pandemic-Era Mask, Vaccine Requirements
It didn’t take long for conservative Nebraska lawmakers to get to the point of a committee hearing held Wednesday to examine the effectiveness of public health safety policies from the height of the COVID-19 pandemic.
Following a brief introduction, Nebraska Nurses Association President Linda Hardy testified for several minutes about the toll the pandemic has taken on the state’s nursing ranks. The number of nurses dropped by nearly 2,600 from the end of 2019 to the end of 2022, said Hardy, a registered nurse for more than 40 years. She pointed to a study by the Nebraska Center for Nursing that showed nurses were worried about low pay, overscheduling, understaffing and fear of catching or infecting family with the potentially deadly virus.
“How many nurses quit because they were forced into vaccination?” asked Sen. Brian Hardin, a business consultant from Gering. When Hardy said she hadn’t heard of nurses leaving the profession over vaccination requirements, Hardin shot back. “Really?” he asked. “Because I talked to some nurses in my district who retired exactly because of that.”
Questions about masks, mandatory shutdowns and the effectiveness of COVID vaccines came up time and again during the hearing. Those invited to testify included members of Nebraska medical organizations and government emergency response agencies.
U.S. Hospital Groups Sue Biden Administration to Block Ban on Web Trackers
The biggest U.S. hospital lobbying group on Thursday sued the Biden administration over new guidance barring hospitals and other medical providers from using trackers to monitor users on their websites.
The American Hospital Association (AHA), along with the Texas Hospital Association and two nonprofit Texas health systems, filed a lawsuit against the U.S. Department of Health and Human Services (HHS) in federal court in Fort Worth, Texas. The lawsuit accuses the agency of overstepping its authority when it issued the guidance in December.
The guidance warns healthcare providers that allowing a third-party technology company like Google or Meta to collect and analyze internet protocol (IP) addresses and other information from visitors to their public websites or apps could be a violation of the Health Insurance Portability and Accountability Act (HIPAA). Federal law bans the public disclosure of individuals’ private health information to protect them against discrimination, stigma or other negative consequences.
Court records show several hospitals have been hit with proposed class actions that cite the guidance, accusing them of mishandling personal health information through the use of these trackers.
British PM Rishi Sunak Secures ‘Landmark’ Deal on AI Testing
British Prime Minister Rishi Sunak said Thursday that under a new agreement, “like-minded governments” would be able to test eight leading tech companies’ AI models before they are released.
Closing out the two-day artificial intelligence summit in Bletchley Park on Thursday, Sunak announced the agreement signed by Australia, Canada, the European Union, France, Germany, Italy, Japan, Korea, Singapore, the U.S. and the U.K. to test leading companies’ AI models.
“Until now the only people testing the safety of new AI models have been the very companies developing it. That must change,” said Sunak to a room full of journalists.
Sunak said the eight companies — Amazon Web Services, Anthropic, Google, Google DeepMind, Inflection AI, Meta, Microsoft, Mistral AI and OpenAI — had agreed to “deepen” the access already given to his Frontier AI Taskforce, the forerunner to the new institute. That access is currently granted on a voluntary basis, though under its executive order the U.S. government has imposed binding requirements on companies to hand over certain safety information.
Parents of Teens More Concerned About Internet Addiction Than Drug Use, Study Finds: ‘Problematic Patterns’ + More
Parents of Teens More Concerned About Internet Addiction Than Drug Use, Study Finds: ‘Problematic Patterns’
More parents are concerned about their adolescent children’s internet addiction than about substance addiction, according to the results of a survey published in JAMA Network Open on Oct. 26. The researchers conducted an online survey of 1,000 parents of U.S. youth between the ages of 9 and 15 to understand their perceptions and concerns about their kids’ internet use. Participants completed the survey between June 17 and July 5, 2022.
The survey assessed the parents’ perceptions of the risks and benefits of internet use in four main areas: their children’s physical and cognitive development, their children’s safety, the potential for addiction, and family connectedness.
Excessive internet use has been associated with mental health problems that include higher rates of alcohol dependence, depression, anxiety and insomnia. Too much time on the internet has also been linked to difficulty socializing with peers, having healthy conversations, being comfortable in social settings and showing empathy, as previous studies have shown.
Overall, concerns about internet addiction outweighed concerns about substance problems. The potential for addiction was seen as greatest for social media use and video gaming. The survey highlighted the growing influence of internet use in kids’ lives and the importance of monitoring for potentially harmful use.
Sweeping Ban on COVID Vaccine Mandates by Private Employers Heads to Governor
A sweeping ban on COVID-19 vaccine mandates for employees of private Texas businesses is on its way to Gov. Greg Abbott’s desk, carrying with it a $50,000 fine for employers who punish workers for refusing the shot.
Senate Bill 7, by state Sen. Mayes Middleton, R-Galveston, cleared its final hurdle Tuesday when senators agreed on a 17-11 vote to accept the House version of the legislation, which raised the fine from the $10,000 initially proposed in the bill.
The legislation, which Republican lawmakers have been trying to pass since 2021, offers no exceptions for doctors’ offices, clinics or other health facilities. Its protections also extend to unpaid volunteers and to students working in medical internships or other unpaid positions as part of graduation requirements.
The legislation allows private employers to require unvaccinated employees and contractors to wear protective gear, such as masks, or to enact other “reasonable” measures to protect medically vulnerable people who work at or visit their places of business or medical facilities.
The legislation makes it illegal, however, for any employer to take action against or otherwise place requirements on an unvaccinated employee that the Texas Workforce Commission determines would adversely affect the employee or constitute punishment.
Musk Tells Rogan Twitter ‘Suppressed’ Republicans ‘10 Times’ More Than Dems
Elon Musk said that conservative users of X, formerly Twitter, were suppressed at “10 times” the rate of liberal users before he took over the company.
Musk, the platform’s CEO, acquired X in October 2022 and quickly made headlines after signing off on a series of reports published by independent journalist Matt Taibbi in late 2022, nicknamed the “Twitter Files.” Taibbi’s work purportedly showed that the platform’s previous ownership, led by Twitter’s founder and previous CEO Jack Dorsey, had worked with the federal government to censor conservative and Republican content online.
Speaking with podcaster Joe Rogan on Tuesday, Musk described Twitter as once acting as “an arm of the government,” and claimed that Dorsey “didn’t really know” that such actions were taking place.
“The degree to which — and by the way, Jack didn’t really know this — but the degree to which Twitter was simply an arm of the government was not well understood by the public,” Musk told Rogan, according to a clip of Tuesday’s episode of The Joe Rogan Experience that was shared to X by the group Mythinformed.
Musk went on to claim that the “old” version of the social platform would suppress not only views considered “far-right,” but also some that might be considered “middle of the road” or “mildly right.”
Exclusive: U.S. Gave $30 Million to Top Chinese Scientist Leading China’s AI ‘Race’
The U.S. government gave at least $30 million in federal grants for research led by a scientist who is now at the forefront of China’s race to develop the most advanced artificial intelligence — which he compared to the atomic bomb due to its military importance, a Newsweek investigation has revealed.
Pentagon funding for Song-Chun Zhu, the former director of a pioneering AI center at the University of California Los Angeles (UCLA), continued even as he set up a parallel institute near Wuhan, took a position at a Beijing university whose primary goal is to support Chinese military research, and joined a Chinese Communist Party “talent plan” whose members are tasked with transferring knowledge and technology to China.
Newsweek’s revelations underline how the United States, with its open academic environment, has not only been a source for China of advanced technology with military applications but has also actively collaborated with and funded scientists from its main rival. Only as tensions with China have grown over everything from global flashpoints to trade and technology has the research come under growing scrutiny.
High Court Struggles on Whether Officials May Block Social Media Critics
The Supreme Court on Tuesday struggled to agree on how to determine when public officials can block critics from their private social media accounts, reviewing two cases that will have broad implications for citizen interactions with politicians online.
All nine justices seemed to acknowledge the challenge and importance of defining when government employees are acting in an official capacity online, and therefore bound by First Amendment restrictions on censorship; and when they are acting as private citizens, with their own individual free speech rights.
Biden’s AI Order Is Massive, but Far From Enough to Address the Technology’s Risks
The Biden administration on Monday unveiled its most comprehensive effort to regulate powerful artificial intelligence technologies yet, issuing a sweeping executive order. Its aim: ensuring American leadership in AI while preventing AI abuses that could threaten Americans’ civil rights and safety.
But despite its wide-ranging scope, experts said the executive order fails to address a number of issues raised by AI, from consumer privacy and security to competition.
“This [plan] is silent on AI and democracy. … This is silent about the fact that we could use AI to promote citizen engagement,” said Beth Simone Noveck, director of the Burnes Center for Social Change at Northeastern University in Boston. “There is nothing in here about public consultation. There’s nothing in here about engaging with citizens to develop any of these things that are going to come next.”
Despite hosting a number of hearings and various lawmakers proposing different forms of legislation, Congress is still far from passing any kind of federal legislation related to AI.
Parents Sue California Over Religious Exemptions for School-Mandated Vaccines as Newsom Seeks to Add COVID Jab + More
Parents Sue California Over Religious Exemptions for School-Mandated Vaccines as Newsom Seeks to Add COVID Jab
Several parents backed by a conservative group are suing California over a state law that eliminated religious exemptions for school-mandated vaccines, as Democratic Gov. Gavin Newsom and the state legislature reportedly move to add the COVID-19 jab to the list of required inoculations for schoolchildren.
The federal lawsuit brought by Advocates for Faith and Freedom, a nonprofit law firm dedicated to protecting religious liberty, challenges SB 277, arguing the legislation restricting religious exemptions violates the constitutional rights of parents to make medical decisions for their children. The complaint, filed Tuesday in the Southern District of California, lists state Attorney General Rob Bonta as a defendant.
SB 277, which was signed by former Gov. Jerry Brown in June 2015 and took effect on Jan. 1, 2016, eliminated nonmedical exemptions from state-mandated immunizations for children entering public or private schools. It applies to children enrolled in public or private elementary and secondary schools, daycare centers, and preschools.
In 2022, the lawsuit says, the state legislature and Newsom “made attempts to add COVID-19 to the list of required vaccines for school entrance even though the virus poses a small risk to schoolchildren.” Meanwhile, the complaint argues, California “allows immigrant and homeless children to attend public and private schools without proof of vaccination.”
U.S. Supreme Court Weighs if Public Officials Can Block Critics on Social Media
The U.S. Supreme Court on Tuesday waded into the issue of free speech rights in the digital age during arguments in cases from California and Michigan involving whether public officials may legally block others on social media, a function often used on these platforms to stifle critics.
Lower courts reached different conclusions in the two cases, reflecting the legal uncertainty over whether such social media activity is bound by the U.S. Constitution’s First Amendment limits on the government’s ability to restrict speech.
The justices are tasked with deciding whether the public officials engaged in a “state action” in blocking critics from social media accounts or were merely acting in their personal capacity. The First Amendment constrains government actors but not private individuals.
The first case involves two public school board trustees from Poway, California, who appealed a lower court’s ruling in favor of parents who sued them after being blocked from the officials’ personal accounts on X (called Twitter at the time) and Facebook, which is owned by Meta Platforms.
Watch: COVID Authoritarians Want Forgiveness — Here’s Why They Don’t Deserve It
Do authoritarians deserve a chance to be treated with grace and forgiveness? The question is circulating regularly these days in the wake of the complete failure of COVID pandemic response and the victory of the anti-mandate movement. The answer relies on a series of counter-questions based on logic and predictable outcomes. It’s the kind of discussion that COVID cultists don’t want to have; they just want everyone to forget because they now have something to lose politically.
Scott Galloway, Professor at the NYU Stern School of Business and member of the World Economic Forum’s “Global Leaders of Tomorrow” list, is one of the cultists who now wants to be given a free pass as he debates the issue in Real Time with Bill Maher.
The question that we need to ask Galloway is: How forgiving was he when confronted with people who opposed his authoritarianism? Galloway was rabidly pro-mandate. He consistently called for harsher punishments for people refusing to comply, and he demanded that the unvaccinated be treated as second-tier citizens banned from places of business. As he argued in a blog post titled ‘Half Of America Has Its Head Up Its Ass. It’s Time For A Vaccine Mandate’: “Enough already. Federal law should require any citizen who wants to cash a government check, use public transport, or enter a place of business to show proof of vaccination … ”
There were calls to fine the unvaccinated, imprison people who question the vaccine, put the unvaccinated on home lockdown and even take away their children. In some states, like New York, there was active legislation put forward to create detention facilities for people who did not comply (COVID camps). That is some serious Stalinist behavior, and we are still waiting for it to be addressed and for certain political leaders to be punished.
TikTok, Snapchat and Others Sign Pledge to Tackle AI-Generated Child Sex Abuse Images
Tech firms including TikTok, Snapchat and Stability AI have signed a joint statement pledging to work together to counter child sex abuse images generated by artificial intelligence.
Britain announced the joint statement — which also listed the U.S., German and Australian governments among its 27 signatories — at an event on Monday held in the run-up to a global summit on AI safety hosted by the U.K. this week.
The statement reads: “We resolve to work together to ensure that we utilize responsible AI for tackling the threat of child sexual abuse and commit to continue to work collaboratively to ensure the risks posed by AI to tackling child sexual abuse do not become insurmountable.”
Britain cited data from the Internet Watch Foundation showing that in one dark web forum, users had shared nearly 3,000 images of AI-generated child sexual abuse material.
Marin County to Require Masks in Patient-Care Settings Beginning Nov. 1
The threat of COVID and other respiratory viruses during flu season has Marin County requiring masks in patient-care settings.
The new mandate in Marin County requires patients, staff and visitors to wear a mask in hospitals and skilled nursing facilities for the fall and winter virus season from Nov. 1 through March 31 of next year.
County health officials said the mandate will apply to all individuals while they are in patient care areas. Children under age 6 and those with a valid medical reason are exempt.
UCSF’s Dr. Peter Chin-Hong doesn’t believe there will be any further mask mandates for schools, restaurants or other settings beyond patient-care facilities.
‘Wholly Ineffective and Pretty Obviously Racist’: Inside New Orleans’ Struggle With Facial-Recognition Policing
In the summer of 2022, with a spike in violent crime hitting New Orleans, the city council voted to allow police to use facial-recognition software to track down suspects — a technology that the mayor, police and businesses supported as an effective, fair tool for identifying criminals quickly.
A year after the system went online, data show that the results have been almost exactly the opposite.
Records obtained and analyzed by POLITICO show that computer facial recognition in New Orleans has low effectiveness, is rarely associated with arrests and is disproportionately used on Black people. A review of nearly a year’s worth of New Orleans facial recognition requests shows that the system failed to identify suspects a majority of the time — and that nearly every use of the technology from last October to this August was on a Black person.
Facial recognition has many uses — you can use it to unlock your phone, to help find yourself in group photos and to board a flight. But no use of the $3.8 billion industry has concerned lawmakers and civil rights advocates more than law enforcement.
2 in 3 Physicians Concerned About AI Driving Diagnosis, Treatment Decisions: Survey
Two in three physicians are concerned about artificial intelligence’s (AI) influence on diagnosis and treatment decisions, according to a recent survey.
According to the Medscape survey released Monday, 65% of physicians are “very” or “somewhat” concerned about AI driving diagnosis and treatment decisions, while 36% said they were “not very” or “not at all” concerned.
The survey also found that 42% of physicians are “enthusiastic” about AI’s future in the workplace, while 30% said they are “neutral” about the technology’s future and 28% said they are “apprehensive” about it.
Former Food and Drug Administration (FDA) Commissioner Scott Gottlieb said AI could take over aspects of doctors’ jobs sooner rather than later in a July op-ed.
Advocates Raise Privacy, Safety Concerns as NYPD and Other Departments Put Robots on Patrol
In 2014, the creators of Knightscope told USA TODAY they wanted to create a fleet of robots that would cruise through shopping malls, corporate campuses and other public spaces, collecting data and alerting law enforcement when they spot trouble.
Nearly 10 years later, one of the public safety technology company’s 5-foot-2-inch, 400-pound robots is working the graveyard shift, patrolling the Times Square subway station for the country’s largest police department alongside a human New York Police Department officer.
Former Texas police officer and Knightscope co-founder Stacey Stephens said just under a dozen departments now use the Knightscope 5, or K5, and success stories from its deployment in the private sector are attracting the attention of other law enforcement agencies.
Law enforcement’s growing use of devices like the K5 and the robotic dogs produced by Boston Dynamics has been criticized by communities and privacy advocates concerned about the technology’s efficacy, increased surveillance, the potential for weaponization and the lack of clear laws and policies governing its use.
COVID Lockdowns Were a Giant Experiment. It Was a Failure. A Key Lesson of the Pandemic. + More
COVID Lockdowns Were a Giant Experiment. It Was a Failure. A Key Lesson of the Pandemic.
On April 8, 2020, the Chinese government lifted its lockdown in Wuhan. It had lasted 76 days — two and a half months during which no one was allowed to leave this industrial city of 11 million people, or even leave their homes. Until the Chinese government deployed this tactic, a strict batten-down-the-hatches approach had never been used before to combat a pandemic.
Yes, for centuries infected people had been quarantined in their homes, where they would either recover or die. But that was very different from locking down an entire city; the World Health Organization called it “unprecedented in public health history.” The word the citizens of Wuhan used to describe their situation was fengcheng — “sealed city.” But the English-language media was soon using the word lockdown instead — and reacting with horror.
“That the Chinese government can lock millions of people into cities with almost no advance notice should not be considered anything other than terrifying,” a China human rights expert told The Guardian. Lawrence O. Gostin, a professor of global health law at Georgetown University, told the Washington Post that “these kinds of lockdowns are very rare and never effective.”
One of the great mysteries of the pandemic is why so many countries followed China’s example. In the U.S. and the U.K. especially, lockdowns went from being regarded as something that only an authoritarian government would attempt to an example of “following the science.” But there was never any science behind lockdowns — not a single study had ever been undertaken to measure their efficacy in stopping a pandemic. When you got right down to it, lockdowns were little more than a giant experiment.
Biden Issues Sweeping Executive Order That Touches AI Risk, Deepfakes, Privacy
On Monday, President Joe Biden issued an executive order on AI that outlines the federal government’s first comprehensive regulations on generative AI systems. The order includes testing mandates for advanced AI models to ensure they can’t be used for creating weapons, suggestions for watermarking AI-generated media, and provisions addressing privacy and job displacement.
In the United States, an executive order allows the president to manage and operate the federal government. Using his authority to set terms for government contracts, Biden aims to influence AI standards by stipulating that federal agencies must only enter into contracts with companies that comply with the government’s newly outlined AI regulations. This approach utilizes the federal government’s purchasing power to drive compliance with the newly set standards.
Amid fears of existential AI harms that made big news earlier this year, the executive order includes a notable focus on AI safety and security. For the first time, developers of powerful AI systems that pose risks to national security, economic stability, or public health will be required to notify the federal government when training a model. They will also have to share safety test results and other critical information with the U.S. government in accordance with the Defense Production Act before making them public.
While the order calls for internal guidelines to protect consumer data, it stops short of mandating robust privacy protections. According to the Fact Sheet, the administration recognizes the need for comprehensive privacy legislation to fully protect Americans’ data. The order also touches on the possible consequences of data collection and sharing by AI systems, signaling that privacy concerns are on the federal radar, even if they’re not extensively covered by the order.
Privacy Will Die to Deliver Us the Thinking and Knowing Computer
We’re getting a first proper look at Humane’s much-hyped “AI pin” (whatever that is) on November 9, and personalized AI memory startup Rewind is launching a pendant to track not only your digital life but also your physical one sometime in the foreseeable future.
Buzz abounds about OpenAI’s Sam Altman meeting with Apple’s longtime design deity Jony Ive regarding building an AI hardware gadget of some kind, and murmurs in the halls of VC offices everywhere herald the coming of an iPhone moment for AI in breathless tones.
Of course, the potential is immense: A device that takes and extends what ChatGPT has been able to do with generative AI to many other aspects of our lives — hopefully with a bit more smarts and practicality. But the cost is considerable; not the financial cost, which is just more wealth transfer from the coal reserves of rich family offices and high-net-worth individuals to the insatiable fires of startup burn rates. No, I’m talking about the price we pay in privacy.
The death of privacy has been called, called off, countered and repeated many times over the years (just Google the phrase) in response to any number of technological advances, including things like mobile device live location sharing; the advent and eventual ubiquity of social networks and their resulting social graphs; satellite mapping and high-resolution imagery; massive credential and personal identifiable information (PII) leaks, and much, much more.
Time to Enhance Digital Privacy Protections for Minors
Few issues strike at the core of parents’ concerns more than protecting children’s privacy on the internet. A quarter century ago, Congress enacted the Children’s Online Privacy Protection Act (COPPA) to help give parents more control over the personal information websites collect on their children. Congress needs to revise the act to fully protect children and families from misuse of confidential information on the internet.
The Federal Trade Commission (FTC) enforces COPPA through regulations that impose restrictions on websites that collect data on children to help ensure the confidentiality, security, and integrity of this information. While COPPA proved historic when Congress and the FTC first implemented it, it is clear that the measure, which Congress and the commission haven’t updated in 10 years, is no longer strong enough to meet the challenges of the modern-day digital environment.
In many cases, there are no clear ways to hold bad actors liable under COPPA for failing to protect our children’s data. In other cases, COPPA has simply proven not to have gone far enough in protecting children from digital abuses.
The rule only protects the data of minors younger than 13, even though teenagers (traditionally more active on social media and the internet more generally) face a disproportionate risk of having their information collected. For this reason, California recently passed the Age-Appropriate Design Code Act.
The bill, which will take effect next year, defines children as anyone under 18 and will require apps and websites to enact more privacy protections for these vulnerable individuals. Other states seek to do the same. However, this is a federal issue, so a federal solution must come to fruition for the problem to be adequately addressed.
Why Congress Keeps Failing to Protect Kids Online
Roughly a decade has passed since experts began to appreciate that social media may be truly hazardous for children, and especially for teenagers. As with teenage smoking, the evidence has accumulated slowly but leads in clear directions. The heightened rates of depression, anxiety, and suicide among young people are measurable and disheartening. When I worked for the White House on technology policy, I would hear from the parents of children who had suffered exploitation or who died by suicide after terrible experiences online. They were asking us to do something.
The severity and novelty of the problem suggest the need for a federal legislative response, and Congress can’t be said to have ignored the issue. In fact, by my count, since 2017 it has held 39 hearings that have addressed children and social media, and nine wholly devoted to just that topic.
Congress gave Frances Haugen, the Facebook whistleblower, a hero’s welcome. Executives from Facebook, YouTube and other firms have been duly summoned and blasted by angry representatives. But just what has Congress actually done? The answer is nothing.
This German Airport Could Be the First to Offer Face-Scanning Technology for All Passengers
Queuing to check in and board a flight is a notoriously tedious experience. But one airport in Germany wants to significantly speed up the process for passengers.
Frankfurt airport says it will begin offering biometric check-in services for all travelers in the next few months. It already offers the facial recognition system for flyers on Lufthansa and its affiliated Star Alliance routes (including United, Air China and Air India).
Instead of queuing up at a desk to have their ID and documents checked, passengers’ faces become their boarding passes. They will then have their faces scanned as they pass through checkpoints instead of having to present their documents.
Meta Is Asking Users for Handouts Amid New Regulations in Europe
Being waterboarded with advertisements has come to feel like second nature on the likes of Facebook and Instagram, but for some in the EU willing to pay, that will change.
In a blog post on Monday, Meta revealed a subscription service for an ad-free version of Facebook and Instagram. Users in several European countries will now have the option to pay €9.99 on desktop or €12.99 on mobile — that’s about $10.60 and $13.78 USD, respectively — for an ad-free version of Instagram and Facebook. For now, the fee will cover all accounts linked to the account that purchases the subscription, but starting on March 1, 2024, users will have to fork over €6 ($6.36) on desktop or €8 ($8.48) on mobile for each additional account.
Earlier this month, news broke that Meta was toying with having users pay for an ad-free experience on its platforms. This new subscription system is Meta’s attempt at navigating the EU’s recent regulations, which clamped down on Big Tech’s abuse of data privacy and targeted ads. Meta previously cited the EU’s General Data Protection Regulation and the Digital Markets Act, which aims to tear down the “walled gardens” of Big Tech “gatekeepers” like Meta, Amazon, and Apple.
Britain Is ‘Omni-Surveillance’ Society, Watchdog Warns
Britain is an “omni-surveillance” society with police forces in the “extraordinary” position of holding more than 3 million custody photographs of innocent people more than a decade after being told to destroy them, the independent surveillance watchdog has said.
Fraser Sampson, who will end his term as the Home Office’s biometrics and surveillance commissioner this month, said there “isn’t much not being watched by somebody” in the U.K. and that the regulatory framework was “inconsistent, incomplete and in some areas incoherent.”
He spoke of his concerns that the law was not keeping up with technological advances in artificial intelligence (AI) that allow millions of images to be sorted through within moments and that there were insufficient checks and balances on the police.
The Mystery of the British Government’s Vanishing WhatsApps
Downing Street’s former top officials face grillings from Britain’s public inquiry into COVID-19 this week. But the headline news may come not from their testimony, but from the WhatsApp messages they were sending at the time.
As the nation awaits evidence sessions likely to reveal pandemic-era chaos at the heart of Downing Street, nerves in Whitehall are on edge. The inquiry has demanded the mass disclosure of messages from the encrypted app, despite the government’s unsuccessful attempt to block their release.
The furor over those messages and the anticipation of more to come have reopened big questions about government transparency in the digital age — and in particular, the increasing use of the “disappearing messages” function on WhatsApp by senior officials, political advisors and ministers.
Some of those involved argue they should be allowed the same in-person privacy they enjoy in Westminster’s corridors and canteens — and that WhatsApp messages are no different to quiet “water-cooler conversations” in any office environment.