Big Brother News Watch

Jan 31, 2024

Tech CEOs Told ‘You Have Blood on Your Hands’ at U.S. Senate Child Safety Hearing + More

Tech CEOs Told ‘You Have Blood on Your Hands’ at U.S. Senate Child Safety Hearing

Reuters reported:

U.S. senators on Wednesday grilled leaders of the biggest social media companies and said Congress must quickly pass legislation, as one lawmaker accused the companies of having “blood on their hands” for failing to protect children from escalating threats of sexual predation on their platforms.

The hearing marks the latest effort by lawmakers to address the concerns of parents and mental health experts that social media companies put profits over guardrails that would ensure their platforms do not harm children.

“Mr. Zuckerberg, you and the companies before us, I know you don’t mean it to be so, but you have blood on your hands,” said Republican Senator Lindsey Graham, referring to Meta (META.O) CEO Mark Zuckerberg. “You have a product that’s killing people.”

Zuckerberg testified along with X CEO Linda Yaccarino, Snap (SNAP.N) CEO Evan Spiegel, TikTok CEO Shou Zi Chew and Discord CEO Jason Citron.

In the hearing room, dozens of parents held pictures of their children who they said had been harmed due to social media. Some parents jeered Zuckerberg, whose company owns Facebook and Instagram, during his opening statement and shouted comments at other points during the hearing.

Mark Zuckerberg Was Forced to Physically Stand Up and Face Families Affected by Online Abuse

Insider reported:

Meta CEO Mark Zuckerberg had to turn and face the families of children who were harmed by social media companies head-on during a contentious Senate hearing on Wednesday.

The shocking moment was prompted by Republican Sen. Josh Hawley during an intense hearing on online child safety before the Senate Judiciary Committee.

Hawley asked whether Zuckerberg had apologized to the families, saying, “Your products are killing people.” He then asked Zuckerberg if he’d like to directly apologize to the families in attendance whose children were harmed or died from the impacts of social media.

Many senators in the hearing floated stripping away legal protections from social media companies, meaning they could be sued for child pornography or other sexually explicit material on their platforms.

New York Judge Rejects Madison Square Garden’s Bid to Dismiss Biometric Privacy Case Involving Facial Recognition

Reclaim the Net reported:

A New York judge has denied Madison Square Garden Entertainment’s motion to dismiss a biometric privacy lawsuit. The litigation revolves around a contentious policy, enacted by MSGE, which deployed facial recognition technology to prohibit certain attorneys from gaining entry into the entertainment giant’s renowned venues.

The lawsuit had previously survived MSGE’s initial attempt to dismiss it. The entertainment firm once again finds itself rebuffed in the District Court for the Southern District of New York, despite raising multiple arguments pleading for a dismissal.

The suit will move forward, the presiding judge ruled, focusing on whether MSGE’s tactics violate the city’s Biometric Identifier Information Protection Law. Although the judge accepted MSGE’s arguments for dismissing the plaintiffs’ claims of civil rights violations and unjust enrichment, the alleged breach of the city’s biometrics statute remains an open question.

U.S. Receives Thousands of Reports of AI-Generated Child Abuse Content in Growing Risk

Reuters reported:

The U.S. National Center for Missing and Exploited Children (NCMEC) said it had received 4,700 reports last year about content generated by artificial intelligence that depicted child sexual exploitation.

The NCMEC told Reuters the figure reflected a nascent problem that is expected to grow as AI technology advances.

In recent months, child safety experts and researchers have raised the alarm about the risk that generative AI tech, which can create text and images in response to prompts, could exacerbate online exploitation.

The NCMEC has not yet published the total number of child abuse content reports from all sources that it received in 2023, but in 2022 it received reports of about 88.3 million files.

Nevada Files Lawsuit Against Facebook, Instagram, Messenger, Snapchat and TikTok: ‘Hazard to Public Health’

FOXBusiness reported:

The state of Nevada is suing some of the most popular social media companies, alleging that their apps are intentionally addictive and have contributed to a decline in mental health for its users, especially teens and young adults.

Nevada Attorney General Aaron Ford filed civil lawsuits Tuesday against the parent companies of Facebook, Instagram, Messenger, Snapchat and TikTok apps, claiming they are a “hazard to public health” and that they use “false, deceptive and unfair marketing” to directly appeal to youth.

The lawsuit also says the respective apps’ algorithms are “designed deliberately to addict young minds and prey on teenagers’ well-understood vulnerabilities.”

Mark Zuckerberg Says Apple and Google Should Manage Parental Consent for Apps, Not Meta

TechCrunch reported:

In today’s online safety hearing, Meta CEO Mark Zuckerberg again pushed back at the idea that businesses like his should be responsible for managing parental consent systems for kids’ use of social media apps, like Facebook and Instagram. Instead, he suggested, the problem should be handled by app store providers such as Apple and Google.

This is not the first time Meta has floated the idea. Last November, the company introduced a proposal that argued that Apple and Google should do more with regard to kids’ and teens’ safety by requiring parental approval when users aged 13 to 15 download certain apps.

His suggestion is a clever maneuver by Meta, as it effectively turns Apple’s desire to profit from the apps on its app stores against them. Today, Apple takes a 15% to 30% commission on all in-app purchases that take place through iOS apps, depending on the business’s size and other factors.

Or, simply put, Meta is saying that if Apple wants to be the payment processor for all iOS apps, at a cost to Meta’s profits, then parental consent over app usage should be Apple’s problem, too.

NY Government Gears Up for Fight Against TikTok, Facebook and Other Social Media Giants

Gothamist reported:

Mayor Eric Adams says it’s a public health hazard. Gov. Kathy Hochul calls it “poison.” Attorney General Letitia James claims it’s a “crisis.” In recent weeks, some of New York’s top elected officials have used their bully pulpits to take aim at what, for them, has become a common enemy: social media and its effect on kids.

Lawmakers across 35 states and Puerto Rico introduced legislation last year that was spurred by concern over social media’s effect on youth mental health, according to the National Conference of State Legislatures. Of those, 12 states adopted measures with varying degrees of action, including New Jersey, which launched a commission to study the issue.

Now, New York is on the verge of joining them, with Hochul and James pushing a pair of measures that would restrict social media platforms from collecting data from minors and exposing them to addictive algorithms.

And in New York City, Adams’ administration issued a public health advisory last week warning parents not to give their kids access to smartphones or other devices that can access social media until at least age 14.

Europe Is Rushing to Tighten Oversight of AI. The U.S. Is Taking Its Time.

Yahoo!Finance reported:

The European Union is applying new legal restraints around artificial intelligence this year. The U.S. is still trying to figure out how far it wants to go.

The European Parliament in December reached a provisional agreement on the world’s first comprehensive legislation to regulate AI, focusing on uses instead of the technology.

The new rules range in severity depending on how risky the application is, with facial recognition and certain medical innovations requiring approval before being made available to customers.

Federal laws specific to AI don’t exist yet in the U.S., and it’s unknown whether that will happen. The EU’s actions, however, could still have a chilling effect on companies based in this country.

Jan 30, 2024

The Rise of Techno-Authoritarianism + More

The Rise of Techno-Authoritarianism

The Atlantic reported:

The new technocrats claim to embrace Enlightenment values, but in fact, they are leading an antidemocratic, illiberal movement.

To worship at the altar of mega-scale and to convince yourself that you should be the one making world-historic decisions on behalf of a global citizenry that did not elect you and may not share your values or lack thereof, you have to dispense with numerous inconveniences — humility and nuance among them.

Many titans of Silicon Valley have made these trade-offs repeatedly. YouTube (owned by Google), Instagram (owned by Meta), and Twitter (which Elon Musk insists on calling X) have been as damaging to individual rights, civil society, and global democracy as Facebook was and is. Considering the way that generative AI is now being developed throughout Silicon Valley, we should brace for that damage to be multiplied many times over in the years ahead.

The behavior of these companies and the people who run them is often hypocritical, greedy, and status-obsessed. But underlying these venalities is something more dangerous, a clear and coherent ideology that is seldom called out for what it is: authoritarian technocracy. As the most powerful companies in Silicon Valley have matured, this ideology has only grown stronger, more self-righteous, more delusional, and — in the face of rising criticism — more aggrieved.

Elon Musk’s Neuralink Implants Brain Chip in First Human

Reuters reported:

The first human patient received an implant from brain-chip startup Neuralink on Sunday and is recovering well, the company’s billionaire founder Elon Musk said. “Initial results show promising neuron spike detection,” Musk said in a post on the social media platform X on Monday.

Spikes are activity by neurons, which the National Institutes of Health describes as cells that use electrical and chemical signals to send information around the brain and to the body.

The U.S. Food and Drug Administration had given the company clearance last year to conduct its first trial to test its implant on humans, a critical milestone in the startup’s ambitions to help patients overcome paralysis and a host of neurological conditions.

The study uses a robot to surgically place a brain-computer interface (BCI) implant in a region of the brain that controls the intention to move, Neuralink said previously, adding that its initial goal is to enable people to control a computer cursor or keyboard using their thoughts alone.

Parent Anger at Social Media Companies Boils Over Ahead of Tech CEO Hearing

The Hill reported:

The Senate is hauling in CEOs of social media companies to grill them over online harm to children Wednesday, but parents and advocates said the time for talking is over and Congress must act to protect children and teens.

Parents who became advocates after losing their children to harms they say were created by social media companies will be among the crowd at Wednesday’s Judiciary Committee hearing. The hearing will feature testimony from Meta CEO Mark Zuckerberg, TikTok CEO Shou Zi Chew, X CEO Linda Yaccarino, Snap CEO Evan Spiegel and Discord CEO Jason Citron.

The hearing is centered around the online sexual exploitation of children, but advocates said the harms extend to how social media companies amplify cyberbullying and the spread of harmful content that promotes eating disorders and self-harm.

A coalition of teens, parents and other advocates will attend the hearing to make a push for the Kids Online Safety Act (KOSA), a bipartisan bill that would add regulations for social media companies like the five in the hot seat Wednesday.

While advocates aren’t shying away from slamming tech companies as taking too little action to mitigate the risks posed by their services, they also place blame on lawmakers for failing to pass rules that would hold the companies accountable.

South Carolina Lawmaker Whose Son Died by Suicide After Sextortion Scam Files Lawsuit Against Meta

FOXBusiness reported:

A South Carolina lawmaker who lost his son to suicide after the teenager fell victim to a sextortion scam is now suing Meta, which owns Facebook and Instagram. State Rep. Brandon Guffey is alleging that Meta engaged in deceitful practices to get users, particularly children, addicted to the company’s social media platforms, resulting in “pain and suffering” due to poor mental health.

“I’m bringing the suit because of my personal experience of the pain of a father who lost a son,” Guffey told Fox News Digital. “And I believe it’s due to… criminal negligence. I believe that they designed addictive algorithms that target children. They’ve concealed research on the harmful effects, and they’ve misled the public, about the correlation between their products and our current mental health crisis across the globe.”

“I equate it to… these digital companies are the tobacco companies of our kids’ generations,” Guffey said. “They are fully aware of the problems that they’re causing, and they care more about profits than they do about people.”

Since his son’s death, Guffey has made it his life’s mission to spread awareness about the dangers of sextortion and unsafe social media use in general. This lawsuit against Meta is part of that mission, he said.

To Protect Kids, California Might Require Chronological Feeds on Social Media

Los Angeles Times reported:

Social media companies design their feeds to be as gripping as possible, with complicated algorithms shuffling posts and ads into a never-ending stream of entertainment.

A new California law would require companies to shut off those algorithms by default for users under 18 and implement other mandated tweaks that lawmakers say would reduce the negative mental health effects of social media on children.

One of the act’s key provisions is making a chronological feed the default setting on platforms, which would show users posts from the people they follow in the order that they were uploaded, rather than arranging the content to maximize engagement.

The act would also require the default settings on social media apps to mute notifications between midnight and 6 a.m., cap use at one hour daily, and remove the visibility of “like” counts. Parents — and in practice, most likely, the children using these apps — would have the ability to change these default settings.

Meta Says Its Parental Controls Protect Kids. But Hardly Anyone Uses Them.

The Washington Post reported:

Amid scrutiny of social media’s impact on kids and teens, tools that let parents track their children’s online activities have become increasingly popular. Snapchat, TikTok, Google and Discord all have rolled out parental controls in recent years; last week, Meta said these features “make it simpler for parents to shape their teens’ online experiences.”

But inside Meta, kids safety experts have long raised red flags about relying on such features. And their use has been shockingly infrequent.

By the end of 2022, less than 10% of teens on Meta’s Instagram had enabled the parental supervision setting, according to people familiar with the matter who spoke on the condition of anonymity to discuss private company matters; of those who did, only a single-digit percentage of parents had adjusted their kids’ settings.

Internal research described extensive barriers for parents trying to supervise their kids’ online activities, including a lack of time and limited understanding of the technology. Child safety experts say these settings are an industry-wide weakness, allowing tech companies to absolve themselves while requiring parents to do the heavy lifting.

Could the EU’s Artificial Intelligence Act Increase Mass Surveillance Systems?

Euronews reported:

The use of facial recognition technology could increase across the European Union despite efforts to regulate it under the bloc-wide Artificial Intelligence Act.

Last December, EU negotiators reached a preliminary agreement on the AI Act, a world-first attempt to regulate the emerging technology that includes new rules on the use of biometric identification systems such as facial recognition.

But civil society organizations fear there are loopholes in the planned law. “They have set very broad conditions for the police to use these systems. What we fear is that this will have a legitimizing effect,” said Ella Jakubowska of Reclaim Your Face, a coalition calling to ban biometric mass surveillance.

Jakubowska says that until now it had been “possible to challenge” these systems and argue that they were not wanted “in a democratic society.” She fears they will now be harder to reject, and more likely to be adopted by other countries worldwide under the impression they have received the EU seal of approval.

Some Hospitals Are Requiring Masks Again. Will Other Public Places Be Next?

TIME reported:

If you’ve been to a hospital lately, you might have noticed: masks are back. The rising number of COVID-19 hospitalizations is prompting many healthcare systems — including those at the University of Pennsylvania, Johns Hopkins, and all public health hospitals in New York City — to require them once again.

Does wearing a mask still matter — and do the new mandates mean that other restrictions are on the horizon? Here’s what experts say.

Hospitals aren’t the only places vulnerable people gather, so Dr. Robert Murphy, professor of medicine at Northwestern Feinberg School of Medicine, suggests mask mandates could be extended to long-term care facilities and assisted living spaces.

He also believes mask requirements in these settings should become a regular feature every year during respiratory season. “It’s probably a very good idea, from a public health standpoint, to say that this is something that happens every winter from December to February,” he says. “It just makes common sense. If universal masking is never going to be accepted at this point, let’s protect the most vulnerable, and hospitals are places where there are a lot of vulnerable people.”

Jan 29, 2024

Privacy Companies Push Back Against EU Plot to End Online Privacy + More

Privacy Companies Push Back Against EU Plot to End Online Privacy

Reclaim the Net reported:

An urgent appeal has been sent to ministers across the European Union by a consortium of tech companies, warning against backing a proposed regulation that invokes child sexual abuse as a pretext to jeopardize the security of internet services relying on end-to-end encryption and to end privacy for all citizens.

A total of 18 organizations — predominantly providers of encrypted email and messaging services — have voiced concerns about the European Commission’s (EC) proposed regulation, singling out its “detrimental” effects on children’s privacy and security and the possible dire repercussions for cybersecurity.

Made public on January 22, 2024, this shared open letter argues that the EC’s draft provision known as “Chat Control,” mandating the comprehensive scanning of encrypted communications, may create cyber vulnerabilities that expose citizens and businesses to increased risk.

Further complicating the issue, the letter also addresses a stalemate among member states, the EC, and the European Parliament, which have not yet reconciled differing views on the proportionality and feasibility of the EC’s mass-scanning strategy for addressing child safety concerns.

Roomba Won’t Give Amazon a Map of Your Home After Merger Implodes

Gizmodo reported:

Amazon abandoned its $1.4 billion acquisition of Roomba maker iRobot on Monday after regulators in the European Union threatened to block the deal. The deal’s implosion means the robot vacuums, and the company’s maps of 40 million floor plans across the globe, will not join the growing list of smart-home devices Amazon uses to collect information about you.

Regulators in the EU sent the companies a list of concerns in November regarding how Amazon’s acquisition would stifle innovation in the robot vacuum cleaner marketplace.

Privacy was not a concern raised by EU regulators, but consumer advocates have spoken out about how the Roomba acquisition would give Amazon another device to track you and dominate your home’s systems. That pressure from regulators appears to have blown up the deal, an inadvertent but major win for your home’s privacy. The company has been growing its presence in consumer homes with Amazon Alexa, Ring doorbells and cameras, and Amazon Fire TV Stick.

The Roomba is like a little spy in many ways: it learns the floor plan of your home, the furniture in your living room, which areas of the home get the most use and many other data points. iRobot even noted in 2017 that selling its maps was a key part of a future acquisition. The Roomba would have been yet another Amazon device feeding the profile the company can build on customers.

TSA Uses ‘Minimum’ Data to Fine-Tune Its Facial Recognition, but Some Experts Still Worry

Nextgov/FCW reported:

The Transportation Security Administration is moving forward with plans to implement facial recognition technology at U.S. airports and is working with the Department of Homeland Security’s research and development component to analyze data to ensure that the new units are working correctly, agency officials told Nextgov/FCW.

A TSA official said the agency “is currently in the beginning stages of integrating automated facial recognition capability as an enhancement to the Credential Authentication Technology devices that had been deployed several years ago.”

The latest CAT scanners — known as CAT-2 units — incorporate facial recognition technology by taking real-time pictures of travelers and then comparing those images against their photo IDs. TSA first demonstrated the CAT-2 units in 2020 and began deploying the new screeners at airports in 2022. A Jan. 12 press release from the agency said it added “457 CAT-2 upgrade kits utilizing the facial recognition technology” in 2023.

“The CAT-2 units are currently deployed at nearly 30 airports nationwide, and will expand to more than 400 federalized airports over the coming years,” the TSA official said, noting that it is currently optional for travelers to participate in facial recognition screenings. Those who decline to do so can notify a TSA agent and go through the standard ID verification process instead.

Some lawmakers, privacy advocates and experts have voiced concerns about the continued expansion of facial recognition, either proposing the implementation of new standards and requirements for the technology’s use or calling for a complete halt to the government’s rollout of the tech for security and law enforcement purposes.

Can the Government Ask Social Media Sites to Take Down COVID Misinformation? SCOTUS Will Weigh In

STAT News reported:

The Supreme Court will this March hear arguments centered on the government’s role in communicating — and sometimes censoring — pertinent public health information in the midst of a pandemic.

At the core of the lawsuit is whether the federal government’s requests for social media and search giants like Google, Facebook, Twitter, and YouTube to moderate COVID-19 misinformation violated users’ First Amendment rights.

While the suit was originally filed by then-Missouri Attorney General Eric Schmitt — and known as Missouri v. Biden — a range of plaintiffs arguing that the Biden administration suppressed their COVID-19 content later joined. Those include Jay Bhattacharya and Martin Kulldorff, who co-authored a paper, the Great Barrington Declaration, advancing the theory that people could achieve herd immunity without vaccines.

The case is now referred to as Murthy v. Missouri.

AI Is Coming for Big Pharma

Engadget reported:

If there’s one thing we can all agree upon, it’s that the 21st century’s captains of industry are trying to shoehorn AI into every corner of our world. But for all the ways in which AI will be shoved into our faces without proving very successful, it might actually have at least one useful purpose: dramatically speeding up the often decades-long process of designing, finding and testing new drugs.

The current process takes “more than a decade and multiple billions of dollars of research investment for every drug approved,” said Dr. Chris Gibson, co-founder of Recursion, a company in the AI drug discovery space.

He says AI’s great skill may be to dodge the misses and help avoid researchers spending too long running down blind alleys. A software platform that can churn through hundreds of options at a time can, in Gibson’s words, “fail faster and earlier so you can move on to other targets.”

OpenAI and Google Will Be Required to Notify the Government About AI Models

Mashable reported:

OpenAI, Google, and other AI companies will soon have to inform the government about developing foundation models, thanks to the Defense Production Act. According to Wired, U.S. Secretary of Commerce Gina Raimondo shared new details about this impending requirement at an event held by Stanford University’s Hoover Institution last Friday.

“We’re using the Defense Production Act … to do a survey requiring companies to share with us every time they train a new large language model, and share with us the results — the safety data — so we can review it,” said Raimondo.

The new rules are part of President Biden’s sweeping AI executive order announced last October. Amongst the broad set of mandates, the order requires companies developing any foundation model “that poses a serious risk to national security, national economic security, or national public health and safety,” to notify the federal government and share the results of its safety testing.

Foundation models are models like OpenAI’s GPT-4 and Google’s Gemini that power generative AI chatbots. However, GPT-4 is likely below the threshold of computing power that requires government oversight.

Facebook Users in the U.K. Have More Privacy Protections Than in the U.S. Here’s Why.

Mashable reported:

Facebook is a behemoth so large, so absolute, that just 20 years after its creation, it’s difficult to imagine a world in which its power doesn’t reach the most desolate of civilizations. Because it’s a platform that spans the entire globe, but one that is commanded from Silicon Valley, it’s easy to assume that it looks the same in every place you can access it.

However, legislators in the EU are working to ensure that there are at least some protections for the people who use the platform.

The European Union’s Digital Markets Act is a significant regulation that addresses antitrust concerns with big tech companies, giving the EU regulatory power that has affected the way some social media platforms function. That means that Facebook, and its parent company Meta, look a bit different in Europe than they do in the U.S., including, primarily, in their protections for users.

In November 2023, despite Facebook‘s best attempts to stop it, regulators forced Meta to start offering a monthly subscription fee to use its platforms without any ads for users in the EU, EEA, and Switzerland. It costs users €9.99 per month and is entirely optional — you can continue using the app for free and get the ads, or you can pay and have an ad-free experience.

It’s unclear if any of these tools will ever become available for users in the U.S., but it sure would be nice if U.S. regulators started caring about citizens’ digital privacy as much as EU regulators seem to.

Jan 26, 2024

NSA Buying Americans’ Internet Browsing Records Without Warrant + More

NSA Is Buying Americans’ Internet Browsing Records Without a Warrant

TechCrunch reported:

The U.S. National Security Agency is buying vast amounts of commercially available web browsing data on Americans without a warrant, according to the agency’s outgoing director.

NSA director Gen. Paul Nakasone disclosed the practice in a letter to Sen. Ron Wyden, a privacy hawk and senior Democrat on the Senate Intelligence Committee. Wyden published the letter on Thursday.

Nakasone said the NSA purchases “various types” of information from data brokers “for foreign intelligence, cybersecurity, and authorized mission purposes,” and that some of the data may come from devices “used outside — and in certain cases, inside — the United States.”

“Web browsing records can reveal sensitive, private information about a person based on where they go on the internet, including visiting websites related to mental health resources, resources for survivors of sexual assault or domestic abuse, or visiting a telehealth provider who focuses on birth control or abortion medication,” said Wyden in a statement.

Wyden said he learned of the NSA’s domestic internet records collection in March 2021 but was unable to share the information publicly until it was declassified. As a member of the Senate Intelligence Committee, Wyden is allowed to receive and read classified materials but cannot share them publicly. NSA lifted the restrictions after Wyden put a hold on the nomination of the next NSA director, the senator said.

23andMe Admits Hackers Stole Raw Genotype Data — and That Cyberattack Went Undetected for Months

TechRadar reported:

23andMe has revealed that cyberattacks were targeting customers for months without the company realizing it.

According to an obligatory notification letter sent to California’s attorney general, accounts belonging to users of the genetic testing firm were being hacked from about April to September 2023, in a series of brute force attacks.

Millions of people’s genetic data was leaked on the dark web by the threat actor, after a total of 14,000 users had their accounts breached, according to 23andMe’s filing with the Securities and Exchange Commission (SEC).

23andMe only realized that attacks were taking place in October when the stolen data was being promoted on an unofficial subreddit and on a popular underground forum. However, some data was also leaked on BreachedForums in August, which the company was not aware of at the time.

Victims have filed class action lawsuits against 23andMe in response, although the company tried to change its terms of service to prevent such action from being taken against it.

Mass General Fired Nurse Who Refused Booster After Reaction to First Doses, Lawsuit Claims

CBS News reported:

A former registered nurse at Mass General in Boston is suing the hospital for wrongful termination, saying she was fired for refusing a COVID booster despite a request for a medical exemption after an adverse reaction to the first two doses.

Florrie McCarthy of Braintree filed the lawsuit against Mass General Brigham at U.S. District Court in Boston. McCarthy said she worked at the hospital for 36 years. She says that in January 2021, after receiving the first dose of the Moderna COVID vaccine, she experienced numbness in her face and tingling around her nose and lips for several hours.

One month later after receiving the second dose of the vaccine, McCarthy said she had a similar reaction along with a metallic taste. According to the lawsuit, McCarthy continues to suffer from a diminished sense of taste and numbness on the right side of her tongue.

McCarthy said she provided a letter from her doctor, and in her request for medical exemption included a letter from Mass General’s Department of Neurology saying “We cannot rule out that the COVID-19 vaccine has contributed to some of the patient[‘]s neurological symptoms.” The hospital denied the request, saying it “does not demonstrate a sufficient medical reason or contraindication to support an exemption,” the lawsuit says.

Schools Are Using Surveillance Tech to Catch Students Vaping, Snaring Some With Harsh Punishments

Associated Press reported:

Like thousands of other students around the country, Aaliyah Iglesias was caught by surveillance equipment that schools have installed to crack down on electronic cigarettes, often without informing students.

Schools nationwide have invested millions of dollars in the monitoring technology, including federal COVID-19 emergency relief money meant to help schools through the pandemic and aid students’ academic recovery. Marketing materials have noted the sensors, at a cost of over $1,000 each, could help fight the virus by checking air quality.

Some districts pair the sensors with surveillance cameras. When activated by a vaping sensor, those cameras can capture every student leaving the bathroom.

A leading provider, HALO Smart Sensors, sells 90% to 95% of its sensors to schools. The sensors don’t have cameras or record audio but can detect increases in noise in a school bathroom and send a text alert to school officials, said Rick Cadiz, vice president of sales and marketing for IPVideo, the maker of the HALO sensors.

Facebook and Instagram Accused of Allowing Predators to Share Tips With Each Other About Victimizing Children

FOXBusiness reported:

Meta, which includes Facebook and Instagram, is accused of facilitating and profiting off of the online solicitation, trafficking and sexual abuse of children, according to a complaint filed by the New Mexico Attorney General.

New Mexico AG Raúl Torrez is bringing legal action against Meta, accusing the company of permitting sponsored content to appear alongside inappropriate content in violation of Meta’s standards, allowing child predators who use dark web message boards to share tips with each other about victimizing children, and reportedly rejecting its own safety teams’ recommendations to make it harder for adults to communicate with children on its platforms, according to an updated complaint reviewed by Fox News Digital.

“Parents deserve to know the full truth about the risks children face when they use Meta’s platforms,” AG Torrez told Fox News Digital. “For years, Meta employees tried to sound the alarm about how decisions made by Meta executives subjected children to dangerous solicitations and sexual exploitation.”

Torrez said the newly unredacted aspects of the complaint help form the basis of their action against Meta and “make clear” that “Mark Zuckerberg called the shots” in making the decisions that mattered to children and parents.

Congress Wants to Ban China’s Largest Genomics Firm From Doing Business in the U.S. Here’s Why.

NBC News reported:

Bipartisan legislation was introduced in both houses of Congress Thursday that would effectively ban China’s largest genomics company from doing business in the U.S., after years of warnings from intelligence officials that Beijing is gathering genetic information about Americans and others in ways that could harm national security.

The bills, backed by leaders of the House Select Committee on the Chinese Communist Party and the Senate Homeland Security Committee, target BGI, formerly known as Beijing Genomics Institute, which in 2021 was blacklisted by the Pentagon as a Chinese military company. Five company affiliates also have been sanctioned by the Commerce Department, which accused at least two of them of improperly using genetic information against ethnic minorities in China.

In an exclusive interview with NBC News, Rep. Mike Gallagher, R-Wis., and Rep. Raja Krishnamoorthi, D-Ill., said their legislation would ban BGI — or any company using its technology — from federal contracts, a move the company said in a statement would “drive BGI from the U.S. market.”

“This bill will protect Americans’ personal health and genetic information from foreign adversaries who have the ability and motivation to use it to undermine our national security,” Sen. Gary Peters, D-Mich., chairman of the Senate Homeland Security Committee, told NBC News.

First Tech Platform Breaks Ranks to Support Kids Online Safety Bill

Politico reported:

The owner of Snapchat is backing a bill meant to bolster online protections for children on social media, the first company to publicly split from its trade group days before the company’s CEO prepares to testify on Capitol Hill.

A Snap spokesperson told POLITICO about the company’s support of the Kids Online Safety Act (KOSA). The popular messaging service’s position breaks ranks with its trade group NetChoice, which has opposed KOSA. The bill directs platforms to prevent the recommendation of harmful content to children, like posts on eating disorders or suicide.

Snap’s CEO will appear with the heads of Meta, Discord, TikTok, and X, formerly Twitter, in a hearing on Wednesday before the Senate Judiciary Committee, where lawmakers will grill them over their companies’ alleged failures to remove content promoting the sexual abuse of children.

None of the other platforms testifying have taken public positions supporting KOSA to date. TikTok and Discord declined to comment on KOSA. X did not respond to a request for comment. Meta didn’t say whether it supports KOSA, but said it supports “internet regulation” and issued its own legislative “framework” this month calling on Congress to pass a bill shifting responsibility for obtaining parental consent for kids to download social media apps onto app stores rather than platforms.

Italy Fines First City for Privacy Breaches in Use of AI

Reuters reported:

Italy’s privacy watchdog has fined the northern city of Trento for breaking data protection rules in the way it used artificial intelligence (AI) in street surveillance projects.

Trento was fined 50,000 euros ($54,225) and told to delete all data gathered in two European Union-funded projects. It is the first local administration in Italy to be sanctioned by the GPDP watchdog over the use of data from AI tools.

The authority — one of the EU’s most proactive in assessing AI platform compliance with the bloc’s data privacy regime — last year briefly banned popular chatbot ChatGPT in Italy.

In 2021, it also said a facial recognition system tested by the Italian Interior Ministry did not comply with privacy laws.