Big Brother News Watch

Oct 24, 2023

Massive Facial Recognition Search Engine Now Blocks Searches for Children’s Faces + More

Massive Facial Recognition Search Engine Now Blocks Searches for Children’s Faces

The Verge reported:

PimEyes, a public search engine that uses facial recognition to match online photos of people, has banned searches of minors over concerns it endangers children, reports The New York Times.

At least, it should. PimEyes’ new detection system, which uses age detection AI to identify whether the person is a child, is still very much a work in progress. After testing it, The New York Times found it struggles to identify children photographed at certain angles. The AI also doesn’t always accurately detect teenagers.

PimEyes chief executive Giorgi Gobronidze says he’d been planning on implementing such a protection mechanism since 2021. However, the feature was only fully deployed after New York Times writer Kashmir Hill published an article about the threat AI poses to children last week. According to Gobronidze, human rights organizations working to help minors can continue to search for them, while all other searches will produce images that block children’s faces.

In the article, Hill writes that the service banned over 200 accounts for inappropriate searches of children. One parent told Hill she’d even found photos of her children she’d never seen before using PimEyes. In order to find out where the image came from, the mother would have to pay a $29.99 monthly subscription fee.

PimEyes is just one of the facial recognition engines that have been in the spotlight for privacy violations. In January 2020, Hill’s New York Times investigation revealed how hundreds of law enforcement organizations had already started using Clearview AI, a similar face recognition engine, with little oversight.

Instagram Linked to Depression, Anxiety, Insomnia in Kids — U.S. States’ Lawsuit

Reuters reported:

Dozens of U.S. states are suing Meta Platforms (META.O) and its Instagram unit, accusing them of contributing to a youth mental health crisis through the addictive nature of their social media platforms.

In a complaint filed in the Oakland, California, federal court on Tuesday, 33 states including California and Illinois said Meta, which also operates Facebook, has repeatedly misled the public about the substantial dangers of its platforms and knowingly induced young children and teenagers into addictive and compulsive social media use.

“Research has shown that young people’s use of Meta’s social media platforms is associated with depression, anxiety, insomnia, interference with education and daily life, and many other negative outcomes,” the complaint said.

The lawsuit is the latest in a string of legal actions against social media companies on behalf of children and teens. ByteDance's TikTok and Google's YouTube are also the subjects of hundreds of lawsuits filed on behalf of children and school districts over the addictiveness of social media.

The lawsuit alleges that Meta also violated a law banning the collection of data of children under the age of 13. The state action seeks to patch holes left by the U.S. Congress’s inability to pass new online protections for children, despite years of discussions.

Fed Governor Admits CBDCs Pose ‘Significant’ Privacy Risks

Reclaim the Net reported:

Federal Reserve Governor Michelle Bowman raised concerns about the significant risks and privacy dangers that the introduction of a central bank digital currency (CBDC) could pose, in an appearance at a Harvard Law School program on October 17.

Bowman remarked that creating a CBDC promises no certain benefit, while pointedly warning of potential "unintended consequences" for the financial industry.

Speaking as one of the major participants in the regulation of domestic payment systems and banking, Governor Bowman underscored the trade-offs and risks that a digital dollar could entail. Given the "considerable consumer privacy concerns" that a U.S. CBDC implementation might raise, she pointed out, any plausible merits of such a currency remain largely elusive.

Notwithstanding grand promises of hassle-free payment systems or greater financial inclusion, there is a significant lack of persuasive evidence that a CBDC would actually contribute to these ends or furnish public access to secure central bank money. Yet Bowman is not arguing for a halt to research on the subject; she said that continued study of a digital dollar's technical capabilities and of the risks linked to CBDCs could foster a constructive attitude toward future developments.

AI Firms Must Be Held Responsible for Harm They Cause, ‘Godfathers’ of Technology Say

The Guardian reported:

Powerful artificial intelligence systems threaten social stability and AI companies must be made liable for harms caused by their products, a group of senior experts including two “godfathers” of the technology has warned.

Tuesday’s intervention was made as international politicians, tech companies, academics and civil society figures prepare to gather at Bletchley Park next week for a summit on AI safety.

“It’s time to get serious about advanced AI systems,” said Stuart Russell, professor of computer science at the University of California, Berkeley. “These are not toys. Increasing their capabilities before we understand how to make them safe is utterly reckless.”

He added: “There are more regulations on sandwich shops than there are on AI companies.”

Other co-authors of the document include Geoffrey Hinton and Yoshua Bengio, two of the three “godfathers of AI,” who won the ACM Turing Award — the computer science equivalent of the Nobel prize — in 2018 for their work on AI.

FTC Plans to Hire Child Psychologist to Guide Internet Rules

CNBC reported:

The Federal Trade Commission plans to hire at least one child psychologist who can guide its work on internet regulation, Democratic Commissioner Alvaro Bedoya told The Record in an interview published Monday.

FTC Chair Lina Khan backs the plan, Bedoya told the outlet, adding that he hopes it can become a reality by next fall, though the commission does not yet have a firm timeline.

The FTC's plan is indicative of a broader push across the U.S. government focusing on online protections for kids and teens. Federal and state lawmakers have proposed new legislation they believe will make the internet safer by mandating stronger age verification or placing more responsibility on tech companies to design safe products for young users. The U.S. Surgeon General issued an advisory in May warning that young people's social media use poses significant mental health risks.

Bedoya envisions an in-house child psychologist as a helpful resource for commissioners like himself. Those experts could bring important insights that link a cause to an alleged harm and inform the appropriate damages the agency seeks, Bedoya said. He added that child psychologists could help the FTC evaluate allegations of how social media may affect mental health, as well as assess the effect of dark patterns or other deceptive features.

A Controversial Plan to Scan Private Messages for Child Abuse Meets Fresh Scandal

Wired reported:

Danny Mekić, an Amsterdam-based Ph.D. researcher, was studying a proposed European law meant to combat child sexual abuse when he came across a rather odd discovery. All of a sudden, he started seeing ads on X, formerly Twitter, that featured young girls and sinister-looking men against a dark background, set to an eerie soundtrack. The advertisements, which displayed stats from a survey about child sexual abuse and online privacy, were paid for by the European Commission.

Mekić thought the videos were unusual for a governmental organization and decided to delve deeper. The survey findings highlighted in the videos suggested that a majority of EU citizens would support the scanning of all their digital communications.

Following closer inspection, he discovered that these findings appeared biased and otherwise flawed. The survey results were gathered by misleading the participants, he claims, which in turn may have misled the recipients of the ads; the conclusion that EU citizens were fine with greater surveillance couldn’t be drawn from the survey, and the findings clashed with those of independent polls.

Oct 23, 2023

‘Invasive’ Google Keyword Search Warrants Get Court Greenlight + More

‘Invasive’ Google Keyword Search Warrants Get Court Greenlight. Here’s Everything You Need to Know

Gizmodo reported:

Colorado’s Supreme Court this week had the opportunity to hand down a historic judgment on the constitutionality of “reverse keyword search warrants,” a powerful new surveillance technique that grants law enforcement the ability to identify potential criminal suspects based on broad, far-reaching internet search results.

Police say the creative warrants have helped them crack otherwise cold cases. Critics, who include more than a dozen rights organizations and major tech companies, argue the tool's immense scope tramples on innocent users' privacy and runs afoul of Fourth Amendment protections against unreasonable searches by the government.

Civil liberties and digital rights experts speaking with Gizmodo described the court’s “confusing” decision to punt on the constitutionality of reverse keyword search this week as a major missed opportunity and one that could inevitably lead to more cops pursuing the controversial tactics, both in Colorado and beyond.

Critics fear these broad warrants, which compel Google and other tech companies to sift through their vast troves of search data to sniff out users who've searched for specific keywords, could be weaponized against abortion seekers, political protesters, or even everyday internet users who inadvertently type a query that could someday be used against them in court.

Supreme Court Will Hear Biden Social Media Case This Term

The Hill reported:

The Supreme Court said Friday it will consider a social media censorship case brought against Biden administration officials in its next term, setting up a legal battle with resounding implications for online speech.

The high court also stayed an injunction ordered by the 5th U.S. Circuit Court of Appeals, pausing its effect until the justices decide the case on the merits. Justices Samuel Alito, Clarence Thomas and Neil Gorsuch dissented from the decision to stay the order.

“At this time in the history of our country, what the Court has done, I fear, will be seen by some as giving the Government a green light to use heavy-handed tactics to skew the presentation of views on the medium that increasingly dominates the dissemination of news,” Alito wrote in his dissenting opinion. “That is most unfortunate.”

Missouri Attorney General Andrew Bailey, one of the attorneys general who brought the lawsuit, called the high court’s decision to stay the order “the worst First Amendment violation in our nation’s history.”

“We look forward to dismantling Joe Biden’s vast censorship enterprise at the nation’s highest court,” Bailey said in a statement.

The U.S. Has Failed to Pass AI Regulation. New York City Is Stepping Up

Wired reported:

As the U.S. federal government struggles to meaningfully regulate AI — or even function — New York City is stepping into the governance gap.

The city introduced an AI Action Plan this week that Mayor Eric Adams calls a first of its kind in the nation. The set of roughly 40 policy initiatives is designed to protect residents against harm like bias or discrimination from AI. It includes the development of standards for AI purchased by city agencies and new mechanisms to gauge the risk of AI used by city departments.

New York’s AI regulation could soon expand still further. City council member Jennifer Gutiérrez, chair of the body’s technology committee, today introduced legislation that would create an Office of Algorithmic Data Integrity to oversee AI in New York.

Earlier this year, several U.S. senators suggested creating a new federal agency to regulate AI, but Gutiérrez says she's learned that there's no point in waiting for action in Washington, DC. "We have a unique responsibility because a lot of innovation lives here," she says. "It's really important for us to take the lead."

Americans Are Concerned About AI Data Collection, New Poll Shows

Newsweek reported:

Most Americans who have an awareness of emerging artificial intelligence (AI) technology are worried that companies won’t use AI tools responsibly, according to survey results released this week by Pew Research Center.

There has been an increase in public discourse about AI this year due in part to the wide adoption of ChatGPT, a chatbot unveiled last November by the AI company OpenAI. Users are able to communicate with ChatGPT after initiating conversations through textual, visual and audio prompts. Global monthly web visits to the ChatGPT website were estimated to be at 1.43 billion in August, according to Reuters.

Technology leaders say AI development poses positive potential, particularly in the healthcare, drug development and transportation industries. But there is also risk and uncertainty associated with AI, as no one knows for certain what it could one day become.

Nearly half of American adults don’t want social media companies using their personal data for personalized user experiences, and 44% don’t like the idea of AI being used to identify people through voice analysis, according to the poll’s results.

Young People Are Increasingly Worried About Privacy in the AI Age

TechRadar reported:

Younger consumers are more likely to exercise Data Subject Access Rights, according to a new Cisco study, which found that 42% of 18- to 24-year-olds have done so, compared with just 6% of those aged 75 and older.

The number is on the up, too, at four percentage points higher in 2023 compared with 2022, suggesting growing concerns over data privacy. Of the 2,600 consumer participants, almost two-thirds (62%) expressed their concern about how organizations could be using their personal data for AI.

Overall, the study suggests that consumers' lack of trust is on the rise, and companies need to act to better inform their users. The blame isn't entirely on large corporations, though, according to VP and Chief Privacy Officer Harvey Jang: "As governments pass laws and companies seek to build trust, consumers must also take action and use technology responsibly to protect their own privacy."

AI Makes Hiding Your Kids’ Identity on the Internet More Important Than Ever. But It’s Also Harder to Do.

The New York Times via The Seattle Times reported:

Historically, the main criticism of parents who overshare online has been the invasion of their progeny’s privacy, but advances in artificial intelligence-based technologies present new ways for bad actors to misappropriate the online content of children.

Among the novel risks are scams featuring deepfake technology that mimic children’s voices and the possibility that a stranger could learn a child’s name and address from just a search of their photo.

Amanda Lenhart, the head of research at Common Sense Media, a nonprofit that offers media advice to parents, pointed to a recent public service campaign from Deutsche Telekom that urged more careful sharing of children’s data.

The video featured an actress portraying a 9-year-old named Ella, whose fictional parents were indiscreet about posting photos and videos of her online. Deepfake technology generated a digitally aged version of Ella who admonishes her fictional parents, telling them that her identity has been stolen, her voice has been duplicated to trick them into thinking she’s been kidnapped and a nude photo of her childhood self has been exploited.

Empty Classroom Seats Reveal ‘Long Shadow’ of COVID Chaos on Britain’s Children

The Guardian reported:

While this paints a picture of chaotic decision-making and rancorous divisions at the top of government, none of it is surprising. By far the most important testimony so far — much more essential than who said what about whom on WhatsApp — came from England's former children's commissioner Anne Longfield, who told the inquiry children will be living under the "long shadow" of the pandemic for two decades to come.

It may seem odd to think of COVID in terms of silver linings. But I’ve often pondered how lucky we were that, unlike many pandemic-causing infectious diseases that carry the highest risk of death among the very young and very old, COVID was generally associated with mild symptoms in children.

But the government squandered this precious silver lining from the start. After the decision to close schools in March 2020, they should have been the first thing to reopen as infection rates started to fall in May that year.

Instead, they remained mostly closed as pubs and restaurants were allowed to reopen. Rishi Sunak threw almost a billion pounds at subsidizing people to eat out in August but couldn't find the cash to put on outdoor enrichment activities over the summer for children stuck at home for months on end. Boris Johnson delayed imposing social restrictions later that year to the point that, when the government eventually acted, it was forced to take more drastic measures, again closing schools for weeks.

Stay in EU, Comply With EU Law: EU’s Digital Chief Warns X’s Musk

Politico reported:

X owner Elon Musk will have to comply with European Union law and clamp down on illegal content on the social network if he wants to keep on doing "good business" in the region, the EU's digital chief Věra Jourová said today.

The tech mogul denied a report last week that he was considering pulling X out of Europe to avoid new requirements for digital platforms. X is used by over 101 million Europeans in the bloc. Under the EU’s Digital Services Act (DSA), the company must swiftly take down content and ensure the network limits disinformation and cyberviolence.

Musk does “good business in [the] European Union, but it will be his decision and if he decides to stay in as well, he will have to comply with the EU law,” Jourová said.

Oct 19, 2023

Their Kids Died After Buying Drugs on Snapchat. Now the Parents Are Suing + More

Their Kids Died After Buying Drugs on Snapchat. Now the Parents Are Suing

The Guardian reported:

Hanh Badger was working from home on the morning of June 17, 2021. She went to the kitchen to grab a second cup of coffee and noticed her daughter’s bedroom door was still shut. Badger found Brooke, 17, pale and motionless in bed.

In the ensuing days, Badger’s husband and son were able to gain access to Brooke’s computer and, with it, her Snapchat account. They found screenshots of what looked like a menu of narcotics, and conversations with a drug dealer showing Brooke had purchased what she believed to be Roxicet, a prescription medication containing acetaminophen and oxycodone typically prescribed for pain relief. Instead, the substance was a counterfeit pill that held a lethal dose of fentanyl.

Across the U.S., young people are dying from fentanyl in record numbers, even as overall drug use is on the decline. Nationally, the number of opioid overdose deaths for people 24 and under nearly doubled from 2019 to 2021. And according to the National Institute on Drug Abuse, the number of overdoses attributed to synthetic opioids like fentanyl dwarfs that of any other substance.

In California, where Brooke lived, fentanyl-related overdose deaths among 15- to 19-year-olds surged by nearly 800% between 2018 and 2021, according to data from the California Overdose Surveillance Dashboard. Many are young victims poisoned by counterfeit pills that have been pressed to look like legitimate prescription drugs, but that are laced with fentanyl, an opioid that is deadly even in granular quantities. Typically, those teenagers acquired what they believed to be Percocet, Xanax or other pharmaceuticals online through social media.

In their grief, victims’ parents are motivated to end this crisis to prevent another family’s suffering while also giving meaning to their loss. Many have launched awareness campaigns, founded educational programs and advocated for legislative change. And now, some parents are taking to the civil courts, targeting the tech giants whose platforms facilitated their children’s purchases of pills that killed them.

With New Declaration, Luminaries Warn That Online Censorship Is Destroying Freedom

New York Post reported:

Since the COVID pandemic, authoritarians in the U.S. and around the world have cynically used claims of “disinformation” to censor ordinary people and stifle dissent about everything from the efficacy of masks and vaccines to the war in Ukraine, the Middle East situation, and Hunter Biden’s laptop.

Whatever your political bent, this new form of speech control is a threat to you. Only by debating freely in a rapidly fragmenting world can we resolve differences without resorting to violence.

To that end, a group of 136 academics, historians and journalists from the left, right and center of the political spectrum have come together to warn President Biden that this rapidly growing censorship regime “undermines the foundational principles of representative democracy.” In their “Westminster Declaration,” released Wednesday, the international group points out that the best way to combat actual disinformation is with free speech.

The eclectic group that has signed the declaration to fight censorship includes Canadian psychologist Jordan Peterson, U.K. biologist Richard Dawkins, NYU social psychologist Jonathan Haidt, Julian Assange, the Australian founder of WikiLeaks, actor Tim Robbins, evolutionary biologist Bret Weinstein, economist Glenn Loury, filmmaker Oliver Stone, whistleblower Edward Snowden, British comedian John Cleese, Slovenian philosopher Slavoj Žižek, British journalist Matt Ridley, Stanford professor Jay Bhattacharya, Harvard professor of medicine Martin Kulldorf, Australian journalist Adam Creighton, French science journalist Xavier Azalbert and German filmmaker Robert Cibis.

AI Is Becoming More Powerful — but Also More Secretive

Wired reported:

When OpenAI published details of the stunningly capable AI language model GPT-4, which powers ChatGPT, in March, its researchers filled 100 pages. They also left out a few important details — like anything substantial about how it was actually built or how it works.

That was no accidental oversight, of course. OpenAI and other big companies are keen to keep the workings of their most prized algorithms shrouded in mystery, in part out of fear the technology might be misused but also from worries about giving competitors a leg up.

A study released by researchers at Stanford University this week shows just how deep — and potentially dangerous — the secrecy is around GPT-4 and other cutting-edge AI systems. Some AI researchers I’ve spoken to say that we are in the midst of a fundamental shift in the way AI is pursued. They fear it’s one that makes the field less likely to produce scientific advances, provides less accountability, and reduces reliability and safety.

Threads ‘Temporarily’ Blocking COVID-Related Search Terms

Gizmodo reported:

Threads search is blocking terms like “COVID,” “vaccines,” “long COVID,” and others, but it is reportedly only temporary according to Instagram’s head, Adam Mosseri. This comes a week after he said the app would not “amplify news.”

When Threads users input COVID-related search terms for news articles or other information, the platform blocks any results from appearing, aside from a link to the Centers for Disease Control and Prevention website. The Washington Post first reported on the move back in September, and Threads' parent company, Meta, acknowledged that it is intentionally blocking the search terms.

Mosseri posted on Threads Monday that the company doesn’t have a timeline for when it will reverse the block but says the move is temporary and being worked on. He pointed to the war in Israel and Gaza, saying responsibly managing that content takes precedence and is “the biggest safety focus right now.” It could take weeks or months to reverse the block, according to Mosseri.

Threads entered the fray as an alternative to X, formerly called Twitter, back in July. Meta CEO Mark Zuckerberg promoted Threads as going back to the basics, saying, "The vision for Threads is to create an open and friendly public space for conversation."

Millions More 23andMe Records Leaked Online

TechRadar reported:

Another database belonging to genetic testing website 23andMe has allegedly been published on a dark net forum, just days after an initial leak was revealed.

An individual going by the alias Golem published a database on BreachForums containing sensitive information on four million users.

Subsequent TechCrunch investigations confirmed that at least some of the published data matched known and public information. Roughly two weeks ago, Golem claimed to have stolen sensitive user data from 23andMe by means of credential stuffing.

The database Golem posted most recently contains records on four million users, reports said. The hacker said the data includes information on British individuals, including some of the “wealthiest people living in the U.S. and Western Europe.” A company spokesperson told TechCrunch that the company is aware of the news and is currently “reviewing the data to determine if it is legitimate.”

Zuckerberg and Chan Announce a New York Biohub to Build Disease-Fighting Cellular Machines

STAT News reported:

Meta founder Mark Zuckerberg and his wife, pediatrician and philanthropist Priscilla Chan, announced on Wednesday plans to invest $250 million over 10 years to establish a new “biohub” in New York City focused on building a new class of cellular machines that can surveil the body and snuff out disease.

The new initiative, publicly revealed at the 2023 STAT Summit and previewed exclusively to STAT, is the latest program from the Chan Zuckerberg Initiative, or CZI, a company the couple founded in 2015 to help cure, prevent, or manage all disease by 2100. It joins the original San Francisco Biohub founded in 2016 and a Chicago Biohub founded earlier this year.

The initial biohub in CZI’s network was diffuse in ambition but developed tools for analyzing large troves of data, particularly from single-cell sequencing, and did direct work on infectious diseases. For the new hubs, CZI solicited specific grand challenges to be achieved in 10 to 15 years. The Chicago researchers will try to use tiny devices to decode the secrets of inflammatory disease — such as how immune cells malfunction to wreak havoc.

The new hub, consisting of researchers at Yale, Columbia, and Rockefeller, will essentially attempt the opposite: decode precisely how immune cells successfully sense and snuff out cellular fires, such as cancer and Alzheimer’s, in hopes of engineering cells that can detect and eventually treat conflagrations the immune system can’t see or can’t put out.

If everything goes right, it would eventually amount to an internal lab-built police, fire, and medical corps to stamp out disease before symptoms appear.

Shasta County Appoints Public Health Officer Who Fought COVID Vaccine Mandates

Los Angeles Times reported:

The Shasta County Board of Supervisors has appointed an outspoken critic of COVID-19 vaccine mandates to be the county’s new public health officer.

The hiring of Redding family physician James Mu comes 17 months after the board fired its previous public health officer, Karen Ramstrom, whom supervisors had criticized for following state mandates requiring masks and vaccinations during the pandemic.

“I would like to follow evidence-based policy. However, if there are medical dogma that are not good for the population in our county, then I will question and even challenge them,” said Mu, who also opposed vaccines and masks for children, after the supervisors voted to hire him Tuesday night.

Mu was one of 12 Shasta County physicians who, in February 2022, publicly signed an “Open Letter on COVID-19” that decried vaccine mandates, the testing of asymptomatic people and “the physical, psychological, and social impacts of mask and vaccine mandates on children.” The letter touted the benefits of natural immunity and promoted the use of “early treatments” to prevent the disease.

Amazon Will Start Testing Drones That Will Drop Prescriptions on Your Doorstep, Literally

Associated Press reported:

Amazon will soon make prescription drugs fall from the sky when the e-commerce giant becomes the latest company to test drone deliveries for medications.

The company said Wednesday that customers in College Station, Texas, can now get prescriptions delivered by a drone within an hour of placing their order.

Amazon says customers will be able to choose from more than 500 medications, a list that includes common treatments for conditions like the flu or pneumonia, but not controlled substances.

Oct 18, 2023

Private Health Data Still Being Exposed to Big Tech, Report Says + More

Private Health Data Still Being Exposed to Big Tech, Report Says

Bloomberg reported:

Despite recent efforts to address the issue, medical-related websites continue to be mined for data including personal medical information, in an apparent violation of patients’ privacy rights, according to a new study.

Some of the most common tracking pixels were from Alphabet Inc.’s Google, Microsoft Corp., Meta Platforms Inc. and ByteDance, the parent company of TikTok, according to a report by the cybersecurity company Feroot Security.

Feroot analyzed hundreds of healthcare and telehealth websites and found that more than 86% are collecting and transferring data without obtaining consent from the user. More than 73% of login and registration pages have trackers, exposing personal health information.

About 15% of the tracking pixels identified by Feroot read and collect a user’s keystrokes, meaning they could identify Social Security numbers, names, email addresses, appointment dates, IP addresses, billing information and even a medical diagnosis and treatment, according to the report.

If personal health information is collected through a tracker or third party without a user’s consent, it would represent a violation of the Health Insurance Portability and Accountability Act, known as HIPAA, according to Feroot Chief Executive Officer Ivan Tsarynny. Personal health information can include everything from current mental or physical health conditions to billing information.

The Biden Administration Is Waging War on the First Amendment

Newsweek reported:

This past Independence Day, a U.S. federal judge in Missouri v. Biden found that the Biden Administration violated Americans’ First Amendment rights in urging social media companies to censor opinions. It also found that the Administration had funded universities and non-governmental organizations to create a veritable hit list of censorship, which it used to tell social media companies which people and ideas to deboost and censor.

Citing the need to censor speech as the only way to protect the American public, the Biden Administration told the court that it is too dangerous to apply the First Amendment to social media posts, given the depredations of sorting through misinformation from foreign states, political actors, or cranks.

The court was not impressed and issued a preliminary injunction telling the Biden Administration it could no longer coerce Facebook, Twitter/X, and the like to censor users, because doing so violated the First Amendment. Under the order, the Administration also could not engage third parties to craft its censorship agenda. The court excoriated the Biden Administration for establishing an “Orwellian Ministry of Truth” in its zeal for censorship.

On appeal, the Fifth Circuit Court of Appeals upheld the first part of the injunction against the Administration, barring it from coercing social media censorship, but declined to uphold the second. As things stand, the Administration can still engage non-governmental actors to target people and ideas for censorship in the name of identifying “misinformation” online. The case currently sits at the Supreme Court, but more important than any judicial orders and opinions is the information unearthed during discovery.

ECB Starts Preparation for Digital Euro in Multi-Year Project

Reuters reported:

The European Central Bank took a further step on Wednesday towards launching a digital version of the euro that would let people in the 20 countries that share the single currency make electronic payments securely and free of charge.

The ECB said it would start a two-year “preparation phase” for the digital euro on November 1, in which it would finalize rules, choose its private-sector partners and do some “testing and experimentation.”

While Wednesday’s decision is a small step in a multi-year project, it sets the ECB ahead of the other central banks of the Group of Seven (G7) wealthy nations and it may constitute a blueprint for others to follow.

“So far, the ECB has not been able to clearly communicate the added value of the digital euro,” said Markus Ferber, a German member of the European Parliament for the conservative European People’s Party. One of the key complaints is that a digital currency may facilitate a run on commercial banks at times of crisis while providing little improvement compared to existing accounts.

Depression Among Tween Girls Deepens as Social Media Use Spikes: Survey

Newsweek reported:

Preteen girls’ rates of depression are skyrocketing as social media use spikes among the younger generation, according to the most in-depth U.S. survey intended to examine the challenges facing girls in grades five through 12.

Girls as young as 10 are experiencing significant declines in self-confidence as they consume social media at unprecedented rates, according to The Girls’ Index 2023 report, conducted by the nonprofit Ruling Our eXperiences (ROX) and based on survey results from more than 17,000 girls.

While only 5% of fifth- and sixth-grade girls reported feeling sad or depressed every day in 2017, the survey’s first year, that number tripled to 15% in 2023. The decline in happiness comes as fewer preteen girls describe themselves as confident, with that share dipping from 86% to 68% over the same six-year span.

All of the mental health concerns pointed out in the survey come alongside a significant uptick in social media use by the generation. A whopping 95% of fifth-grade girls reported using social media this year, and 46% reported spending six or more hours a day on it.

‘We Are Being Lied to’: Marc Andreessen’s Techno-Optimist Manifesto Warns Civilization Depends on More AI, Not Less

ZeroHedge reported:

Amidst the deafening anti-technology rhetoric, Marc Andreessen emerges as a champion for the power and potential of tech with his latest essay, “The Techno-Optimist Manifesto,” which builds on claims Andreessen made in a June essay titled “Why AI Will Save the World.”

“We are being lied to,” Andreessen begins … “We are told that technology takes our jobs, reduces our wages, increases inequality, threatens our health, ruins the environment, degrades our society, corrupts our children, impairs our humanity, threatens our future, and is ever on the verge of ruining everything. We are told to be angry, bitter, and resentful about technology.”

Instead, with technology as humanity’s spearhead, the Silicon Valley legend foresees a future driven by growth, invention, and unstoppable progress. ” … we have the tools, systems, and ideas to advance to a far superior way of living and being.” And he calls on people to embrace technology and work together to create a better world.

In conclusion, Andreessen’s Techno-Optimist Manifesto is a call to action for all those who believe in technology’s power to change the world for the better, and a call to fight the techno-pessimist “enemies.”

One thing to bear in mind amid all this ‘almost utopia’ evocation: Andreessen and his partners stand to make cajillions more dollars from any continued growth in AI, given their early investments in companies innovating in that area. Talking his book? Of course. But, unarguably, he makes some good points.

Selfie-Scraper, Clearview AI, Wins Appeal Against U.K. Privacy Sanction

TechCrunch reported:

Clearview AI, the controversial U.S. facial recognition company, has won an appeal against a privacy sanction issued by the U.K. last year.

In May 2022, the Information Commissioner’s Office (ICO) issued a formal enforcement notice against Clearview, which included a fine of around £7.5 million (~$10M), after concluding the selfie-scraping AI firm had committed a string of breaches of local privacy laws. It also ordered the company, which uses the scraped personal data to sell an identity-matching service to law enforcement and national security bodies, to delete information it held on U.K. citizens.

Clearview filed an appeal against the decision, and in a ruling issued yesterday, its legal challenge to the ICO prevailed on jurisdictional grounds after the tribunal found that the company’s activities fall outside the scope of U.K. data protection law owing to an exemption related to foreign law enforcement.

Republicans Want Schools to Block Social Media or Lose Internet Funds

The Washington Post reported:

Republican lawmakers on Wednesday proposed legislation to block children from using social media in school by preventing access to the platforms on the networks of poorer schools that receive federal broadband subsidies, the latest in a growing crop of bills to bar younger users from sites such as TikTok and Instagram.

The measure illustrates how policymakers are turning to a broadening and increasingly aggressive arsenal of tools to try to restrict children’s online activity amid concerns about their safety.

Led by Sens. Ted Cruz (R-Tex.), Ted Budd (R-N.C.) and Shelley Moore Capito (R-W.Va.), the bill would require schools to prohibit youths from using social media on their networks in order to be eligible for the E-Rate program, which provides lower prices for internet access.

U.S. House Committee Investigating University of Maryland COVID Policy

CBS News Baltimore reported:

A U.S. House committee is now investigating a COVID-19 policy at the University of Maryland, College Park.

Starting in September, students who test positive have to quarantine off campus. For students who live on campus, that means either returning to their family’s home or booking off-campus lodging such as a hotel, with the cost falling squarely on the student’s shoulders.

Even though it’s been a little more than a month since its implementation, some UMD students still aren’t aware of the policy. WJZ explained the policy to several students on campus Monday, and every single one we spoke to expressed concern over the cost.

The concern about this policy now extends beyond campus. The U.S. House Select Subcommittee on the Coronavirus Pandemic is investigating the policy. In a letter sent to UMD President Darryll Pines on Friday, the committee questions how UMD’s $115 million in CARES Act funding has been spent and how quarantining students are supported.