Big Brother News Watch

Feb 06, 2024

Supreme Court to Weigh Whether COVID Misinformation Is Protected Speech + More

Supreme Court to Weigh Whether COVID Misinformation Is Protected Speech

STAT News reported:

As social media sites were flooded with misleading posts about vaccine safety, mask effectiveness, COVID-19’s origins and federal shutdowns, Biden officials urged platforms to pull down posts, delete accounts, and amplify correct information.

Now the Supreme Court could decide whether the government violated Americans’ First Amendment rights with those actions — and dictate a new era for what role, if any, officials can play in combating misinformation on social media.

The Supreme Court is set to hear arguments next month in a case that could have sweeping ramifications for federal health agencies’ communications in particular. Murthy v. Missouri alleges that federal officials coerced social media and search giants like Facebook, Twitter, YouTube, and Google to remove or downgrade posts that questioned vaccine safety, COVID’s origins, or shutdown measures. Biden’s lawyers argue that officials made requests but never forced companies.

Government defenders say that if the Court limits the government’s power, it could hamstring agencies scrambling to achieve higher vaccination rates and other critical public health initiatives. Critics argue that federal public health officials — already in the throes of national distrust and apathy — never should have tried to remove misleading posts in the first place.

Amazon ‘Censored’ COVID Vaccine Books After ‘Feeling Pressure’ From Biden White House: Docs

New York Post reported:

The Biden administration pressured Amazon to censor books related to COVID-19 vaccines in early 2021, citing concerns that the material contained “propaganda” and “misinformation,” internal company emails released by Rep. Jim Jordan (R-Ohio) appear to show.

The documents were obtained by the House Judiciary Committee and the Subcommittee on the Weaponization of the Federal Government via subpoena, Jordan said Monday in an X thread he dubbed “THE AMAZON FILES.”

Jordan, who chairs the House Judiciary Committee and Weaponization subcommittee, revealed that both panels will investigate the alleged censorship effort. “That’s right.  Amazon caved to the pressure from the Biden White House to censor speech,” Jordan said in a tweet.

In March, the Supreme Court will hear oral arguments in a lawsuit filed by attorneys general in Missouri and Louisiana alleging that the Biden administration colluded with social media companies to suppress the freedom of speech related to the COVID-19 pandemic.

Google Agrees to Pay $350 Million Settlement in Data Privacy Case

The Washington Post reported:

Google agreed to pay $350 million to settle a lawsuit years after a security lapse meant the personal data of users of its now-defunct social media website Google Plus was exposed to the internet.

The settlement comes just weeks after Google settled another lawsuit brought by users of its Chrome web browser who had their data tracked even though they were using private mode. That case could cost Google billions, though a specific amount has yet to be announced.

In 2018, Google realized that its systems had been exposing the data of millions of users of its Google Plus website to external developers for years, but executives chose not to notify the public or shareholders. An internal Google memo at the time pointed out that if the security lapse came to light, the company might be subject to the kind of scrutiny Facebook was then receiving for how its data was used by Cambridge Analytica in the 2016 election. Months later, the Wall Street Journal reported on the potential data breach, sending the company’s stock plummeting and triggering a wave of negative media reports.

In September, Google will face another trial, this one brought by the U.S. Department of Justice, which alleges the company has broken competition laws in the digital ad market.

Biden’s AI Plan to Censor You Revealed: Researchers Say Americans Can’t ‘Tell Fact From Fiction’

New York Post reported:

Twitter’s censorship of the Hunter Biden laptop story in 2020 could soon be possible on an industrial scale — thanks to AI tools being built with funding from his father’s administration, a report from Republicans on the House Judiciary Committee claimed Tuesday.

The report reveals how the Biden administration is spending millions on artificial intelligence research designed to make anti “misinformation” tools which could then be passed to social media giants.

And it discloses how researchers who got funding for the plan — known as “Track F” — emailed each other to say that Americans could not tell fact from fiction online, and that conservatives and veterans were even more susceptible than the public at large.

The report was published by the House Judiciary Committee’s Subcommittee on the Weaponization of Government, which is chaired by Jim Jordan (R-OH).

It casts new light on how funding from the National Science Foundation is being given to elite institutions including the Massachusetts Institute of Technology, the University of Wisconsin-Madison and the University of Michigan, for a program called “Trust & Authenticity in Communication Systems.”

Not Wearing a Mask During COVID Health Emergency Isn’t a Free Speech Right, Appeals Court Says

Associated Press reported:

A federal appeals court shot down claims Monday that New Jersey residents’ refusal to wear face masks at school board meetings during the COVID-19 outbreak constituted protected speech under the First Amendment.

The 3rd Circuit Court of Appeals issued a ruling in two related cases stemming from lawsuits against officials in Freehold and Cranford, New Jersey.

The suits revolved around claims that the plaintiffs were retaliated against by school boards because they refused to wear masks during public meetings. In one of the suits, the court sent the case back to a lower court for consideration. In the other, it said the plaintiff failed to show she was retaliated against.

Still, the court found that refusing to wear a mask during a public health emergency didn’t amount to free speech protected by the Constitution.

The court added: “Skeptics are free to — and did — voice their opposition through multiple means but disobeying a masking requirement is not one of them. One could not, for example, refuse to pay taxes to express the belief that ‘taxes are theft.’ Nor could one refuse to wear a motorcycle helmet as a symbolic protest against a state law requiring them.”

Study Confirms Fears That COVID Pandemic Reduced Kindergarten Readiness

Cincinnati Children’s Research Horizons reported:

Numerous studies have raised alarms about how the COVID-19 pandemic disrupted learning, development and mental health among school-aged children. But few have focused on the effects felt by the 22 million children under age 6 who were not yet in school.

Now a study published Feb. 5, 2024, in JAMA Pediatrics, led by researchers at Cincinnati Children’s in collaboration with the Cincinnati Public Schools, documents the pandemic’s harmful effects on kindergarten readiness. The findings are based on data from about 8,000 kindergartners who took a state-required Kindergarten Readiness Assessment (KRA) in 2018, 2019, and 2021, including 3,200 children who receive care through Cincinnati Children’s primary care clinics.

What the researchers found was concerning. Only 30% (or 3 in 10) of Cincinnati Public Schools students were assessed as kindergarten-ready in 2021, a significant decline from 40% (or 4 in 10) assessed as ready in 2018. Researchers found a similar pattern in the 3,200 children who receive care through Cincinnati Children’s primary care sites: 21.5% were deemed ready to learn in 2021 compared to 32% in 2018.

“This means that 7 of every 10 children in the Cincinnati Public Schools were considered not ready to learn when they entered kindergarten during the pandemic. This trend was even more pronounced among the more disadvantaged, Medicaid-covered children we see in our primary care clinics,” says the study’s lead author Kristen Copeland, MD, Division of General and Community Pediatrics.

Social Media Algorithms ‘Amplifying Misogynistic Content’

The Guardian reported:

Algorithms used by social media platforms are rapidly amplifying extreme misogynistic content, which is spreading from teenagers’ screens and into school playgrounds where it has become normalized, according to a new report.

Researchers said they detected a four-fold increase in the level of misogynistic content suggested by TikTok over a five-day period of monitoring, as the algorithm served more extreme videos, often focused on anger and blame directed at women.

While this particular study looked at TikTok, researchers said their findings were likely to apply to other social media platforms and called for a “healthy digital diet” approach to tackling the problem, rather than outright bans on phones or social media which “are likely to be ineffective”.

The study, by teams at University College London and the University of Kent, comes at a time of renewed concern about the impact of social media on young people. Research last week found young men from Generation Z — many of whom revere social media influencer Andrew Tate — are more likely than baby boomers to believe that feminism has done more harm than good.

Meta Announces New Updates to Help Teens on Its Platforms Combat Sextortion

TechCrunch reported:

Meta is introducing a few new updates and efforts to help teens on its platforms combat sextortion, the company announced on Tuesday. Most notably, Meta announced the expanded availability of Take It Down, an online tool that Meta helps finance and that is run by the National Center for Missing and Exploited Children (NCMEC). The company also updated its Sextortion hub with new guidance and is launching a global campaign to raise awareness about sextortion.

Take It Down is designed to prevent the spread of non-consensual intimate imagery, and is now available in 25 more languages after originally only launching in English and Spanish last year. It allows teens to take back control of their personal intimate photos and prevents ex-partners and scammers from spreading them online.

The system can be used by people under 18 who are worried their content has been or may be posted online. It can also be used by parents or trusted adults on behalf of a young person. Plus, it can be used by adults who are concerned about images taken of them when they are under 18.

Meta Will Label AI-Generated Content From OpenAI and Google on Facebook, Instagram

Ars Technica reported:

On Tuesday, Meta announced its plan to start labeling AI-generated images from other companies like OpenAI and Google, as reported by Reuters. The move aims to enhance transparency on platforms such as Facebook, Instagram, and Threads by informing users when the content they see is digitally synthesized media rather than an authentic photo or video.

Coming during a U.S. election year that is expected to be contentious, Meta’s decision is part of a larger effort within the tech industry to establish standards for labeling content created using generative AI models, which are capable of producing fake but realistic audio, images, and video from written prompts. (Even non-AI-generated fake content can potentially confuse social media users, as we covered yesterday.)

Meta says the technology for labeling AI-generated content will rely on invisible watermarks and metadata embedded in files. Meta already adds a small “Imagined with AI” watermark to images created with its public AI image generator.
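The metadata half of the labeling mechanism described above can be illustrated with a minimal sketch. It assumes a generator wrote an IPTC Digital Source Type value (such as `trainedAlgorithmicMedia`) into an image's embedded XMP packet, and scans the raw file bytes for it; the function name and the simple substring scan are illustrative assumptions, not Meta's actual detection pipeline.

```python
# Sketch: spotting an AI-provenance tag in a file's embedded XMP metadata.
# IPTC's Digital Source Type vocabulary defines values for AI-generated
# media; a generator that follows it writes one into the XMP packet.

AI_SOURCE_TYPES = (
    b"trainedAlgorithmicMedia",                # wholly AI-generated
    b"compositeWithTrainedAlgorithmicMedia",   # partly AI-generated
)

def xmp_flags_ai(data: bytes) -> bool:
    """Return True if the raw file bytes carry an XMP packet whose
    Digital Source Type marks the media as AI-generated."""
    start = data.find(b"<x:xmpmeta")
    if start == -1:
        return False  # no XMP packet at all
    end = data.find(b"</x:xmpmeta>", start)
    packet = data[start:end if end != -1 else len(data)]
    return any(tag in packet for tag in AI_SOURCE_TYPES)
```

Note that this covers only cooperative labeling: stripping the metadata defeats it, which is why the article also mentions invisible watermarks designed to survive re-encoding.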

However, Meta President of Global Affairs Nick Clegg mentioned that there’s currently no effective way to label AI-generated text, suggesting that it’s too late for such measures to be implemented for written content. This is in line with our reporting that AI detectors for text don’t work.

Government Hackers Targeted iPhone Owners With Zero Days, Google Says

TechCrunch reported:

Government hackers last year exploited three unknown vulnerabilities in Apple’s iPhone operating system to target victims with spyware developed by a European startup, according to Google.

On Tuesday, Google’s Threat Analysis Group, the company’s team that investigates nation-backed hacking, published a report analyzing several government campaigns conducted with hacking tools developed by several spyware and exploit sellers, including Barcelona-based startup Variston.

In one of the campaigns, according to Google, government hackers took advantage of three iPhone “zero-days,” which are vulnerabilities not known to Apple at the time they were exploited.

In this case, the hacking tools were developed by Variston, a surveillance and hacking technology startup whose malware has already been analyzed twice by Google in 2022 and 2023.

Feb 05, 2024

‘They Don’t Want It Discussed’ — Megyn Kelly Reacts to Moderna Surveilling Her Vaccine Criticism + More

‘They Don’t Want It Discussed’ — Megyn Kelly Reacts to Moderna Surveilling Her Vaccine Criticism

Reclaim the Net reported:

Political commentator Megyn Kelly has reacted to revelations that she was singled out and monitored by pharmaceutical giant Moderna after publicly sharing her adverse reactions to the COVID vaccine.

Last year, Kelly voiced regret over her decision to get the COVID shot, which, in her case, reportedly resulted in autoimmune complications. A healthy 52-year-old, Kelly said on her podcast that she doubted the necessity of getting vaccinated, as she contracted COVID “many times” afterward.

She further shared that her annual medical check-up had, for the first time ever, revealed a positive result for an autoimmune condition. When Kelly asked her rheumatologist, whom she described as New York’s finest, whether her vaccination and her COVID infection three weeks later could be linked to it, the doctor confirmed it. “And she said ‘yes.’ Yes. I wasn’t the only one she’d seen that with,” Kelly noted.

A year later, a separate investigation conducted by journalist Lee Fang revealed that Moderna had flagged Kelly under its controversial “misinformation” reporting system.

Moderna utilized artificial intelligence to scrutinize millions of online conversations worldwide, influencing the narrative around vaccines. Internal documents revealed that the company paid special attention to prominent vaccine dissenters.

Florida Grand Jury Investigating COVID Vaccines Releases First Report

Orlando Sentinel reported:

More than a year after the Florida Supreme Court granted Gov. Ron DeSantis’ request to impanel a statewide grand jury to investigate “criminal or wrongful activity” related to COVID-19 vaccines, the body released its first report and said its probe is “nowhere near complete.”

The grand jury’s 33-page report, released late Friday, said “lockdowns were not a good trade” and that “we have never had sound evidence of (masks’) effectiveness against SARS-CoV-2 transmission,” among other conclusions.

“In a way, this Grand Jury has allowed us to do something that most Americans simply do not have the time, access, or wherewithal to do: Follow the science,” the report said.

Conclusions in the report on masks contradict recommendations from the U.S. Centers for Disease Control and Prevention. The CDC’s guidance says research shows masks are effective in stopping the spread of COVID-19 and recommends as of late January that people with symptoms, people who have tested positive, and people who have been exposed to the virus should wear masks when indoors in public.

The report said the grand jury talked with doctors, scientists and professors “with a broad range of viewpoints.”

Zuck Brags About How Much of Your Facebook, Instagram Posts Will Power His AI

Gizmodo reported:

When you post something on Instagram or Facebook, you probably think you’re just sharing it with your friends, family, and maybe a few others. But that’s not all. Everything you’ve ever posted is being used to train Meta’s powerful AI. Mark Zuckerberg bragged about his vast library of content, which includes all your posts, reels, and comments, during Meta’s earnings call Thursday. Your social media profiles are now one of the most valuable datasets on Earth, and Meta claims it owns them.

“On Facebook and Instagram there are hundreds of billions of publicly shared images and tens of billions of public videos,” said Meta’s CEO on its earnings call last week. “We estimate [this] is greater than the Common Crawl dataset, and people share large numbers of public text posts in comments across our services as well.”

This is Meta’s next big play. Instagram and Facebook have addicted users for the last 20 years, making sure to monetize us through advertisers every step of the way. Now, they’re revisiting your old posts, your special moments, and your big life updates, and using them to create billion-dollar AI tools.

Zuckerberg’s braggadocious claim about Meta’s very large dataset comes shortly after The New York Times sued OpenAI over intellectual property. But Meta is pulling an old trick out of its playbook: extracting as much value out of Instagram and Facebook users as humanly possible, and totally owning your online self.

‘Inevitable Fire Sale’ of 23andMe to ‘Overseas PE Firm’ Could Be National Security Risk

ZeroHedge reported:

Several years ago, DNA-testing company 23andMe began publicly trading on Nasdaq following a deal to merge with VG Acquisition Corp., a special-purpose acquisition company founded by billionaire Richard Branson. The company was hyped by Hollywood elites such as Oprah and Lizzo as its market capitalization topped $6 billion in late 2021.

Fast forward to this past week: 23andMe has lost 94% of its market cap since its November 2021 peak, and Nasdaq has threatened to delist the penny stock, which closed around 69 cents per share on Friday.

Anne Wojcicki, 23andMe’s chief executive, has led the cash-burning startup that has never turned a profit. After three rounds of layoffs and a subsidiary sale, a Wall Street Journal report said the company “could run out of cash by 2025.”

A healthcare investor named Will Manidis asked this question on X: “Within months you will be able to buy genomics data from 14 million Americans for +/- $200m?” Manidis warned in the viral post: “The inevitable fire sale of this mess to an overseas PE firm is going to be a national security matter on the scale of which we haven’t seen in healthcare in years.”

A possible fire sale of 23andMe and its stockpile of millions of DNA samples of Americans is something to keep a close eye on.

Meta and Mark Zuckerberg Must Not Be Allowed to Shape the Next Era of Humanity

The Guardian reported:

A combination of tech exceptionalism, brazen defiance and lots and lots of money has enabled Mark Zuckerberg’s company to accumulate vast market power over the two decades of its existence amid sclerotic antitrust oversight and “repeated and deliberate policy failures,” in the words of Daniel A Hanley of the Open Markets Institute.

In the absence of federal privacy legislation, public interest data governance and meaningful antitrust enforcement, Meta, along with Google, was able to create a new economic order built on the parasitic logic of surveillance and behavior modification in what technologist Shoshana Zuboff has aptly dubbed surveillance capitalism.

We did not vote for or acquiesce to this system. It was foisted upon us by Mark Zuckerberg’s “move fast and break things” ethos and the failure to prevent the vast consolidation of power in a handful of Big Tech firms, whose insatiable quest for “permissionless innovation” has come at great cost. Now generative AI is poised to upend labor markets and democratic institutions around the world.

If the past 20 years have taught us anything about this one-time upstart turned tech titan, it is that Meta and Zuckerberg must not be allowed to unilaterally shape the next era of humanity. As the tech world celebrates this milestone, it is time to demand accountability for the harms propagated by powerful tech companies such as Meta and break up the tech behemoth before it wreaks further havoc on individuals, society and the economy. Dismantling Meta’s digital legacy is imperative if we want to wrest back control and save democracy.

Meta Surges With Record $196 Billion Gain in Stock Market Value

Reuters reported:

Meta Platforms added $196 billion in stock market value on Friday, marking the biggest one-day gain by any company in Wall Street history after the Facebook parent declared its first dividend and posted robust results.

Meta’s (META.O) stock surged 20.3% for the session, also recording its biggest one-day percentage increase in a year and its third biggest since its 2012 Wall Street debut. Its stock market value now stands at more than $1.22 trillion.

TSA Facial Recognition Tech Is Latest Security Theater Absurdity

New York Post reported:

Get ready for more TSA follies: The agency that oversees our Kafka-esque airport experiences is set to add another layer of security theater.

This time it’s plans for a wide-scale rollout of its controversial facial recognition tech at more than 400 airports. The tech is purported to capture real-time, photo-based biometric data on each traveler so it can be matched against their ID.

Color us skeptical: The Transportation Security Administration is a byword for government ineptitude. It’s accidentally published confidential guides to how passenger screening works. This is the agency we’re going to trust to run complex biometric technology correctly?

And while we generally don’t buy the “privacy concerns” here (airports are public places, after all), it’s hard to trust TSA’s vow that any info collected “will not be used for surveillance or any law enforcement purpose,” when it already got in hot water back in the Bush years for misappropriating airline passenger data and lying to Congress about it.

Mother Whose Son Died From Drugs Bought on Social Media Wants Stronger Protections for Kids

FOXBusiness reported:

Amy Neville, whose teenage son died of fentanyl poisoning after taking counterfeit pills he obtained from a drug dealer on Snapchat, is calling for Snap and other social media companies to implement stronger safeguards for children on those platforms.

“These days, my life’s work is traveling the country and educating folks on social media harms and the drug crisis as it is right now, and that all stems from the fact that I lost my own child, Alexander, who was 14 years old when we lost him,” Neville, president of the not-for-profit Alexander Neville Foundation, told FOX Business.

Neville and a group of other families who lost children to overdoses caused by drugs obtained via Snapchat filed a lawsuit against Snap, the company that operates the social media platform. She said the suit is the first of its kind and that it’s progressing to the discovery stage this month.

‘If Instagram Didn’t Exist, It Wouldn’t Have Happened’: A Mother’s Search for Her Trafficked Daughter

The Guardian reported:

Robyn Cory’s daughter Kristen was 15 when she was allowed to open her own Instagram account. “We thought we’d been responsible and done everything we could to make it safe,” says Cory. Months later, Kristen disappeared from the family home after being groomed on Instagram’s direct message service by a criminal gang, who then sold her for sex on the streets of Houston.

Her daughter never recovered from her ordeal, Cory says. Kristen returned home but has since gone missing after being trafficked again. Her mother does not know if she is still alive. Cory blames the gang who trafficked her daughter for destroying her life. She also blames Instagram, which she believes played a critical role in her daughter’s sex trafficking.

“If Instagram didn’t exist, this wouldn’t have happened to my daughter,” she says. “Instagram is why it was so easy [for these people] to do this.”

“My message for other parents is: don’t let your kids have social media. Instagram needs to take measures to stop kids from signing up for accounts and to stop them from receiving messages from people they don’t know. They need to be protected.”

Feb 01, 2024

Hawley Calls Out Facebook After Hearing: Zuckerberg Regularly Censored Conservatives, Not Child Predators + More

Hawley Calls Out Facebook After Hearing: Zuckerberg Regularly Censored Conservatives, Not Child Predators

Fox News reported:

Senate Judiciary Committee member Josh Hawley, R-Mo., who successfully convinced Meta CEO Mark Zuckerberg to rise and apologize to victims’ families during a hearing on child exploitation on social media, told Fox News that Zuckerberg’s platform gladly censored conservatives but has done little to stem predators.

On “Hannity,” host Sean Hannity asked the lawmaker if he believes Zuckerberg knows the extent to which bad actors use Facebook and Instagram to exploit, target and sexualize children.

“He absolutely knows what’s going on,” Hawley replied, adding senators have heard from Facebook whistleblowers who claim to have alerted Zuckerberg’s office they had collected information about potential exploitation that the executive purportedly “ignored.”

Hawley suggested Zuckerberg take 10% of his more than $140 billion net worth and allocate it to helping victims affected by exploitation on Meta platforms, and use the funds to target and remove potential sexual predators.

He noted how conservatives, especially during the COVID-19 pandemic, collectively surmised that Facebook was censoring or suppressing speech on key topics like alternative antiviral pharmaceuticals and vaccine hesitancy, and, later, the dissemination of the New York Post’s verified reporting on the existence and contents of Hunter Biden’s laptop.

Parents of Children Victimized on Social Media Share Horror Stories With CEOs in Senate Hearing

New York Post reported:

Parents of children victimized on social media shamed the CEOs of America’s most prominent platforms as they entered a Senate hearing Wednesday — with many family members holding pictures of their deceased or scarred children while an emotional impact video was played.

A crowd of forlorn parents lined the front gallery of the packed Senate Judiciary Committee chamber as Committee members grilled the executives over their failure to protect underage users on their platforms.

An audible hiss spread from the gallery as the CEOs filed into their seats and the parents skewered them with penetrating glares. On hand at the hearing were: Meta CEO Mark Zuckerberg, TikTok CEO Shou Chew, X CEO Linda Yaccarino, Snap Inc. CEO Evan Spiegel, and Discord CEO Jason Citron.

Once the execs entered, the crowd raised photos of their children who had either committed suicide or been psychologically damaged after being victimized by predators they met on Facebook and Instagram.

Watch: Lawmakers and Tech CEOs Push Online Age and ID Verification Proposals During Hearing on Child Safety

Reclaim the Net reported:

As we previously reported as something to look out for in 2024, U.S. lawmakers are intent on pushing online ID and age verification and on ending online anonymity, despite constitutional concerns.

And during a hearing today, tech CEOs supported proposals that would greatly expand the requirements for online ID verification and erode the ability to use the internet without connecting your online activity to your identity.

The proposals are being pushed in the name of protecting children online but would impact anyone who doesn’t want to tie all of their online speech and activity to their real ID — over surveillance, censorship, or whistleblowing concerns.

In response to criticism from lawmakers, Meta CEO Mark Zuckerberg pushed for far-reaching online age verification standards that would impose age verification at the app store level — a proposal that would mean the vast majority of mobile app usage could be tied to a person’s official identity.

Senators Find Tech CEOs’ Responses Hollow After Four-Hour Hearing

The Verge reported:

During an unusually emotional hearing on Wednesday, senators spent hours trying to get a group of five tech CEOs to confront the harms their platforms have caused and submit to more checks on their power.

The Senate Judiciary Committee invited the CEOs of Meta, TikTok, Snap, X, and Discord to face the families of children who’d died following cyberbullying, sexual exploitation, or other harmful events on their platforms. They asked why Section 230, the law that shields online platforms from being held liable for their users’ posts, should stop these families from facing them in court.

The CEOs expressed condolences for the families hurt on their services but reiterated the work and investment they’ve already made to keep users safe. Advocates and lawmakers were left unimpressed by the CEOs’ remarks — but emboldened to push forward their proposals.

A package of five kids online safety bills has already passed out of the Senate Judiciary Committee with unanimous votes, including the EARN IT Act, which seeks to weaken Section 230 protections, and the Cooper Davis Act, which would require platforms to report known illicit drug trafficking on their sites to the Drug Enforcement Administration. KOSA, which was introduced in another committee, already has the support of nearly half the Senate.

14 Massachusetts Colleges Land on Restrictive Free Speech List: ‘Censorship and Terrible Policies’

Boston Herald reported:

More than a dozen Bay State colleges have been called out on a list of schools with policies that “clearly and substantially restrict free speech,” a contentious issue in recent months amid student protests during the Israel-Hamas war.

The number of colleges and universities with the harshest student speech codes increased for the second year in a row, according to a new report from the Foundation for Individual Rights and Expression.

FIRE’s “Spotlight on Speech Codes” report rates 489 of America’s top colleges and universities on their student speech policies. More than 85% of those schools have at least one policy that could be used to improperly censor students for constitutionally protected speech, FIRE reported.

This year’s report found that 98 colleges — or 20% — got a “red light” rating, meaning they have at least one policy that “clearly and substantially restricts freedom of speech.”

Massachusetts is home to 14 of those colleges: Boston College, Northeastern University, Tufts University, UMass Lowell, Fitchburg State University, Framingham State University, Worcester State University, Bridgewater State University, Salem State University, Clark University, College of the Holy Cross, Massachusetts College of Liberal Arts, Mount Holyoke College, and Westfield State University.

Transparency Troubles: The Global Disinformation Index Faces Scrutiny Over Government Ties and Biased Practices

Reclaim the Net reported:

The Global Disinformation Index (GDI), a U.S. government-funded pro-censorship organization, has come under fire for lacking transparency, ironically the very failing for which it labels non-mainstream websites.

Despite hypocritically casting aspersions on sites that reject the mainstream narrative on many issues, the GDI, as per a report by the Washington Examiner, exhibits a conspicuous absence of this very transparency in its operations.

Billing itself as nonpartisan and objective while routinely favoring leftist narratives, the GDI has received over $100,000 from the State Department’s Global Engagement Center. Part of the score it assigns to online platforms stems from the possibility of controversial interests emerging from shadowy ownership structures — a principle it doesn’t appear to abide by itself.

AI Can Speed Drug Discovery. But Is It Really Better Than a Human?

Bloomberg reported:

In mid-January, Genentech started recruiting 200 patients to test whether one of its experimental drugs can tame ulcerative colitis, a painful, incurable type of inflammatory bowel disease. Until then, the compound had only been given during experiments to treat lung and skin disorders.

Deciding whether to shift a drug for use against a different disease than originally intended often takes years of painstaking lab work, but the California biotech did it in just nine months. The difference: artificial intelligence, which the company says helped its researchers scan millions of possibilities to confirm the drug could be useful against diseases affecting the cells of the colon.

“It’s not like the human is not needed anymore,” says Aviv Regev, a Harvard University and Massachusetts Institute of Technology computational biologist who took a leave from her academic work to run Genentech’s research and development. “But the human all of a sudden gets the superpower.”

Jan 31, 2024

Tech CEOs Told ‘You Have Blood on Your Hands’ at U.S. Senate Child Safety Hearing + More

Tech CEOs Told ‘You Have Blood on Your Hands’ at U.S. Senate Child Safety Hearing

Reuters reported:

U.S. senators on Wednesday grilled leaders of the biggest social media companies and said Congress must quickly pass legislation, as one lawmaker accused the companies of having “blood on their hands” for failing to protect children from escalating threats of sexual predation on their platforms.

The hearing marks the latest effort by lawmakers to address the concerns of parents and mental health experts that social media companies put profits over guardrails that would ensure their platforms do not harm children.

“Mr. Zuckerberg, you and the companies before us, I know you don’t mean it to be so, but you have blood on your hands,” said Republican Senator Lindsey Graham, referring to Meta (META.O) CEO Mark Zuckerberg. “You have a product that’s killing people.”

Zuckerberg testified along with X CEO Linda Yaccarino, Snap (SNAP.N) CEO Evan Spiegel, TikTok CEO Shou Zi Chew and Discord CEO Jason Citron.

In the hearing room, dozens of parents held pictures of their children who they said had been harmed due to social media. Some parents jeered Zuckerberg, whose company owns Facebook and Instagram, during his opening statement and shouted comments at other points during the hearing.

Mark Zuckerberg Was Forced to Physically Stand Up and Face Families Affected by Online Abuse

Insider reported:

Meta CEO Mark Zuckerberg had to turn and face the families of children who were harmed by social media companies head-on during a contentious Senate hearing on Wednesday.

The shocking moment was prompted by Republican Sen. Josh Hawley during an intense hearing on online child safety before the Senate Judiciary Committee.

Hawley asked if Zuckerberg had apologized to the families, saying “Your products are killing people.” He then asked Zuckerberg if he’d like to directly apologize to the families who attended the hearings whose children were harmed or died from the impacts of social media.

Many senators in the hearing floated stripping away legal protections from social media companies, meaning they could be sued for child pornography or other sexually explicit material on their platforms.

New York Judge Rejects Madison Square Garden’s Bid to Dismiss Biometric Privacy Case Involving Facial Recognition

Reclaim the Net reported:

A New York judge has denied Madison Square Garden Entertainment’s motion to dismiss a biometric privacy lawsuit. The litigation revolves around a contentious policy, enacted by MSGE, which deployed facial recognition technology to prohibit certain attorneys from gaining entry into the entertainment giant’s renowned venues.

The lawsuit had previously survived MSGE’s initial attempt to dismiss it. The entertainment firm once again finds itself rebuffed in the District Court for the Southern District of New York, despite raising multiple arguments pleading for a dismissal.

The suit will move forward, the presiding judge ruled, focusing on whether MSGE’s tactics violate the city’s Biometric Identifier Information Protection Law. While the judge accepted MSGE’s arguments for dismissing the plaintiffs’ claims of civil rights violations and unjust enrichment, the alleged breach of the city’s biometrics statute remains an open question.

U.S. Receives Thousands of Reports of AI-Generated Child Abuse Content in Growing Risk

Reuters reported:

The U.S. National Center for Missing and Exploited Children (NCMEC) said it had received 4,700 reports last year about content generated by artificial intelligence that depicted child sexual exploitation.

The NCMEC told Reuters the figure reflected a nascent problem that is expected to grow as AI technology advances.

In recent months, child safety experts and researchers have raised the alarm about the risk that generative AI tech, which can create text and images in response to prompts, could exacerbate online exploitation.

The NCMEC has not yet published the total number of child abuse content reports from all sources that it received in 2023, but in 2022 it received reports of about 88.3 million files.

Nevada Files Lawsuit Against Facebook, Instagram, Messenger, Snapchat and TikTok: ‘Hazard to Public Health’

FOXBusiness reported:

The state of Nevada is suing some of the most popular social media companies, alleging that their apps are intentionally addictive and have contributed to a decline in mental health for its users, especially teens and young adults.

Nevada Attorney General Aaron Ford filed civil lawsuits Tuesday against the parent companies of Facebook, Instagram, Messenger, Snapchat and TikTok apps, claiming they are a “hazard to public health” and that they use “false, deceptive and unfair marketing” to directly appeal to youth.

The lawsuit also says the respective apps’ algorithms are “designed deliberately to addict young minds and prey on teenagers’ well-understood vulnerabilities.”

Mark Zuckerberg Says Apple and Google Should Manage Parental Consent for Apps, Not Meta

TechCrunch reported:

In today’s online safety hearing, Meta CEO Mark Zuckerberg again pushed back at the idea that businesses like his should be responsible for managing parental consent systems for kids’ use of social media apps, like Facebook and Instagram. Instead, he suggested the problem should be handled by the app store providers, like Apple and Google.

This is not the first time Meta has floated the idea. Last November, the company introduced a proposal that argued that Apple and Google should do more with regard to kids’ and teens’ safety by requiring parental approval when users aged 13 to 15 download certain apps.

His suggestion is a clever maneuver by Meta, as it effectively turns Apple’s desire to profit from the apps on its app stores against it. Today, Apple takes a 15% to 30% commission on all in-app purchases made through iOS apps, depending on the business’s size and other factors.

Or, simply put, Meta is saying that if Apple wants to be the payment processor for all iOS apps, at a cost to Meta’s profits, then parental consent over app usage should be Apple’s problem, too.

NY Government Gears Up for Fight Against TikTok, Facebook and Other Social Media Giants

Gothamist reported:

Mayor Eric Adams says it’s a public health hazard. Gov. Kathy Hochul calls it “poison.” Attorney General Letitia James claims it’s a “crisis.” In recent weeks, some of New York’s top elected officials have used their bully pulpits to take aim against what, for them, has become a common enemy: Social media and its effect on kids.

Lawmakers across 35 states and Puerto Rico introduced legislation last year that was spurred by concern over social media’s effect on youth mental health, according to the National Conference of State Legislatures. Of those, 12 states adopted measures with varying degrees of action, including New Jersey, which launched a commission to study the issue.

Now, New York is on the verge of joining them, with Hochul and James pushing a pair of measures that would restrict social media platforms from collecting data from minors and exposing them to addictive algorithms.

And in New York City, Adams’ administration issued a public health advisory last week warning parents not to give their kids access to smartphones or other devices that can access social media until at least age 14.

Europe Is Rushing to Tighten Oversight of AI. The U.S. Is Taking Its Time.

Yahoo!Finance reported:

The European Union is applying new legal restraints around artificial intelligence this year. The U.S. is still trying to figure out how far it wants to go.

The European Parliament in December reached a provisional agreement on the world’s first comprehensive legislation to regulate AI, focusing on uses instead of the technology.

The new rules range in severity depending on how risky the application is, with facial recognition and certain medical innovations requiring approval before being made available to customers.

Federal laws specific to AI don’t exist yet in the U.S., and it’s unknown whether that will happen. The EU’s actions, however, could still have a chilling effect on companies based in this country.