
Big Brother News Watch

May 15, 2024

Artificial Intelligence Surveillance Coming to NJ’s Largest School District + More

Artificial Intelligence Surveillance Coming to NJ’s Largest School District

New Jersey 101.5 reported:

More than 7,000 surveillance cameras that can detect problems in real-time using artificial intelligence will be installed in the state’s largest school district before next September.

A $12-million contract to install the cameras at the district’s 63 schools was approved by the Newark school board at a meeting earlier this month. Funds left over from the American Rescue Plan that expire in September will cover a portion of the contract, according to district Business Administrator Valerie Wilson.

The Sayreville company that was awarded the two-year contract, Turn-Key Technologies, will also create a data warehouse for the digital camera footage. Newark plans to have the cameras installed by the end of August so they are available for the next school year, Chalkbeat Newark reported.

Cameras with facial recognition capabilities and the ability to detect and read license plates will be placed inside and outside school buildings. School officials said the new surveillance system will not be an invasion of privacy.

WHO Needs a Treaty? ‘One Health’ Is Already Firmly Established in America

Technocracy News reported:

While the World Health Organization has been gaslighting the world about the need for a global “Pandemic Agreement,” the Feds had already rolled out the infrastructure to support it when nobody was watching. While the United Nations and its WHO should be kicked out of New York into the Atlantic Ocean, the real problem is our own government, which has been front-running the whole operation for years. It’s called “One Health.”

Initially conceived by the World Wildlife Conservation Association in 2004, the One Health Commission (see below) was funded by the Rockefeller Foundation in 2009 with the objective of spreading the concept widely. It worked.

In 2023, not surprisingly, the CDC and the HHS (Department of Health and Human Services) conducted a study: “National One Health Framework To Address Zoonotic Diseases and Advance Public Health Preparedness in the United States: A Framework for One Health Coordination and Collaboration Across Federal Agencies.” So, off to the races they went, spreading the contagion (as a bona fide fact, not!) throughout several government agencies.

Basically, One Health intends to control all facets of life: economics, water, public policy, occupational health risks, agriculture, global trade, commerce, environmental health, ecosystems, communications, climate change and, incidentally, pandemics and human health.

COVID Vaccine Mandate for NSW Health Workers Set to Be Scrapped After Three Years

Sky News reported:

The COVID-19 vaccine mandate implemented for NSW Health workers three years ago is set to be scrapped this week.

Under the current policy, all NSW Health workers are required to have received two doses of vaccine to work or be employed in connection with a NSW Health agency. In March this year, the department flagged it was reviewing the policy, with NSW Health Minister Ryan Park saying Australians needed to get “back on with life.”

“That means having a look at the measures we put in place during this period and seeing whether they still apply,” he said.

As States Loosen Childhood Vaccine Requirements, Health Experts’ Worries Grow

Stateline reported:

Louisiana Republican state Rep. Kathy Edmonston believes no one ought to be required to vaccinate their children. So, she wants schools to proactively tell parents that it’s their right under Louisiana law to seek an exemption. “It’s not the vaccine itself, it is the mandate,” Edmonston told Stateline. “The law is the law. And it already says you can opt out if you don’t want it. If you do want it, you can go anywhere and get it.”

Although Louisiana ranks near the bottom of states on most health indicators, nearly 90% of kindergarten children statewide have complete vaccination records, according to data from the Louisiana Department of Health from last school year. That’s even as Louisiana maintains some of the broadest exemptions for personal, religious and moral reasons; the state requires only a written notice from parents to schools.

Edmonston has sponsored legislation that would require schools to provide parents with information about the exemptions. The bill is intended to ensure parents aren’t denied medically necessary information, she said.

Edmonston’s bill is one of dozens this session that aim to relax vaccine requirements, according to a database maintained by the National Conference of State Legislatures, a nonpartisan research organization that serves lawmakers and their staffs. Most of the bills have either died in committee or failed to advance, but a few have become law.

North Carolina Bill to Curb Mask-Wearing in Protests Could Make It Illegal for Medical Reasons Too

Associated Press reported:

People wearing a mask during protests in North Carolina could face extra penalties if arrested, under proposed legislation that critics say could make it illegal to wear a mask in public as a way to protect against COVID-19 or for other health reasons.

Republican supporters say the legislation, which passed its first committee Tuesday, was prompted in part by the recent wave of protests on university campuses nationwide — including at the University of North Carolina at Chapel Hill — against Israel’s war in Gaza.

While the main thrust of the bill enhances penalties for people wearing a mask during a crime or intentionally blocking traffic during protests, most concerns centered on the health and safety exemption. According to the bill’s summary, people could no longer wear masks in public for medical reasons.

Social Media Bills Aim to Protect Kids’ Health

Politico reported:

Senate leaders are gauging support for three bills promoting children’s online safety, a Senate aide told our Rebecca Kern. The Kids Online Safety Act, which Marsha Blackburn (R-Tenn.) and Richard Blumenthal (D-Conn.) sponsored, would require social media platforms to prevent the spread of harmful content, such as material related to suicide or eating disorders, on their sites.

Why it matters: Surgeon General Vivek Murthy has warned that social media might be contributing to an increase in mental illness among youth. An advisory from Murthy last year said adolescents who spend more than three hours a day on social media face double the risk of experiencing poor mental health outcomes.

Assessing support, and opposition, is known as hotlining. If no one objects, a bill sponsor can call for passage by unanimous consent, avoiding the lengthy debate that accompanies other Senate legislation.

Behind the scenes: Lawmakers started additional hotlines Thursday to push forward the Children and Teens’ Online Privacy Protection Act (COPPA 2.0) — a bill by Sen. Ed Markey (D-Mass.) to update a 1998 children’s privacy law — and the Kids Off Social Media Act — a bill by Sen. Brian Schatz (D-Hawaii) to bar kids under 13 from apps — according to a Senate aide, who was granted anonymity to speak about the legislative maneuvering.

Connected Cars’ Illegal Data Collection and Use Now on FTC’s ‘Radar’

Ars Technica reported:

The Federal Trade Commission’s Office of Technology has issued a warning to automakers that sell connected cars. Companies that offer such products “do not have the free license to monetize people’s information beyond purposes needed to provide their requested product or service,” it wrote in a blog post on Tuesday. Just because executives and investors want recurring revenue streams, that does not “outweigh the need for meaningful privacy safeguards,” the FTC wrote.

Based on your feedback, connected cars might be one of the least popular modern inventions among the Ars readership. And who can blame them? Last January, a security researcher revealed that a vehicle identification number was sufficient to access remote services for multiple makes, and yet more had APIs that were easily hackable.

Later, in 2023, the Mozilla Foundation published an extensive report examining the various automakers’ policies regarding the use of data from connected cars; the report concluded that “cars are the worst product category we have ever reviewed for privacy.”

The FTC is not taking specific action against any automaker at this point. Instead, the blog post is meant as a warning to the industry. It says that “connected cars have been on the FTC’s radar for years,” although the agency appears to have done very little other than hold workshops in 2013 and 2018 and publish guidance for consumers reminding them to wipe the data from their cars before selling them.

U.S. Lawmakers Seek $32 Billion to Keep American AI Ahead of China

Reuters reported:

A bipartisan group of senators, including Majority Leader Chuck Schumer, on Wednesday called on Congress to approve $32 billion in funding for artificial intelligence research to keep the U.S. ahead of China in the powerful technology.

If China is “going to invest $50 billion, and we’re going to invest in nothing, they’ll inevitably get ahead of us. So that’s why even these investments are so important,” Schumer said Wednesday. The roadmap could help the U.S. address mounting worries about China’s advances in AI. Washington fears Beijing could use it to meddle in other countries’ elections, create bioweapons or launch muscular cyberattacks.

May 14, 2024

Judge Tosses Suit Accusing MSG, James Dolan of Using Facial ID for Profit + More

Judge Tosses Suit Accusing MSG, James Dolan of Using Facial ID for Profit

New York Post reported:

A federal judge this week tossed a data-privacy lawsuit accusing Madison Square Garden of illegally using facial recognition technology to scare off the arena’s legal opponents. “As objectionable as the defendant’s use of biometric data may be, it does not . . . violate” privacy laws, Manhattan federal Judge Lewis Kaplan wrote in a five-page ruling.

Kaplan rejected a January recommendation by U.S. Magistrate Judge James Cott that the class-action lawsuit accusing MSG Entertainment and owner James Dolan of illegally using biometric data for personal gain should proceed. Instead, Kaplan in Tuesday’s decision said he disagreed with claims that MSG “profited” by collecting facial images in part to scare off future lawsuits.

Dolan has come under fire for his controversial use of creepy facial-recognition software to bar unwelcome attorneys and other critics from entering the World’s Most Famous Arena — home of the Rangers and Knicks — and sister venues like Radio City Music Hall.

An MSG spokesperson hailed the judge’s decision, saying, “As we’ve always said, our policies and practices are 100% legal, and we’ve always made clear we don’t sell or profit from customer data.”

The suit filed on behalf of two New Yorkers, Aaron Gross and Jacob Blumenkrantz, potentially would have covered the millions of people who’ve attended events at MSG-owned venues since the city’s biometric data protection law went into effect in July 2021.

Elon Musk’s X Scores a Win in His Feud With Australia

Insider reported:

An Australian court handed Elon Musk a victory Monday in what he described as an ongoing fight for “free speech.”

The court ruled it would not extend a temporary block on footage posted to X of a church stabbing that occurred in a Sydney suburb on April 15. X had opposed the block, which was initially ordered on April 22, the Australian Broadcasting Corporation reported.

That said, Musk isn’t completely in the clear. A final hearing is set to decide the matter in coming weeks, according to the ABC. “Not trying to win anything,” Musk wrote in response to a commenter on X. “I just don’t think we should be suppressing Australians’ rights to free speech.”

It’s just one of many international battlegrounds where Musk is waging war in the name of content moderation. He’s been feuding with a judge on the Brazilian Supreme Court over an order to block accounts and has announced he will fund legal challenges to Ireland’s upcoming hate speech laws.

EU’s Controversial Digital ID Regulations Set for 2024, Mandating Big Tech Compliance by 2026

Reclaim the Net reported:

The EU’s new digital ID rules, the Digital Identity Regulation (eIDAS 2.0), are about to come into force on May 20, mandating compliance from Big Tech and member countries in supporting the EU Digital Identity (EUDI) Wallet.

However, work on the EUDI Wallet is not complete, as several pilots are planned for 2025 to consolidate implementation of the rules. According to the framework the European Council recently passed, which has now been officially published, the deadline for the digital ID wallet to be recognized and made available is 2026. For now, it will be used in several scenarios, including accessing government services and age verification, reports note.

As things stand now, that deadline means that while the wallet scheme must become fully functional by that time, it will not be obligatory for citizens of the EU’s 27 members, and protection against discrimination is promised to those choosing not to opt in.

Digital IDs can also be used to control access to essential services, potentially manipulating social or political compliance. The extensive data collection involved can lead to profiling and discrimination. Furthermore, these IDs are susceptible to hacking and identity theft, placing individuals at risk of financial and reputation damage. Often, citizens are coerced into participating without genuine consent, and the lack of transparency and oversight in these systems increases the risk of misuse.

Gov to Inject $288 Million Into Digital ID

iTnews reported:

The federal government is set to include $288.1 million in funding in the federal budget to boost the adoption of its Digital ID system over the next four years. The funding is an 11-fold increase compared to the $24.7 million included in the previous budget. The announcement comes two weeks after over one million sign-in and identity details of ClubsNSW patrons were exposed in a data breach.

The government will allocate $23.4 million over two years for the Australian Taxation Office (ATO), and Finance and Services Australia to pilot the use of government digital wallets and verifiable credentials.

The lion’s share of the Digital ID funding — $155.6 million — will be given to the ATO over two years. The funding aims to improve the government’s existing myGovID credential — which has 12 million users — and relationship authorization manager (RAM) service that allows people to access government services on behalf of a business using a Digital ID.

The government has already spent almost $750 million on the digital identification system and had its Digital ID Bill 2023 pass the Senate this year.

FAA Reauthorization Skips Proposed Airport Facial Recognition Ban, Funds Modernization

Biometric Update reported:

The U.S. Senate has approved the mandate of the Federal Aviation Administration for another five years, without a proposed amendment that would have barred the expansion of facial recognition at America’s airports.

The legislation to reauthorize the FAA was approved by an 88 to 4 vote. The amendment to block the expansion of face biometrics technology deployed by the TSA until at least 2027 was proposed by Senator Jeff Merkley (D-Ore.). It would also have required “simple and clear signage, spoken announcements, or other accessible notifications” of the option not to participate.

Merkley claims that the TSA began informing travelers of their right to opt out with “a little postcard” after he complained that the choice was not being made clear.

Similar proposed bans have been introduced several times by Sen. Merkley and peers in the upper chamber. Like the Real ID standard for American driver’s licenses, the introduction of facial recognition in airports has faced pushback since it was first approved in the aftermath of the 9/11 terrorist attacks.

Cyberattack Cripples Major U.S. Healthcare Network

U.S. News & World Report reported:

Ascension, a major U.S. healthcare system with 140 hospitals in 19 states, announced late Thursday that a cyberattack has caused disruptions at some of its hospitals.

“Systems that are currently unavailable include our electronic health records system, MyChart (which enables patients to view their medical records and communicate with their providers), some phone systems, and various systems utilized to order certain tests, procedures and medications,” Ascension said in the statement.

The cyberattack on Ascension is just one in a series that has hit U.S. healthcare organizations.

In February, Change Healthcare, a subsidiary of healthcare giant UnitedHealth Group, was hit by a ransomware attack that disrupted billing at pharmacies nationwide and compromised the personal data of up to a third of Americans, CNN reported.

Minnesota Bill Would Do More Harm Than Good to Kids’ Online Safety

Star Tribune reported:

Legislation under consideration in Minnesota that would require any website that may “reasonably likely be accessed” by minors to take certain steps to protect them would actually have severe unintended consequences affecting the privacy and security of both kids and adults.

While the authors’ goal is admirable, the reality of this legislation is troubling and falls short for a number of reasons.

Under the proposal, billed as the Age-Appropriate Design Code Act (HF 2257/SF 2810), companies with websites “likely to be accessed” by a minor (aka every website) will be forced to require proof of age. This may include a wide range of personal information such as birth dates, addresses, pictures and government IDs.

In practice, this legislation will result in every website amassing a massive trove of data on every one of its users — be they adults or children. This will be a ripe target for hackers and criminals. The fact that every website will have to comply means that the protection of users’ data is only as good as the weakest security of any single website they visit.

AI Has Already Figured Out How to Deceive Humans

Insider reported:

AI can boost productivity by helping us code, write, and synthesize vast amounts of data. It can now also deceive us. A range of AI systems have learned techniques to systematically induce “false beliefs in others to accomplish some outcome other than the truth,” according to a new research paper.

The paper focused on two types of AI systems: special-use systems like Meta‘s CICERO, which are designed to complete a specific task, and general-purpose systems like OpenAI’s GPT-4, which are trained to perform a diverse range of tasks.

While these systems are trained to be honest, they often learn deceptive tricks through their training because they can be more effective than taking the high road. “Generally speaking, we think AI deception arises because a deception-based strategy turned out to be the best way to perform well at the given AI’s training task. Deception helps them achieve their goals,” the paper’s first author Peter S. Park, an AI existential safety postdoctoral fellow at MIT, said in a news release.

Even general-purpose systems like GPT-4 can manipulate humans. “We as a society need as much time as we can get to prepare for the more advanced deception of future AI products and open-source models,” Park told Cell Press. “As the deceptive capabilities of AI systems become more advanced, the dangers they pose to society will become increasingly serious.”

Musk’s X Corp Loses Lawsuit Against Israeli Data-Scraping Company

Reuters reported:

A U.S. judge dismissed a lawsuit in which Elon Musk‘s X Corp accused an Israeli data-scraping company of illegally copying and selling content, and selling tools that let others copy and sell content, from the social media platform.

U.S. District Judge William Alsup in San Francisco ruled on Thursday that X, formerly Twitter, failed to plausibly allege that Bright Data Ltd violated its user agreement by allowing the scraping and evading X’s own anti-scraping technology.

Alsup said using scraping tools is not inherently fraudulent, and giving social media companies free rein to decide how public data are used “risks the possible creation of information monopolies that would disserve the public interest.”

In January, another San Francisco judge ruled that Bright Data had not violated Meta Platforms’ (META.O) terms of service by scraping data from Facebook and Instagram. Meta ended its lawsuit against Bright Data a month later.

May 07, 2024

TikTok Sues U.S. Government to Stop Potential Ban + More

TikTok Sues the U.S. Government to Stop a Potential Ban

WIRED reported:

TikTok sued the U.S. federal government on Tuesday, arguing that the possible app ban violates the First Amendment. Last month, President Biden signed a bill that forces TikTok and its Chinese owner, ByteDance, to divest its ownership of the app or face a nationwide ban. At the time, TikTok said that it planned to sue, calling the law unconstitutional.

In the lawsuit, TikTok says that the law violates the First Amendment and the divesting requirement is “simply not possible.”

Responding to the law’s enactment last month, a TikTok spokesperson told WIRED: “This unconstitutional law is a TikTok ban, and we will challenge it in court. We believe the facts and the law are clearly on our side, and we will ultimately prevail. The fact is, we have invested billions of dollars to keep U.S. data safe and our platform free from outside influence and manipulation.”

First Amendment lawyers have suggested that TikTok has a strong case. Without solid evidence to support the government’s claims that TikTok is a threat to national security, a court could find that a ban would go too far and could cause the company irreparable damage. Others have suggested that a strong data privacy and security law could protect U.S. user data better than an outright ban.

49 GOP Senators Confront Biden With a Bold Plea to Halt WHO Pandemic Proposals

Reclaim the Net reported:

Forty-nine GOP senators have unified in their stance, submitting a formal request to President Joe Biden urging him to retract his support for two significant international agreements set to be debated later this month at the World Health Assembly in Geneva.

The discussions, starting May 27, will revolve around what has been termed the “Pandemic Agreement” or “pandemic treaty,” as well as proposed amendments to the International Health Regulations (IHR), which have pushed the control of “misinformation” and advocated for vaccine passports.

The Biden administration has thrown its support behind these proposals, which would see global health officials granted extensive control over the management of pandemics. However, the senators — all members of the Senate Republican Conference — argue that the WHO must first address its failures during the COVID-19 crisis, which, according to them, were both total and predictable. They believe these failures have caused significant damage to the United States.

Their letter emphasizes that no treaty should be signed nor amendments to the IHR approved without rectifying these shortcomings, which, if unaddressed, would potentially lead to increased authority for the WHO, undermine intellectual property rights, and endanger free speech.

Exclusive: Report Urges Sustained U.S. Biodefense Buildup

Axios reported:

A new report calls on all levels of government to strengthen U.S. biodefense measures and urges policymakers to codify parts of a national strategy to address an array of biological threats.

Why it matters: Threats in the form of infectious disease outbreaks, lab accidents and biology-based weapons are expected to increase in the coming years, according to the report’s authors and other experts. But biodefense investments get caught in a cycle of “panic and neglect” — an intense focus for a short period, after which policymakers, funders and the public move on, the report notes.

Reality check: With a full pandemic preparedness package still in congressional limbo, immediate prospects for more sweeping biodefense reforms face long odds.

The intrigue: The commission also highlights emerging astrobiological threats at “the intersection of space exploration and infectious disease.”

The commission acknowledges “it may seem far-fetched” but some microorganisms can survive the extreme conditions of space. It says there is a risk of probes or humans bringing back microbes that could “pose a threat to Earth’s human, animal, plant, or ecosystem health or the Moon.”

A Doctor Whose Views on COVID Vaccinations Drew Complaints Has Her Medical License Reinstated

Associated Press reported:

An Ohio doctor who drew national attention when she told state legislators that COVID-19 vaccines made people magnetic has had her medical license reinstated after it was suspended for failing to cooperate with an investigation.

The Ohio State Medical Board recently voted to restore Sherri Tenpenny’s license after she agreed to pay a $3,000 fine and cooperate with investigators. Tenpenny, an osteopathic doctor, has been licensed in Ohio since 1984. She drew national attention in 2021 when she testified before a state legislative panel in support of a measure that would block vaccine requirements and mask mandates.

Tenpenny’s license was suspended in August 2023 on procedural grounds for failing to cooperate with the investigation. Her attorney had told the board she wouldn’t participate in an “illegal fishing expedition.”

The board voted 7-2 last month to restore her license, with proponents saying she had met the requirements for reinstatement. Tenpenny announced the reinstatement in a post made on the X social platform.

Microsoft Creates Top Secret Generative AI Service for U.S. Spies

Bloomberg via Yahoo!Finance reported:

Microsoft Corp. has deployed a generative AI model entirely divorced from the internet, saying U.S. intelligence agencies can now safely harness the powerful technology to analyze top-secret information.

It’s the first time a major large language model has operated fully separated from the internet, a senior executive at the U.S. company said. Most AI models including OpenAI’s ChatGPT rely on cloud services to learn and infer patterns from data, but Microsoft wanted to deliver a truly secure system to the U.S. intelligence community.

Spy agencies around the world want generative AI to help them understand and analyze the growing amounts of classified information generated daily but must balance turning to large language models with the risk that data could leak into the open — or get deliberately hacked.

“This is the first time we’ve ever had an isolated version — when isolated means it’s not connected to the internet — and it’s on a special network that’s only accessible by the U.S. government,” the executive told Bloomberg News ahead of an official announcement later on Tuesday.

Civil Rights Groups Want Facial Recognition Technology Banned in New York State

Biometric Update reported:

A group of major civil rights organizations in New York is pushing for a statewide ban on the use of facial recognition and other biometric technologies by law enforcement, residential buildings, public accommodations and schools. A release from the Surveillance Technology Oversight Project (S.T.O.P.), which is leading the public advocacy campaign, says the “Ban The Scan” coalition demands the passage of four bills to enact a total ban on facial recognition across New York State.

The concerns fueling the Ban the Scan campaign are not baseless. Facial recognition technology has already tried to plant roots in a few different sectors. In New York alone, back-and-forths over the ethical, legal and practical aspects of facial recognition have erupted in the grocery and retail sector, the school system and the entertainment industry. Meanwhile, in Los Angeles, transit officials are planning to deploy facial recognition on public trains and buses in response to a spike in violent crime.

As their own response to the increased use of facial recognition in New York (and to Dolan’s sleight), a working group of the New York State Bar Association issued a formal recommendation that its members endorse State Senate Bill 4459/Assembly Bill 1362. Otherwise known as the Biometric Privacy Act, the law would enable customers to sue private organizations that collect their biometric data without express written consent.

Italy’s RAI Journalists Strike Over Budget Streamlining, Complain of Censorship and Media Repression

Associated Press reported:

Some journalists at Italy’s state-run RAI went on strike Monday to protest budget streamlining and what they said was an increasingly repressive atmosphere in Italy for media under the government of Premier Giorgia Meloni.

The 24-hour RAI strike is the latest protest by Italian journalists against what they say are threats to freedom of the press and expression in Italy, including criminal investigations of journalists and suspected episodes of censorship. Not all journalists participated and RAI newscasts were still airing, though in a somewhat reduced form.

The strike came just days after the media watchdog group Reporters Without Borders downgraded Italy five notches in its annual index of press freedom. At No. 46 out of 180, Italy moved into the “problematic” category of countries alongside other EU members Poland and Hungary.

May 03, 2024

Jim Jordan Drops ‘Smoking Gun’ Over White House ‘Lab Leak’ Suppression at Facebook + More

Jim Jordan Drops ‘Smoking Gun’ Over White House ‘Lab Leak’ Suppression at Facebook

ZeroHedge reported:

Rep. Jim Jordan (R-OH) has released several new pieces of previously unseen information revealing what Elon Musk called a “smoking gun” regarding White House pressure on Facebook to censor the lab leak theory of COVID-19.

First, Jordan shares a text message from Mark Zuckerberg to Sheryl Sandberg, Nick Clegg and Joel Kaplan (the company’s highest-ranking executives at the time) in which he asks if Facebook can tell the world that “the [Biden] WH put pressure on us to censor the lab leak theory?” — hours after Biden accused Facebook of “killing people.”

Clegg responded that the Biden White House is “highly cynical and dishonest,” while Sandberg said that they were being scapegoated because the White House wasn’t hitting its vaccination numbers. In fact, Facebook felt that it had been “combating misinformation” (aka censoring Americans) all year.

Then in late May of 2021, Facebook finally stopped removing content regarding the lab leak theory — though they did demote it. When employees told Zuckerberg about the reversal and explained why they censored the lab leak theory in the first place, Zuckerberg replied that this is what happens when Facebook “compromises [its] standards due to pressure from an administration.”

U.S. Lawmakers Grill Former Biden Admin Officials Who Pressured Social Media Companies to Censor

Reclaim the Net reported:

In a session of the Select Subcommittee on the Weaponization of the Federal Government, Congressman Jim Jordan intensely questioned Rob Flaherty, former White House Director of Digital Strategy, on the Biden administration’s messaging on COVID-19 and its interactions with Big Tech platforms.

During the hearing, Jordan pressed Flaherty on several controversial statements made by the administration regarding the pandemic. Flaherty was specifically grilled about whether these statements constituted misinformation or disinformation, particularly in relation to claims that vaccinated individuals could not contract the virus, the effectiveness of masks, and the denial of natural immunity.

Jordan continued to press for clearer answers, challenging Flaherty to define “misinformation.” Flaherty’s response was evasive, noting, “Congressman, you know, certainly there’s different and varying definitions of misinformation … ”

The Congressman also scrutinized Flaherty’s role in influencing content moderation on social media platforms, citing specific instances where the White House appeared to have urged tech giants to censor certain viewpoints, particularly conservative ones. “You weren’t a medical expert, but you could suggest to Facebook that they needed to change their algorithms so that the American people would not see stuff from the Daily Wire, they’d only see stuff from the New York Times,” Jordan highlighted, emphasizing the selective suppression of information.

Congressman Matt Gaetz aggressively questioned Rob Flaherty on his role in influencing content moderation on social media platforms, particularly regarding COVID-19 information. Flaherty dodged most of the questions, seemingly reluctant or unwilling to provide a specific answer.

House Panel Requests FTC Investigate if TikTok Violated Child Protection Act

The Hill reported:

The leaders of a bipartisan panel focused on China have sent a letter asking the Federal Trade Commission (FTC) to investigate whether TikTok has violated child protection laws in its efforts to stop the United States from banning the app.

The letter, obtained by The Hill and first reported by NBC News, is addressed to FTC Chair Lina Khan and asks the agency to examine whether the app violated the Children’s Online Privacy Protection Act (COPPA) or Section 5 of the FTC Act when it sent pop-up notifications to users that requested personal information and asked them to contact Congress.

Rep. John Moolenaar (R-Mich.), the chair of the House Select Committee on the Chinese Communist Party, and Raja Krishnamoorthi (D-Ill.), the committee’s ranking member, said TikTok’s messaging was sent to young children in the classroom and other minors under the age of 13.

China Trying to Develop World ‘Built on Censorship and Surveillance’

Al Jazeera reported:

China is exporting its model of digital authoritarianism abroad with the help of its far-reaching tech industry and massive infrastructure projects, offering a blueprint of “best practices” to neighbors including Cambodia, Malaysia and Vietnam, a human rights watchdog has warned.

In 2015, two years after kicking off its massive Belt and Road initiative, China launched its “Digital Silk Road” project to expand access to digital infrastructure such as submarine cables, satellites, 5G connectivity and more.

Article 19, a United Kingdom-based human rights group, argues that the project has been about more than just expanding access to WiFi or e-commerce.

The Digital Silk Road “has been just as much about promoting China’s tech industry and developing digital infrastructure as it has about reshaping standards and internet governance norms away from a free, open, and interoperable internet in favor of a fragmented digital ecosystem, built on censorship and surveillance, where China and other networked autocracies can prosper”, the watchdog said in a report released in April.

The 80-page report describes how the Chinese state is inextricably linked to its tech industry, a key player in the Digital Silk Road project, as private companies like Huawei, ZTE, and Alibaba serve as “proxies” for the Communist Party.

OpenAI CEO’s Eyeball-Scanning Digital ID Project, Worldcoin, Hopes to Partner With OpenAI and Has Had Conversations With PayPal

Reclaim the Net reported:

Worldcoin, a digital ID project based on biometric eyeball scanning and co-founded by OpenAI CEO Sam Altman, is eyeing (no pun intended) partnerships not only with OpenAI but also with PayPal, reports say.

However, these moves come with little clarity for now; for example, Worldcoin co-founder and CEO Alex Blania has declined to make a direct announcement regarding the deal with OpenAI.

At the same time, Blania confirmed that the company (specifically, Tools for Humanity, Worldcoin’s main developer) is talking to PayPal, but the payments giant is not currently commenting on any of this.

Worldcoin’s stated goal is to enroll “every person in the world” in its ID service; the transaction amounts to users giving up the sensitive biometric data contained in the irises of their eyes in exchange for what some might call “cryptocurrency change.”

Japan’s Kishida Unveils a Framework for Global Regulation of Generative AI

Associated Press reported:

Japanese Prime Minister Fumio Kishida unveiled an international framework for regulation and use of generative AI on Thursday, adding to global efforts on governance for the rapidly advancing technology.

“Generative AI has the potential to be a vital tool to further enrich the world,” Kishida said. But “we must also confront the dark side of AI, such as the risk of disinformation.”

Some 49 countries and regions have signed up to the voluntary framework, called the Hiroshima AI Process Friends Group, Kishida said, without naming any. They will work on implementing principles and a code of conduct to address the risks of generative AI and “promote cooperation to ensure that people all over the world can benefit from the use of safe, secure, and trustworthy AI,” he said.

The European Union, the United States, China and many other nations have been racing to draw up regulations and oversight for AI, while global bodies such as the United Nations have been grappling with how to supervise it.