May 14, 2024

Judge Tosses Suit Accusing MSG, James Dolan of Using Facial ID for Profit + More

The Defender’s Big Brother NewsWatch brings you the latest headlines related to governments’ abuse of power, including attacks on democracy, civil liberties and use of mass surveillance. The views expressed in the excerpts from other news sources do not necessarily reflect the views of The Defender.


Judge Tosses Suit Accusing MSG, James Dolan of Using Facial ID for Profit

New York Post reported:

A federal judge this week tossed a data-privacy lawsuit accusing Madison Square Garden of illegally using facial recognition technology to scare off the arena’s legal opponents. “As objectionable as the defendant’s use of biometric data may be, it does not . . . violate” privacy laws, Manhattan federal Judge Lewis Kaplan wrote in a five-page ruling.

Kaplan rejected a January recommendation by U.S. Magistrate Judge James Cott that the class-action lawsuit accusing MSG Entertainment and owner James Dolan of illegally using biometric data for personal gain should proceed. Instead, Kaplan in Tuesday’s decision said he disagreed with claims that MSG “profited” by collecting facial images in part to scare off future lawsuits.

Dolan has come under fire for his controversial use of creepy facial-recognition software to bar unwelcome attorneys and other critics from entering the World’s Most Famous Arena — home of the Rangers and Knicks — and sister venues like Radio City Music Hall.

An MSG spokesperson hailed the judge’s decision, saying, “As we’ve always said, our policies and practices are 100% legal, and we’ve always made clear we don’t sell or profit from customer data.”

The suit filed on behalf of two New Yorkers, Aaron Gross and Jacob Blumenkrantz, potentially would have covered the millions of people who’ve attended events at MSG-owned venues since the city’s biometric data protection law went into effect in July 2021.

Elon Musk’s X Scores a Win in His Feud With Australia

Insider reported:

An Australian court handed Elon Musk a victory Monday in what he described as an ongoing fight for “free speech.”

The court ruled it would not extend a temporary block on footage posted to X of a church stabbing that occurred in a Sydney suburb on April 15. X had opposed the block, which was initially ordered on April 22, the Australian Broadcasting Corporation reported.

That said, Musk isn’t completely in the clear. A final hearing is set to decide the matter in coming weeks, according to the ABC. “Not trying to win anything,” Musk wrote in response to a commenter on X. “I just don’t think we should be suppressing Australians’ rights to free speech.”

It’s just one of many international battlegrounds where Musk is waging war over content moderation. He’s been feuding with a judge on Brazil’s Supreme Court over an order to block accounts and has announced he will fund legal challenges to Ireland’s upcoming hate speech laws.

EU’s Controversial Digital ID Regulations Set for 2024, Mandating Big Tech Compliance by 2026

Reclaim the Net reported:

The EU’s new digital ID rules, the Digital Identity Regulation (eIDAS 2.0), are set to come into force on May 20, requiring Big Tech companies and member countries to support the EU Digital Identity (EUDI) Wallet.

However, work on the EUDI Wallet is not complete: several pilots are planned for 2025 to consolidate implementation of the rules. According to the framework the European Council recently adopted, which has now been officially published, the deadline for the digital ID wallet to be recognized and made available is 2026. For now, it will be used in several scenarios, including accessing government services and age verification, reports note.

As things stand now, that deadline means that while the wallet scheme must become fully functional by then, it will not be obligatory for citizens of the EU’s 27 member states, and protection against discrimination is promised to those who choose not to opt in.

Digital IDs can also be used to control access to essential services, potentially becoming a lever for social or political compliance. The extensive data collection involved can lead to profiling and discrimination. Furthermore, these IDs are susceptible to hacking and identity theft, placing individuals at risk of financial and reputational damage. Often, citizens are coerced into participating without genuine consent, and the lack of transparency and oversight in these systems increases the risk of misuse.

Australian Government to Inject $288 Million Into Digital ID

iTnews reported:

The Australian federal government is set to include $288.1 million in funding in the federal budget to boost the adoption of its Digital ID system over the next four years. The funding is an 11-fold increase compared to the $24.7 million included in the previous budget. The announcement comes two weeks after over one million sign-in and identity details of ClubsNSW patrons were exposed following a data breach.

The government will allocate $23.4 million over two years for the Australian Taxation Office (ATO), and Finance and Services Australia to pilot the use of government digital wallets and verifiable credentials.

The lion’s share of the Digital ID funding — $155.6 million — will be given to the ATO over two years. The funding aims to improve the government’s existing myGovID credential — which has 12 million users — and relationship authorization manager (RAM) service that allows people to access government services on behalf of a business using a Digital ID.

The government has already spent almost $750 million on the digital identification system and had its Digital ID Bill 2023 pass the Senate this year.

FAA Reauthorization Skips Proposed Airport Facial Recognition Ban, Funds Modernization

Biometric Update reported:

The U.S. Senate has approved the mandate of the Federal Aviation Administration for another five years, without a proposed amendment that would have barred the expansion of facial recognition at America’s airports.

The legislation to reauthorize the FAA was approved by an 88 to 4 vote. The amendment to block the expansion of face biometrics technology deployed by the TSA until at least 2027 was proposed by Senator Jeff Merkley (D-Ore.). It would also have required “simple and clear signage, spoken announcements, or other accessible notifications” of the option not to participate.

Merkley claims that the TSA began informing travelers of their right to opt out with “a little postcard” after he complained that the choice was not being made clear.

Similar proposed bans have been introduced several times by Sen. Merkley and peers in the upper chamber. Like the Real ID standard for American driver’s licenses, the introduction of facial recognition in airports has faced pushback since it was first approved in the aftermath of the 9/11 terrorist attacks.

Cyberattack Cripples Major U.S. Healthcare Network

U.S. News & World Report reported:

Ascension, a major U.S. healthcare system with 140 hospitals in 19 states, announced late Thursday that a cyberattack has caused disruptions at some of its hospitals.

“Systems that are currently unavailable include our electronic health records system, MyChart (which enables patients to view their medical records and communicate with their providers), some phone systems, and various systems utilized to order certain tests, procedures and medications,” Ascension said in the statement.

The cyberattack on Ascension is just one in a series that has hit U.S. healthcare organizations.

In February, Change Healthcare, a subsidiary of healthcare giant UnitedHealth Group, was hit by a ransomware attack that disrupted billing at pharmacies nationwide and compromised the personal data of up to a third of Americans, CNN reported.

Minnesota Bill Would Do More Harm Than Good to Kids’ Online Safety

Star Tribune reported:

Legislation under consideration in Minnesota that would require any website that may “reasonably likely be accessed” by minors to take certain steps to protect them would actually have severe unintended consequences affecting the privacy and security of both kids and adults.

While the authors’ goal is admirable, the reality of this legislation is troubling and falls short for a number of reasons.

Under the proposal, billed as the Age-Appropriate Design Code Act (HF 2257/SF 2810), companies with websites “likely to be accessed” by a minor (aka every website) will be forced to require proof of age. This may include a wide range of personal information such as birth dates, addresses, pictures and government IDs.

In practice, this legislation will result in every website amassing a massive trove of data on every one of its users — be they adults or children. This will be a ripe target for hackers and criminals. The fact that every website will have to comply means that the protection of users’ data is only as good as the weakest security of any single website they visit.

AI Has Already Figured Out How to Deceive Humans

Insider reported:

AI can boost productivity by helping us code, write, and synthesize vast amounts of data. It can now also deceive us. A range of AI systems have learned techniques to systematically induce “false beliefs in others to accomplish some outcome other than the truth,” according to a new research paper.

The paper focused on two types of AI systems: special-use systems like Meta’s CICERO, which are designed to complete a specific task, and general-purpose systems like OpenAI’s GPT-4, which are trained to perform a diverse range of tasks.

While these systems are trained to be honest, they often learn deceptive tricks through their training because they can be more effective than taking the high road. “Generally speaking, we think AI deception arises because a deception-based strategy turned out to be the best way to perform well at the given AI’s training task. Deception helps them achieve their goals,” the paper’s first author Peter S. Park, an AI existential safety postdoctoral fellow at MIT, said in a news release.

Even general-purpose systems like GPT-4 can manipulate humans. “We as a society need as much time as we can get to prepare for the more advanced deception of future AI products and open-source models,” Park told Cell Press. “As the deceptive capabilities of AI systems become more advanced, the dangers they pose to society will become increasingly serious.”

Musk’s X Corp Loses Lawsuit Against Israeli Data-Scraping Company

Reuters reported:

A U.S. judge dismissed a lawsuit in which Elon Musk’s X Corp accused an Israeli data-scraping company of illegally copying and selling content, and selling tools that let others copy and sell content, from the social media platform.

U.S. District Judge William Alsup in San Francisco ruled on Thursday that X, formerly Twitter, failed to plausibly allege that Bright Data Ltd violated its user agreement by allowing the scraping and evading X’s own anti-scraping technology.

Alsup said using scraping tools is not inherently fraudulent, and giving social media companies free rein to decide how public data are used “risks the possible creation of information monopolies that would disserve the public interest.”

In January, another San Francisco judge ruled that Bright Data had not violated Meta Platforms’ (META.O) terms of service by scraping data from Facebook and Instagram. Meta ended its lawsuit against Bright Data a month later.
