
Big Brother News Watch

May 02, 2024

The Breach of a Face Recognition Firm Reveals a Hidden Danger of Biometrics + More

The Breach of a Face Recognition Firm Reveals a Hidden Danger of Biometrics

WIRED reported:

Police and federal agencies are responding to a massive breach of personal data linked to a facial recognition scheme that was implemented in bars and clubs across Australia. The incident highlights emerging privacy concerns as AI-powered facial recognition becomes more widely used everywhere from shopping malls to sporting events.

The affected company is Australia-based Outabox, which also has offices in the United States and the Philippines. In response to the COVID-19 pandemic, Outabox debuted a facial recognition kiosk that scans visitors and checks their temperature. The kiosks can also be used to identify problem gamblers who enrolled in a self-exclusion initiative.

This week, a website called “Have I Been Outaboxed” emerged, claiming to have been set up by former Outabox developers in the Philippines. The website asks visitors to enter their names to check whether their information has been included in a database of Outabox data, which the site alleges had lax internal controls and was shared in an unsecured spreadsheet. It claims to hold more than 1 million records.

The incident has rankled privacy experts who have long set off alarm bells over the creep of facial recognition systems in public spaces such as clubs and casinos.

“Sadly, this is a horrible example of what can happen as a result of implementing privacy-invasive facial recognition systems,” Samantha Floreani, head of policy for Australia-based privacy and security nonprofit Digital Rights Watch, tells WIRED. “When privacy advocates warn of the risks associated with surveillance-based systems like this, data breaches are one of them.”

Louisiana House Splits on Vaccine ‘Discrimination’ Proposals

Louisiana Illuminator reported:

The Louisiana House of Representatives approved a proposal Tuesday to prohibit what its author considers “discrimination” against K-12 students on the basis of vaccination status. But lawmakers rejected a bill that would have placed similar restrictions on businesses and governmental entities.

Both bills are authored by Rep. Beryl Amedee, R-Schriever, who has carried several pieces of legislation dealing with vaccines and medical consent in the aftermath of the COVID-19 pandemic. She has consistently been critical of vaccine mandates, insisting that the COVID shot did not prevent the spread of the virus while failing to acknowledge vaccine rates were well below herd immunity status.

Amedee told House members unvaccinated students were discriminated against during the peak of the pandemic by being seated separately from their classmates and not being allowed to participate in extracurricular sports. Her bill to prevent such actions would apply to all vaccinations.

Her proposal would not allow teachers or school administrators to use a student’s vaccination status to determine their eligibility for athletics or other extracurricular activities or to allow or deny their participation inside and outside of the classroom. Teachers would not be allowed to organize seating arrangements based on vaccination status, nor would schools be allowed to issue surveys to students asking about what vaccines they have received.

Supreme Court Rejects Military Chaplains’ Lawsuit Claiming Refusal of COVID Vaccine Hurt Their Careers

Military.com reported:

The U.S. Supreme Court has decided not to hear a case involving 39 military chaplains who say they continue to face recrimination for refusing to get the COVID-19 vaccine for religious reasons. In an announcement Monday of the cases the court has selected to hear next year, the justices denied the chaplains’ petition to review last year’s dismissal of the case by the U.S. Court of Appeals for the Fourth Circuit.

The appellate court ruled that the Defense Department’s decision in January 2023 to rescind the vaccine mandate rendered the chaplains’ case moot.

In their petition, the chaplains said they needed the court to consider the case to protect them and their First Amendment rights. They argued that many continue to carry bad marks in their fitness reports that influence assignments and promotions.

At least 50 service members previously sued the Defense Department over its vaccine mandate, alleging that the services and the Pentagon had violated their right to religious freedom for “categorically denying” their request for religious exemptions from the COVID-19 vaccine.

Senators Want Limits on the Government’s Use of Facial Recognition Technology for Airport Screening

Associated Press reported:

A bipartisan group of senators is pushing for restrictions on the use of facial recognition technology by the Transportation Security Administration, saying they are concerned about travelers’ privacy and civil liberties.

In a letter on Thursday, the group of 14 lawmakers called on Senate leaders to use the upcoming reauthorization of the Federal Aviation Administration as a vehicle to limit TSA’s use of the technology so Congress can put in place some oversight.

The effort, led by Sens. Jeff Merkley, D-Ore., John Kennedy, R-La., and Roger Marshall, R-Kan., “would halt facial recognition technology at security checkpoints, which has proven to improve security effectiveness, efficiency, and the passenger experience,” TSA said in a statement.

The technology is currently in use at 84 airports around the country and is planned to expand in the coming years to the roughly 430 covered by TSA.

A Lawsuit Argues Meta Is Required by Law to Let You Control Your Own Feed

WIRED reported:

A lawsuit filed Wednesday against Meta argues that U.S. law requires the company to let people use unofficial add-ons to gain more control over their social feeds.

It’s the latest in a series of disputes in which the company has tussled with researchers and developers over tools that give users extra privacy options or that collect research data. It could clear the way for researchers to release add-ons that aid research into how the algorithms on social platforms affect their users, and it could give people more control over the algorithms that shape their lives.

The suit was filed by the Knight First Amendment Institute at Columbia University on behalf of researcher Ethan Zuckerman, an associate professor at the University of Massachusetts Amherst. It attempts to take a federal law that has generally shielded social networks and use it as a tool to force transparency.

Maricopa County and Arizona State Collaborate to Surveil Social Media and Censor ‘Misinformation’

Reclaim the Net reported:

The Arizona Secretary of State’s Office and the Maricopa County Recorder’s Office have been exposed as doing their best to team up with social media companies and non-profits, as well as the U.S. government, to advance online censorship.

This, yet another case of “cooperation” (aka collusion) between government and private entities to stifle speech disapproved of by federal and some state authorities, has emerged from several public records brought to the public’s attention by the Gavel Project.

The official purpose of several initiatives was to counter “misinformation” by monitoring and reporting whatever the two offices decided qualified as such; another was to censor content on social platforms, while plans also included restricting discourse to the point of banning users from county-run accounts.

Bipartisan Senators Unveil Bill Limiting Kids’ Social Media Use

The Hill reported:

Sens. Ted Cruz (R-Texas) and Brian Schatz (D-Hawaii) are leading a group of bipartisan senators reintroducing a bill that would limit kids’ social media use by setting a minimum age for users and restricting access to the sites in schools.

Schatz originally introduced a version of the legislation, called the Kids Off Social Media Act, in the spring of 2023. It would set the minimum age for online social media users to 13 years old, and prevent platforms from “feeding algorithmically-boosted content” to users under 17.

Under the bill, users under 18 would need parental permission to use apps like TikTok, Instagram and Snapchat.

“There is no good reason for a nine-year-old to be on Instagram or TikTok. There just isn’t. The growing evidence is clear: social media is making kids more depressed, more anxious, and more suicidal,” Schatz said in a statement. “This is an urgent health crisis, and Congress must act.”

Schatz’s past bill faced some opposition, but its sponsors hail it as a way for parents to regain control over what their kids see online and say it would prevent platforms from using minors’ data to promote potentially harmful content.

What Happens When Google Censors News to Protect Its Bottom Line?

The Seattle Times reported:

Search engines like Google have always selectively censored content by limiting what their users can find. Usually, the censored material has been sexually explicit content, intellectual property guarded by copyright, disinformation or sensitive personal information.

Last month, Google stopped being a neutral aggregator of the internet and began censoring out of its economic self-interest by denying its users access to content created and reported by California news outlets. Sacramento is no stranger to political hardball, but what Google is doing is different and dangerous.

Google’s suppression of journalism raises the issue of whether a tech giant should be allowed to block access to news content for purely economic reasons. How is the consumer protected? How should government regulate a search engine when it becomes a suppression engine?

UnitedHealth CEO Estimates One-Third of Americans Could Be Impacted by Change Healthcare Cyberattack

CNBC reported:

UnitedHealth Group CEO Andrew Witty on Wednesday told lawmakers that data from an estimated one-third of Americans could have been compromised in the cyberattack on its subsidiary Change Healthcare and that the company paid a $22 million ransom to hackers.

Witty testified in front of the Subcommittee on Oversight and Investigations, which falls under the House of Representatives Committee on Energy and Commerce. He said the investigation into the breach is still ongoing, so the exact number of people affected remains unknown. The one-third figure is a rough estimate.

UnitedHealth has previously said the cyberattack likely impacts a “substantial proportion of people in America,” according to an April release. The company confirmed that files containing protected health information and personally identifiable information were compromised in the breach.

Microsoft’s Billion-Dollar OpenAI Investment Was Triggered by Google Fears, Emails Reveal

The Verge reported:

Microsoft invested $1 billion in OpenAI in 2019 because it was “very worried” that Google was years ahead in scaling up its AI efforts. An internal email, titled “Thoughts on OpenAI,” between Microsoft CTO Kevin Scott, CEO Satya Nadella, and co-founder Bill Gates reveals some of the high-level discussions around an investment opportunity in the months before Microsoft revealed the partnership.

The email was released on Tuesday as part of the ongoing U.S. Justice Department antitrust case against Google, Business Insider reports.

Apr 30, 2024

FCC Fines U.S. Wireless Carriers Over Illegal Location Data Sharing + More

FCC Fines U.S. Wireless Carriers Over Illegal Location Data Sharing

Reuters reported:

The Federal Communications Commission on Monday fined the largest U.S. wireless carriers nearly $200 million for illegally sharing access to customers’ location information.

The FCC is finalizing fines first proposed in February 2020, including $80 million for T-Mobile (TMUS.O); $12 million for Sprint, which T-Mobile has since acquired; $57 million for AT&T (T.N); and nearly $47 million for Verizon Communications (VZ.N).

The carriers sold “real-time location information to data aggregators, allowing this highly sensitive data to wind up in the hands of bail-bond companies, bounty hunters, and other shady actors,” FCC Chair Jessica Rosenworcel said in a statement.

Lawmakers in 2019 expressed outrage that aggregators were able to buy user data from wireless carriers and sell “location-based services to a wide variety of companies” and others, including bounty hunters.

If Social Media Is a ‘Digital Heroin’ for Today’s Youth, AI Will Be Their Fentanyl

Newsweek reported:

A pre-teen girl sees an innocuous advertisement for a weight loss program on a social media platform. She’s intrigued. After all, she wants to look like all those slender influencers on her feed. Little does she know, the ad — generated with artificial intelligence (AI) technology — was carefully targeted to her based on her AI-analyzed browsing habits and “private” conversations.

Once she clicks on the ad, her feed becomes a relentless barrage of AI-curated content promoting harmful diet strategies — including deepfake videos from beloved influencers. Her online world morphs into a dangerous echo chamber, magnifying her insecurities and spiraling her into depression.

These scenarios are not merely hypothetical; many aspects of them are taken from real stories. But as AI explodes, already addictive social media platforms will become even more capable of hooking kids on their content. If social media is already a “digital heroin” for our youth, new and enhanced AI will become their fentanyl.

For years, predatory social media platforms have capitalized on human psychology by triggering dopamine rushes akin to those induced by narcotic substances. As a result, teenagers are ensnared in an average of five hours per day on these platforms. And a disturbingly young cohort, children aged 7-9, are increasingly exposed to their allure. By 10, children on average have their first smartphone, and their childhood starts to end.

UN Official Condemns Health ‘Misinformation,’ Advocates for ‘Digital Integrity Code’

Reclaim the Net reported:

The United Nations continues its attempt to advance what the organization calls its Code of Conduct for Information Integrity on Digital Platforms. This code is based on a previous policy brief that recommends censorship of whatever is deemed to be “disinformation, misinformation, hate,” but that is only the big picture of the policy UN Under-Secretary-General for Global Communications Melissa Fleming is staunchly promoting.

In early April, Fleming gave a talk at Boston University, where the focus was on AI, whose usefulness in various censorship ventures has it cast as a tool that advances “resilience in global communication.”

“One of our biggest worries is the ease with which new technologies can help spread misinformation easier and cheaper, and that this content can be produced at scale and far more easily personalized and targeted,” she said.

Fleming said that with the pandemic, this “skyrocketed” around the issue of vaccines. But she didn’t address why that may be — other than, apparently, a sudden, furious proliferation of “misinformation” for its own sake.

ChatGPT Keeps Hallucinating — and That’s Bad for Your Privacy

TechRadar reported:

After triggering a spike in VPN service downloads following a temporary ban about a year ago, OpenAI faces troubles in the European Union once again. The culprit this time? ChatGPT’s hallucination problems.

The popular AI chatbot is infamous for making up false information about individuals — something that OpenAI is admittedly unable to fix or control, experts say. That’s why Austria-based digital rights group Noyb (stylized as noyb, short for “none of your business”) filed a complaint with the country’s data protection authority on April 29, 2024, accusing the company of breaking GDPR rules.

The organization is now urging the Austrian privacy protection body to investigate how OpenAI verifies the accuracy of citizens’ personal data. Noyb also calls on authorities to impose a fine to ensure GDPR compliance in the future.

We already discussed how ChatGPT and similar AI chatbots will probably never stop making stuff up. That’s quite worrying considering that “chatbots invent information at least three percent of the time — and as high as 27%,” the New York Times reported.

Facebook, Instagram in EU Crosshairs for Election Disinformation

Reuters reported:

Meta Platforms’ (META.O) Facebook and Instagram have failed to tackle disinformation and deceptive advertising in the run-up to European Parliament elections, the European Commission said on Tuesday as it opened an investigation into suspected breaches of EU online content rules.

The move by EU tech regulators came amid concerns about Russia, China and Iran as potential sources of disinformation, but also inside the EU, with some political parties and organizations seeking to attract voters with lies in the June 6-9 vote to select the next five-year parliament.

The Digital Services Act, which kicked in last year, requires Big Tech to do more to counter illegal and harmful content on their platforms or risk fines of as much as 6% of their global annual turnover.

China Threatens Retaliation for Taiwan, TikTok Law Signed by Biden

Fox News reported:

China on Monday threatened to take “resolute and forceful steps” to defend itself after President Biden recently signed a bill that provides foreign aid to Taiwan and forces TikTok’s China-based owner to sell the app or see it banned in the U.S.

The legislation approved by Biden last Wednesday offers $95 billion in assistance to Ukraine and Israel, including nearly $2 billion to replenish U.S. weapons provided to Taiwan and other regional allies, according to The Associated Press. It also gives ByteDance nine months to sell TikTok, as well as a possible three-month extension if a sale is in progress.

“If the United States clings obstinately to its course, China will take resolute and forceful steps to firmly defend its own security and development interests,” Chinese Foreign Ministry spokesperson Lin Jian reportedly added.

U.S. lawmakers have accused TikTok of being a risk to U.S. national security, collecting user data, and spreading propaganda. China has previously said it would oppose forcing the sale of TikTok. TikTok has long denied it is a security threat and is preparing a lawsuit to block the legislation.

France Must Curb Child, Teen Use of Smartphones, Social Media, Says Panel

Reuters reported:

France should limit smartphone and social media use for children and teenagers, an expert panel commissioned by French President Emmanuel Macron said on Tuesday, amid growing global concern about their negative impact on young minds.

Children under 11 should be barred from having a cellphone while the use of smartphones with internet access should be prohibited for anyone under 13 years old, they said in a report.

Social media apps should be forbidden for anyone under 15, they added, and minors over 15 should only have access to platforms deemed “ethical.” Lawmakers would be tasked with deciding what platforms could be considered as such, they said.

Last year the U.S. Surgeon General said social media could profoundly harm young people’s mental health and called on tech companies to safeguard children who are at critical stages of brain development.

Apr 29, 2024

Banking on Surveillance: Republicans Investigate Major Banks’ Warrantless Data Sharing With Federal Agencies + More

Banking on Surveillance: Republicans Investigate Major Banks’ Warrantless Data Sharing With Federal Agencies

Reclaim the Net reported:

Congressional Republicans are further investigating claims that at least 13 major US banks collaborated with federal agencies to monitor private transactions for signs of “extremism” following the January 6 Capitol events. The House Select Subcommittee on the Weaponization of the Federal Government, led by Republican Jim Jordan from Ohio, is delving further into the alleged cooperation between these financial institutions and federal agencies without proper warrants.

These banks, including Bank of America, Chase, U.S. Bank, Wells Fargo, Citibank, and more, are among those scrutinized for their roles in the reported surveillance. We previously reported about how Bank of America was found to be handing over data of everyone in the area during the events of January 6, whether they were suspect or not — and whether they had a warrant or not. But now, investigations suggest that the transfer of data was more systematic, potentially involving multiple financial institutions and the Biden administration itself.

Concerns about this alleged surveillance extend to Americans’ rights to privacy and freedom of expression. Jim Jordan criticized the federal government’s “backdoor information sharing,” which categorized broad groups of transactions as suspicious or indicative of extremism. In a letter to Treasury Secretary Janet Yellen, Jordan highlighted that this type of financial monitoring infringes on fundamental civil liberties.

Photo-Sharing Community EyeEm Will License Users’ Photos to Train AI if They Don’t Delete Them

TechCrunch reported:

EyeEm, the Berlin-based photo-sharing community that exited last year to Spanish company Freepik after going bankrupt, is now licensing its users’ photos to train AI models. Earlier this month, the company informed users via email that it was adding a new clause to its Terms & Conditions that would grant it the rights to upload users’ content to “train, develop, and improve software, algorithms, and machine-learning models.” Users were given 30 days to opt out by removing all their content from EyeEm’s platform. Otherwise, they were consenting to this use case for their work.

At the time of its 2023 acquisition, EyeEm’s photo library included 160 million images and nearly 150,000 users. The company said it would merge its community with Freepik’s over time. Despite its decline, almost 30,000 people are still downloading it each month, according to data from Appfigures.

Once thought of as a possible challenger to Instagram — or at least “Europe’s Instagram” — EyeEm had dwindled to a staff of three before selling to Freepik, TechCrunch’s Ingrid Lunden previously reported. Joaquin Cuenca Abela, CEO of Freepik, hinted at the company’s possible plans for EyeEm, saying it would explore how to bring more AI into the equation for creators on the platform. As it turns out, that meant selling their work to train AI models.

Of note, the notice says that these deletions from EyeEm’s market and partner platforms could take up to 180 days. Yes, that’s right: Requested deletions take up to 180 days, but users only have 30 days to opt out. That means the only option is manually deleting photos one by one.

Section 8 is where licensing rights to train AI are detailed. In Section 10, EyeEm informs users they will forgo their right to any payouts for their work if they delete their account — something users may think to do to avoid having their data fed to AI models. Gotcha!

Kashmir Hill: ‘They Shouldn’t Be Collecting Photos From Social Media Without People’s Consent, but They Keep Doing It and Nobody’s Stopping Them’

El País reported:

In November 2019, journalist Kashmir Hill received a tip that a startup called Clearview AI claimed to be able to identify anyone from a picture. Her source said that the company had collected billions of photos from social networks like Facebook, Instagram and LinkedIn without telling either the websites or the people involved and that if you uploaded someone’s photo into the app, it would show you all the websites where that person appeared, plus their complete name and personal information.

Until then, no one had dared to develop anything like this. An application capable of identifying strangers was too much. It could be used, for example, to photograph someone in a bar and find out in seconds where they live and who their friends are. Hill, a reporter for The New York Times, published the story about this small company, which in a few months went from being a total unknown to receiving the support of Peter Thiel, one of the godfathers of Silicon Valley, and becoming a service coveted by police forces in the U.S. and abroad.

She reached Hoan Ton-That, the inscrutable engineer and co-founder of Clearview AI, who made the tool with Richard Schwartz, a politician with a long career behind the scenes in the Republican Party. Hill’s research informed her book Your Face Belongs to Us: The Secretive Startup Dismantling Your Privacy.

“I just thought Clearview AI was striking because of what a small ragtag group it was. Unusual and fascinating characters. And I just thought that really captured something about the tech industry, a certain kind of naivete. And just this desire to create these things, these really transgressive new technologies without a serious reckoning with the implications and how it would change society,” she explains by videoconference from New York.

Middle Schools in Norway Banned Smartphones. The Benefits Were Dramatic, a Study Shows.

Boston Globe reported:

The long-running debate over whether to ban smartphones in schools has intensified in recent months, fueled by increased warnings about the harms of social media on youth mental health and the distractions phones cause in class.

This week, social media was abuzz about a study published earlier this year out of Norway that tested the argument: How would student outcomes and mental health be affected if schools banned smartphones?

The research found the impacts were positive, including decreased bullying and improved academic performance among girls. Author and organizational psychologist Adam Grant highlighted the findings on X, formerly Twitter, saying “Smartphones belong at home or in lockers.”

In schools with bans, the number of specialist care visits for mental health issues fell among middle-school girls. And the data suggested the longer the girls were exposed to the ban, the fewer visits they needed.

FTC Finalizes Changes to Data Privacy Rule to Step Up Scrutiny of Digital Health Apps

Fierce Healthcare reported:

The Federal Trade Commission (FTC) finalized a rule Friday that aims to tighten the reins on digital health apps sharing consumers’ sensitive medical data with tech companies.

The agency issued a final version of its revised Health Breach Notification Rule to underscore the rule’s applicability to health apps in a bid to protect consumers’ data privacy and provide more transparency about how companies collect their health information.

The Health Breach Notification Rule (HBNR) requires vendors that manage digital health records, including health apps, that are not covered by the Health Insurance Portability and Accountability Act to notify individuals, the FTC, and, in some cases, the media of a breach of unsecured personally identifiable health data.

Bill Gates Never Left

Insider reported:

In 2017, just before Microsoft forged a partnership with a then-relatively unknown startup called OpenAI, Bill Gates shared a memo with CEO Satya Nadella and a small group of the company’s top executives. A new world order, Gates predicted, would soon be brought on by what he called “AI agents” — digital personal assistants that could anticipate our every want and need. These agents would be far more powerful than Siri and Alexa, with godlike knowledge and supernatural intuition.

“Agents are not only going to change how everyone interacts with computers,” Gates wrote. “They’re also going to upend the software industry, bringing about the biggest revolution in computing since we went from typing commands to tapping on icons.”

Today, though, it’s clear that Gates’ secret correspondence anticipated Copilot, the artificial intelligence tool that has helped propel Microsoft to become the world’s most valuable public company. Powered by a version of OpenAI’s GPT large language model, Copilot debuted last year as a tool within Microsoft products to help users with tasks such as preparing presentations and summarizing meetings. “Copilot now sounds exactly like what he wrote,” one executive said.

That’s not by accident.

Publicly, Gates has been almost entirely out of the picture at Microsoft since 2021, following allegations that he had behaved inappropriately toward female employees. In fact, Business Insider has learned that Gates has been quietly orchestrating much of Microsoft’s AI revolution from behind the scenes. Current and former executives say Gates remains intimately involved in the company’s operations — advising on strategy, reviewing products, recruiting high-level executives, and nurturing Microsoft’s crucial relationship with Sam Altman, the co-founder and CEO of OpenAI.

Turning Point: COVID-Era Hospital Reporting Set to End

Axios reported:

Hospitals starting this week will no longer have to report data on admissions, occupancy and other indicators of possible system stress from respiratory diseases to federal officials as another COVID-era mandate expires.

Why it matters: The sunset of the reporting requirement on May 1 marks a turning point in the government’s real-time tracking of airborne pathogens that helped drive coronavirus surveillance and reports like the Centers for Disease Control and Prevention’s FluView.

The required reporting to the CDC’s National Healthcare Safety Network was scheduled to end with the COVID-19 public health emergency last May but was extended through this Tuesday, with fewer requirements.

Google Hits a New Milestone: $2 Trillion

Insider reported:

Google is now the world’s fourth most valuable public company, right behind Nvidia, Apple, and Microsoft, which has a market cap of just over $3 trillion and overtook Apple earlier this year for first place.

This isn’t Alphabet’s first brush with the $2 trillion club. The company briefly hit the threshold in November 2021 and earlier this month but closed above it for the first time on Friday, according to Bloomberg.

Austria Calls for Rapid Regulation as It Hosts Meeting on ‘Killer Robots’

Reuters reported:

Austria called on Monday for fresh efforts to regulate the use of artificial intelligence in weapons systems that could create so-called “killer robots,” as it hosted a conference aimed at reviving largely stalled discussions on the issue.

With AI technology advancing rapidly, weapons systems that could kill without human intervention are coming ever closer, posing ethical and legal challenges that most countries say need addressing soon.

“We cannot let this moment pass without taking action. Now is the time to agree on international rules and norms to ensure human control,” Austrian Foreign Minister Alexander Schallenberg told the meeting of non-governmental and international organizations as well as envoys from 143 countries.

“At least let us make sure that the most profound and far-reaching decision, who lives and who dies, remains in the hands of humans and not of machines,” he said in an opening speech to the conference entitled “Humanity at the Crossroads: Autonomous Weapons Systems and the Challenge of Regulation.”

Media Freedom ‘Perilously Close to Breaking Point’ in Several EU Countries

The Guardian reported:

Media freedom is declining across the EU and “perilously close to breaking point” in several countries, a leading civil liberties network has said, highlighting widespread threats against journalists and attacks on the independence of public broadcasters.

The Berlin-based Civil Liberties Union for Europe (Liberties) said in its annual media freedom report, compiled with 37 rights groups in 19 countries, that alarming trends identified previously persisted in 2023 — although new EU-wide legislation could offer hope of improvement.

“Media freedom is clearly in steady decline across the EU — in many countries as a result of deliberate harm or neglect by national governments,” said Eva Simon, the senior advocacy officer at Liberties.

“Declining media freedom goes hand in hand with a decline in the rule of law. There’s a close correlation between the two. This is the playbook of authoritarian regimes.” She said new EU media legislation “has potential” but must be properly implemented.

Apr 26, 2024

TSA Visited Apple and Google to Discuss Collaboration for Digital ID + More

TSA Visited Apple and Google to Discuss Collaboration for Digital ID

Reclaim the Net reported:

The U.S. Transportation Security Administration (TSA) is continuing collaboration with Big Tech concerning the use of biometric surveillance technology, but also the development of digital IDs for passengers.

On its site, the TSA revealed that its officials recently traveled to California, where they met with representatives of Apple and Google to discuss continuing work on implementing digital ID on people’s phones. The TSA delegation to Silicon Valley, led by Administrator David Pekoske, referred to Apple and Google as its “innovation partners.” The goal is to continue yet another example of a “public-private” — that is, government-Big Tech — “partnership.”

Interestingly, in March, the Biden White House said there were effectively new rules that would allow travelers to opt out of TSA’s facial recognition process, “without losing their place in line.”

The reality on the ground, however, can be quite different, as Senator Jeff Merkley, a Democrat, found out when he tried to avoid facial recognition at a Washington airport last year. As the report noted: “(Merkley) was pressured by a TSA officer who told the senator to step aside while others were allowed to bypass him. The senator published a video showing the TSA officer’s actions on his website.”

Maine High Court Says EMS Board Had Authority to Impose Vaccine Mandate

Bangor Daily News reported:

Maine’s highest court has ruled a state board overseeing emergency medical service workers had the authority to impose a COVID-19 vaccine mandate. That decision from the Maine Supreme Judicial Court, released Thursday, upheld a Superior Court judge’s decision to dismiss a lawsuit brought against the Maine EMS Board.

A group of EMS workers had sued the board claiming it lacked the authority to impose a mandate requiring COVID-19 and influenza vaccines.

The modified vaccine mandate survived numerous lawsuits and was ultimately upheld by the U.S. Supreme Court.

The state, in September 2023, reversed course and lifted the vaccine mandate for healthcare workers, though still requiring vaccinations against other diseases such as measles, mumps and tuberculosis.

Stop Using Your Face or Thumb to Unlock Your Phone

Gizmodo reported:

Last week, the 9th Circuit Court of Appeals in California released a ruling that concluded state highway police were acting lawfully when they forcibly unlocked a suspect’s phone using their fingerprint. You probably didn’t hear about it. The case didn’t get a lot of coverage, likely because the court wasn’t giving a blanket green light for every cop to shove your thumb to your screen during an arrest.

But it’s another toll of the warning bell that reminds you to not trust biometrics to keep your phone’s sensitive info private. In many cases, especially if you think you might interact with the police (at a protest, for example), you should seriously consider turning off biometrics on your phone entirely.

The ruling in United States v. Jeremy Travis Payne found that highway officers acted lawfully by using Payne’s thumbprint to unlock his phone after a drug bust. The three-judge panel said cops did not violate Payne’s 5th Amendment rights against self-incrimination nor the 4th Amendment’s protections against unlawful search and seizure by the “forced” use of Payne’s thumb (“forced” here meaning the unlocking was coerced, not that a third party physically pressed his thumb to the screen). The court panel admitted from the outset that “neither the Supreme Court nor any of our sister circuits have addressed whether the compelled use of a biometric to unlock an electronic device is testimonial.”

The 9th Circuit’s ruling was narrow and doesn’t necessarily create a new precedent, but it points out that the arguments surrounding the 5th Amendment and biometrics are still unsettled. The ruling was also complicated by the fact that Payne was on parole back in 2021, when he was stopped by the California Highway Patrol, which allegedly found a stash of narcotics including fentanyl, fluoro-fentanyl, and cocaine. He was charged with possession with intent to sell.

Health Insurance Giant Kaiser Will Notify Millions of a Data Breach After Sharing Patients’ Data With Advertisers

TechCrunch reported:

U.S. health conglomerate Kaiser is notifying millions of current and former members of a data breach after confirming it shared patients’ information with third-party advertisers, including Google, Microsoft and X (formerly Twitter).

In a statement shared with TechCrunch, Kaiser said that it conducted an investigation that found “certain online technologies, previously installed on its websites and mobile applications, may have transmitted personal information to third-party vendors.”

Kaiser said that the data shared with advertisers includes member names and IP addresses, as well as information that could indicate if members were signed into a Kaiser Permanente account or service and how members “interacted with and navigated through the website and mobile applications, and search terms used in the health encyclopedia.” Kaiser said it subsequently removed the tracking code from its websites and mobile apps.

Kaiser is the latest healthcare organization to confirm it shared patients’ personal information with third-party advertisers by way of online tracking code, often embedded in web pages and mobile apps and designed to collect information about users’ online activity for analytics. Over the past year, telehealth startups Cerebral, Monument and Tempest have pulled tracking code from their apps that shared patients’ personal and health information with advertisers.

FTC Awarding More Than $5 Million in Refunds to Ring Customers Over Privacy Settlement

The Hill reported:

The Federal Trade Commission (FTC) began distributing more than $5 million in refunds to Amazon Ring customers Tuesday, enforcing a settlement with the tech giant over claims that Ring failed to protect consumer privacy.

The FTC claimed in a 2023 complaint that Ring allowed employees and contractors improper access to records from the company’s security cameras, potentially putting customers’ privacy at risk. Ring allegedly used such footage to train algorithms without consent, among other purposes. The agency called the lapses “egregious violations of users’ privacy.”

Amazon, which owns Ring, settled the claims last year and agreed to pay $5.6 million in refunds to Ring customers. The company separately settled a second privacy claim over its Alexa voice assistant for $25 million.

The FTC said it will send refunds via PayPal to more than 115,000 customers who owned certain Ring devices, including indoor cameras. The refunds come amid persistent public concerns about Ring camera data privacy. The company announced in January that it would no longer share video with law enforcement, following criticism from customers.

ByteDance Says It Won’t Sell TikTok Business in U.S.

The Hill reported:

TikTok’s Chinese parent company, ByteDance, said it will not sell the popular video-sharing app in order to continue its business in the U.S., despite facing a potential ban under a law President Biden signed Wednesday.

The bill, included in a foreign aid package Biden signed, gives ByteDance up to a year to sell TikTok or be banned from operating in the U.S. The proposal was fueled by national security concerns raised by its supporters, who argued the Chinese government could compel TikTok to share U.S. user data.

“Foreign media reports that ByteDance is exploring the sale of TikTok are untrue,” ByteDance said in the statement. “ByteDance doesn’t have any plan to sell TikTok,” it continued.

The other route for TikTok to remain active in the U.S. is through a successful court case. TikTok announced Wednesday, immediately after Biden signed the law, that it would challenge it in court.

YouTube Delivers $8.1 Billion in Quarterly Ad Revenue, Beating Wall Street Expectations

The Hollywood Reporter reported:

The Google-owned video platform on Thursday reported advertising revenue of $8.1 billion in Q1 2024, up more than 20% from $6.7 billion in the same quarter a year ago. Wall Street estimates were for YouTube ad revenue of $7.7 billion.

Q1 is often a soft quarter for advertising; in Q4 (usually the best quarter for ads), YouTube had ad revenue of $9.2 billion. Both quarters were up by more than $1.2 billion from a year ago.

YouTube parent company Alphabet reported revenue of $80.5 billion, and net income of more than $23.6 billion. The bulk of that revenue is still from Google search.