Biometrics Take Center Stage in Daily Life but Privacy Concerns Loom: Report
A new report reveals that biometric technology is becoming increasingly common in everyday life, with more than half of users relying on biometrics for daily authentication.
According to Aware's 2024 Biometrics Business Guide: Consumer Trust Report, biometric authentication is being embraced on devices like smartphones and laptops, with more than 50% of users now authenticating with biometrics daily, signaling widespread adoption.
However, concerns remain about data privacy, with consumers seeking clear policies from service providers on how their biometric data is managed.
The report surveyed U.S. consumers to gauge their experiences and attitudes toward biometric technology.
While security and convenience are driving the widespread use of biometrics, trust issues around data storage and management persist, as 41% of respondents either don’t trust companies at all or only slightly trust companies to responsibly manage their biometric data.
Despite this, 62% say that trust issues are not worrying enough to keep them from using biometrics.
DNA Records of Millions of Americans Could Be Exposed Amid 23andMe Turmoil
Millions of Americans who have used 23andMe's genetic testing services may be at risk of having their DNA data exposed, as the company faces mounting turmoil.
Privacy advocates warn that the genetic information collected from over 15 million customers could be vulnerable to misuse, potentially impacting not only individual privacy but also that of their relatives.
The digital rights organization Electronic Frontier Foundation (EFF) has expressed serious concerns about the potential exposure of genetic data. EFF staff attorney Mario Trujillo and associate director of digital strategy Jason Kelley highlighted the risks associated with any sale or transfer of 23andMe's vast DNA database.
Bad News: We’ve Lost Control of Our Social Media Feeds. Good News: Courts Are Noticing.
During a recent rebranding tour, sporting Gen Z-approved tousled hair, streetwear and a gold chain, the Meta chief Mark Zuckerberg let the truth slip: Consumers no longer control their social-media feeds. Meta’s algorithm, he boasted, has improved to the point that it is showing users “a lot of stuff” not posted by people they had connected with and he sees a future in which feeds show you “content that’s generated by an A.I. system.”
Spare me. There’s nothing I want less than a bunch of memes of Jesus-as-a-shrimp, pie-eating cartoon cats and other artificial intelligence (AI) slop added to all the clickbait already clogging my feed.
But there is a silver lining: Our legal system is starting to recognize this shift and hold tech giants responsible for the effects of their algorithms — a significant, and even possibly transformative, development that over the next few years could finally force social media platforms to be answerable for the societal consequences of their choices.
Let’s back up and start with the problem. Section 230, a snippet of law embedded in the 1996 Communications Decency Act, was initially intended to protect tech companies from defamation claims related to posts made by users. That protection made sense in the early days of social media, when we largely chose the content we saw, based on whom we “friended” on sites such as Facebook. Since we selected those relationships, it was relatively easy for the companies to argue they should not be blamed if your Uncle Bob insulted your strawberry pie on Instagram.
Gmail Users Warned About New Account Takeover Scam: Here’s What to Look For
A security researcher and a technology startup CEO are warning that some Gmail users could fall prey to a sophisticated, AI-based scam that could lead to their accounts being taken over.
Garry Tan, chief executive of the prominent tech-oriented venture capital firm Y Combinator, wrote on X late last week that there is a “pretty elaborate” phishing scam that uses an AI-generated voice.
The scammers “[claim] to be Google Support (caller ID matches, but is not verified),” he wrote in an Oct. 10 post that he termed a “public service announcement.”
“DO NOT CLICK YES ON THIS DIALOG — You will be phished. They claim to be checking that you are alive and that they should disregard a death certificate filed that claims a family member is recovering your account. It’s a pretty elaborate ploy to get you to allow password recovery.”
IT consultant Sam Mitrovic, in a blog post last month, wrote of a similar scam attempt targeting Gmail accounts and also using an AI-generated voice.
“The scams are getting increasingly sophisticated, more convincing and are deployed at ever larger scale,” Mitrovic wrote in the post. “People are busy and this scam sounded and looked legitimate enough that I would give them an A for their effort. Many people are likely to fall for it.”
According to the post, Mitrovic received a notification to approve an attempt to recover a Gmail account, which he rejected. About 40 minutes later, he received a phone call with a caller ID showing “Google Sydney,” which he also rejected.
Smart Ring Integrates World ID for Biometric Digital Identity
Blockchain-based smart ring startup CUDIS has integrated the World App, allowing users to verify and secure their biometric data through World’s proof-of-human technology. The integration aims to enhance privacy and enables self-custody of data onchain, using a decentralized storage system via the InterPlanetary File System.
The smart ring is similar to the Oura ring but is integrated with decentralized physical infrastructure networks (DePIN), meaning it connects to and interacts with a distributed system of physical devices or infrastructure (such as sensors or servers) that is not controlled by a single entity. The ring tracks users’ health data and rewards them for activities such as engaging with their AI fitness coach.
Now, it will support World ID, a digital identity system that verifies a person’s uniqueness through biometric data, specifically iris scans.