Buying Booze? Your Face — or Palm — Could Verify Your Age
Move over, fake IDs: Biometric systems that can “read” a person’s face or palm image and determine if they’re too young for a beer are gaining traction at sports stadiums and liquor shops.
Why it matters: While these tools are handy for alcohol sellers — and can offer more privacy for consumers than handing over a driver’s license to a store clerk — they tap into fears about potential abuses of facial recognition systems.
Driving the news: Legislative proposals in New York and Washington state would let bars, restaurants and other purveyors of adult products verify a customer’s age through biometric data — like a finger or palm image, or a retinal or face scan.
The New York bill would require all biometric data to be encrypted and prohibit businesses from selling it to third parties.
“This is the new frontier of age verification,” state Sen. and bill sponsor James Skoufis told the New York Post. “It does advance the interests of convenience.”
Phishing Could Be the Most Dangerous Security Threat You Face This Year — Here’s How to Stay Safe
Phishing is a notorious type of email scam in which criminals trick victims into handing over personal information or financial details using legitimate-looking emails that hide malicious links. The scams often spoof a real company, copying its branding and corporate imagery to fool users into clicking on dodgy URLs.
Victims can be hooked by fake CEO messages, invoices from a fake accounting department, and even pretend IT department messages — phishing scams are wide-ranging and can be a serious threat to you and your business.
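One telltale sign of the spoofed links described above is a mismatch between the domain a link displays and the domain it actually points to. As a minimal illustrative sketch (the function name and heuristic here are our own, not taken from any particular security tool), such a mismatch can be checked mechanically:

```python
from urllib.parse import urlparse

def looks_like_phish(display_text: str, href: str) -> bool:
    """Flag a link whose visible text names one domain but whose
    actual target points somewhere else, a classic phishing tell."""
    shown = display_text.strip().lower()
    target = urlparse(href).hostname or ""
    # Only compare when the visible text itself looks like a domain or URL.
    if "." not in shown:
        return False
    shown_host = urlparse(shown if "//" in shown else "https://" + shown).hostname or shown
    # Accept exact matches and legitimate subdomains of the displayed domain.
    return not (target == shown_host or target.endswith("." + shown_host))

# A link displayed as "paypal.com" but resolving to an unrelated host is flagged:
print(looks_like_phish("paypal.com", "https://paypal.com.example-login.net/verify"))  # True
print(looks_like_phish("paypal.com", "https://www.paypal.com/signin"))                # False
```

Real email clients and secure gateways use far richer signals (sender authentication, URL reputation, lookalike-character detection), but this mismatch check captures the core trick the scams rely on.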
Mastodon Has a Child Abuse Material Problem, Like Every Other Major Web Platform
A new report suggests that the lax content moderation policies of Mastodon and other decentralized social media platforms have led to a proliferation of child sexual abuse material.
Stanford’s Internet Observatory published new research Monday that shows that such decentralized sites have serious shortcomings when it comes to “child safety infrastructure.” Unfortunately, that doesn’t make them all that different from a majority of platforms on the normal internet.
When we talk about the “decentralized” web, we’re of course talking about “federated” social media or “the Fediverse” — the loose constellation of platforms that eschew centralized ownership and governance for an interactive model that prioritizes user autonomy and privacy.
Despite the exciting promise of the Fediverse, there are obvious problems with its model. Security threats, for one thing, are an issue. The limited user friendliness of the ecosystem has also been a source of contention.
And, as the new Stanford study notes, the lack of centralized oversight means that there aren’t enough guardrails built into the ecosystem to defend against the proliferation of illegal and immoral content.
Indeed, researchers say that over a two-day period they encountered approximately 600 pieces of known or suspected child sexual abuse material (CSAM) on top Mastodon instances.
New Cryptocurrency Offers Users Tokens for Scanning Their Eyeballs
Members of the public are being invited to have their eyeballs scanned by a silver orb as part of a cryptocurrency project that aims to use biometric verification to distinguish humans from AI systems.
People signing up to the Worldcoin scheme via an app this week will receive a “genesis grant” of 25 tokens, equivalent to about £40, after having their iris scanned by one of the bowling-ball-sized devices.
Once users scan their eyes, they will receive a World ID, which the scheme says will prove they are a “real and unique person,” while preserving their privacy, and a crypto wallet on their smartphone.
The project was launched by Sam Altman of the machine learning research firm OpenAI, and on Tuesday the London orb at Techspace Worship Street, near Old Street tube station, was busy with potential users.
Worldcoin promises those scanning their irises will have their privacy protected.
Lots of Sensitive Data Is Still Being Posted to ChatGPT
New data from Netskope suggests employees are continuing to share sensitive company information with AI writing tools and chatbots like ChatGPT, despite the clear risk of leaks or breaches.
The research covers some 1.7 million users across 70 global organizations and found an average of 158 monthly incidents of source code being posted to ChatGPT per 10,000 users, making source code the most commonly exposed type of sensitive data.
While incidents involving regulated data (18 per 10,000 users per month) and intellectual property (four per 10,000 users per month) are far less common, it’s clear that many developers simply do not realize the damage that leaked source code can cause.