Privacy Companies Push Back Against EU Plot to End Online Privacy
A consortium of tech companies has sent an urgent appeal to ministers across the European Union, warning them against backing a proposed regulation that, under the pretext of fighting child sexual abuse, would jeopardize the security of internet services that rely on end-to-end encryption and end privacy for all citizens.
A total of 18 organizations — predominantly providers of encrypted email and messaging services — have voiced concerns about the European Commission’s (EC) proposed regulation, singling out its “detrimental” effects on children’s privacy and security and the dire repercussions it could have for cybersecurity.
Made public on January 22, 2024, the joint open letter argues that the EC’s draft regulation known as “Chat Control,” which would mandate the blanket scanning of encrypted communications, could create vulnerabilities that expose citizens and businesses to greater risk.
Further complicating matters, the letter also addresses the stalemate amongst member states, the EC, and the European Parliament, which have yet to reconcile their differing views on the proportionality and feasibility of the EC’s mass-scanning approach to child safety.
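To understand the signatories’ objection, it helps to recall what end-to-end encryption guarantees: only the sender and the recipient hold the keys, so the service relaying a message cannot read its contents, and any mandated scanning would have to happen on users’ devices or through a deliberate weakening of the protocol. The sketch below illustrates that guarantee with the PyNaCl library; the names and message are purely illustrative, and this is not the code of any particular provider.

```python
# pip install pynacl
from nacl.public import PrivateKey, Box

# Each user generates a key pair; private keys never leave their devices.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts directly to Bob's public key (end-to-end).
sender_box = Box(alice_key, bob_key.public_key)
ciphertext = sender_box.encrypt(b"see you at six")

# The relaying service only ever handles opaque bytes: there is no
# plaintext on the server for it to scan.
print(ciphertext.hex()[:48], "...")

# Only Bob, holding his private key, can recover the message.
receiver_box = Box(bob_key, alice_key.public_key)
print(receiver_box.decrypt(ciphertext))  # b'see you at six'
```

Because the relayed ciphertext is unreadable to the provider, critics argue that complying with a blanket scanning mandate would mean inspecting content on users’ devices before encryption, which is exactly the kind of weakening the letter warns could expose citizens and businesses to new risks.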
Roomba Won’t Give Amazon a Map of Your Home After Merger Implodes
Amazon abandoned its $1.4 billion acquisition of Roomba maker iRobot on Monday after regulators in the European Union threatened to block the deal. The deal’s implosion means the robot vacuums, and the company’s maps of 40 million floor plans across the globe, will not join the growing list of smart-home devices Amazon uses to collect information about you.
Regulators in the EU sent the companies a list of concerns in November regarding how Amazon’s acquisition would stifle innovation in the robot vacuum cleaner marketplace.
Privacy was not among the concerns raised by EU regulators, but consumer advocates have warned that the Roomba acquisition would give Amazon yet another device to track you and dominate your home’s systems. That regulatory pressure appears to have blown up the deal, an inadvertent but major win for your home’s privacy. Amazon has already been growing its presence in consumers’ homes with Amazon Alexa, Ring doorbells and cameras, and the Amazon Fire TV Stick.
The Roomba is, in many ways, a little spy: it learns the floor plan of your home, the furniture in your living room, which areas of the house get the most use, and many other data points. iRobot even noted in 2017 that selling its maps could be a key part of a future acquisition. The Roomba would have been yet another device feeding the profile Amazon can build on its customers.
TSA Uses ‘Minimum’ Data to Fine-Tune Its Facial Recognition, but Some Experts Still Worry
The Transportation Security Administration is moving forward with plans to implement facial recognition technology at U.S. airports and is working with the Department of Homeland Security’s research and development component to analyze data to ensure that the new units are working correctly, agency officials told Nextgov/FCW.
A TSA official said the agency “is currently in the beginning stages of integrating automated facial recognition capability as an enhancement to the Credential Authentication Technology devices that had been deployed several years ago.”
The latest CAT scanners — known as CAT-2 units — incorporate facial recognition technology by taking real-time pictures of travelers and then comparing those images against their photo IDs. TSA first demonstrated the CAT-2 units in 2020 and began deploying the new screeners at airports in 2022. A Jan. 12 press release from the agency said it added “457 CAT-2 upgrade kits utilizing the facial recognition technology” in 2023.
“The CAT-2 units are currently deployed at nearly 30 airports nationwide, and will expand to more than 400 federalized airports over the coming years,” the TSA official said, noting that it is currently optional for travelers to participate in facial recognition screenings. Those who decline to do so can notify a TSA agent and go through the standard ID verification process instead.
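TSA has not published the internals of the CAT-2 matching step, but 1:1 face verification systems of this kind generally convert the live camera capture and the ID photo into numeric embeddings and measure how similar they are. The sketch below is a generic, hypothetical illustration of that comparison: embed_face() is a toy stand-in for a trained face-embedding model, and the 0.6 threshold is an arbitrary example value, not a TSA parameter.

```python
import numpy as np

def embed_face(image: np.ndarray) -> np.ndarray:
    """Toy stand-in for a face-embedding network: downsample, center,
    and normalize the image into a fixed-length vector. Real systems
    use deep models trained on large face datasets."""
    vec = image[::8, ::8].astype(np.float64).ravel()
    vec -= vec.mean()
    return vec / (np.linalg.norm(vec) + 1e-9)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def verify(live_image: np.ndarray, id_photo: np.ndarray, threshold: float = 0.6) -> bool:
    """1:1 verification: does the live capture match this traveler's ID photo?
    The threshold trades false accepts against false rejects."""
    return cosine_similarity(embed_face(live_image), embed_face(id_photo)) >= threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    id_photo = rng.random((128, 128))
    live_image = id_photo + rng.normal(scale=0.05, size=id_photo.shape)  # same "face", noisy capture
    other_photo = rng.random((128, 128))                                 # a different "face"
    print(verify(live_image, id_photo))   # True
    print(verify(other_photo, id_photo))  # False
```

The questions experts raise sit mostly in what a sketch like this leaves out: how well the embedding model performs across demographics, how the match threshold is chosen, and what happens to the images after the comparison.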
Some lawmakers, privacy advocates and experts have voiced concerns about the continued expansion of facial recognition, either proposing the implementation of new standards and requirements for the technology’s use or calling for a complete halt to the government’s rollout of the tech for security and law enforcement purposes.
Can the Government Ask Social Media Sites to Take Down COVID Misinformation? SCOTUS Will Weigh In
This March, the Supreme Court will hear arguments centered on the government’s role in communicating — and sometimes censoring — pertinent public health information in the midst of a pandemic.
At the core of the lawsuit is whether the federal government’s requests for social media and search giants like Google, Facebook, Twitter, and YouTube to moderate COVID-19 misinformation violated users’ First Amendment rights.
While the suit was originally filed by then-Missouri Attorney General Eric Schmitt — and known as Missouri v. Biden — a range of plaintiffs arguing that the Biden administration suppressed their COVID-19 content later joined. Those include Jay Bhattacharya and Martin Kulldorff, who co-authored a paper, the Great Barrington Declaration, advancing the theory that people could achieve herd immunity without vaccines.
The case is now referred to as Murthy v. Missouri.
AI Is Coming for Big Pharma
If there’s one thing we can all agree upon, it’s that the 21st century’s captains of industry are trying to shoehorn AI into every corner of our world. But for all the ways AI will be shoved into our faces without proving very successful, it may actually have at least one useful purpose: dramatically speeding up the often decades-long process of designing, finding, and testing new drugs.
The current process takes “more than a decade and multiple billions of dollars of research investment for every drug approved,” said Dr. Chris Gibson, co-founder of Recursion, a company in the AI drug discovery space.
He says AI’s great skill may be to dodge the misses and help researchers avoid spending too long running down blind alleys. A software platform that can churn through hundreds of options at a time can, in Gibson’s words, “fail faster and earlier so you can move on to other targets.”
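Gibson’s “fail faster” point is essentially a triage strategy: use cheap model predictions to score a large pool of candidates and drop the obvious dead ends before anything reaches slow, expensive lab work. The sketch below is a generic illustration of that idea, not Recursion’s platform; predicted_activity() and the cutoff are invented for the example.

```python
import random

def predicted_activity(candidate: str) -> float:
    """Hypothetical stand-in for a model that scores how promising a
    compound looks for a given target (higher is better)."""
    random.seed(candidate)  # deterministic toy score per candidate
    return random.random()

def triage(candidates: list[str], cutoff: float = 0.95) -> list[str]:
    """Cheaply score every candidate and keep only the most promising,
    so slow, expensive lab assays run on a short list."""
    return [c for c in candidates if predicted_activity(c) >= cutoff]

if __name__ == "__main__":
    pool = [f"compound-{i:05d}" for i in range(10_000)]
    shortlist = triage(pool)
    print(f"{len(pool)} candidates screened in software, {len(shortlist)} sent to the lab")
```

The point is not that the model is right about every compound, but that being wrong quickly and cheaply is far less costly than being wrong after years of wet-lab work.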
OpenAI and Google Will Be Required to Notify the Government About AI Models
OpenAI, Google, and other AI companies will soon have to notify the government when they develop new foundation models, thanks to the Defense Production Act. According to Wired, U.S. Secretary of Commerce Gina Raimondo shared new details about this impending requirement at an event held by Stanford University’s Hoover Institution last Friday.
“We’re using the Defense Production Act … to do a survey requiring companies to share with us every time they train a new large language model, and share with us the results — the safety data — so we can review it,” said Raimondo.
The new rules are part of President Biden’s sweeping AI executive order announced last October. Amongst the broad set of mandates, the order requires companies developing any foundation model “that poses a serious risk to national security, national economic security, or national public health and safety” to notify the federal government and share the results of its safety testing.
Foundation models are models like OpenAI’s GPT-4 and Google’s Gemini that power generative AI chatbots. However, GPT-4 likely falls below the computing-power threshold that would trigger this government oversight.
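The executive order ties the reporting duty to training compute: the obligations apply to models trained with more than 10^26 integer or floating-point operations. A common back-of-the-envelope estimate of training compute is 6 x parameters x training tokens; the figures below are illustrative guesses, not OpenAI’s or Google’s disclosed numbers, but they show how a current frontier model could plausibly sit under the line while a somewhat larger training run would cross it.

```python
# Rough check against the executive order's 1e26-operation reporting
# threshold, using the common "6 * N * D" training-compute estimate.
THRESHOLD_OPS = 1e26

def training_flops(active_params: float, training_tokens: float) -> float:
    """~6 floating-point operations per (active) parameter per training token."""
    return 6 * active_params * training_tokens

# Illustrative, hypothetical model sizes only.
examples = {
    "hypothetical 300B-active-param model, 10T tokens": training_flops(3e11, 1e13),
    "hypothetical 1T-active-param model, 20T tokens": training_flops(1e12, 2e13),
}

for name, flops in examples.items():
    status = "above" if flops > THRESHOLD_OPS else "below"
    print(f"{name}: ~{flops:.1e} ops, {status} the 1e26 threshold")
```

On numbers in that ballpark, a GPT-4-class training run could plausibly land just under the reporting line, while the next generation of frontier models may not.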
Facebook Users in the E.U. Have More Privacy Protections Than in the U.S. Here’s Why.
Facebook is a behemoth so large, so absolute, that just 20 years after its creation, it’s difficult to imagine a world its power doesn’t reach, even in the most remote corners of civilization. Because the platform spans the entire globe but is commanded from Silicon Valley, it’s easy to assume it looks the same everywhere you can access it.
However, legislators in the EU are working to ensure that there are at least some protections for the people who use the platform.
The European Union’s Digital Markets Act is a significant regulation that addresses antitrust concerns with big tech companies, giving the EU regulatory power that has affected the way some social media platforms function. That means Facebook, and its parent company Meta, look a bit different in Europe than they do in the U.S., primarily in the protections they offer users.
In November 2023, despite Facebook’s best attempts to stop it, regulators forced Meta to start offering a monthly subscription that lets users in the EU, EEA, and Switzerland use its platforms without any ads. It costs €9.99 per month and is entirely optional — you can continue using the app for free and get the ads, or you can pay and have an ad-free experience.
It’s unclear if any of these tools will ever become available for users in the U.S., but it sure would be nice if U.S. regulators started caring about citizens’ digital privacy as much as EU regulators seem to.