
Big Brother News Watch

Dec 20, 2023

Clear Wants to Scan Your Face at Airports. Privacy Experts Are Worried. + More

Clear Wants to Scan Your Face at Airports. Privacy Experts Are Worried.

The Washington Post reported:

The private security screening company Clear is rolling out facial recognition technology at its expedited airport checkpoints in 2024, replacing the company’s iris-scanning and fingerprint-checking measures. With a presence at more than 50 U.S. airports, Clear’s update is the latest sign in a broader shift toward biometrics in air travel that is raising concerns from some privacy experts and advocates.

Clear’s shift to its new screening technology, which the company is calling NextGen Identity Plus, also includes stronger verification of identity documents by comparing them “back to the issuing source,” the company told The Washington Post. Clear said it has been collaborating with the Department of Homeland Security and TSA since 2020 to make these changes. Members who pay $189 a year for a Clear Plus subscription will be moved to the new technology free of charge.

Clear’s system differs from other airport facial recognition programs, the company told The Post, in that it compares live snapshots of travelers using the designated Clear airport lane only to data from their enrollment in NextGen Identity Plus. Moving from iris and fingerprint scanning to facial scanning should help customers get through Clear’s checkpoints faster.

Clear has long been in the business of biometrics in its screening practices at airports, arenas and other public venues. But a turn to facial recognition may lead to increased risk of surveillance and reduced privacy for travelers, privacy advocates say.

“As someone who flies constantly, I’m really disturbed to see the transformation of airports into biometric surveillance centers,” said Albert Fox Cahn, founder and executive director of the Surveillance Technology Oversight Project (STOP).

This Scary AI Breakthrough Means You Can Run but Not Hide — How AI Can Guess Your Location From a Single Image

TechRadar reported:

There’s no question that artificial intelligence (AI) is in the process of upending society, with ChatGPT and its rivals already changing the way we live our lives. But a new AI project has just emerged that can pinpoint the location of where almost any photo was taken — and it has the potential to become a privacy nightmare.

The project, dubbed Predicting Image Geolocations (or PIGEON for short), was created by three students at Stanford University and was designed to help find where images from Google Street View were taken. But when fed personal photos it had never seen before, it was able to locate them, usually with a high degree of accuracy.

Jay Stanley of the American Civil Liberties Union says that has serious privacy implications, including government surveillance, corporate tracking and stalking, according to NPR. For instance, a government could use PIGEON to find dissidents or see whether you have visited places it disapproves of. Or a stalker could employ it to work out where a potential victim lives. In the wrong hands, this kind of tech could wreak havoc.

Motivated by those concerns, the student creators have decided against releasing the tech to the wider world. But as Stanley points out, that might not be the end of the matter: “The fact that this was done as a student project makes you wonder what could be done by, for example, Google.”

Rite Aid Banned From Using AI Facial Recognition

Reuters reported:

Bankrupt U.S. pharmacy chain Rite Aid will be prohibited from using facial recognition technology for surveillance purposes for five years to settle U.S. Federal Trade Commission charges it harmed consumers, the FTC said on Tuesday.

Rite Aid deployed artificial intelligence-based facial recognition technology from 2012 to 2020 in order to identify shoplifters, but the company falsely flagged some consumers as matching someone who had previously been identified as a shoplifter, the FTC said.

FTC Unveils Sweeping Plan to Boost Children’s Privacy Online

The Washington Post reported:

The Federal Trade Commission on Wednesday unveiled a major proposal to expand protections for children’s personal data and limit what information companies can collect from kids online, marking one of the U.S. government’s most aggressive efforts to create digital safeguards for children.

Under the proposal, digital platforms would be required to turn off targeted ads to children under 13 by default and prohibited from using certain data to send kids push notifications or “nudges” to encourage them to keep using their products.

The plan, which still needs to be adopted, marks one of the most significant attempts by U.S. regulators to broaden their oversight over children’s online privacy, an issue that has gained bipartisan traction across states and the federal government.

The proposed rulemaking seeks to update the Children’s Online Privacy Protection Act (COPPA), a landmark 1998 law requiring websites and other digital service providers to obtain consent from parents before collecting data from users under 13, among other safeguards. The agency unveiled the long-awaited plan in a call for comment from the public on Wednesday.

TikTok Allowing Under-13s to Keep Accounts, Evidence Suggests

The Guardian reported:

TikTok faces questions over safeguards for child users after a Guardian investigation found that moderators were being told to allow under-13s to stay on the platform if they claimed their parents were overseeing their accounts.

In one example seen by the Guardian, a user who declared themselves to be 12 in their account bio, under TikTok’s minimum age of 13, was allowed to stay on the platform because their user profile stated the account was managed by their parents.

Suspected cases of underage account holders are sent to an “underage” queue for further moderation. Moderators have two options: to ban, which would mean the removal of the account, or to approve, allowing the account to stay on the platform.

A staff member at TikTok said they believed it was “incredibly easy to avoid getting banned for being underage. Once a kid learns that this works, they will tell their friends.”

Missouri Supreme Court Strikes Down Law Against Homelessness, COVID Vaccine Mandates

Associated Press reported:

The Missouri Supreme Court on Tuesday struck down a law that threatened homeless people with jail time for sleeping on state land.

The sweeping 64-page bill also dealt with city and county governance and banned COVID-19 vaccine requirements for public workers in Missouri.

The law is “invalid in its entirety,” Judge Paul Wilson wrote in the court’s decision.

AI Image Generators Trained on Pictures of Child Sexual Abuse, Study Finds

The Guardian reported:

Hidden inside the foundation of popular artificial intelligence (AI) image generators are thousands of images of child sexual abuse, according to new research published on Wednesday. The operators of some of the largest and most-used sets of images utilized to train AI shut off access to them in response to the study.

The Stanford Internet Observatory found more than 3,200 images of suspected child sexual abuse in the giant AI database LAION, an index of online images and captions that’s been used to train leading AI image-makers such as Stable Diffusion. The watchdog group based at Stanford University worked with the Canadian Centre for Child Protection and other anti-abuse charities to identify the illegal material and report the original photo links to law enforcement. More than 1,000 of the suspected images were confirmed as child sexual abuse material.

The response was immediate. On the eve of the Wednesday release of the Stanford Internet Observatory’s report, LAION said it was temporarily removing its datasets. LAION, which stands for the non-profit Large-scale Artificial Intelligence Open Network, said in a statement that it “has a zero-tolerance policy for illegal content and in an abundance of caution, we have taken down the LAION datasets to ensure they are safe before republishing them”.

While the images account for just a fraction of LAION’s index of about 5.8bn images, the Stanford group says the material is probably influencing the ability of AI tools to generate harmful outputs and reinforcing the prior abuse of real victims who appear multiple times.

Dec 19, 2023

EU Fingerprint Checks for British Travelers to Start in 2024 + More

EU Fingerprint Checks for British Travelers to Start in 2024

The Guardian reported:

A new EU digital border system that will require fingerprints and facial scans to be taken from British travelers on first use is expected to launch next autumn, according to reports. The entry/exit system (EES) is earmarked to start on October 6, 2024, according to the i and Times newspapers, citing Getlink, the owner of Eurotunnel. The Guardian has contacted Getlink for comment.

Eurotunnel, which runs a car transport service between Folkestone and Calais, is said to be testing the technology, in which personal data will be collected at borders and entered into an EU-wide database.

Under the EES, passengers would have to agree to fingerprinting and facial image capture the first time they arrived on the continent. After that, the data, including any record of refused entry, should allow quicker processing, according to travel bosses.

According to the European Commission, the system will apply when entering 25 EU countries (all member states apart from Cyprus and Ireland) and four non-EU countries (Norway, Iceland, Switzerland and Liechtenstein) that, along with most EU member states, are part of the border-free Schengen area.

Courts Are Choosing TikTok Over Children

The Atlantic reported:

Some court decisions are bad; others are abysmal. The bad ones merely misapply the law; abysmal decisions go a step further and elevate abstract principles over democratic will and basic morality. The latter’s flaw is less about legal error and more about “a judicial system gone wrong,” as the legal scholar Gerard Magliocca once put it.

In our times, some of the leading candidates for the “abysmal” category are the extraordinarily out-of-touch decisions striking down laws protecting children from social-media harms. The exemplar is NetChoice v. Bonta, in which a U.S. district court in California struck down the state’s efforts to protect children from harm arising from TikTok, Instagram, and other social media firms. In its insensitivity to our moment and elevation of conjectural theory over consequence, NetChoice is a true heir to the tradition of Hammer v. Dagenhart, the 1918 decision that struck down federal child-labor protections.

Social media presents an undoubted public health crisis for the country’s preteens and teens. A surgeon general’s report released earlier this year noted that, per a recent study, “adolescents who spent more than 3 hours per day on social media faced double the risk of experiencing poor mental health outcomes including symptoms of depression and anxiety,” compared with their peers who spent less time on such platforms. A particular concern is algorithms that serve content promoting eating disorders, suicide, and substance abuse, based on close surveillance of a given teenager.

The California law, passed last year, seeks to make social media companies “prioritize the privacy, safety, and well-being of children over commercial interests.” It may not have been a perfect work of draftsmanship, but in its basic form, it sought to protect children by barring companies such as TikTok from profiling children, excessively collecting data, and using those data in ways that are harmful to children.

After the law’s enactment, big tech firms and their lawyers, apparently unafraid of bad publicity, sued the state through an industry group, NetChoice. Their lawyers advanced a theory that collecting data from children is “speech” protected by the First Amendment. To her lasting disgrace, Judge Beth Freeman bought that ridiculous proposition.

Group Representing Social Media Giants Sues Utah Over Parental Consent Law

The Hill reported:

A group representing several social media giants, including Google, Meta, TikTok and X, sued Utah on Monday over the state’s new social media law that requires platforms to verify user ages and obtain parental consent for minors.

The Utah Social Media Regulation Act, which is set to go into effect in March, also requires social media companies to restrict minor access to accounts between 10:30 p.m. and 6:30 a.m. and bars advertising and data collection on their accounts.

NetChoice, a trade association of nearly three dozen internet companies, argued in Monday’s filing that the Utah law violates the First Amendment, representing an “unconstitutional attempt to regulate both minors’ and adults’ access to — and ability to engage in — protected expression.”

The group also argues that the law singles out certain websites, such as YouTube, Facebook and X, for regulation based on a series of “vague definitions and exceptions with arbitrary thresholds.”

Healthcare Industry Fights Back Against Crackdowns on Health Data Tracking

STAT News reported:

Wherever you go on the internet, trackers follow. These ubiquitous bits of code, invisibly embedded in most websites, are powerful tools that can reveal the pages you visit, the buttons you click, and the forms you fill to help advertisers tail and target you across the web.

But put those trackers on a healthcare website, and they have the potential to leak sensitive medical information — a risk that, in the last year, has driven the Department of Health and Human Services and the Federal Trade Commission to crack down on trackers in the websites of hospitals, telehealth companies, and more.
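To make the mechanism concrete, here is a minimal sketch, in browser TypeScript, of what an embedded tracker conceptually does; the collector endpoint and event names are hypothetical placeholders, not any vendor’s actual code:

    // Hypothetical tracker sketch: reports page views and clicks to a
    // third-party collector. On a healthcare site, even this minimal
    // telemetry can reveal which condition or treatment pages a visitor
    // viewed. The endpoint below is a placeholder, not a real service.
    const trackerEndpoint = "https://collector.example.com/event";

    function reportEvent(eventName: string, detail: Record<string, string>): void {
      // navigator.sendBeacon is a standard browser API often used by
      // analytics scripts to send data without delaying navigation.
      navigator.sendBeacon(
        trackerEndpoint,
        JSON.stringify({ event: eventName, url: location.href, ...detail })
      );
    }

    // Report the page view, then every subsequent click on the page.
    reportEvent("pageview", {});
    document.addEventListener("click", (e) => {
      const target = e.target as HTMLElement;
      reportEvent("click", { tag: target.tagName, text: target.innerText.slice(0, 40) });
    });

Because the reported URL can itself name a condition or appointment type, even this small amount of telemetry can expose sensitive medical information when it fires from a healthcare page.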

Health systems and companies have scrambled to adapt, many removing the trackers entirely in the face of regulatory enforcement and a growing set of class action lawsuits alleging the disclosure of patients’ protected health information. But another contingent is steeling itself for a fight, arguing that regulators have overstepped their authority and hobbled critical healthcare infrastructure by targeting trackers.

Which Apps Are Collecting the Most Data on You?

Fox News reported:

The number of apps that collect detailed personal data might surprise you. That includes some of the top apps on the App Store and Google Play Store. What I love doing most as the CyberGuy is making a difference in informing you about the power you need to protect yourself, especially your privacy.

AtlasVPN has released a new report that lists the shopping apps that collect the most data on you. Topping the chart was eBay: according to AtlasVPN, eBay’s Android app grabs 28 data points. Here’s a look at the top 10:

1. eBay
2. Amazon Shopping
3. Afterpay
4. Lowe’s
5. iHerb
6. Vinted
7. The Home Depot
8. Alibaba
9. Poshmark
10. Nike

All of these apps collect at least 18 data points on you. While some of that information can be data performance or app activity to help developers, some apps collect financial and personal data.

Think Tank Tied to Tech Billionaires Played Key Role in Biden’s AI Order

Politico reported:

The RAND Corporation — a prominent international think tank that has recently been tied to a growing influence network backed by tech billionaires — played a key role in drafting President Joe Biden’s new executive order on artificial intelligence, according to an AI researcher with knowledge of the order’s drafting and a recording of an internal RAND meeting obtained by POLITICO.

The provisions advanced by RAND in the October executive order included a sweeping set of reporting requirements placed on the most powerful AI systems, ostensibly designed to lessen the technology’s catastrophic risks. Those requirements hew closely to the policy priorities pursued by Open Philanthropy, a group that pumped over $15 million into RAND this year.

Financed by billionaire Facebook co-founder and Asana CEO Dustin Moskovitz and his wife Cari Tuna, Open Philanthropy is a major funder of causes associated with “effective altruism” — an ideology, made famous by disgraced FTX founder Sam Bankman-Fried, that emphasizes a data-driven approach to philanthropy.

EU Parliament Supports Granting Access to Sensitive Health Data Without Asking Patients

Reclaim the Net reported:

The European Parliament’s (EP) latest effort, grappling with how to allow access to sensitive medical records without patients’ permission while still maintaining a semblance of caring for privacy, has had an update.

Last week, a plenary vote in this EU institution revealed that while most EP members (MEPs) want to allow that access — and in that manner — they are also opposed to wholesale, mandatory creation of electronic records for every person in the EU. The scheme is known as the European Health Data Space, and it received support from an EP majority.

As privacy and digital security advocate, lawyer and MEP Patrick Breyer notes on his blog, this database would be remotely accessible and would consist of health records covering every medical treatment.

It was only thanks to an amendment accepted at the last minute (proposed by Breyer, of Germany’s Pirate Party, and several other EP groups that do not form the EP majority) that nation-states will be able to let their citizens object to having their sensitive health data harvested into this interconnected system of medical records.

TikTok’s Chinese Owner Stole ChatGPT Secrets to Make Copycat AI — Report

Newsweek reported:

ByteDance, the Beijing-based parent company behind the short video app TikTok, has been accused of secretly using ChatGPT to develop its own commercial artificial intelligence for the Chinese market.

OpenAI, which owns the chatbot, suspended the Chinese tech giant’s developer account following the alleged violation of its terms of service, The Verge reported on Saturday.

The technology in question is the application programming interface, or API, behind ChatGPT, which enables coders to incorporate the AI into other apps. In its terms of service, OpenAI forbids clients from using the chatbot to “develop models that compete with OpenAI.”
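As a point of reference, a basic integration through that API is only a few lines. The sketch below (in TypeScript, assuming Node 18+ and an API key in the OPENAI_API_KEY environment variable; the model name is the one current in late 2023) shows the kind of chat completions call those terms govern:

    // Minimal sketch of a ChatGPT API call of the kind OpenAI's terms
    // of service govern. Assumes Node 18+ (built-in fetch) and an API
    // key in OPENAI_API_KEY; error handling is omitted for brevity.
    async function askChatGPT(prompt: string): Promise<string> {
      const res = await fetch("https://api.openai.com/v1/chat/completions", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
        },
        body: JSON.stringify({
          model: "gpt-3.5-turbo",
          messages: [{ role: "user", content: prompt }],
        }),
      });
      const data = await res.json();
      // The reply text is in the first choice's message content.
      return data.choices[0].message.content;
    }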

The Verge’s Alex Heath said he obtained access to internal ByteDance documents that showed the company had used ChatGPT at almost every stage of its own chatbot’s development, from training to performance evaluation.

Dec 18, 2023

Google Will No Longer Hold Onto People’s Location Data in Google Maps — Meaning It Can’t Turn That Info Over to Police + More

Google Will No Longer Hold Onto People’s Location Data in Google Maps — Meaning It Can’t Turn That Info Over to the Police

Insider reported:

Google is making some changes in Google Maps that will increase user privacy. Data from the Timeline feature in Google Maps, which is controlled by the Location History setting and keeps a record of routes and trips users have taken, will soon be stored directly on users’ devices instead of by Google.

That means Google itself will no longer have access to user location history data. And by extension, neither will law enforcement, which has often requested user location data from Google — for example, through “geofence” orders, which request data about every user who was near a specific place at a specific time.
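Conceptually, a geofence request reduces to a proximity test over stored location records. The sketch below is an illustrative TypeScript version of that test (not Google’s actual implementation), using the haversine formula for great-circle distance:

    // Illustrative sketch of the proximity test a "geofence" order
    // conceptually describes: was a recorded location within a given
    // radius of a target point? Uses the haversine formula.
    const EARTH_RADIUS_M = 6_371_000;

    function distanceMeters(
      lat1: number, lon1: number,
      lat2: number, lon2: number
    ): number {
      const toRad = (deg: number) => (deg * Math.PI) / 180;
      const dLat = toRad(lat2 - lat1);
      const dLon = toRad(lon2 - lon1);
      const a =
        Math.sin(dLat / 2) ** 2 +
        Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
      return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
    }

    // A record matches the order if it falls inside the fence's radius.
    function insideGeofence(
      pointLat: number, pointLon: number,
      fenceLat: number, fenceLon: number, radiusM: number
    ): boolean {
      return distanceMeters(pointLat, pointLon, fenceLat, fenceLon) <= radiusM;
    }

A provider answering such an order would, in effect, run this check over every stored location record within the requested time window, which is exactly the data Google will no longer hold.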

Last July, Google announced it would delete the location history data of users who visited abortion clinics, drug treatment centers, domestic violence shelters, weight loss clinics, and other sensitive health-related locations. The company said that if its systems identified that a user had visited one of these sensitive locations, it would then delete the entry from that user’s location history “soon after they visit.”

Now this control is back in the hands of individual users.

EU Bureaucrats Formally Investigate X Over Lack of ‘Disinformation’ Censorship

Reclaim the Net reported:

Elon Musk’s social media giant X, formerly Twitter, has been targeted by pro-censorship commissioners at the EU. While the pressure has been building ever since Musk took over the platform last year, and especially since the EU’s new censorship law, the Digital Services Act, came into force this summer, this is the first time X has faced a formal investigation.

The investigation was announced by the EU’s European Commissioner for Internal Market Thierry Breton, an unelected bureaucrat who has been pushing both formally and informally for X to censor more speech over the last year.

Of particular interest to the EU is the spread of alleged “disinformation.” Tackling it is part of the EU’s efforts to control global online speech, by equating “disinformation” with “harm.”

It Sure Looks Like Governments Want to Let AI Surveillance Run Wild

Gizmodo reported:

By all accounts, the European Union’s AI Act seems like something that would make tech ethicists happy. The landmark artificial intelligence law, which enjoyed a huge legislative win this week, seeks to institute a broad regulatory framework that would tackle the harms posed by the new technology. In its current form, the law would go to some lengths to make AI more transparent, inclusive, and consumer-friendly in the countries where it’s in effect. What’s not to like, right?

Among other things, the AI Act would bar controversial uses of AI, namely what it deems “high-risk” systems. This includes things like emotion recognition systems at work and in schools, as well as the kind of social scoring systems that have become so prevalent in China. At the same time, the law includes safeguards that would force companies that generate media content with AI to disclose that use to consumers.

And yet, despite some policy victories, there are major blind spots in the new law — ones that leave civil society groups more than a little alarmed.

Indeed, the European Parliament’s own press release on the landmark law admits that “narrow exceptions [exist] for the use of biometric identification systems (RBI)” — police mass surveillance tech, in other words — “in publicly accessible spaces for law enforcement purposes, subject to prior judicial authorization and for strictly defined lists of crime.”

As part of that exemption, the law allows police to use live facial recognition technology — a controversial tool that has been dubbed “Orwellian” for its ability to monitor and catalog members of the public — in cases where it’s used to prevent “a specific and present terrorist threat” or to ID or find someone who is suspected of a crime.

Howard Zucker to Testify on New York’s Disastrous COVID Response in Front of House Committee

New York Post reported:

Former state Health Commissioner Howard Zucker will be hauled in front of Congress next week to answer questions about the state’s disastrous response to the coronavirus pandemic. Zucker will sit Monday for a closed-door, transcribed interview with members of the House’s Select Subcommittee on the Coronavirus Pandemic.

As former Gov. Andrew Cuomo’s health czar, Zucker was responsible for a March 2020 order that forced Empire State nursing homes to accept coronavirus-positive residents returning from hospitals.

“Howard Zucker had a role in crafting that policy for Governor Cuomo,” said Staten Island GOP Rep. Nicole Malliotakis, the only New Yorker on the committee. “We want to know what he knows in terms of what led to the [order] and why — when they had alternative options such as the U.S. Navy Comfort ship and South Beach Psychiatric Center in Staten Island — they continued to mandate these nursing homes take COVID patients.

“I would like to know what was the difference in reimbursements for individuals put in hospitals versus nursing homes,” Malliotakis continued. “Did that financial decision play a role?”

TikTok Requires Users to ‘Forever Waive’ Rights to Sue Over Past Harms

Ars Technica reported:

Some TikTok users may have skipped reviewing an update to TikTok’s terms of service this summer that shakes up the process for filing a legal dispute against the app. According to The New York Times, changes that TikTok “quietly” made to its terms suggest that the popular app has spent the back half of 2023 preparing for a wave of legal battles.

In July, TikTok overhauled its rules for dispute resolution, pivoting from requiring private arbitration to insisting that legal complaints be filed in either the U.S. District Court for the Central District of California or the Superior Court of the State of California, County of Los Angeles. Legal experts told the Times this could be a way for TikTok to dodge arbitration claims filed en masse that can cost companies millions more in fees than they expected to pay through individual arbitration.

Perhaps most significantly, TikTok also added a section to its terms that mandates that all legal complaints be filed within one year of any alleged harm caused by using the app. The terms now say that TikTok users “forever waive” rights to pursue any older claims. And unlike a prior version of TikTok’s terms of service archived in May 2023, users do not seem to have any options to opt out of waiving their rights.

Mortgage Giant Mr. Cooper Hit With Cyberattack Possibly Affecting More Than 14 Million Customers

ABC News reported:

Mortgage and loan giant Mr. Cooper was hit with a cyber breach that involved “substantially all of our current and former customers’” sensitive personal information, according to filings with state and federal regulators.

“The personal information in the impacted files included your name, address, phone number, Social Security number, date of birth, and bank account number,” the company said in a filing with the Maine Attorney General’s Office on Monday.

More than 14 million customers could be affected, based on estimates of Mr. Cooper’s customer base. The company says it shut down its systems to contain the incident and protect customer information.

A Digital-Surveillance State Won’t Make Us Any Safer

National Review reported:

On April 18, 2018, Joseph James DeAngelo visited a Hobby Lobby store near his home in Roseville, Calif. As he shopped inside, Sacramento investigators swabbed his car door handle, obtaining a sample of his DNA.

Months later, police arrested DeAngelo under suspicion of being the “Golden State Killer,” a serial murderer and rapist who had evaded capture for 40 years. The swab taken from his car door, coupled with DNA collected from a tissue he had discarded outside his home, was run through a publicly available DNA database, allowing the cops to construct a family tree of the perpetrator. From there, they narrowed down potential suspects to men of a certain age who lived in the area at the time of the crimes.

This use of DNA wasn’t like the time-honored process of lifting someone’s fingerprints and comparing them with those of known offenders. You leave traces of your genetic makeup everywhere you go, and with the help of forensic genealogy, you can be identified by them. And it’s not just your genetic makeup — you are spreading the DNA of everyone else to whom you are related, which can aid in the identification process whether you know it or not.

The Orwellian end result is that we might all be on the grid at all times. And DNA identification is only a small part of the way technology is making us perpetually trackable. Everyone wants to be safe, of course, but the ubiquity of cameras, GPS, tracking devices, the internet, and AI is also lulling Americans into a false sense of personal privacy.

How Discord Became a Perfect Conduit for a Leak of Government Secrets

The Washington Post reported:

Jack Teixeira led an unruly online chatroom named after a racist and homophobic reference. In Teixeira’s group on the Discord app, bigoted language was common. Users, many of them teenagers or young adults, fantasized about violence.

Now Teixeira, a member of the Massachusetts Air National Guard, is accused of using Discord chatrooms to share hundreds of classified government documents. It was one of the worst leaks of government secrets in modern American history.

Teixeira pleaded not guilty to six counts of mishandling and disclosing classified information. He remains in jail awaiting trial.

The roots of crime are complicated. But Discord combines anonymity, hands-off monitoring and insular communities in ways that made it an ideal conduit for Teixeira’s alleged leaks, said Samuel Oakford, a reporter with The Washington Post’s visual forensics team.

Britain Weighs New Consultation on Social Media Impact on Teens

Reuters reported:

Britain could look at further measures to protect young teenagers from the risks of social media in the new year following the introduction of new online safety laws focused on children and the removal of illegal content, a minister said.

The Online Safety Act, which became law in October, requires platforms like Meta’s (META.O) Instagram and Alphabet’s (GOOGL.O) YouTube to strengthen controls around illegal content and age-checking measures.

Major platforms including Instagram, YouTube and Snapchat require users to be at least 13 years old.

A Bloomberg report said the British government was studying a crackdown on social media access for children under the age of 16, including potential bans.

Dec 14, 2023

Instagram Quietly Rolled Out a Misinformation Feature That Has Sparked Claims of Stealth Censorship + More

Instagram Quietly Rolled Out a Misinformation Feature That Has Sparked Claims of Stealth Censorship

NBC News reported:

A feature meant to give Instagram users control over how Meta’s fact-checking process affects their feeds is sparking backlash and speculation after the company rolled it out quietly with little explanation.

In an update Tuesday to a blog post originally published in July, Meta said Instagram had recently added new user controls to its “Fact-Checked Control program.”

In a statement, a Meta spokesperson said: “We’re giving people on Facebook and Instagram even more power to control the algorithm that ranks posts in their Feed. If someone wants to adjust the demotions on fact-checked content in their Feed, they must change the setting on their own. We’re doing this in response to users telling us that they want a greater ability to decide what they see on our apps.”

Meta has struggled to react to numerous and sometimes conflicting demands around misinformation on its platforms over the years, and the product rollout appears to be an attempt to give users more control over how the company is influencing their feeds.

Congress Approves Extension of Warrantless Surveillance Powers Opposed by Civil Libertarians

ZeroHedge reported:

As expected given there were only a handful of senators and House reps opposing, Congress has approved the short-term extension of the U.S. government’s warrantless surveillance powers.

Here’s what Rep. Chip Roy of Texas had to say: “The fact of the matter is what’s being stated is it is impossible to oppose the National Defense Authorization Act because we put a pay raise in it or because we put something in there that is seemingly so important that we have to ignore the critical destruction of our civil liberties by adding FISA extension right on the top of it without doing the reforms necessary to protect the American people.”

He and some others have argued that the FISA issue should be a standalone bill and not part of the NDAA. Naturally, the U.S. intelligence community praised its passage as “necessary” to national security.

At a moment when Republicans continue to refuse to support the massive $111 billion supplemental spending package that Biden wants for Ukraine, Israel, and Taiwan, the Senate did manage to get something big done, namely passage of the mammoth $886 billion 2024 National Defense Authorization Act (NDAA).

It passed the Senate on Wednesday, authorizing funding for the Department of Defense for the next year, in a vote of 87-13. Those voting against it included six Republicans, six Democrats, and an independent. It now heads to the House, where a vote is expected Thursday.

ChatGPT Found by Study to Spread Inaccuracies When Answering Medication Questions

Fox News reported:

ChatGPT has been found to have shared inaccurate information regarding drug usage, according to new research. In a study led by Long Island University (LIU) in Brooklyn, New York, nearly 75% of drug-related, pharmacist-reviewed responses from the generative AI chatbot were found to be incomplete or wrong.

In some cases, ChatGPT, which was developed by OpenAI in San Francisco and released in late 2022, provided “inaccurate responses that could endanger patients,” the American Society of Health-System Pharmacists (ASHP), headquartered in Bethesda, Maryland, stated in a press release.

ChatGPT also generated “fake citations” when asked to cite references to support some responses, the same study also found.

In one example cited from the study by lead author Sara Grossman, PharmD, associate professor of pharmacy practice at LIU, ChatGPT was asked whether “a drug interaction exists between Paxlovid, an antiviral medication used as a treatment for COVID-19, and verapamil, a medication used to lower blood pressure.”

The AI model responded that no interactions had been reported with this combination. But in reality, Grossman said, the two drugs pose a potential threat of “excessive lowering of blood pressure” when combined.

Google Is Rolling Out New AI Models for Healthcare. Here’s How Doctors Are Using Them

CNBC reported:

Google on Wednesday announced MedLM, a suite of new healthcare-specific artificial intelligence models designed to help clinicians and researchers carry out complex studies, summarize doctor-patient interactions and more.

The move marks Google’s latest attempt to monetize healthcare industry AI tools amid fierce competition for market share with rivals like Amazon and Microsoft. CNBC spoke with companies that have been testing Google’s technology, like HCA Healthcare; they say the potential for impact is real, though they are taking steps to implement it carefully.

Dr. Michael Schlosser, senior vice president of care transformation and innovation at HCA, said the fact that AI models can spit out incorrect information is a big challenge, and HCA has been working with Google to come up with best practices to minimize those fabrications. He added that token limits, which restrict the amount of data that can be fed to the model, and managing the AI over time have been additional challenges for HCA.

Carbon Passports: A Climate Measure to Restrict Freedom of Movement

The New American reported:

The globalist effort to control the movement of citizens may be gaining another arrow in its quiver. An October report released by The Future Laboratory, a consultancy that purports to “future-proof” organizations, has suggested that carbon passports could be used as a way to stem unnecessary air travel.

The idea is being suggested to promote more “sustainable” travel by issuing “carbon passports” as a means of keeping track of an individual’s personal carbon usage. According to climate alarmists, the average annual per capita carbon footprint needs to drop to under two tons by 2050 if mankind has any hope of keeping the global average temperature increase to 2°C or less. Currently, it is estimated that the average American’s annual carbon footprint is approximately 16 tons.

Even to the untrained eye, these proposed carbon passports sound suspiciously like the vaccine passports that many nations pushed during the COVID-19 pandemic. Were those vaccine passports merely a trial run for an even more insidious digital tracking system?

“The next step is it’s not just your vaccine data or your vaccination record, it’s everything else,” said Dutch political commentator Eva Vlaardingerbroek in an interview on X. “We’re gonna walk straight into a two-tier society, just like we did with COVID, and this time it’s going to be worse.”

U.K. Regulator Probes TikTok Over Parental Controls Information

Financial Times reported:

TikTok is being investigated by the U.K. media regulator over concerns the Chinese-owned video app supplied “inaccurate” information about its parental controls as the watchdog intensifies its efforts to protect children from harmful online material.

Ofcom said on Thursday it had “reasonable grounds for believing that” ByteDance-owned TikTok had breached its legal responsibilities and said it might take enforcement action.

Ofcom had requested information from TikTok to understand and monitor how the viral video platform’s parental controls worked. The regulator on Thursday said the “available evidence suggests that the information provided . . . may not have been complete and accurate.” Ofcom is intensifying its work in protecting children from harm as part of its role as the U.K.’s online safety regulator, following a landmark piece of legislation passed in October.

The U.K.’s legislation is seen as among the strongest online regulations in the world and Ofcom has pushed to hold companies to account for breaches of the law. This month, it issued guidance to porn websites, forcing them to introduce stricter technical measures to ensure that their users are over the age of 18.