
Big Brother News Watch

May 23, 2023

‘Growing Evidence’ Social Media Harms Kids’ Mental Health, Says Surgeon General + More

Surgeon General: ‘Growing Evidence’ Social Media Harms Kids’ Mental Health

New York Daily News reported:

There is “growing evidence” that the use of social media by kids and adolescents could be detrimental to their mental health, according to the U.S. surgeon general. Dr. Vivek Murthy is sounding the alarm about the negative effects of social media on the mental health and well-being of young people, amid what he calls “a national youth mental health crisis.”

In a new advisory issued Tuesday, Murthy highlighted the “significant” public health challenge posed by the use of social media by U.S. youth — specifically those between the ages of 10 and 19, who “are undergoing a highly sensitive period of brain development.”

“Nearly every teenager in America uses social media, and yet, we do not have enough evidence to conclude that it is sufficiently safe for them, especially at such a vulnerable stage of brain, emotional, and social development,” Murthy said in a statement.

He urged policymakers and tech companies, as well as families of young people in the U.S., to take immediate action to “maximize the benefits and minimize the harms of social media platforms” and create “safer” and “healthier” environments for young users.

Pandemic’s Toll on Reading Still Being Seen in Classrooms Across the Country

Fox News reported:

The desks, once spread apart to fight COVID-19, are back together. Masks cover just a couple of faces. But the pandemic maintains an unmistakable presence.

Look no further than the blue horseshoe-shaped table in the back of the room where Richard Evans calls a handful of students back for extra help in reading — a pivotal subject for third grade — at the end of each day.

Here is where time lost to pandemic shutdowns and quarantines shows itself: in the students who are repeating this grade. In the little fingers slowly sliding beneath words sounded out one syllable at a time. In the teacher’s patient coaching through reading concepts usually mastered in first grade — letter “blends” like “ch” and “sh.”

In a year that is a high-stakes experiment on making up for missed learning, this strategy — assessing individual students’ knowledge and tailoring instruction to them — is among the most widely adopted in American elementary schools. In his classroom of 24 students, each affected differently by the pandemic, Evans faces the urgent challenge of having them all read well enough to succeed in the grades ahead.

TikTok Sues Montana After State Passes Law Banning App

Forbes reported:

TikTok filed a widely expected federal lawsuit against Montana alleging that the state’s new ban on the app — which was signed into law last week and would take effect January 1, 2024 — “unlawfully abridges one of the core freedoms” allowed under the First Amendment by suppressing free speech.

The suit, filed in the U.S. District Court of Montana, seeks to have the Montana law overturned, permanently blocked from implementation and declared unconstitutional; the suit is against the state’s attorney general, Austin Knudsen, who is tasked with enforcing the ban.

Montana Gov. Greg Gianforte signed the first-of-its-kind bill banning the app from being downloaded in the state last week, saying he did so “to protect Montanans’ personal and private data from the Chinese Communist Party.”

Increased Oversight: Discord Tests New Parental Controls for Teens

TechCrunch reported:

New usernames aren’t the only change coming to the popular chat app Discord, now used by 150 million people every month. The company is also testing a suite of parental controls that would allow for increased oversight of Discord’s youngest users, TechCrunch has learned and Discord confirmed.

In a live test running in Discord’s iOS app in the U.S., the company introduced a new “Family Center” feature, where parents will be able to configure tools that allow them to see the names and avatars of their teen’s recently added friends, the servers the teen has joined or participated in, and the names and avatars of users they’ve directly messaged or engaged with in group chats.

However, Discord clarifies in an informational screen that parents will not be able to view the content of their teen’s messages or calls, in order to respect their privacy.

This approach, which walks a fine line between the need for parental oversight and a minor’s right to privacy, is similar to how Snapchat implemented parental controls in its app last year. Like Discord’s system, Snapchat gives parents insight only into who their teen is talking to and friending, not what they’ve typed or the media they’ve shared.

FTC Issues Warning on Misuse of Biometric Info Amid Rise of Generative AI

FOXBusiness reported:

The Federal Trade Commission (FTC) has issued a warning on the potential for consumers’ biometric information to be misused in connection with emerging technologies like generative artificial intelligence (AI) and machine learning.

A policy statement published by the FTC last week warned that the increasingly pervasive use of consumers’ biometric data, including by technologies powered by machine learning and AI, poses risks to consumers’ privacy and data. Biometric information is data that depicts or describes physical, biological or behavioral traits and characteristics, including measurements of an identified person’s body or a person’s voiceprint.

“In recent years, biometric surveillance has grown more sophisticated and pervasive, posing new threats to privacy and civil rights,” said Samuel Levine, director of the FTC’s Bureau of Consumer Protection. “Today’s policy statement makes clear that companies must comply with the law regardless of the technology they are using.”

Mother of Unvaccinated Girl Denied Kidney Transplant by Duke Shares Huge Update

The Daily Wire reported:

Yulia Hicks, a child who was denied a kidney transplant by Duke Children’s Hospital last year, will finally be receiving the life-saving operation at another North Carolina hospital.

The teen has a genetic kidney disorder that requires a transplant, but her family said last year that Duke was refusing to put Yulia on the kidney wait list because she is unvaccinated against COVID. Notably, the family says Yulia has already recovered from the virus.

The Daily Wire reported last year that a phone call, the recording of which was obtained by journalist Alex Berenson, revealed a Duke health official telling the Hicks family that Yulia must get vaccinated against COVID before she could become a candidate for the kidney transplant.

Her family said Yulia has already contracted COVID and recovered, but doctors told them Yulia’s natural immunity was not enough, according to the recorded call.

Alberta Worker Who Refused COVID Shot Wins Case for Another Chance to Appeal for EI Benefits

The Epoch Times reported:

An Alberta woman denied Employment Insurance (EI) benefits for refusing COVID-19 vaccination has appealed the ruling and won reconsideration of her case.

Amanda Michaud, a biomedical equipment technologist with Alberta Health Services (AHS) in Grande Prairie, was placed on an unpaid leave of absence during the pandemic due to non-compliance with her employer’s mandatory vaccination policy. Michaud, with AHS for some 15 years, received a letter from her employer on Dec. 6, 2021, that explicitly stated the leave of absence was not disciplinary.

Her lawyer, James Kitchen, told The Epoch Times that when Michaud applied for EI on Dec. 22, 2021, the Canada Employment Insurance Commission decided that her suspension was for “misconduct” and that she wasn’t entitled to EI benefits as a result.

May 22, 2023

Venmo Teen Accounts Coming Next Month, Despite Privacy Concerns + More

Venmo Teen Accounts Are Coming Next Month

The Verge reported:

Venmo is introducing a new service that allows parents to open a Venmo account for children between the ages of 13 and 17 to send and receive money via the app. Venmo Teen accounts also come with a debit card and controls for parents to monitor transactions and manage their child’s privacy settings — important because Venmo has been routinely criticized for its lack of privacy protections in the past.

Venmo says its new Teen account will begin rolling out to select customers from June 2023 and will be “widely available in the coming weeks.”

Kids require safe digital access to money as stores increasingly phase out physical cash, but Venmo has given plenty for parents to be concerned about. The mobile payment service has been widely criticized over privacy concerns, with one 2022 study from the University of Southern California reporting that two in five Venmo users have exposed their own personal information on the platform.

U.S. Intelligence Building System to Track Mass Movement of People Around the World

Vice reported:

The Pentagon’s intelligence branch is developing new tech to help it track the mass movement of people around the globe and flag “anomalies.”

The project is called the Hidden Activity Signal and Trajectory Anomaly Characterization (HAYSTAC) program and it “aims to establish ‘normal’ movement models across times, locations, and populations and determine what makes an activity atypical,” according to a press release from the Office of the Director of National Intelligence (ODNI).

HAYSTAC will be run by the ODNI’s Intelligence Advanced Research Projects Activity (IARPA). It’s kind of like DARPA, the Pentagon’s blue-sky research department, but with a focus on intelligence projects. According to the agency, the project will analyze data from internet-connected devices and “smart city” sensors using AI.

Jack Cooper, HAYSTAC’s program manager, also mentioned privacy, or rather a lack of it, as a motivation for thinking about human movement. “Today you might think that privacy means going to live off the grid in the middle of nowhere,” he said. “That’s just not realistic in today’s environment. Sensors are cheap. Everybody’s got one. There’s no such thing as living off the grid.”

Medical AI’s Weaponization

Axios reported:

Machine learning can bring us cancer diagnoses with greater speed and precision than any individual doctor — but it could also bring us another pandemic at the hands of a relatively low-skilled programmer.

Why it matters: The health field is generating some of the most exciting artificial intelligence innovation, but AI can also weaponize modern medicine against the same people it sets out to cure.

Driving the news: The World Health Organization is warning about the risks of bias, misinformation and privacy breaches in the deployment of large language models in healthcare.

The big picture: As this technology races ahead, everyone — companies, government and consumers — has to be clear-eyed that it can both save lives and cost lives.

Escaped viruses are a top worry. Around 350 companies in 40 countries are working in synthetic biology. With more artificial organisms being created, there are more chances for accidental release of antibiotic-resistant superbugs, and possibly another global pandemic.

Amazon’s Palm-Scanning Payment Tech Will Now Be Able to Verify Ages, Too

TechCrunch reported:

Amazon One, the retailer’s palm-scanning payment technology, is now gaining new functionality with the addition of age verification services. The company announced today that customers using Amazon One devices will be able to buy adult beverages — like beers at a sports event — just by hovering their palm over the Amazon One device. The first venue to support this feature will be Coors Field, home of the Colorado Rockies MLB team. The technology will roll out to additional venues in the months ahead, Amazon says.

To use the system, customers hold their hand over a reader on the device that identifies multiple aspects of their palm, like lines, ridges and vein patterns, to make the identification. Amazon has argued that palm reading is a more private form of biometrics because you can’t determine someone’s identity just by looking at their palm images. However, the company isn’t just storing palm images — it’s creating a customer database that matches palm images with other information.

The technology has been the subject of privacy concerns since its debut, which already led one early adopter to abandon their plans to use the readers, after receiving pressure from consumer privacy and advocacy groups. Denver Arts and Venues had been planning to leverage Amazon One for ticketless entry at Red Rocks Amphitheater — a big win for Amazon — but it cut ties with the retailer after the publication of an open letter that suggested Amazon could share palmprint data with government agencies and that it could be stolen from the cloud by hackers.

A group of U.S. senators also pressed Amazon for more information about its plans with customer biometrics shortly after the technology’s launch. Plus, Amazon is facing a class action lawsuit over failure to provide proper notice under an NYC biometric surveillance law, related to the use of its Amazon One readers at Amazon Go stores.

That ChatGPT iPhone App Has Serious Privacy Issues You Need to Know About

TechRadar reported:

OpenAI, the company behind ChatGPT, recently brought its artificial intelligence bot to phones with the ChatGPT iPhone app. The mobile version of the chatbot has already climbed the ranks and become one of the most popular free apps on the App Store right now. However, before you jump headfirst into the app, beware of getting too personal with the bot and putting your privacy at risk.

OpenAI’s privacy policy says that when you “use our services, we may collect personal information that is included in the input, file uploads, or feedback you provide”. This basically means that if you ask ChatGPT questions that contain any personal information (read: facts about you which you’d rather not share with a living soul), it’ll be sent to OpenAI and could be read by a human reviewer. And that’s a big deal.

So, you have no idea whether OpenAI is actually reading your conversations, and you have no option to opt out. There is no possible way for the company to read every conversation from every user, but it is something you should keep in mind as you continue to use the app.

The Government Can’t Seize Your Data — but It Can Buy It

TechCrunch reported:

When the Biden administration proposed new protections earlier this month to prevent law enforcement from demanding reproductive healthcare data from companies, it took a critical first step in protecting our personal data. But there remains a different, serious gap in data privacy that Congress needs to address.

While the Constitution prevents the government from compelling companies to turn over your sensitive data without due process, there are no laws or regulations stopping the government from simply buying it. And the U.S. government has been buying private data for years.

Congress needs to ban the government from buying up sensitive geolocation data entirely — not just preventing its seizure. So long as agencies and law enforcement can legally purchase this sensitive information from data brokers, constitutional limits on the government’s ability to seize this data mean next to nothing.

Meta Slapped With $1.3 Billion Fine for Sending EU User Data to the U.S.

Mashable reported:

Facebook’s parent company Meta has been fined 1.2 billion euros ($1.3 billion) for breaching the European Union’s data protection rules.

The issue revolves around the way Facebook handled European user data. According to Ireland’s Data Protection Commission (DPC), which announced the results of its inquiry into Meta Ireland on Monday, Facebook’s transferring of user data from Europe to the U.S. was in breach of Europe’s General Data Protection Regulation (GDPR) rules.

In a nutshell, when Facebook takes personal data from its customers in the EU and transfers it over to the U.S., the data can potentially be shared with U.S. intelligence services. A deal called the Privacy Shield used to allow the free transfer of EU user data to companies in the U.S. until 2020, when the EU’s Court of Justice determined it didn’t really protect EU users’ data from U.S. surveillance. But Facebook continued to transfer EU users’ personal data even after the Privacy Shield was invalidated, thus triggering the inquiry by the EU’s regulators.

With this fine, Facebook will be the unwilling holder of the record for the biggest fine ever handed down by the EU, surpassing Amazon, which was slapped with an $886 million fine over (surprise) a GDPR breach in 2021. According to Euronews, Meta plans to appeal the decision.

May 19, 2023

Cotton Warns New CDC Vaccine Schedule Including COVID Jab Could Lead to School Vaccine Mandates + More

Cotton Warns New CDC Vaccine Schedule Including COVID Jab Could Lead to School Vaccine Mandates

Fox News reported:

Arkansas Republican Senator Tom Cotton worries that schools might mandate children receive the COVID-19 vaccine after the Biden administration’s new guidelines, and he is seeking to know whether Congress can do anything to stop it.

Cotton sent a letter to Comptroller General Gene Dodaro Tuesday calling for a Government Accountability Office (GAO) analysis of the Centers for Disease Control and Prevention’s (CDC) new vaccine schedule for kids and teens.

The CDC changed the vaccine schedule in February to recommend COVID-19 vaccine doses for children as young as 6 months old. If the schedule is determined to be part of the federal rulemaking process, Congress could overturn it.

The Arkansas senator called the change in the CDC schedule “irresponsible” and warned the change could be used to back vaccine mandates in schools. “The CDC’s modification of the vaccine schedule is irresponsible given the low mortality rates of adolescents with COVID-19 and the unknown long-term side effects of the vaccine,” Cotton said.

Gorsuch Slams Pandemic Emergency Power as Intrusion on Civil Liberties

The Hill reported:

Conservative Justice Neil Gorsuch on Thursday slammed the use of emergency power during the pandemic as a mass intrusion on civil liberties.

Gorsuch, in a statement attached to the court’s unsigned order, more broadly railed against the use of emergency powers since COVID-19 shut down normal life, referencing, among other things, lockdown orders, a federal ban on evictions and vaccine mandates.

“Since March 2020, we may have experienced the greatest intrusions on civil liberties in the peacetime history of this country. Executive officials across the country issued emergency decrees on a breathtaking scale,” Gorsuch wrote.

The conservative justice throughout the pandemic has been a skeptic of how officials leveraged laws that grant additional executive authority during emergencies, often lamenting that COVID-19 was being used as merely a pretext.

White House Announces New Funding for Teen Mental Health Crisis: ‘Will Help Save Lives’

Fox News reported:

As millions of Americans, particularly our young people, continue to struggle with worsening mental health challenges, the White House announced on Thursday — the National Day of Mental Health Action — how the Biden administration plans to tackle the crisis.

“It is quite clear that America is in a mental health crisis,” Susan Rice, domestic policy adviser, said during a call with the media.  “We already had a major challenge on our hands. And then the pandemic, the increased isolation, the burnout and trauma of COVID-19, have contributed to increased depression and anxiety, now affecting as many as two in five American adults.”

In Thursday’s announcement, the White House highlighted moves to help schools provide better mental healthcare for students. “The Department of Education will propose a new rule that would streamline medical billing permissions for schools, while Health and Human Services will issue additional guidance with new, easier medical billing steps,” Rice said on the media call.

Senior administration officials said they anticipate that by streamlining and reducing the amount of parental consent required to bill for Medicaid services, the new rule will impact about 300,000 children with disabilities who are enrolled in Medicaid.

“It means more children will have better access to preventive care, like mental health assessments and counseling, as well as physical healthcare services like vaccines and hearing and vision screenings,” said Rice.

Montana TikTok Users File Lawsuit Challenging Ban

The Verge reported:

A group of TikTok creators has sued to block a recently signed law that bans the app’s operation in Montana. The suit, filed last night and announced today, alleges that Montana’s SB 419 is an unconstitutional and overly broad infringement of their right to speech.

This week’s lawsuit attacks the Montana law on several fronts. It argues that Montana is depriving state residents of a forum for sharing and receiving speech, violating their First Amendment rights. It also argues that SB 419 violates the Commerce Clause by effectively restricting interstate commerce. And it says the law is preempted by federal sanctions powers.

New York City Schools Lift Ban on ChatGPT, Say Initial Fear ‘Overlooked the Potential’ of AI

Gizmodo reported:

After making a big statement about how ChatGPT could negatively impact student learning, New York City public schools are taking a step back and stating the AI chatbot isn’t so bad.

David Banks, the head of the city’s public schools, announced on Thursday that the system would lift the ban on ChatGPT it enacted in January.

At the time, New York City public schools appeared to be concerned that OpenAI’s chatbot would lead to a barrage of cheating by students and affect their critical thinking and problem-solving skills. Now, though, the system’s “knee-jerk fear and risk” response has evolved to one of cautious exploration, according to Banks.

The about-face of New York City’s public school system comes during a time of intense debate about what role, if any, AI should play in schools. Earlier this week, a professor at Texas A&M University-Commerce gave a group of students an “X” in his course because ChatGPT told him it had written their last three essay assignments of the semester, Rolling Stone reported.

Samsung Signs Up to Assist With CBDC Rollouts

Reclaim the Net reported:

We often mention Big Tech and other corporate giants — and not very often in a positive way — but South Korea’s Samsung somehow manages to slip under this radar. Nonetheless, it’s a veritable business behemoth, with interests as diverse as shipbuilding, insurance, and, of course, Android phones.

And Samsung is not merely a local player but a true global phenomenon in terms of the sheer size of its business interests. So when this particular conglomerate speaks and makes its stance known on any issue, it’s worth listening.

This time, Samsung has spoken about CBDCs (central bank digital currencies). Given that Samsung is part of the super-powerful and rich global elites, nobody should be shocked to learn that it likes the idea — and has in fact pledged to help South Korea’s government with offline CBDC payments associated with the program.

Vax Mandate Orders Muddy President’s Procurement Act Authority

Bloomberg Law reported:

A patchwork of different federal court rulings on the president’s authority to mandate vaccines for the employees of federal contractors provides ammunition to challenge future White House procurement policies.

President Joe Biden’s COVID-19 vaccine mandate for federal contractors expired last week with the end of the pandemic health emergency. The administration never had a chance to enforce it because it was paused amid a string of lawsuits claiming that the president overstepped his authority under the Procurement Act.

The mandate’s sunset makes it highly unlikely that the U.S. Supreme Court will resolve a split among four federal appeals courts that have addressed the issue, legal scholars told Bloomberg Law.

Lingering doubts over that authority will make any president’s future procurement-related executive orders more vulnerable to court challenges, attorneys said.

Apple Restricts Employees From Using ChatGPT Over Fear of Data Leaks

The Verge reported:

Apple has restricted employees from using AI tools like OpenAI’s ChatGPT over fears confidential information entered into these systems will be leaked or collected.

According to a report from The Wall Street Journal, Apple employees have also been warned against using GitHub’s AI programming assistant Copilot. Bloomberg reporter Mark Gurman tweeted that ChatGPT had been on Apple’s list of restricted software “for months.”

Apple has good reason to be wary. By default, OpenAI stores all interactions between users and ChatGPT. These conversations are collected to train OpenAI’s systems and can be inspected by moderators for violations of the company’s terms of service.

Apple is far from the only company instituting such a ban. Others include JP Morgan, Verizon, and Amazon. Apple’s ban, though, is notable given OpenAI launched an iOS app for ChatGPT this week. The app is free to use, supports voice input, and is available in the U.S. OpenAI says it will be launching the app in other countries soon, along with an Android version.

U.K. Court Tosses Class-Action Style Health Data Misuse Claim Against Google DeepMind

TechCrunch reported:

Google has prevailed against another U.K. class-action style privacy lawsuit after a London court dismissed a case filed last year against the tech giant and its AI division, DeepMind, which had sought compensation for misuse of NHS patients’ medical records.

The decision underscores the hurdles facing class-action style compensation claims for privacy breaches in the U.K.

The complainant had sought to bring a representative claim on behalf of the approximately 1.6 million individuals whose medical records were passed to DeepMind, starting in 2015, without their knowledge or consent, seeking damages for unlawful use of patients’ confidential medical data.

The Google-owned AI firm had been engaged by the Royal Free NHS Trust, which passed it patient data to co-develop an app for detecting acute kidney injury. The U.K.’s data protection watchdog later found the Trust had lacked a lawful basis for the processing.

May 18, 2023

Fertility App Fined $200,000 for Leaking Customers’ Health Data + More

Fertility App Fined $200,000 for Leaking Customers’ Health Data

CNN Business reported:

The company behind a popular fertility app has agreed to pay $200,000 in federal and state fines after authorities alleged that it had shared users’ personal health information for years without their consent, including to Google and to two companies based in China.

The app, known as Premom, will also be banned from sharing personal health information for advertising purposes and must ensure that the data it shared without users’ consent is deleted from third-party systems, according to the Federal Trade Commission, along with the attorneys general of Connecticut, the District of Columbia and Oregon.

The sharing of personal data allegedly affected Premom’s hundreds of thousands of users from at least 2018 until 2020, and violated a federal regulation known as the Health Breach Notification Rule, according to an FTC complaint against Easy Healthcare, Premom’s parent company.

Montana Is First State to Ban TikTok Over National Security Concerns

Ars Technica reported:

Montana became the first state to ban TikTok yesterday. In a press release, the state’s Republican governor, Greg Gianforte, said the move was a necessary step to keep Montanans safe from Chinese Communist Party surveillance. The ban will take effect on January 1, 2024.

Before Montana Senate Bill 419 was signed into law, critics said that banning TikTok in the state would likely be both technically and legally unfeasible. Technically, since Montana doesn’t control all Internet access in the state, the ban may be difficult to enforce. And legally, it must hold up to First Amendment scrutiny, because Montanans should have the right to access information and express themselves using whatever communications tool they prefer.

There are also possible complications with the ban because it prevents “mobile application stores from offering TikTok within the state.” Under the law, app stores like Google Play or the Apple App Store could be fined up to $10,000 a day for allowing TikTok downloads in the state. To many critics, that seems like Montana is trying to illegally regulate interstate commerce. And a trade group that Apple and Google help fund has recently confirmed that preventing access to TikTok in a single state would be impossible, The New York Times reported.

Supreme Court Hands Twitter, Google Wins in Internet Liability Cases

The Hill reported:

The Supreme Court on Thursday punted the issue of determining when internet companies are protected under a controversial liability shield, instead resolving the case on other grounds. The justices were considering two lawsuits in which families of terrorist attack victims said Google and Twitter should be held liable for aiding and abetting ISIS, leading to their relatives’ deaths.

Google asserted that Section 230 of the Communications Decency Act, enacted in 1996 to prevent internet companies from being held liable for content posted by third parties, protected the company from all of the claims.

But rather than wading into the weighty Section 230 dispute — which internet companies say allows them to serve users and offers protection from a deluge of litigation — the court Thursday found neither company had any underlying liability to need the protections.

Section 230 protects internet companies, of all sizes, from being held legally responsible for content posted by third parties. The protection has faced criticism from both sides of the aisle, with Democrats largely arguing it allows tech companies to host hate speech and misinformation without consequences and some Republicans alleging it allows tech companies to make content moderation decisions with an anti-conservative bias.

AI Is Getting Better at Reading Our Minds

Mashable reported:

AI is getting way better at deciphering our thoughts, for better or worse. Scientists at the University of Texas published a study in Nature describing how they used functional magnetic resonance imaging (fMRI) and an AI system that preceded ChatGPT, called GPT-1, to create a non-invasive mind decoder that can detect brain activity and capture the essence of what someone is thinking.

To train the AI, researchers placed three people in fMRI scanners and played entertaining podcasts for them to listen to, including The New York Times’ Modern Love and The Moth Radio Hour. The scientists used transcripts of the podcasts to track brain activity and figure out which parts of the brain were activated by different words.

The decoder, however, is not fully developed yet. The AI only works if it’s trained with data from the brain activity of the person it is used on, which limits how widely it could be deployed. There’s also a barrier with the fMRI scanners, which are big and expensive. Plus, scientists found that the decoder can get confused if people decide to ‘lie’ to it by choosing to think about something different from what is required.

These obstacles may be a positive, as the potential to create a machine that can decode people’s thoughts raises serious privacy concerns; there’s currently no way to limit the tech’s use to medicine, and just imagine if the decoder could be used for surveillance or interrogation. So, before AI mind-reading develops further, scientists and policymakers need to seriously consider the ethical implications and enforce laws that protect mental privacy to ensure this kind of tech is only used to benefit humanity.

How Addictive Tech Hacks Your Brain

Gizmodo reported:

Addictive cravings have become an everyday part of our relationship with technology. At the same time, comparing these urges to drug addictions can seem like hyperbole. Addictive drugs, after all, are chemical substances that need to physically enter your body to get you hooked — but you can’t exactly inject an iPhone. So how similar could they be?

More than you might think. From a neuroscientific perspective, it’s not unreasonable to draw parallels between addictive tech and cocaine. That’s not to say that compulsively checking your phone is as harmful as a substance use disorder, but the underlying neural circuitry is essentially the same, and in both situations, the catalyst is — you guessed it — dopamine.

We’ve established that the cycle of cocaine addiction is kicked off by chemically hotwiring our reward system. But your phone and computer (hopefully) aren’t putting any substances into your body. They can’t chemically hotwire anything.

Instead, they hook us by targeting our natural triggers for dopamine release. In contrast to hotwiring a car, this is like setting up misleading road signs, tricking a driver into unwittingly turning in the direction you indicate. Research into how this works on a biological level is still in its infancy, but there are a number of mechanisms that seem plausible.

In Battle Over A.I., Meta Decides to Give Away Its Crown Jewels

The New York Times reported:

In February, Meta made an unusual move in the rapidly evolving world of artificial intelligence: It decided to give away its A.I. crown jewels.

The Silicon Valley giant, which owns Facebook, Instagram and WhatsApp, had created an A.I. technology, called LLaMA, that can power online chatbots. But instead of keeping the technology to itself, Meta released the system’s underlying computer code into the wild. Academics, government researchers and others who gave their email address to Meta could download the code once the company had vetted them.

Essentially, Meta was giving its A.I. technology away as open-source software — computer code that can be freely copied, modified and reused — providing outsiders with everything they needed to quickly build chatbots of their own.

Its actions contrast with those of Google and OpenAI, the two companies leading the new A.I. arms race. Worried that A.I. tools like chatbots will be used to spread disinformation, hate speech and other toxic content, those companies are becoming increasingly secretive about the methods and software that underpin their A.I. products.

Colorado Senator Proposes Special Regulator for Big Tech and AI

The Hill reported:

Colorado Sen. Michael Bennet (D) will introduce legislation Thursday to establish a new regulator for the tech industry and the development of artificial intelligence (AI), which experts predict will have wide-ranging impacts on society.

Bennet introduced his Digital Platform Commission Act a year ago, but he has updated it to make its coverage of AI even more explicit.

The updated bill requires the commission to establish an age-appropriate design code and age verification standards for AI.

It would establish a Federal Digital Platform Commission to regulate digital platforms consistent with the public interest: to encourage the creation of new online services, provide consumer benefits, prevent harmful concentrations of private power, and protect consumers from deceptive, unfair or abusive practices.

AI Pioneer Yoshua Bengio: Governments Must Move Fast to ‘Protect the Public’

Financial Times reported:

Advanced artificial intelligence systems such as OpenAI’s GPT could destabilize democracy unless governments take quick action and “protect the public”, an AI pioneer has warned.

Yoshua Bengio, who won the Turing Award alongside Geoffrey Hinton and Yann LeCun in 2018, said the recent rush by Big Tech to launch AI products had become “unhealthy,” adding he saw a “danger to political systems, to democracy, to the very nature of truth.”

Bengio is the latest in a growing faction of AI experts ringing alarm bells about the rapid rollout of powerful large language models. His colleague and friend Hinton resigned from Google this month to speak more freely about the risks AI poses to humanity.

In an interview with the Financial Times, Bengio pointed to society’s increasingly indiscriminate access to large language models as a serious concern, noting the lack of scrutiny currently being applied to the technology.

Aviation Advocacy Group Files Class Action Lawsuit Against Feds Over Mandatory COVID Shots

The Epoch Times reported:

Free to Fly Canada, an advocacy group for pilots and aviation employees, has filed a class action lawsuit against the federal government over mandatory workplace vaccination policies.

The organization said in a May 17 news release that it has chosen representative plaintiffs Greg Hill, Brent Warren, and Tanya Lewis, and will argue that the rights of thousands of Canadian aviation employees were violated by Transport Canada’s regulation, Interim Order Respecting Certain Requirements for Civil Aviation Due to COVID-19, No. 43. The legal action names as defendants the federal government and the minister of transportation.

The class action is open to unvaccinated employees who were affected by the Transport Canada Order, whether they were suspended, put on unpaid leave, fired, or coerced into early retirement, said Free to Fly. Interested aviation employees can sign up on the Free to Fly website.

The group said this is the first time a case of this type has been brought in Canadian courts. The class action now has to be certified by a court to proceed, which is standard procedure.