Elon Musk Wants to Merge Humans With AI. How Many Brains Will Be Damaged Along the Way?
Launched in 2016, Neuralink revealed in 2019 that it had created flexible “threads” that can be implanted into a brain, along with a sewing-machine-like robot to do the implanting. The idea is that these threads will read signals from a paralyzed patient’s brain and transmit that data to an iPhone or computer, enabling the patient to control it with just their thoughts — no need to tap, type or swipe.
So far, Neuralink has only done testing on animals. But in May, the company announced it had won FDA approval to run its first clinical trial in humans. Now, it’s recruiting paralyzed volunteers to study whether the implant enables them to control external devices. If the technology works in humans, it could improve the quality of life for millions of people. Approximately 5.4 million people are living with paralysis in the U.S. alone.
But helping paralyzed people is not Elon Musk’s end goal. That’s just a step on the way to achieving a much wilder long-term ambition.
That ambition, in Musk’s own words, is “to achieve a symbiosis with artificial intelligence.” His goal is to develop a technology that helps humans “merg[e] with AI” so that we won’t be “left behind” as AI becomes more sophisticated.
But it’s important to understand that this technology comes with staggering risks. Former Neuralink employees as well as experts in the field have alleged that, to advance Musk’s goal of merging with AI, the company pushed for an unnecessarily invasive, potentially dangerous approach to the implants, one that can damage the brain (and apparently has done so in animal test subjects).
How Stores Are Spying on You Using Creepy Facial Recognition Technology Without Your Consent
Have you ever wondered if the stores where you shop are watching you? Not just with security cameras. With something more advanced and creepy.
Something that can recognize your face and identify who you are, where you live, what you like and what you buy. Something that can track your every move and use your data for their own benefit.
Well, guess what? They are. That’s right, some of the biggest retailers in this country are secretly using facial recognition technology in their stores.
Facial recognition technology is a type of biometric identification that uses cameras and software to analyze and match your facial features. You may already be using this type of tech to unlock your phone or verify your identity. What you might not know is that some stores are using facial recognition technology to monitor you and your behavior without your permission or knowledge.
Google Asks Congress to Not Ban Teens From Social Media
Google responded to congressional child online safety proposals with its own counteroffer for the first time Monday, urging lawmakers to drop what it calls problematic requirements like age-verification tech.
In a blog post, Google released its “Legislative Framework to Protect Children and Teens Online.” The framework comes as more lawmakers, like Sen. Elizabeth Warren (D-MA), are lining up behind the Kids Online Safety Act, a controversial bill intended to protect kids from dangerous content online.
In the framework, Google rejects state and federal attempts at requiring platforms to verify the age of users, like forcing users to upload copies of their government IDs to access an online service. Some states have recently gone as far as passing laws requiring platforms to obtain parental consent before anyone under 18 is allowed to use their services. Google dismisses these consent laws, arguing that they bar vulnerable teens from accessing helpful information.
YouTube published its own set of principles for protecting kids on Monday, laying out how the platform implements some of the guidance from Google’s policy framework. In a blog post, YouTube CEO Neal Mohan said the platform doesn’t serve personalized ads to kids and provides parents with a set of family controls.
Meta Confesses It’s Using What You Post to Train Its AI
How would you feel if your social media posts were used to train a virtual assistant without your consent? That is exactly what is happening to millions of Facebook and Instagram users.
Meta, the parent company of Facebook, admits that it is using public posts from both Instagram and Facebook members to train its new artificial intelligence assistant, Meta AI.
According to Nick Clegg, Meta’s president of global affairs, the tech giant is using both text and photos from public posts from people’s Instagram and Facebook to train Meta AI. He says the posts are selected based on their popularity and engagement and that they are stripped of any personal details before being fed to the AI system. He also says Meta has built safeguards into Meta AI to prevent misuse and abuse, such as filtering out harmful or offensive content.
Some users of the platforms have raised concerns about the privacy and ethical implications of using their public posts to train Meta AI. They argue that Meta did not obtain explicit consent to use their posts and that they have not been told how their data is being used.
School Forcing Students With COVID to Leave Sparks Republican Anger
The Republican-led Select Subcommittee on the Coronavirus Pandemic launched a probe this week into the University of Maryland’s COVID-19 policy for students.
“Select Subcommittee on the Coronavirus Pandemic Chairman Brad Wenstrup (R-Ohio) has joined forces with all Majority Members to shed light on coercive and potentially harmful COVID-19 policies that are reemerging at the University of Maryland,” the subcommittee wrote in a press release on Friday. “Under the University’s new directive, Maryland students who test positive for COVID-19 are to be immediately removed from their dorms and forced into isolation, either at a nearby hotel or by boarding a flight home — presumably at their own expense.”
According to the University of Maryland’s health center, students living in “residence halls or university-owned fraternity and sorority houses will need to isolate at their permanent home or another off-campus location if they test positive.”
Australia Fines X, Accusing It of ‘Empty Talk’ on Fighting Child Sexual Abuse Online
Australia issued a fine of 610,500 Australian dollars ($386,000) on Monday against the company formerly known as Twitter for “falling short” in disclosing how it tackles child sex abuse content, in yet another setback for the Elon Musk-owned social media platform.
Just days earlier, the European Commission formally opened an investigation into X after issuing a previous warning about disinformation and illegal content on its platform linked to the Israel-Hamas war.
Australia’s eSafety Commission, the country’s online safety regulator, said in a statement Monday that X had failed to adequately respond to a number of questions about how it was dealing with the problem of child abuse material. The commission accused the platform of providing no response to some questions, leaving some sections entirely blank, or giving answers that were incomplete or inaccurate.
“Twitter/X has stated publicly that tackling child sexual exploitation is the number 1 priority for the company, but it can’t just be empty talk, we need to see words backed up with tangible action,” eSafety Commissioner Julie Inman Grant said in the statement.