Big Brother News Watch
Humanity ‘Not on Track to Handle Well’ the Risks AI Poses, Tech Experts Say + More
Humanity ‘Not on Track to Handle Well’ the Risks AI Poses, Tech Experts Say
Twenty-three academics and tech experts have signed on to a new document raising concerns about rapid advancements in AI. In the document, titled “Managing AI Risks in an Era of Rapid Progress,” they proposed a few policies for major tech companies and governments that they said could help “ensure responsible AI development.” Geoffrey Hinton, the computer scientist known as the “Godfather of AI” who recently warned there is a risk AI could one day “take over” from humans, is one of the document’s credited authors.
The document was published on Tuesday, just days before the United Kingdom (U.K.) is set to host the world’s first global AI Safety Summit. The two-day summit, which begins on November 1 in England’s Bletchley Park, is expected to focus on emerging AI tools known collectively as “frontier AI,” according to the U.K.’s Department for Science, Innovation & Technology (DSIT).
Summit organizers have said attendees will explore both the potential benefits and risks of AI and how international collaboration could help a world grappling with the uncertainties surrounding AI’s future. “The opportunities AI offers are immense. But alongside advanced AI capabilities come large-scale risks that we are not on track to handle well,” experts wrote for Tuesday’s document.
Moving forward, the authors recommended that major tech companies working on AI tools reserve at least one-third of their research and development budgets for AI safety and ethical use. They urged governments to create AI oversight procedures and “set consequences” for harms attributed to AI. Frontier AI should also be audited before it is set loose in the world, and AI developers should be held responsible for “reasonably seen and prevented” harm caused by AI.
Amazon Brings Conversational AI to Kids With Launch of ‘Explore With Alexa’
Amazon’s Echo devices will now allow kids to have interactive conversations with an AI-powered Alexa via a new feature called “Explore with Alexa.” First announced in September, the addition to the Amazon Kids+ content subscription allows children to have kid-friendly conversations with Alexa, powered by generative AI, but in a protected fashion designed to ensure the experience remains safe and appropriate.
Though there are already some AI experiences that cater to younger users, such as the AI chatbots from Character.ai and other companies, including Meta, Amazon is among the first to specifically apply generative AI to a conversational experience for kids under the age of 13.
That also comes with constraints, however, as generative AI can be led astray or “hallucinate” answers, while kids could ask inappropriate questions. To address these potential problems, Amazon has put guardrails into place around its use of gen AI for kids.
In terms of privacy, the company notes it’s not training its LLM on kids’ answers. In addition, the “Explore with Alexa” experience and any future LLM-backed features will continue to follow the same data-handling policies as “classic Alexa” (non-AI Alexa). That means the Alexa app will include a list of the questions asked by kids in the household (those with a kids’ profile) and the responses Alexa provided. That history can be stored or deleted either manually or automatically, depending on your settings.
Here’s How a Children’s Privacy Law Figures Into That Big Legal Effort Against Meta
A bipartisan group of 42 attorneys general is suing Meta, alleging that the company collects children’s data in ways that violate a federal privacy law. The claim is part of a broader complaint accusing the social media company of building addictive features into Facebook and Instagram.
“Tuesday’s legal actions represent the most significant effort by state enforcers to rein in the impact of social media on children’s mental health,” my colleagues Cristiano Lima and Naomi Nix reported.
One of the chief claims of the attorneys general is that Meta runs afoul of the Children’s Online Privacy Protection Act of 1998, or COPPA.
“The Children’s Online Privacy Protection Act of 1998 (COPPA) protects the privacy of children by requiring technology companies like Meta to obtain informed consent from parents prior to collecting the personal information of children online,” according to the complaint.
“Meta routinely violates COPPA in its operation of Instagram and Facebook by collecting the personal information of children on those Platforms without first obtaining (or even attempting to obtain) verifiable parental consent, as required by the statute.”
Meta’s Harmful Effects on Children Are One Issue That Unites Republicans and Democrats
While Republican and Democratic lawmakers appear more incapable than ever of working together to pass legislation, they largely agree on one thing: Meta’s negative impact on children and teens.
A bipartisan coalition of 33 attorneys general filed a joint federal lawsuit on Tuesday, accusing Facebook’s parent of knowingly implementing addictive features across its family of apps that have detrimental effects on children’s mental health and contribute to problems like teenage eating disorders.
Another nine attorneys general are also filing lawsuits in their respective states.
“Kids and teenagers are suffering from record levels of poor mental health and social media companies like Meta are to blame,” Attorney General Letitia James, a Democrat, said in a statement. “Meta has profited from children’s pain by intentionally designing its platforms with manipulative features that make children addicted to their platforms while lowering their self-esteem.”
White House to Unveil Sweeping AI Executive Order Next Week, Tackling Immigration, Safety
The Biden administration on Monday is expected to unveil a long-anticipated artificial intelligence executive order, marking the U.S. government’s most significant attempt to date to regulate the evolving technology that has sparked fear and hype around the world.
The administration plans to release the order two days before government leaders, top Silicon Valley executives and civil society groups gather in the United Kingdom for an international summit focused on the potential risks that AI presents to society, according to four people familiar with the matter, who spoke on the condition of anonymity to discuss the private plans.
The White House is taking executive action as the European Union and other governments are working to block the riskiest uses of artificial intelligence. Officials in Europe are expected to reach a deal by the end of the year on the E.U. AI Act, a wide-ranging package that aims to protect consumers from potentially dangerous applications of AI. Lawmakers in the U.S. Congress are still in the early stages of developing bipartisan legislation to respond to the technology.
Judge Advances Lawsuit Against Apple Studios Over COVID Vaccine Mandate
The Hollywood Reporter reported:
Apple Studios might have discriminated against Brent Sexton when it pulled an offer for him to star in Manhunt after he refused the COVID-19 vaccine due to potential health complications, a judge has ruled.
Los Angeles Superior Court Judge Michael Linfield denied Apple’s motion to dismiss the lawsuit on free speech grounds, finding that the company’s mandatory vaccination policy may have been unconstitutional. The order, issued on Oct. 19, marks one of the few rulings advancing a lawsuit from an actor who took issue with a studio’s refusal to provide accommodations for declining the COVID-19 vaccine.
At the time, Apple didn’t require employees at corporate headquarters or retail stores to get the vaccine, allowing them to get daily or weekly tests. Apple Studios, however, was among the majority of studios in Hollywood that implemented vaccine mandates for a production’s main actors, as well as key crewmembers who work closely with them in the highest-risk areas of the set.
Sexton’s deal on the show fell apart after he refused to get immunized, citing a prior health condition that his doctor said made it dangerous for him to receive the vaccine. He sued after Apple refused to provide accommodations, arguing the company’s vaccine policy is unconstitutional.
COVID Passports Convinced Few People to Get Vaccinated in Quebec, Ontario: Study
COVID-19 vaccine passports in Quebec and Ontario did little to convince the unvaccinated to get the jab and did not significantly reduce inequalities in vaccination coverage, a new peer-reviewed study has found.
The passports, which forced people to show proof of vaccination to enter places such as bars and restaurants, were directly responsible for a rise of 0.9% in the vaccination rate in Quebec and 0.7% in Ontario, says Jorge Luis Flores, a research assistant at McGill University and lead author of the paper published Tuesday in the CMAJ Open journal.
The passports were discontinued across Canada by the spring of 2022.
In the 11 weeks after the provinces announced the passports, vaccination rates in both provinces rose by five percentage points. But after accounting for pre-existing uptake trends, researchers concluded the passports were directly responsible for a rise of less than one percentage point, says Mathieu Maheu-Giroux, study co-author and a McGill University professor who studies public health.
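The logic the researchers describe (comparing observed uptake against a projection of the pre-announcement trend) can be sketched as a simple interrupted time-series estimate. The numbers below are invented for illustration; this is not the study’s actual model or data:

```python
# Hypothetical illustration of the counterfactual-trend logic: fit the
# pre-announcement vaccination trend, project it forward, and attribute
# only the excess uptake to the passport. All numbers are invented.
pre_weeks = list(range(8))                        # weeks before the announcement
pre_rate = [70.0 + 0.4 * w for w in pre_weeks]    # % vaccinated, rising slowly

# Least-squares slope and intercept of the pre-announcement trend
n = len(pre_weeks)
mean_w = sum(pre_weeks) / n
mean_r = sum(pre_rate) / n
slope = (sum((w - mean_w) * (r - mean_r) for w, r in zip(pre_weeks, pre_rate))
         / sum((w - mean_w) ** 2 for w in pre_weeks))
intercept = mean_r - slope * mean_w

# 11 weeks after the announcement (weeks 8..18), uptake is observed to have
# risen 5 percentage points above its pre-announcement level
final_week = 18
observed_final = pre_rate[-1] + 5.0                 # what actually happened
projected_final = intercept + slope * final_week    # what the trend alone predicts

passport_effect = observed_final - projected_final  # excess attributable to passports
print(f"estimated passport effect: {passport_effect:.1f} percentage points")
```

With these invented numbers, most of the five-point rise is explained by the pre-existing trend, leaving a sub-one-point effect, which mirrors the shape of the study’s conclusion.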
Massive Facial Recognition Search Engine Now Blocks Searches for Children’s Faces + More
Massive Facial Recognition Search Engine Now Blocks Searches for Children’s Faces
PimEyes, a public search engine that uses facial recognition to match online photos of people, has banned searches of minors over concerns that such searches endanger children, reports The New York Times.
At least, it should. PimEyes’ new detection system, which uses age detection AI to identify whether the person is a child, is still very much a work in progress. After testing it, The New York Times found it struggles to identify children photographed at certain angles. The AI also doesn’t always accurately detect teenagers.
PimEyes chief executive Giorgi Gobronidze says he’d been planning on implementing such a protection mechanism since 2021. However, the feature was only fully deployed after New York Times writer Kashmir Hill published an article about the threat AI poses to children last week. According to Gobronidze, human rights organizations working to help minors can continue to search for them, while all other searches will produce images that block children’s faces.
In the article, Hill writes that the service banned over 200 accounts for inappropriate searches of children. Using PimEyes, one parent told Hill, she even found photos of her children she had never seen before. To find out where an image came from, the mother would have to pay a $29.99 monthly subscription fee.
PimEyes is just one of the facial recognition engines that have been in the spotlight for privacy violations. In January 2020, Hill’s New York Times investigation revealed how hundreds of law enforcement organizations had already started using Clearview AI, a similar face recognition engine, with little oversight.
Instagram Linked to Depression, Anxiety, Insomnia in Kids — U.S. States’ Lawsuit
Dozens of U.S. states are suing Meta Platforms (META.O) and its Instagram unit, accusing them of contributing to a youth mental health crisis through the addictive nature of their social media platforms.
In a complaint filed in the Oakland, California, federal court on Tuesday, 33 states including California and Illinois said Meta, which also operates Facebook, has repeatedly misled the public about the substantial dangers of its platforms and knowingly induced young children and teenagers into addictive and compulsive social media use.
“Research has shown that young people’s use of Meta’s social media platforms is associated with depression, anxiety, insomnia, interference with education and daily life, and many other negative outcomes,” the complaint said.
The lawsuit is the latest in a string of legal actions against social media companies on behalf of children and teens. ByteDance’s TikTok and Google’s YouTube are also the subjects of hundreds of lawsuits filed on behalf of children and school districts over the addictiveness of social media.
The lawsuit alleges that Meta also violated a law banning the collection of data of children under the age of 13. The state action seeks to patch holes left by the U.S. Congress’s inability to pass new online protections for children, despite years of discussions.
Fed Governor Admits CBDCs Pose ‘Significant’ Privacy Risks
In an appearance at a Harvard Law School program on Oct. 17, Federal Reserve Governor Michelle Bowman raised concerns about the grave risks and privacy dangers the introduction of a central bank digital currency (CBDC) could bring.
A CBDC promises no certain benefit, Bowman remarked, and could carry “unintended consequences” for the financial industry.
As one of the major participants in the regulation of domestic payment systems and banking, Bowman underscored the trade-offs and risks a digital dollar could entail. Given the “considerable consumer privacy concerns” a U.S. CBDC might raise, she argued, any plausible merits of such a currency remain largely elusive.
Notwithstanding grand promises of hassle-free payment systems or greater financial inclusion, there is a significant lack of persuasive evidence that a CBDC would actually contribute to those ends or furnish public access to secure central bank money. Bowman did not argue for halting research on the subject; continued study of a digital dollar’s technical capabilities and the potential risks linked to CBDCs, she said, could inform a measured approach to future developments.
AI Firms Must Be Held Responsible for Harm They Cause, ‘Godfathers’ of Technology Say
Powerful artificial intelligence systems threaten social stability and AI companies must be made liable for harms caused by their products, a group of senior experts including two “godfathers” of the technology has warned.
Tuesday’s intervention was made as international politicians, tech companies, academics and civil society figures prepare to gather at Bletchley Park next week for a summit on AI safety.
“It’s time to get serious about advanced AI systems,” said Stuart Russell, professor of computer science at the University of California, Berkeley. “These are not toys. Increasing their capabilities before we understand how to make them safe is utterly reckless.”
He added: “There are more regulations on sandwich shops than there are on AI companies.”
Other co-authors of the document include Geoffrey Hinton and Yoshua Bengio, two of the three “godfathers of AI,” who won the ACM Turing Award — the computer science equivalent of the Nobel prize — in 2018 for their work on AI.
FTC Plans to Hire Child Psychologist to Guide Internet Rules
The Federal Trade Commission plans to hire at least one child psychologist who can guide its work on internet regulation, Democratic Commissioner Alvaro Bedoya told The Record in an interview published Monday.
FTC Chair Lina Khan backs the plan, Bedoya told the outlet, adding that he hopes it can become a reality by next fall, though the commission does not yet have a firm timeline.
The FTC’s plan is indicative of a broader push across the U.S. government focusing on online protections for kids and teens. Federal and state lawmakers have proposed new legislation they believe will make the internet safer by mandating stronger age authentication or placing more responsibility on tech companies to design safe products for young users. The U.S. Surgeon General issued an advisory in May warning that young people’s social media use poses significant mental health risks.
Bedoya envisions an in-house child psychologist as a helpful resource for commissioners like himself. Such experts could bring important insights that link a cause to an alleged harm and inform the appropriate damages the agency seeks, Bedoya said. He added that child psychologists could help the FTC evaluate allegations about how social media may affect mental health, as well as assess the effect of dark patterns or other deceptive features.
A Controversial Plan to Scan Private Messages for Child Abuse Meets Fresh Scandal
Danny Mekić, an Amsterdam-based Ph.D. researcher, was studying a proposed European law meant to combat child sexual abuse when he came across a rather odd discovery. All of a sudden, he started seeing ads on X, formerly Twitter, that featured young girls and sinister-looking men against a dark background, set to an eerie soundtrack. The advertisements, which displayed stats from a survey about child sexual abuse and online privacy, were paid for by the European Commission.
Mekić thought the videos were unusual for a governmental organization and decided to delve deeper. The survey findings highlighted in the videos suggested that a majority of EU citizens would support the scanning of all their digital communications.
Following closer inspection, he discovered that these findings appeared biased and otherwise flawed. The survey results were gathered by misleading the participants, he claims, which in turn may have misled the recipients of the ads; the conclusion that EU citizens were fine with greater surveillance couldn’t be drawn from the survey, and the findings clashed with those of independent polls.
‘Invasive’ Google Keyword Search Warrants Get Court Greenlight + More
‘Invasive’ Google Keyword Search Warrants Get Court Greenlight. Here’s Everything You Need to Know
Colorado’s Supreme Court this week had the opportunity to hand down a historic judgment on the constitutionality of “reverse keyword search warrants,” a powerful new surveillance technique that grants law enforcement the ability to identify potential criminal suspects based on broad, far-reaching internet search results.
Police say the creative warrants have helped them crack otherwise cold cases. Critics, who include more than a dozen rights organizations and major tech companies, argue the tool’s immense scope tramples on innocent users’ privacy and runs afoul of Fourth Amendment protections against unreasonable searches by the government.
Civil liberties and digital rights experts speaking with Gizmodo described the court’s “confusing” decision to punt on the constitutionality of reverse keyword search this week as a major missed opportunity and one that could inevitably lead to more cops pursuing the controversial tactics, both in Colorado and beyond.
Critics fear these broad warrants, which compel Google and other tech companies to sift through their vast stores of search data to identify users who have searched for specific keywords, could be weaponized against abortion seekers, political protestors, or even everyday internet users who inadvertently type a query that could someday be used against them in court.
Supreme Court Will Hear Biden Social Media Case This Term
The Supreme Court said Friday it will consider a social media censorship case brought against Biden administration officials in its next term, setting up a legal battle with resounding implications for online speech.
The high court also issued a stay of an injunction ordered by the 5th U.S. Circuit Court of Appeals, pausing its effect until the justices decide the case on the merits. Justices Samuel Alito, Clarence Thomas and Neil Gorsuch dissented from the decision to stay the order.
“At this time in the history of our country, what the Court has done, I fear, will be seen by some as giving the Government a green light to use heavy-handed tactics to skew the presentation of views on the medium that increasingly dominates the dissemination of news,” Alito wrote in his dissenting opinion. “That is most unfortunate.”
Missouri Attorney General Andrew Bailey, one of the attorneys general who brought the lawsuit, called the high court’s decision to stay the order “the worst First Amendment violation in our nation’s history.”
“We look forward to dismantling Joe Biden’s vast censorship enterprise at the nation’s highest court,” Bailey said in a statement.
The U.S. Has Failed to Pass AI Regulation. New York City Is Stepping Up
As the U.S. federal government struggles to meaningfully regulate AI — or even function — New York City is stepping into the governance gap.
The city introduced an AI Action Plan this week that Mayor Eric Adams calls a first of its kind in the nation. The set of roughly 40 policy initiatives is designed to protect residents against harm like bias or discrimination from AI. It includes the development of standards for AI purchased by city agencies and new mechanisms to gauge the risk of AI used by city departments.
New York’s AI regulation could soon expand still further. City council member Jennifer Gutiérrez, chair of the body’s technology committee, today introduced legislation that would create an Office of Algorithmic Data Integrity to oversee AI in New York.
Earlier this year, several U.S. senators suggested creating a new federal agency to regulate AI, but Gutiérrez says she’s learned that there’s no point in waiting for action in Washington, DC. “We have a unique responsibility because a lot of innovation lives here,” she says. “It’s really important for us to take the lead.”
Americans Are Concerned About AI Data Collection, New Poll Shows
Most Americans who have an awareness of emerging artificial intelligence (AI) technology are worried that companies won’t use AI tools responsibly, according to survey results released this week by Pew Research Center.
There has been an increase in public discourse about AI this year due in part to the wide adoption of ChatGPT, a chatbot unveiled last November by the AI company OpenAI. Users are able to communicate with ChatGPT after initiating conversations through textual, visual and audio prompts. Global monthly web visits to the ChatGPT website were estimated to be at 1.43 billion in August, according to Reuters.
Technology leaders say AI development poses positive potential, particularly in the healthcare, drug development and transportation industries. But there is also risk and uncertainty associated with AI, as no one knows for certain what it could one day become.
Nearly half of American adults don’t want social media companies using their personal data for personalized user experiences, and 44% don’t like the idea of AI being used to identify people through voice analysis, according to the poll’s results.
Young People Are Increasingly Worried About Privacy in the AI Age
Younger consumers are more likely to exercise Data Subject Access Rights, according to a new Cisco study, which found that 42% of 18- to 24-year-olds have done so, compared with just 6% of those aged 75 and older.
That share is also growing: it was four percentage points higher in 2023 than in 2022, suggesting rising concern over data privacy. Of the 2,600 consumer participants, almost two-thirds (62%) expressed concern about how organizations could be using their personal data for AI.
Overall, the study suggests that consumer lack of trust is on the rise, and companies need to act to better inform their users. The blame isn’t entirely on large corporations, though, according to Cisco VP and Chief Privacy Officer Harvey Jang: “As governments pass laws and companies seek to build trust, consumers must also take action and use technology responsibly to protect their own privacy.”
AI Makes Hiding Your Kids’ Identity on the Internet More Important Than Ever. But It’s Also Harder to Do.
The New York Times via The Seattle Times reported:
Historically, the main criticism of parents who overshare online has been the invasion of their progeny’s privacy, but advances in artificial intelligence-based technologies present new ways for bad actors to misappropriate the online content of children.
Among the novel risks are scams featuring deepfake technology that mimic children’s voices and the possibility that a stranger could learn a child’s name and address from just a search of their photo.
Amanda Lenhart, the head of research at Common Sense Media, a nonprofit that offers media advice to parents, pointed to a recent public service campaign from Deutsche Telekom that urged more careful sharing of children’s data.
The video featured an actress portraying a 9-year-old named Ella, whose fictional parents were indiscreet about posting photos and videos of her online. Deepfake technology generated a digitally aged version of Ella who admonishes her fictional parents, telling them that her identity has been stolen, her voice has been duplicated to trick them into thinking she’s been kidnapped and a nude photo of her childhood self has been exploited.
Empty Classroom Seats Reveal ‘Long Shadow’ of COVID Chaos on Britain’s Children
While this paints a picture of chaotic decision-making and rancorous divisions at the top of government, none of it is surprising. By far the most important testimony so far — much more essential than who said what about whom on WhatsApp — came from England’s former children’s commissioner Anne Longfield, who told the inquiry children will be living under the “long shadow” of the pandemic for two decades to come.
It may seem odd to think of COVID in terms of silver linings. But I’ve often pondered how lucky we were that, unlike many pandemic-causing infectious diseases that carry the highest risk of death among the very young and very old, COVID was generally associated with mild symptoms in children.
But the government squandered this precious silver lining from the start. After the decision to close schools in March 2020, they should have been the first thing to reopen as infection rates started to fall in May that year.
Instead, they remained mostly closed as pubs and restaurants were allowed to reopen. Rishi Sunak threw almost a billion pounds at subsidizing people to eat out in August but couldn’t find the cash to put on outdoor enrichment activities over the summer for children stuck at home for months on end. Boris Johnson delayed imposing social restrictions later that year to the point that, when the government eventually acted, it was forced into more drastic measures, again closing schools for weeks.
Stay in EU, Comply With EU Law: EU’s Digital Chief Warns X’s Musk
X owner Elon Musk will have to comply with European Union law and clamp down on illegal content on the social network if it wants to keep on doing “good business” in the region, the EU’s digital chief Věra Jourová said today.
The tech mogul denied a report last week that he was considering pulling X out of Europe to avoid new requirements for digital platforms. X is used by over 101 million Europeans in the bloc. Under the EU’s Digital Services Act (DSA), the company must swiftly take down content and ensure the network limits disinformation and cyberviolence.
Musk does “good business in [the] European Union, but it will be his decision and if he decides to stay in as well, he will have to comply with the EU law,” Jourová said.
Their Kids Died After Buying Drugs on Snapchat. Now the Parents Are Suing + More
Their Kids Died After Buying Drugs on Snapchat. Now the Parents Are Suing
Hanh Badger was working from home on the morning of June 17, 2021. She went to the kitchen to grab a second cup of coffee and noticed her daughter’s bedroom door was still shut. Badger found Brooke, 17, pale and motionless in bed.
In the ensuing days, Badger’s husband and son were able to gain access to Brooke’s computer and, with it, her Snapchat account. They found screenshots of what looked like a menu of narcotics, and conversations with a drug dealer showing Brooke had purchased what she believed to be Roxicet, a prescription medication containing acetaminophen and oxycodone typically prescribed for pain relief. Instead, the substance was a counterfeit pill that held a lethal dose of fentanyl.
Across the U.S., young people are dying from fentanyl in record numbers, even as overall drug use is on the decline. Nationally, the number of opioid overdose deaths among people 24 and under nearly doubled from 2019 to 2021. And according to the National Institute on Drug Abuse, the number of overdoses attributed to synthetic opioids like fentanyl dwarfs that of any other substance.
In California, where Brooke lived, fentanyl-related overdose deaths among 15- to 19-year-olds surged by nearly 800% between 2018 and 2021, according to data from the California Overdose Surveillance Dashboard. Many are young victims poisoned by counterfeit pills that have been pressed to look like legitimate prescription drugs, but that are laced with fentanyl, an opioid that is deadly even in granular quantities. Typically, those teenagers acquired what they believed to be Percocet, Xanax or other pharmaceuticals online through social media.
In their grief, victims’ parents are motivated to end this crisis to prevent another family’s suffering while also giving meaning to their loss. Many have launched awareness campaigns, founded educational programs and advocated for legislative change. And now, some parents are taking to the civil courts, targeting the tech giants whose platforms facilitated their children’s purchases of pills that killed them.
With New Declaration, Luminaries Warn That Online Censorship Is Destroying Freedom
Since the COVID pandemic, authoritarians in the U.S. and around the world have cynically used claims of “disinformation” to censor ordinary people and stifle dissent about everything from the efficacy of masks and vaccines to the war in Ukraine, the Middle East situation, and Hunter Biden’s laptop.
Whatever your political bent, this new form of speech control is a threat to you. Only by debating freely in a rapidly fragmenting world can we resolve differences without resorting to violence.
To that end, a group of 136 academics, historians and journalists from the left, right and center of the political spectrum have come together to warn President Biden that this rapidly growing censorship regime “undermines the foundational principles of representative democracy.” In their “Westminster Declaration,” released Wednesday, the international group points out that the best way to combat actual disinformation is with free speech.
The eclectic group that has signed the declaration to fight censorship includes Canadian psychologist Jordan Peterson, U.K. biologist Richard Dawkins, NYU social psychologist Jonathan Haidt, Julian Assange, the Australian founder of WikiLeaks, actor Tim Robbins, evolutionary biologist Bret Weinstein, economist Glenn Loury, filmmaker Oliver Stone, whistleblower Edward Snowden, British comedian John Cleese, Slovenian philosopher Slavoj Žižek, British journalist Matt Ridley, Stanford professor Jay Bhattacharya, Harvard professor of medicine Martin Kulldorff, Australian journalist Adam Creighton, French science journalist Xavier Azalbert and German filmmaker Robert Cibis.
AI Is Becoming More Powerful — but Also More Secretive
When OpenAI published details of the stunningly capable AI language model GPT-4, which powers ChatGPT, in March, its researchers filled 100 pages. They also left out a few important details — like anything substantial about how it was actually built or how it works.
That was no accidental oversight, of course. OpenAI and other big companies are keen to keep the workings of their most prized algorithms shrouded in mystery, in part out of fear the technology might be misused but also from worries about giving competitors a leg up.
A study released by researchers at Stanford University this week shows just how deep — and potentially dangerous — the secrecy is around GPT-4 and other cutting-edge AI systems. Some AI researchers I’ve spoken to say that we are in the midst of a fundamental shift in the way AI is pursued. They fear it’s one that makes the field less likely to produce scientific advances, provides less accountability, and reduces reliability and safety.
Threads ‘Temporarily’ Blocking COVID-Related Search Terms
Threads is blocking search terms like "COVID," "vaccines," and "long COVID," but the block is only temporary, according to Instagram head Adam Mosseri. The restriction came to light a week after he said the app would not "amplify news."
When Threads users enter COVID-related search terms looking for news articles or other information, the platform returns no results apart from a link to the Centers for Disease Control and Prevention website. The Washington Post first reported on the move back in September, and Threads' parent company, Meta, acknowledged that it is intentionally blocking the search terms.
Mosseri posted on Threads Monday that the company doesn't have a timeline for reversing the block but said the move is temporary and being worked on. He pointed to the war in Israel and Gaza, saying that responsibly managing that content takes precedence and is "the biggest safety focus right now." It could take weeks or months to reverse the block, according to Mosseri.
Threads entered the fray as an alternative to X, formerly called Twitter, back in July. Meta CEO Mark Zuckerberg promoted Threads as going back to basics, saying, "The vision for Threads is to create an open and friendly public space for conversation."
Millions More 23andMe Records Leaked Online
Another database belonging to genetic testing website 23andMe has allegedly been published on a dark net forum, just days after an initial leak was revealed.
An individual going by the alias Golem published a database on BreachForums containing sensitive information on four million users.
Subsequent TechCrunch investigations confirmed that at least some of the data published matched known and public information. Roughly two weeks ago, Golem announced the theft of sensitive user data from 23andMe, claiming to have obtained it through credential stuffing, in which login credentials leaked from other breaches are tried against a site's accounts.
The hacker said the newly posted data includes information on British individuals, including some of the "wealthiest people living in the U.S. and Western Europe." A company spokesperson told TechCrunch that the company is aware of the news and is currently "reviewing the data to determine if it is legitimate."
Zuckerberg and Chan Announce a New York Biohub to Build Disease-Fighting Cellular Machines
Meta founder Mark Zuckerberg and his wife, pediatrician and philanthropist Priscilla Chan, announced on Wednesday plans to invest $250 million over 10 years to establish a new “biohub” in New York City focused on building a new class of cellular machines that can surveil the body and snuff out disease.
The new initiative, publicly revealed at the 2023 STAT Summit and previewed exclusively to STAT, is the latest program from the Chan Zuckerberg Initiative, or CZI, a company the couple founded in 2015 to help cure, prevent, or manage all disease by 2100. It joins the original San Francisco Biohub founded in 2016 and a Chicago Biohub founded earlier this year.
The initial biohub in CZI’s network was diffuse in ambition but developed tools for analyzing large troves of data, particularly from single-cell sequencing, and did direct work on infectious diseases. For the new hubs, CZI solicited specific grand challenges to be achieved in 10 to 15 years. The Chicago researchers will try to use tiny devices to decode the secrets of inflammatory disease — such as how immune cells malfunction to wreak havoc.
The new hub, consisting of researchers at Yale, Columbia, and Rockefeller, will essentially attempt the opposite: decode precisely how immune cells successfully sense and snuff out cellular fires, such as cancer and Alzheimer’s, in hopes of engineering cells that can detect and eventually treat conflagrations the immune system can’t see or can’t put out.
If everything goes right, it would eventually amount to an internal lab-built police, fire, and medical corps to stamp out disease before symptoms appear.
Shasta County Appoints Public Health Officer Who Fought COVID Vaccine Mandates
The Shasta County Board of Supervisors has appointed an outspoken critic of COVID-19 vaccine mandates to be the county’s new public health officer.
The hiring of Redding family physician James Mu comes 17 months after the board fired its previous public health officer, Karen Ramstrom, whom supervisors had criticized for following state mandates requiring masks and vaccinations during the pandemic.
“I would like to follow evidence-based policy. However, if there are medical dogma that are not good for the population in our county, then I will question and even challenge them,” said Mu, who also opposed vaccines and masks for children, after the supervisors voted to hire him Tuesday night.
Mu was one of 12 Shasta County physicians who, in February 2022, publicly signed an “Open Letter on COVID-19” that decried vaccine mandates, the testing of asymptomatic people and “the physical, psychological, and social impacts of mask and vaccine mandates on children.” The letter touted the benefits of natural immunity and promoted the use of “early treatments” to prevent the disease.
Amazon Will Start Testing Drones That Will Drop Prescriptions on Your Doorstep, Literally
Amazon will soon make prescription drugs fall from the sky when the e-commerce giant becomes the latest company to test drone deliveries for medications.
The company said Wednesday that customers in College Station, Texas, can now get prescriptions delivered by a drone within an hour of placing their order.
Amazon says customers will be able to choose from more than 500 medications, a list that includes common treatments for conditions like the flu or pneumonia, but not controlled substances.