Big Brother News Watch
Biden Censors Battered — Expect an Epic Supreme Court Showdown + More
Biden Censors Battered — Expect an Epic Supreme Court Showdown
Federal judges hammered fresh nails into the coffin of the Biden censorship regime Thursday in New Orleans. The thrashing the administration received will likely set up an epic Supreme Court battle that could help redefine freedom for our era.
The Biden administration rushed to persuade the appeals court to postpone enforcement of the injunction, and then sought to recast all of its closed-door shenanigans as public service.
At least two of the three judges on last week’s panel will likely uphold all or part of the injunction against federal censorship. The Biden administration will probably speedily appeal the case to the Supreme Court, setting up an epic showdown.
If Team Biden can destroy freedom of speech by renaming censorship “content moderation,” what other freedoms will it destroy with rhetorical scams? If endless demands by the FBI and other agencies don’t amount to “coercion,” then it is folly to expect the feds to ever admit how they are decimating Americans’ rights and liberties.
The Kids Online Safety Act Isn’t All Right, Critics Say
Debate continues to rage over the federal Kids Online Safety Act (KOSA), which seeks to hold platforms liable for feeding harmful content to minors. KOSA is lawmakers’ answer to whistleblower Frances Haugen’s shocking revelations to Congress. In 2021, Haugen leaked documents and provided testimony alleging that Facebook knew that its platform was addictive and was harming teens — but blinded by its pursuit of profits, it chose to ignore the harms.
Sen. Richard Blumenthal (D-Conn.), who sponsored KOSA, was among the lawmakers stunned by Haugen’s testimony. He said in 2021 that Haugen had shown that “Facebook exploited teens using powerful algorithms that amplified their insecurities.” Haugen’s testimony, Blumenthal claimed, provided “powerful proof that Facebook knew its products were harming teenagers.”
But when Blumenthal introduced KOSA last year, the bill faced immediate and massive blowback from more than 90 organizations — including tech groups, digital rights advocates, legal experts, child safety organizations, and civil rights groups. These critics warned lawmakers of KOSA’s many flaws, but they were most concerned that the bill imposed a vague “duty of care” on platforms that was “effectively an instruction to employ broad content filtering to limit minors’ access to certain online content.”
The fear was that the duty of care provision would likely lead platforms to over-moderate and imprecisely filter content deemed controversial — things like information on LGBTQ+ issues, drug addiction, eating disorders, mental health issues, or escape from abusive situations.
The bill has since been revised, but not all critics agree that the changes go far enough to fix its biggest flaws. In fact, the bill’s staunchest critics told Ars that the legislation is incurably flawed — due to the barely changed duty of care provision — and that it still risks creating more harm than good for kids. These critics also warn that all Internet users could be harmed, as platforms would likely start to censor a wide range of protected speech and limit user privacy by age-gating the Internet.
A Huge Scam Targeting Kids With Roblox and Fortnite ‘Offers’ Has Been Hiding in Plain Sight
Thousands of websites belonging to U.S. government agencies, leading universities, and professional organizations have been hijacked over the last half decade and used to push scammy offers and promotions, new research has found. Many of these scams are aimed at children and attempt to trick them into downloading apps, malware, or submitting personal details in exchange for nonexistent rewards in Fortnite and Roblox.
For more than three years, security researcher Zach Edwards has been tracking these website hijackings and scams. He says the activity can be linked back to affiliate users of one advertising company. The U.S.-registered company acts as a service that sends web traffic to a range of online advertisers, allowing individuals to sign up and use its systems. However, on any given day, Edwards, a senior manager of threat insights at Human Security, uncovers scores of .gov, .org, and .edu domains being compromised.
The schemes and ways people make money are complex, but each of the websites is hijacked in a similar way. Vulnerabilities or weaknesses in a website’s backend, or its content management system, are exploited by attackers who upload malicious PDF files to the website. These documents, which Edwards calls “poison PDFs,” are designed to show up in search engines and promote “free Fortnite skins,” generators for Roblox’s in-game currency, or cheap streams of Barbie, Oppenheimer, and other popular films. The files are packed with words people may search for on these subjects.
When someone clicks the links in the poison PDFs, they can be pushed through multiple websites, which ultimately direct them to scam landing pages, says Edwards, who presented the findings at the Black Hat security conference in Las Vegas. There are “lots of landing pages that appear super targeted to children,” he says.
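For readers curious about what the keyword stuffing looks like in practice, here is a minimal, hypothetical sketch of how a site administrator might scan their own PDF uploads for scam phrases of the kind Edwards describes. The keyword list, threshold, and URL are invented for illustration; requests and pypdf are real Python libraries, but this is not Edwards’ actual methodology.

```python
import io
import requests
from pypdf import PdfReader

SCAM_KEYWORDS = [
    "free v-bucks", "robux generator", "free fortnite skins", "free movie stream",
]  # hypothetical indicator phrases, not Edwards' actual list

def looks_poisoned(pdf_url: str, threshold: int = 5) -> bool:
    """Flag a PDF whose text repeats scam phrases suspiciously often."""
    resp = requests.get(pdf_url, timeout=10)
    resp.raise_for_status()
    reader = PdfReader(io.BytesIO(resp.content))
    text = " ".join(page.extract_text() or "" for page in reader.pages).lower()
    hits = sum(text.count(kw) for kw in SCAM_KEYWORDS)
    return hits >= threshold  # keyword stuffing is the tell

if __name__ == "__main__":
    # Placeholder URL; in practice this would come from a crawl of your own site.
    for url in ["https://example.gov/uploads/annual-report.pdf"]:
        print(url, "->", "suspicious" if looks_poisoned(url) else "clean")
```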
Mental Health & Social Media: What Message Prevails?
Mental health is a topic never far from the minds of girls aged 11 to 15 in the United States, according to a recent study by Common Sense Media, an online parental guidance platform.
Nearly seven in ten girls reported exposure to helpful mental health content and information each month. But on the flip side, just under half (45%) said they heard or saw harmful content about suicide or self-harm, while just under four in ten (38%) said the same of harmful content on eating disorders.
The report found that exposure to both topics is prevalent across TikTok, Instagram, YouTube, Snapchat and even messaging apps. More than one in three girls reported that they “hear or see things about suicide or self-harm that is upsetting to [them]” at least monthly on all platforms, and 15% of girls who use TikTok and Instagram said they come across this type of content daily. The figures are even higher for those with depressive symptoms.
Some 75% of Instagram users who reported moderate to severe depressive symptoms said they come across harmful suicide-related content on the platform at least once a month, nearly three times the rate among girls without depressive symptoms (26%).
According to the report, a similar pattern emerges for TikTok users (69% of girls with moderate to severe depressive symptoms see the harmful content versus 27% of girls without depressive symptoms) as well as for the other platforms.
TikTok Is Letting People Shut Off Its Infamous Algorithm — and Think for Themselves
TikTok recently announced that its users in the European Union will soon be able to switch off its infamously engaging content-selection algorithm. The EU’s Digital Services Act (DSA) is driving this change as part of the region’s broader effort to regulate AI and digital services in accordance with human rights and values.
TikTok’s algorithm learns from users’ interactions — how long they watch, what they like, when they share a video — to create a highly tailored and immersive experience that can shape their mental states, preferences, and behaviors without their full awareness or consent. An opt-out feature is a great step toward protecting cognitive liberty, the fundamental right to self-determination over our brains and mental experiences.
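To make the mechanism concrete, here is a minimal sketch of engagement-weighted ranking of the kind described above. The feature names and weights are invented for illustration; TikTok’s actual recommender is proprietary and vastly more sophisticated.

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    watch_fraction: float  # share of the video actually watched, 0.0-1.0
    liked: bool
    shared: bool

def engagement_score(history: list[Interaction]) -> float:
    """Fold past interactions into a single affinity score per topic."""
    return sum(i.watch_fraction + 2.0 * i.liked + 3.0 * i.shared
               for i in history) / max(len(history), 1)

# Topics the user engaged with most get ranked first in the next feed.
topic_history = {
    "dance": [Interaction(0.9, True, False), Interaction(1.0, True, True)],
    "news": [Interaction(0.2, False, False)],
}
ranked = sorted(topic_history,
                key=lambda t: engagement_score(topic_history[t]),
                reverse=True)
print(ranked)  # ['dance', 'news']: dance content would dominate the feed
```

Turning the algorithm off replaces this affinity-driven ordering with neutral signals such as regional popularity or chronology.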
Rather than being confined to algorithmically curated For You pages and live feeds, users will be able to see trending videos in their region and language, or a “Following and Friends” feed that lists the creators they follow in chronological order. This prioritizes popular content in their region rather than content selected for its stickiness. The law also bans targeted advertisements to users between 13 and 17 years old and provides more information and reporting options to flag illegal or harmful content.
A well-structured plan requires a combination of regulations, incentives, and commercial redesigns focusing on cognitive liberty. Regulatory standards must govern user engagement models, information sharing, and data privacy. Strong legal safeguards must be in place against interfering with mental privacy and manipulation. Companies must be transparent about how the algorithms they’re deploying work, and have a duty to assess, disclose, and adopt safeguards against undue influence.
Government Targeting U.K. Minorities With Social Media Ads Despite Facebook Ban
Government agencies and police forces are using hyper-targeted social media adverts to push messages about migration, jobs and crime to minority groups. Many of the ads are targeted using data linked to protected characteristics including race, religious beliefs and sexual orientation. Stereotypes about interests and traits such as music taste and hair type are also widely used.
In one case, a government campaign aimed at helping young people off benefits was targeted at Facebook users with interests including “afro-textured hair” and the “West Indies cricket team”.
The “microtargeting” is revealed in an analysis of more than 12,000 ads that ran on Facebook and Instagram between late 2020 and 2023. Supplied to U.K. academics by Facebook’s parent company Meta, and shared with the Observer, the data gives an insight into the use of targeted advertising by the state based on profiling by the world’s biggest social media company.
In 2021, Facebook announced a ban on targeting based on race, religion and sexual orientation amid concerns about discrimination, which led to the removal of several interest categories that had been used by advertisers to reach and exclude minority groups. But the latest analysis suggests interest labels assigned by Facebook based on web browsing and social media activity are routinely used as a proxy.
New Zealand, Whose Pandemic Response Was Closely Watched, Removes Last of COVID Restrictions
New Zealand on Monday removed the last of its remaining COVID-19 restrictions, marking the end of a government response to the pandemic that was watched closely around the world.
Prime Minister Chris Hipkins said the requirement to wear masks in hospitals and other healthcare facilities would end at midnight, as would a requirement for people who caught the virus to isolate themselves for seven days.
Reflecting on the government’s response to the virus over more than three years, Hipkins said that during the height of the pandemic he had longed for the day he could end all restrictions, but now it felt anticlimactic.
VR Headsets Give Enough Data for AI to Accurately Guess Ethnicity, Income and More + More
VR Headsets Give Enough Data for AI to Accurately Guess Ethnicity, Income and More
Blending virtual reality with artificial intelligence could turn into a privacy nightmare. By analyzing how people moved while wearing virtual reality headsets, researchers said, a machine learning model accurately predicted their height, weight, age, marital status and more the majority of the time. The work exposes how artificial intelligence could be used to guess personal data, without users having to directly reveal it.
In one study at the University of California, Berkeley, in February, researchers could pick out a single person from more than 50,000 other VR users with more than 94% accuracy. They achieved that result after analyzing just 200 seconds of motion data. In a second study, published in June, researchers figured out a person’s height, weight, foot size and country with more than 80% accuracy using data from 1,000 people playing the popular VR game Beat Saber. Even personal information like marital status, employment status and ethnicity could be identified with more than 70% accuracy.
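As a rough illustration of this class of inference attack (not the researchers’ actual pipeline), the sketch below trains an off-the-shelf classifier on synthetic motion summaries in which body height leaks into headset position, the kind of correlation the studies exploit. All data and features here are fabricated for demonstration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Fabricated ground truth: each player's height in centimeters.
height_cm = rng.normal(170, 10, n)

# Fabricated motion summaries. Headset height and arm span correlate with
# body height; mean hand speed is included as an uninformative feature.
X = np.column_stack([
    height_cm * 0.94 + rng.normal(0, 2, n),  # mean headset height
    height_cm * 1.03 + rng.normal(0, 5, n),  # arm-span proxy from controllers
    rng.normal(2.0, 0.5, n),                 # mean hand speed (noise)
])
y = (height_cm > 175).astype(int)  # attribute to infer: is the player tall?

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")  # well above chance
```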
Nearly half of the participants in both studies used Meta Platforms Inc.’s Quest 2, 16% used the Valve Index and the remaining participants used other headsets such as the HTC Vive or Samsung Windows Mixed Reality.
Virtual reality headsets capture data that wouldn’t be available through a traditional website or app, such as a user’s gaze, body language, body proportions and facial expressions, said Jay Stanley, senior policy analyst at the American Civil Liberties Union. “It brings together a whole bunch of other privacy issues, but also intensifies them.”
Biden Administration Defends Communications With Social Media Companies in High-Stakes Court Fight
The Biden administration on Thursday defended its communications with social media giants in court, arguing those channels must stay open so that the federal government can help protect the public from threats to election security, COVID-19 misinformation and other dangers.
The closely watched court fight reflects how social media has become an informational battleground for major social issues. It has revealed the messy challenges for social media companies as they try to manage the massive amounts of information on their platforms.
In oral arguments before a New Orleans-based federal appeals court, the U.S. government challenged a July injunction that blocked several federal agencies from discussing certain social media posts and sharing other information with online platforms, amid allegations by state governments that those communications amounted to a form of unconstitutional censorship.
The appeals court last month temporarily blocked the injunction from taking effect. But the outcome of Thursday’s arguments will determine the ultimate fate of the order, which placed new limits on the Departments of Homeland Security, Health and Human Services and other federal agencies’ ability to coordinate with tech companies and civil society groups.
During more than an hour of oral arguments Thursday, the three judges handling the appeal gave little indication of how they would rule in the case, with one judge asking just a couple of questions during the hearing. The other two spent much of the time pressing attorneys for the Biden administration and the plaintiffs in the case on issues concerning the scope of the injunction and whether the states even had the legal right — or standing — to bring the lawsuit.
Robotaxi Fight Intensifies as California Approves San Francisco Expansion
The California Public Utilities Commission voted on Thursday to let self-driving cars transport paying customers around San Francisco, overriding vociferous local and labor opposition in a preview of larger battles over technology that could reshape cities and workforces.
Driverless vehicles are a common sight on the streets of technology-focused San Francisco, and operators Waymo and Cruise sought state approval to deploy those vehicles for paid rides at any time of the day or night. San Francisco city officials and firefighters pushed back, warning autonomous vehicles had blocked emergency vehicles and driven erratically as the parent companies withheld vital data — a point echoed by Los Angeles counterparts.
The vehicles are “a menace to public safety that benefits private corporations at the expense of the public good,” San Francisco resident Joshua Babcock testified.
AI Can Be a Force for Good or Ill in Society, so Everyone Must Shape It, Not Just the ‘Tech Guys’
Superpower. Catastrophic. Revolutionary. Irresponsible. Efficiency-creating. Dangerous. These terms have been used to describe artificial intelligence over the past several months. The release of ChatGPT to the general public thrust AI into the limelight, and many are left wondering: how is it different from other technologies, and what will happen when the way we do business and live our lives changes entirely?
First, it is important to recognize that AI is just that: a technology. As Amy Sample Ward and I point out in our book, The Tech That Comes Next, technology is a tool created by humans, and therefore subject to human beliefs and constraints. AI has often been depicted as a completely self-sufficient, self-teaching technology; however, in reality, it is subject to the rules built into its design.
Whereas designers have a great deal of power in determining how AI tools work, industry leaders, government agencies and nonprofit organizations can exercise their power to choose when and how to apply AI systems.
Generative AI may impress us with its ability to produce headshots, plan vacation agendas, create work presentations, and even write new code, but that does not mean it can solve every problem. Despite the technological hype, those deciding how to use AI should first ask the affected community members: “What are your needs?” and “What are your dreams?” The answers to these questions should drive constraints for developers to implement and should drive the decision about whether and how to use AI.
Hospital Bosses Love AI. Doctors and Nurses Are Worried.
Mount Sinai is among a group of elite hospitals pouring hundreds of millions of dollars into AI software and education, turning their institutions into laboratories for this technology. They’re buoyed by a growing body of scientific literature, such as a recent study finding AI readings of mammograms detected 20% more cases of breast cancer than radiologists — along with the conviction that AI is the future of medicine.
But the advances are triggering tension among front-line workers, many of whom fear the technology comes at a steep cost to humans. They worry about the technology making wrong diagnoses, revealing sensitive patient data and becoming an excuse for insurance and hospital administrators to cut staff in the name of innovation and efficiency.
Most of all, they say software can’t do the work of a human doctor or nurse.
Though AI can analyze troves of data and predict how sick a patient might be, Michelle Mahon, assistant director of nursing practice at the union National Nurses United, has often found that these algorithms can get it wrong. Nurses see beyond a patient’s vital signs, she argues. They see how a patient looks, smell unnatural odors from their body and can use these biological data points as predictors that something might be wrong. “AI can’t do that,” she said.
The ‘Godfather of AI’ Has a Hopeful Plan for Keeping Future AI Friendly
Geoffrey Hinton, perhaps the world’s most celebrated artificial intelligence researcher, made a big splash a few months ago when he publicly revealed that he’d left Google so he could speak frankly about the dangers of the technology he helped develop. His announcement did not come out of the blue.
Late 2022 was all about the heady discovery of what AI could do for us. In 2023, even as we GPT’d and Bing chat-ed, the giddiness was washed down with a panic cocktail of existential angst. So it wasn’t a total shock that the man known as the “Godfather of AI” would share his own thoughtful reservations.
Hinton took pains to say that his critique was not a criticism of the search giant that had employed him for a decade; his departure simply avoided any potential tensions that come from critiquing a technology that your company is aggressively deploying.
Hinton’s basic message was that AI could potentially get out of control, to the detriment of humanity. In the first few weeks after he went public, he gave a number of interviews, including with WIRED’s own Will Knight, about those fears, which he had come to feel only relatively recently, after seeing the power of large language models like that behind OpenAI’s ChatGPT.
Vaccine Manufacturer Is Reprimanded for Online Censorship Attempts + More
Vaccine Manufacturer Is Reprimanded for Online Censorship Attempts
In a stern measure that has been widely hailed by free speech advocates, the German Council for Public Relations, a standards-setting body, has officially reprimanded BioNTech, a partner of Pfizer, for attempting to silence critics on X.
As reported by Lee Fang, BioNTech’s attempted censorship was aimed at activists who have been critical of the pharmaceutical industry’s reluctance to share intellectual property rights, thus hindering the production of generic, low-cost COVID-19 vaccines. The company reached out to Twitter executives back in 2020, seeking to suppress these voices.
Further investigation into the relationship between social media and the pharmaceutical industry has exposed the extent to which companies have sought to influence online discourse. Pfizer and other pharmaceutical giants financed a group known as Public Good Projects (PGP) through the Biotechnology Innovation Organization.
The PGP was granted special access to Twitter and even advised the company on which forms of information to censor. Emails reveal that PGP frequently flagged tweets and accounts as dangerous “misinformation,” including those that merely criticized vaccine policies or mandates.
The House Judiciary Committee’s request for documents from Pfizer’s CEO, Albert Bourla, is expected to uncover more about the BioNTech censorship request and initiatives like the Pfizer-funded PGP campaign.
Colorado’s Childhood Immunization Rates Decline as Exemptions Rise
Immunizations among school-age children continue to decline in Colorado, falling below 90% for the second year in a row.
What’s happening: More parents are exempting their children from required vaccines on religious or medical grounds, with exemptions reaching 4% at the kindergarten level.
The intrigue: Gov. Jared Polis, a Democrat who opposes vaccine mandates, is at the center of the debate. He worked to defeat a bill in 2019 that would have made it harder for parents to claim an exemption from required immunizations. Instead, he signed an executive order to increase education and study the issue.
Elon Musk Accuses Australia’s ABC of Embracing Censorship After It Shut Down Twitter/X Accounts
Elon Musk has accused the ABC of embracing censorship after Australia’s public broadcaster drastically reduced its presence on X, the social media platform formerly known as Twitter.
“Well of course they prefer censorship-friendly social media,” Musk posted on X in reply to an ABC news report about the move. “The Australian public does not.”
The ABC’s managing director, David Anderson, said Wednesday that the broadcaster was shutting down almost all of its official accounts on X. He cited “toxic interactions” on the social media site as a reason for the decision, along with the cost and better interactions with ABC content on other platforms.
Anderson said the vast majority of the ABC’s social media audience was located on official sites on YouTube, Facebook, Instagram and TikTok.
FTC Health Data Breach Rule Scrutinized
In May, the Federal Trade Commission proposed a sweeping expansion of health data privacy rules, and now, the period for the public to weigh in has ended.
While many comments were supportive, others warned that the FTC was overstepping its authority and opening itself up to litigation, and urged more clarity.
What’s new in the rule: The proposal would clarify that health app developers would be subject to regulations requiring them to notify customers if their identifiable data is accessed by hackers or business partners or shared for marketing without patient approval. The rule would include those offering health services and supplies — broadly defined to include fitness, sleep, diet and mental health products and services, among a laundry list of categories.
The proposal aims to clarify how the FTC plans to expand its use of a 14-year-old rule. Earlier this year, the agency used the rule for the first time in a case over sharing data with business partners, accusing GoodRx of sharing data with Google, Facebook and other firms and settling for $1.5 million.
U.K. Defends Plan to Demand Access to Encrypted Messages to Protect Children
British technology minister Michelle Donelan defended plans to require messaging apps to provide access to encrypted private messages when needed to protect children from abuse, which major platforms say would undermine the privacy of their users.
Donelan told the BBC that the government was not against encryption, and that access would only be requested as a last resort under Britain’s Online Safety Bill, which is expected to become law later this year.
Meta-owned WhatsApp, Signal and other messaging apps have opposed the plan, arguing that the law could give an “unelected official the power to weaken the privacy of billions of people around the world.”
The dispute is part of a wider debate between large tech companies, which say they are protecting free speech, and governments which say they are defending citizens from harmful content online.
‘Think Again’: Rubio Warns Americans Against TikTok, Wants It Banned
In the latest step in his fight against the pernicious influence of TikTok, Florida GOP Senator Marco Rubio called for banning the Chinese-controlled platform in an opinion piece published Tuesday.
“TikTok’s public-policy chief blatantly lied under oath when he denied US data is stored in China,” Rubio wrote in his piece published by The New York Post. “ByteDance, TikTok’s China-based parent company, was caught in October using the app to spy on American journalists.”
AI Is Being Used to Give Dead, Missing Kids a Voice They Didn’t Ask For + More
AI Is Being Used to Give Dead, Missing Kids a Voice They Didn’t Ask For
These are some of the world’s most high-profile criminal cases involving children. These are stories of abuse, abduction, torture and murder that have long haunted the countries where the crimes occurred.
Now, some content creators are using artificial intelligence to recreate the likeness of these deceased or missing children, giving them “voices” to narrate the disturbing details of what happened to them. Experts say that while technological advances in AI bring creative opportunity, they also risk presenting misinformation and offending the victims’ loved ones. Some creators have defended their posts as a new way to raise awareness.
Despite TikTok’s attempts to remove such videos, many can still be found on the platform, some of which have generated millions of views.
Zoom Says It Won’t Use Your Calls to Train AI ‘Without Your Consent’ After Its Terms of Service Sparked Backlash and Prompted People to Talk About Ditching the Service
Zoom has responded to backlash over a part of its user agreement that seemed to say the video communications company could use customers’ meetings to train AI.
“You consent to Zoom’s access, use, collection, creation, modification, distribution, processing, sharing, maintenance, and storage of Service Generated Data for any purpose, to the extent and in the manner permitted under applicable Law,” section 10.2 says, in part. One such purpose listed in this section is “machine learning or artificial intelligence (including for the purposes of training and tuning of algorithms and models).”
Zoom users promptly bashed the site online and threatened to take their calls elsewhere.
Following the backlash, Zoom chief product officer Smita Hashim wrote in a blog post on Monday that the company added a sentence to its terms of service to clarify that “we will not use audio, video, or chat customer content to train our artificial intelligence models without your consent.”
Wisconsin Hospital Reaches $2 Million Settlement for MyChart Pixels
Becker’s Hospital Review reported:
Milwaukee-based Froedtert Health has agreed to pay a $2 million settlement after a patient-led lawsuit accused the health system of sharing patient data put into MyChart with Facebook, Milwaukee Business Journal reported on Aug. 7.
The lawsuit alleged that Froedtert Health installed a tracking tool, dubbed Meta Pixel, on its website and patient portal that “automatically transmits to Facebook every click, keystroke and intimate detail about their medical treatment,” according to the publication.
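As background on the mechanism at issue: a tracking pixel is typically a tiny image whose URL carries event data to a third-party server. The toy Flask sketch below illustrates the general technique under that assumption; it is not Meta’s actual Pixel, which is a JavaScript library that can capture clicks and form input before firing image requests like this one.

```python
import base64
from flask import Flask, Response, request

app = Flask(__name__)

# A 1x1 transparent GIF, base64-encoded.
PIXEL_GIF = base64.b64decode(
    "R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7"
)

@app.route("/pixel.gif")
def pixel():
    # Whatever the embedding page puts in the query string reaches this
    # server: page URL, click targets, even form field contents.
    app.logger.info("event=%s page=%s",
                    request.args.get("event"), request.args.get("page"))
    return Response(PIXEL_GIF, mimetype="image/gif")

# A page would embed it as, e.g.:
#   <img src="https://tracker.example/pixel.gif?event=click&page=/portal/login">
if __name__ == "__main__":
    app.run(port=8000)
```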
Although the health system admitted no wrongdoing and has denied the allegations, it agreed to the $2 million settlement.
Under the settlement agreement, all persons, including employees, who logged into a MyChart patient portal between Feb. 1, 2017, and May 23, 2022, will receive a payout.
Facebook Execs Felt ‘Pressure’ From Biden White House to Censor COVID Vaccine Skepticism, Emails Show: Report
Facebook executives reportedly felt “pressure” from the Biden White House to censor skepticism toward the COVID vaccine on its platform, and even predicted such actions would backfire, according to newly surfaced emails.
Public, the Substack newsletter founded by independent journalist Michael Shellenberger, reported Tuesday on emails from the Facebook Files, the internal documents from the Meta-owned platform obtained by House Republicans.
The report showed Facebook’s director of strategic response, Rosa Birch, attempting to push back on the vaccine-skeptic censorship requests, warning that doing so would “prevent hesitant people from talking through their concerns online and reinforce the notion that there’s a cover-up.”
In an April 2021 email to Facebook CEO Mark Zuckerberg and COO Sheryl Sandberg, Birch wrote, “We are facing continued pressure from external stakeholders, including the White House and the press, to remove more COVID-19 vaccine-discouraging content.”
Majority of Americans Are Concerned About Rapidly Developing AI: Poll
Most Americans across party lines say they are concerned about rapidly developing artificial intelligence (AI) technology, according to a new poll released Wednesday.
In a survey of 1,001 registered voters in the United States, 62% of respondents said they were mostly or somewhat concerned about growth in AI, while 21% said they were mostly or somewhat excited about it, and 16% said they were “totally neutral.”
Majorities of voters in both political parties said they thought AI could eventually pose a threat to the existence of the human race: 76% of all respondents, including 75% of Democrats and 78% of Republicans. Seventy-two percent of voters surveyed also said they preferred slowing down the development of AI.
Survey results also showed broad policy consensus in favor of regulating the AI industry. The vast majority of respondents, at 82%, said they don’t trust tech company executives to self-regulate, and 56% of voters said they would support having a federal agency regulate the use of AI — compared to 14% who would oppose a federal agency and 30% who were unsure.
Most Girls Get Unsolicited Messages on Social Media
More than half of 11- to 15-year-old girls using Instagram and Snapchat in the United States have been contacted by strangers in a way that made them feel uncomfortable, according to a report by Common Sense Media, a nonprofit organization that reviews and provides ratings for media and technology in order to safeguard children.
Meanwhile, as Statista’s Anna Fleck reports, some 48% of teen girls in the U.S. said they had been sent unsolicited messages over a messaging app, while 46% were contacted over TikTok and 30% on YouTube.
The report also finds that nearly half (45%) of girls who use TikTok say they feel “addicted” to the platform or use it more than they intended at least weekly.
By that measure of “addictiveness,” the share of users who reported using a platform more than intended at least weekly, the remaining platforms rank as follows: Snapchat (37%), YouTube (34%), Instagram (33%) and messaging apps (30%).
AI Is Building Highly Effective Antibodies That Humans Can’t Even Imagine
At an old biscuit factory in South London, giant mixers and industrial ovens have been replaced by robotic arms, incubators, and DNA sequencing machines. James Field and his company LabGenius aren’t making sweet treats; they’re cooking up a revolutionary, AI-powered approach to engineering new medical antibodies.
The LabGenius approach yields unexpected solutions that humans may not have thought of, and finds them more quickly: It takes just six weeks from setting up a problem to finishing the first batch, all directed by machine learning models.
LabGenius has raised $28 million from the likes of Atomico and Kindred, and is beginning to partner with pharmaceutical companies, offering its services like a consultancy. Field says the automated approach could be rolled out to other forms of drug discovery too, turning the long, “artisanal” process of drug discovery into something more streamlined.
More and More Businesses Are Blocking ChatGPT on Work Devices
Organizations are increasingly banning the use of generative AI tools such as ChatGPT, citing concerns over privacy, security and reputational damage.
In a new report published by BlackBerry, 66% of the organizations surveyed said they will be prohibiting the infamous AI writer and similar tools in the workplace, and 76% of IT decision-makers agreed that employers are entitled to control what software workers can use for their jobs.
What’s more, 69% of the organizations implementing bans said the bans would be permanent or long term, given the risk of harm they believe the tools pose to company security and privacy.