Big Brother News Watch
Get a Clue, Says Panel About Buzzy AI Tech: It’s Being ‘Deployed as Surveillance’ + More
Get a Clue, Says Panel About Buzzy AI Tech: It’s Being ‘Deployed as Surveillance’
Earlier today at a Bloomberg conference in San Francisco, some of the biggest names in AI turned up, including, briefly, Sam Altman of OpenAI, who just ended his two-month world tour, and Stability AI founder Emad Mostaque. Still, one of the most compelling conversations happened later in the afternoon, in a panel discussion about AI ethics.
The panel featured Meredith Whittaker, president of the secure messaging app Signal; Credo AI co-founder and CEO Navrina Singh; and Alex Hanna, director of research at the Distributed AI Research Institute. The three had a unified message for the audience: Don’t get so distracted by the promise and threats associated with the future of AI. It is not magic, it’s not fully automated and — per Whittaker — it’s already intrusive beyond anything most Americans seemingly comprehend.
The comments made separately by Whittaker — who previously worked at Google, co-founded NYU’s AI Now Institute and advised the Federal Trade Commission — were even more pointed (and, judging by the audience’s enthusiastic reaction, more impactful). Her message was that, however enchanted the world may be with chatbots like ChatGPT and Bard, the technology underpinning them is dangerous, especially as power grows more concentrated among those at the top of the advanced AI pyramid.
There’s much more that everyday people don’t understand about what’s happening, Whittaker suggested, calling AI “a surveillance technology.” Facing the crowd, she elaborated, noting that AI “requires surveillance in the form of these massive datasets that entrench and expand the need for more and more data and more and more intimate collection. The solution to everything is more data, and more knowledge pooled in the hands of these companies.
“But these systems are also deployed as surveillance devices. And I think it’s really important to recognize that it doesn’t matter whether output from an AI system is produced through some probabilistic statistical guesstimate, or whether it’s data from a cell tower that’s triangulating my location. That data becomes data about me. It doesn’t need to be correct. It doesn’t need to be reflective of who I am or where I am. But it has power over my life that is significant, and that power is being put in the hands of these companies.”
Child Predators Are Using Discord, a Popular App Among Teens, for Sextortion and Abductions
Discord launched in 2015 and quickly emerged as a hub for online gamers, growing through the pandemic to become a destination for communities devoted to topics as varied as crypto trading, YouTube gossip and K-pop. It’s now used by 150 million people worldwide.
But the app has a darker side. In hidden communities and chat rooms, adults have used the platform to groom children before abducting them, trade child sexual exploitation material (CSAM) and extort minors whom they trick into sending nude images.
In a review of international, national and local criminal complaints, news articles and law enforcement communications published since Discord was founded, NBC News identified 35 cases over the past six years in which adults were prosecuted on charges of kidnapping, grooming or sexual assault that allegedly involved communications on Discord.
Discord isn’t the only tech platform dealing with the persistent problem of online child exploitation, according to numerous reports over the last year. But experts have suggested that Discord’s young user base, decentralized structure and multimedia communication tools, along with its recent growth in popularity, have made it a particularly attractive location for people looking to exploit children.
What In-House Counsel Should Take Away From a Pair of Children’s Privacy Mega-Settlements
Regulators are playing hardball on kids’ privacy, with two recent settlements with tech giants giving in-house counsel in the industry plenty of reasons to review their own data collection and storage policies.
Last month, Amazon agreed to pay $25 million to settle allegations from the Federal Trade Commission and Department of Justice that it illegally retained voice recordings from children who used their Alexa devices, and then deceived parents about it.
According to the charges, Amazon used the recordings to improve its algorithm and kept transcripts of what kids had said even after parents requested the recordings be deleted.
Less than a week later, Microsoft agreed to pay $20 million to settle FTC allegations that it illegally collected personal information from children who signed up for its Xbox gaming system without notifying their parents or obtaining their consent, and then illegally retained children’s personal information.
FDIC Mistakenly Releases Confidential Information on Silicon Valley Bank Depositors, Revealing Major Tech Giants That Benefitted From Government Help
After Silicon Valley Bank failed in March, U.S. officials — including President Joe Biden and Treasury Secretary Janet Yellen — described its rescue as a necessary step to protect small businesses.
While many early-stage startups banked with SVB, new documents obtained by Bloomberg show that several global tech giants with significant deposits also benefitted from the government’s intervention.
Sequoia Capital, the venture capital firm that backed giants like Apple and Google, for instance, had $1 billion of its $85 billion in assets at SVB, according to Bloomberg. Altos Labs Inc., a life sciences startup that has received billions from funders like Jeff Bezos, also had $680.3 million in the failed bank.
Bloomberg obtained the new documents through a Freedom of Information Act request filed with the FDIC. The independent government agency accidentally turned over the documents without first redacting some key details. Bloomberg published some of those details despite requests from the agency to withhold the information, the outlet reported.
Tom Morello, Zack de la Rocha, and Boots Riley Boycotting Venues That Use Face-Scanning Technology
Over 100 artists including Rage Against the Machine co-founders Tom Morello and Zack de la Rocha, along with Boots Riley and Speedy Ortiz, have announced that they are boycotting any concert venue that uses facial recognition technology, citing concerns that the tech infringes on privacy and increases discrimination.
The boycott, organized by the digital rights advocacy group Fight for the Future, calls for a ban on face-scanning technology at all live events. Several smaller independent concert venues across the country, including the House of Yes in Brooklyn, the Lyric Hyperion in Los Angeles, and Black Cat in DC, also pledged not to use facial recognition tech at their shows. Other artists who said they would boycott include Anti-Flag, Wheatus and Downtown Boys, along with more than 80 others.
“Surveillance tech companies are pitching biometric data tools as ‘innovative’ and helpful for increasing efficiency and security. Not only is this false, but it’s also morally corrupt,” Leila Nashashibi, campaigner at Fight for the Future, said in a statement.
“For starters, this technology is so inaccurate that it actually creates more harm and problems than it solves, through misidentification and other technical faultiness. Even scarier, though, is a world in which all facial recognition technology works 100% perfectly — in other words, a world in which privacy is nonexistent, where we’re identified, watched, and surveilled everywhere we go.”
Amazon Is Putting Up $100 Million to Battle Microsoft and Google for the Next Generation of AI
While many companies are racing to bring their own AI tools and AI writers to market, Amazon’s cloud division has instead focused on helping startups reach that point, and the launch of its Generative AI Innovation Center takes that one step further.
The program is designed to connect AI and machine learning experts with AWS customers to help them design and deploy new generative AI products. Amazon is putting up $100 million to kick-start it, building on more than a quarter century of AI investment by the firm.
Key Figure Departs From Biden Administration, Accused of Leading ‘Vast Censorship Enterprise’
President Joe Biden’s digital director Rob Flaherty, a central figure in the administration’s efforts to shape social media narratives as part of a censorship-by-proxy effort, is leaving the White House by the end of the month, a court filing shows.
Flaherty, a veteran of Biden’s 2020 campaign, has been overseeing the biggest-ever White House digital team as the director of the Office of Digital Strategy.
The evidence presented by the plaintiffs — Louisiana Attorney General Jeff Landry and Missouri Attorney General Eric Schmitt — led the judge overseeing the case to approve depositions of eight officials believed or confirmed to be part of the administration’s censorship campaign, with Flaherty among those flagged for deposition.
However, due to Flaherty’s imminent departure, his as-yet-unnamed successor will take his place as a defendant in the lawsuit, per the court document, which was filed in the U.S. District Court for the Western District of Louisiana on June 16.
Biden Violated First Amendment by Pressing Big Tech on COVID Misinfo + More
GOP: Biden Violated First Amendment by Pressing Big Tech on COVID Misinfo
House Republicans on Wednesday used a hearing on the U.S. government’s COVID-19 policies to highlight a lawsuit accusing the Biden administration of using social media to censor Americans’ First Amendment speech rights.
The hearing featured testimony from Missouri Attorney General Andrew Bailey, who with his Louisiana counterpart is suing in federal court to block the White House from working with tech companies like Google and Meta to restrict what Americans can say on social media sites. A Trump-appointed Louisiana federal judge has allowed the case to move forward, raising the possibility of another high-profile legal clash in higher courts over online speech.
The hearing and lawsuit are part of a years-long political tug-of-war over what Americans can and can’t say on social media — and who gets to decide. As social media sites like YouTube, Facebook and Twitter have worked to remove or reduce the spread of messages they deem offensive, harmful or even dangerous, conservatives have pushed back, saying the efforts target conservative opinion and infringe on Americans’ freedom of expression.
House Republicans Look to Aid Troops Kicked Out for Refusing Former Pentagon Vaccine Mandate
Several Pentagon policies meant to protect troops penalized under the Defense Department’s since-repealed COVID-19 vaccine mandate have made it into the House Armed Services Committee’s annual defense policy bill.
The panel, which held its markup for the annual National Defense Authorization Act on Wednesday, adopted five separate GOP-offered amendments on how to treat service members and military academy cadets kicked out for refusing the vaccine after the mandate was put in place in August 2021.
The first amendment, offered by Rep. Jim Banks (R-Ind.), passed 32-26. It would prohibit any adverse action against troops who did not receive the vaccine and allow those kicked out for refusing it to be reinstated without any detriment to their careers.
Banks offered two more amendments that passed, including one that would require the military services’ boards of corrections to prioritize cases of troops who did not receive the vaccine and want to rejoin the ranks, and another that would require the DOD to inform those who were separated about how to rejoin if they choose to do so.
Unvaccinated Canadian Woman Denied Organ Transplant Finds U.S. Hospital to Perform Surgery
An Alberta woman who was removed from a high-priority organ transplant list for not receiving a COVID-19 vaccine has found a hospital in the United States that is willing to perform the surgery.
Sheila Annette Lewis was diagnosed with a terminal illness in 2018 and was told she would not survive without an organ transplant. She was placed on an organ waiting list in 2020, but in 2021 she was informed that a COVID-19 vaccine was required to receive the transplant.
With the help of her friends, Lewis has now found a hospital in the U.S. that would not require her to be vaccinated for COVID-19 to be a transplant recipient. The testing is estimated to cost $100,000 and, after Lewis finds a suitable transplant donor, the surgery will cost another estimated $500,000.
Microsoft Wants Your Company to Feed Its Private Data Into ChatGPT
Microsoft has launched a new capability that lets firms hand over their corporate data to its Azure OpenAI Service in order to get better results when querying the AI chatbot.
OpenAI is the company behind the ever-popular ChatGPT, and Microsoft has been one of its biggest investors, pouring billions of dollars into its development; ChatGPT is hosted on Azure. Since making those investments, the Redmond giant has been integrating the AI models behind the chatbot — the latest being GPT-4 — into many of its products and services.
Although there have been numerous privacy and regulatory concerns about ChatGPT since its release — due to the amount of data it gathered from countless sources in its initial training and continues to collect from users — Microsoft seems to have gone the other way, with Andy Beatman, senior product marketing manager for Azure AI, saying that the new data hand-over feature is a “highly requested customer capability.”
So Microsoft’s argument appears to be that in handing over your company’s data, you’ll be able to get those tailored answers without needing to customize an AI model yourself.
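To make that pitch concrete, here is a minimal, self-contained sketch of the retrieval-augmented pattern that “bring your own data” chatbot features are generally built on: the company’s documents are not used to retrain the model; instead, relevant passages are looked up at query time and folded into the prompt the hosted model answers. Everything below is an assumption for illustration (the in-memory COMPANY_DOCS store, the keyword scoring, the prompt format), and none of it calls the actual Azure OpenAI Service, which has its own APIs and search infrastructure.

```python
# Minimal, illustrative sketch of retrieval-augmented generation (RAG),
# the general pattern behind "bring your own data" chatbot features.
# This does NOT call the Azure OpenAI Service; the document store,
# scoring function and prompt format are all hypothetical.

from collections import Counter

# Hypothetical internal documents a company might index.
COMPANY_DOCS = {
    "leave-policy": "Employees accrue 1.5 vacation days per month, capped at 30 days.",
    "expense-policy": "Meals under $50 are reimbursed without a receipt; travel needs approval.",
    "it-policy": "Corporate laptops must be encrypted and patched within 14 days of a release.",
}

def score(query: str, text: str) -> int:
    """Crude keyword-overlap score between a query and a document."""
    q_words = Counter(query.lower().split())
    d_words = Counter(text.lower().split())
    return sum((q_words & d_words).values())

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents that best match the query."""
    ranked = sorted(COMPANY_DOCS.values(), key=lambda t: score(query, t), reverse=True)
    return ranked[:k]

def build_grounded_prompt(query: str) -> str:
    """Assemble a prompt that grounds the model's answer in retrieved company data.

    In a hosted service, this prompt would be sent to a general-purpose model;
    the base model itself is never retrained on the company's documents.
    """
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return (
        "Answer using only the company documents below.\n"
        f"Documents:\n{context}\n"
        f"Question: {query}\n"
    )

if __name__ == "__main__":
    print(build_grounded_prompt("How many vacation days do employees get?"))
```

The privacy trade-off the article points to lives in that retrieval step: grounding answers this way means the documents themselves are handed to and processed by the hosted service, rather than staying entirely in-house.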
Social Media Execs Face U.K. Criminal Liability Over Data of Deceased Children
Social media executives could face criminal liability if they refuse to share information about deceased children’s online activity with authorities, according to changes to the U.K.’s Online Safety Bill announced by the government today.
The changes would empower Ofcom to demand relevant data from social media companies, including information about how a child interacted with content online and how that content was promoted by the company’s algorithms, in instances where online harms are suspected to have played a part in a child’s death.
Details of how Ofcom would legally compel companies headquartered outside of the U.K. to hand over the relevant data are still to be worked out. Parkinson said the government would “seek to engage our American counterparts” to ensure U.S. data laws did not inadvertently prevent compliance with the new U.K. legislation.
TechCrunch Disrupt’s Security Stage Highlights the Risks of Spyware, Government Surveillance
Governments all over the world, authoritarian and democratic, use spyware to hack the phones of activists, journalists, and political rivals who are critical of their governments.
Initially, the spyware industry consisted of a few known actors, like Hacking Team and FinFisher.
But over the past decade — as the technology evolved and smartphones and computers became ubiquitous — the industry has ballooned in size. Can this industry operate legally and ethically? If not, what can we do to counter state-backed abuse of spyware and its violent consequences, including harassment, arbitrary detention, and killings?
Governments using spyware that exploits flaws found in billions of phones put everyone at risk. Should there be a vulnerabilities equities process to ensure serious vulnerabilities are reported and disclosed to the relevant technology companies affected, the way that U.S. intelligence does now?
Europe’s Digital ‘Enforcer’ Takes EU Tech Rulebook to Silicon Valley
The European Union is firing a last warning shot at Silicon Valley titans ahead of the incoming rules to police social media platforms.
Europe’s digital Commissioner Thierry Breton is set to give Meta and Twitter‘s top management an in-person reminder that the clock is ticking to comply with the European Union’s Digital Services Act (DSA). The law starts applying in late August and will oblige major tech platforms to fight back against online hate speech, illegal content and disinformation.
While Twitter’s Elon Musk and Meta’s Mark Zuckerberg square off on social media over a prospective cage fight, Breton, who as the internal market commissioner oversees EU services regulating Big Tech, is starting a two-day trip to California to push the EU’s new rules.
The Economic Consequences for Students During Pandemic ‘Could Be Far Greater Than Those of Great Recession’ + More
The Economic Consequences for Students During the Pandemic ‘Could Be Far Greater Than Those of the Great Recession’
The COVID-19 public health emergency may be officially over, but a new report shows that we’re only starting to see its impact on children’s education.
Standardized testing scores are down significantly, children in some districts are years behind on reading and math, and many have missed out on key periods of socialization, ProPublica reported. Remote learning and trauma relating to COVID-19 deaths both played a role in the setbacks.
As a result of this so-called learning loss, the economic fallout for those who went to school during the pandemic years could be worse than for people who worked during the Great Recession, Stanford economist Eric Hanushek said while presenting research at an event earlier this year, according to ProPublica. These students will be “punished throughout their lifetime,” he said at the event.
In a report published in May 2020, at the onset of the pandemic, Hanushek estimated that students who were in grades 1 through 12 during the pandemic could expect 3% lower incomes over their lifetimes due to learning loss.
Spike in Teen Depression Aligns With Rise of Social Media, New Poll Suggests: ‘It’s Not Going Anywhere’
“I can’t do anything right.” “I do not enjoy life.” “My life is not useful.” The share of teens who agree with these phrases has doubled over the past decade, according to an annual poll conducted by the University of Michigan — and one expert asserts that the increase in depressive symptoms is tied to the rise of social media.
In her book, “Generations: The Real Differences Between Gen Z, Millennials, Gen X, Boomers and Silents — and What They Mean for America’s Future,” Dr. Jean Twenge, a psychologist and a professor at San Diego State University, highlighted the poll’s results as a means of linking the spike in teen depression to the increase in social media use.
Since 1991, the University of Michigan has polled 50,000 students in 8th, 10th and 12th grades about their level of agreement with those three statements. After 2012, the number of students expressing agreement with those sentiments started to climb.
Dr. Zachary Ginder, a psychological consultant and doctor of clinical psychology at Pine Siskin Consulting, LLC in Riverside, California, was not involved in the poll but said the correlation between depression and social media use aligns with previous research.
Amazon Duped Millions Into Enrolling in Prime, U.S. Regulator Says in Lawsuit
The U.S. Federal Trade Commission has sued Amazon for what it called a years-long effort to enroll consumers without consent into its paid subscription program, Amazon Prime, and making it hard for them to cancel.
The FTC, the U.S. agency charged with consumer protection, filed a federal lawsuit in Seattle, where Amazon is headquartered, alleging that the tech behemoth “knowingly duped millions of consumers into unknowingly enrolling in Amazon Prime” through a secret project internally called “Iliad.”
In its complaint, the FTC said Amazon used “manipulative, coercive or deceptive user-interface designs known as ‘dark patterns’ to trick consumers into enrolling in automatically renewing Prime subscriptions.”
When College Students Cut Back on Social Media, They Got Happier: Study
U.S. News & World Report reported:
Cutting back social media to a spare 30 minutes per day could be the key to reducing anxiety, depression, loneliness and feelings of fear of missing out, researchers say.
That was true for college students in a new study who self-limited social media — often successfully and sometimes squeezing in just a bit more time — for two weeks.
The study dovetailed with recent health advisories from the U.S. Surgeon General and the American Psychological Association, which warned that young people’s mental health has suffered as their use of social media has surged.
Chuck Schumer Calls on Congress to Pick up the Pace on AI Regulation
Senate Majority Leader Chuck Schumer is launching a new “all-hands-on-deck” effort Wednesday to regulate artificial intelligence, aiming to strike a balance between economic competitiveness and safety.
Schumer laid out his vision in a speech at a Washington think tank on Wednesday, calling on his Senate colleagues to create new rules regulating the emerging AI industry. The plan, the SAFE (Security, Accountability, Foundations, Explain) Innovation Framework, doesn’t provide specific policy requests or define the boundaries of “AI.”
Instead, it asks lawmakers to work together to address a variety of AI’s potential risks, from national security and job loss to misinformation, bias, and copyright.
Congress has struggled to regulate the tech industry, failing to pass long-debated legislation on data privacy and competition. But AI is different, according to Schumer, and presents new threats that lawmakers should address with urgency. To help quicken Congress’ pace on AI rules, Schumer said he would convene a series of “AI Insight Forums” later this year. These panels are intended to bring experts and lawmakers together to help form regulations.
The Federal Government Has a New Cyber Player
The federal government just got a new cyber player: a section of the Justice Department wholly devoted to disrupting and prosecuting cyberthreats to national security.
Matthew Olsen, DOJ’s assistant attorney general for national security, announced the National Security Division-housed National Security Cyber Section — NatSec Cyber for short — at the Hoover Institution think tank on Tuesday, and shared some additional details with me in an interview.
Sean Newell, who has been serving in the office of Deputy Attorney General Lisa Monaco, is taking the job of acting head of the section. Monaco is one of the Justice Department officials who have been emphasizing disruptive operations against malicious hackers, such as botnet takedowns and the recovery of ransom payments, an approach that was also a point of emphasis in the Biden administration’s national cybersecurity strategy.
Olsen was one of the Biden administration figures who testified before the Senate Judiciary Committee last week in support of renewing the surveillance tool known as Section 702 of the Foreign Intelligence Surveillance Act, set to expire at the end of this year. A prominent privacy concern that emerged at that hearing is the FBI’s practice of warrantlessly searching the Section 702 database of collected communications using Americans’ identifiers, such as their names or email addresses.
Doctors Increasingly Using AR Smart Glasses in Operating Room: ‘Potential to Revolutionize Surgeries’
As artificial intelligence and other technologies continue to move into the medical field, a growing number of doctors are showing interest in how these innovations can transform all aspects of patient care — including surgery.
One such technology seeing wider use is augmented reality (AR) smart glasses: wearable devices that enhance how people interact with the world around them.
In the operating room, smart glasses allow surgeons to access important information they need in a real-time, hands-free environment — without having to look away from the procedure to check a computer screen.
A quarter of U.S. surgeons have already started using AR smart glasses. Meanwhile, an additional 31% of surgeons are considering using them, according to a study by global research firm Censuswide, which gathered insights from over 500 surgeons across America.
AI May Be Able to Predict Your Political Views Based on How Attractive You Are, a Recent Study Found
AI may be able to predict your political views based on how you look — and that could cause issues down the line, new research suggests.
A team of researchers based in Denmark and Sweden conducted a study to see if “deep learning techniques,” like facial recognition technology, and predictive analytics can be used on faces to predict a person’s political views.
The purpose of the March study, researchers wrote, “was to demonstrate the significant privacy threat posed by the intersection of deep learning techniques and readily-available photographs.”
Judge to Decide if Biden Administration Improperly Censored Social Media Users + More
Judge to Decide if Biden Administration Improperly Censored Social Media Users
A federal judge will decide whether President Joe Biden’s administration violated the First Amendment by censoring users on social media over topics like COVID and election security — and if so, what to do about it.
The Republican attorneys general of Missouri and Louisiana brought the lawsuit last year, alleging that the Biden administration fostered a sprawling “federal censorship enterprise” that pressured social-media platforms to scrub away dissenting views, including criticism of mask mandates and objections to COVID-19 vaccination.
The Louisiana judge presiding over the case — former President Trump appointee Terry A. Doughty — is considering whether to intervene in communications between the U.S. government and top social media sites like Instagram, Twitter, Facebook, YouTube and LinkedIn, among others, court documents say.
The case is among the most potentially consequential First Amendment battles pending in the courts, testing the limits on government policing of social-media content.
AI-Generated Child Sex Images Spawn New Nightmare for the Web
The revolution in artificial intelligence has sparked an explosion of disturbingly lifelike images showing child sexual exploitation, fueling concerns among child-safety investigators that they will undermine efforts to find victims and combat real-world abuse.
Generative AI tools have set off what one analyst called a “predatory arms race” on pedophile forums because they can create within seconds realistic images of children performing sex acts, commonly known as child pornography.
Thousands of AI-generated child-sex images have been found on forums across the dark web, a layer of the internet visible only with special browsers, with some participants sharing detailed guides for how other pedophiles can make their own creations.
The flood of images could confound the central tracking system built to block such material from the web because it is designed only to catch known images of abuse, not detect newly generated ones. It also threatens to overwhelm law enforcement officials who work to identify victimized children and will be forced to spend time determining whether the images are real or fake.
Everyone Says Social Media Is Bad for Teens. Proving It Is Another Thing
There have been increasingly loud public warnings that social media is harming teenagers’ mental health — most recently from the U.S. surgeon general — adding to many parents’ fears about what all the time spent on phones is doing to their children’s brains.
Although many scientists share the concern, there is little research to prove that social media is harmful — or to indicate which sites, apps or features are problematic. There isn’t even a shared definition of what social media is. It leaves parents, policymakers and other adults in teenagers’ lives without clear guidance on what to be worried about.
YouTube illustrates the challenge. It’s the most popular site among teenagers by far: 95% use it, and almost 20% say they do so “almost constantly,” Pew Research Center found. It has all the features of social media, yet it hasn’t been included in most studies.
Experts said they would like to see research that examines specific types of social media content, and things such as how social media use in adolescence affects people in adulthood, what it does to neural pathways and how to protect youth against negative effects.
Spies, Foreign Governments Amassing Data on Every American at Alarming Rate, Privacy Experts Warn
Following the release of a bombshell report detailing how the U.S. government buys data to spy on its own citizens, privacy experts warn foreign intelligence agencies are snapping up the same information and have the potential to build detailed profiles of every American consumer.
Specialists are calling on the government to stop buddying up with big tech companies like Google, Facebook and Apple and instead work on a privacy law to keep its citizens safe from cyber attacks, online identity theft and fraud.
“Frankly, it’s shameful that we have not had meaningful privacy legislation to this point when … every other democracy and advanced economy in the world has something like this and the United States doesn’t,” said Neil Richards, a Koch Distinguished Professor in Law at Washington University in St. Louis, Missouri.
The data for sale includes the geo-specific locations, spending habits and online search behavior of U.S. citizens, and it is being harvested by shadowy companies based in foreign countries, largely through cellphone apps, with little regulation or control.
Generative AI Could Add Up to $4.4 Trillion to the Global Economy Annually, McKinsey Report Says
The hype around generative AI has reached a fever pitch in recent months, and for good reason: the industry has the potential to add up to $4.4 trillion to the global economy annually, a new McKinsey report argues.
The report, which looks at the economic potential of generative AI, says the technology could add between $2.6 trillion and $4.4 trillion to the global economy annually through “63 generative AI use cases spanning 16 business functions,” an amount roughly comparable to the U.K.’s entire GDP in 2021.
Generative AI refers to conversational AI tools like OpenAI’s ChatGPT released in November, which impressed the world with its wide-ranging abilities including creating content, generating music, and writing code.
The impact of generative AI is expected to be instrumental across all industries, especially in banking, high-tech, pharmaceuticals and medical products, and retail, McKinsey’s report says. The technology could add $200 billion to $340 billion in value to the banking industry, and $240 billion to $390 billion in value in retail.
Asset Managers Pressure Tech Companies Over Possible AI Misuse
Big institutional investors are increasing pressure on technology companies to take responsibility for the potential misuse of artificial intelligence as they become concerned about the liability for human rights issues linked to the software.
The Collective Impact Coalition for Digital Inclusion, a group of 32 financial institutions representing $6.9 trillion in assets under management — including Aviva Investors, Fidelity International and HSBC Asset Management — is among those leading the push to influence technology businesses to commit to ethical AI.
Aviva Investors has held meetings with tech companies, including chipmakers, in recent months to warn them to strengthen protections on human rights risks linked to AI, including surveillance, discrimination, unauthorized facial recognition and mass lay-offs.
Cal Poly Student Sues University, SLO County Health Officials Over COVID Restrictions
A Cal Poly student who was barred from attending class in person after refusing to comply with COVID-19 regulations is suing the university and local health authorities.
Elijah Behringer claimed the university and San Luis Obispo County Public Health Department officials violated his federal and state rights “under sham application of state law and authority,” according to a lawsuit filed May 23.
In his lawsuit, Behringer questioned the legitimacy of the COVID-19 pandemic and the right of organizations such as the World Health Organization to dictate state and federal health regulations. It also claimed Cal Poly failed to get informed consent from Behringer and others for the use of masks, vaccines and testing.
He chose not to enroll in September 2021 — partially due to the COVID-19 regulations — with a plan to return in January 2022, he said. But after the university did not grant his requests for exemptions to the vaccination, testing and mask-wearing regulations, Behringer claimed in his lawsuit he was effectively “suspended” from campus, preventing him from returning to classes.
LA Master Chorale Singer Sues Over COVID Vaccine Mandate
A singer is suing the Los Angeles Master Chorale, alleging that her employer wrongfully denied her requests, made on religious and medical grounds, to be tested for the coronavirus rather than vaccinated.
Virenia Lind’s Los Angeles Superior Court lawsuit alleges religious discrimination, failure to accommodate religious belief or observance, failure to accommodate disability and medical condition, failure to engage in an interactive process to determine a reasonable accommodation for disability and medical condition, discrimination based on disability and medical condition and failure to prevent discrimination.
Lind seeks unspecified compensatory and punitive damages. An LAMC representative did not immediately reply to a request for comment on the suit brought Friday.
Lind maintains that LAMC forced the soprano into unpaid leave status in 2021, the year the company mandated that all employees be vaccinated against the coronavirus.