Big Brother News Watch
Lawmakers Urge Biden to End ‘Warrantless Government Surveillance’ of Sensitive Health Data + More
Lawmakers Urge Biden to End ‘Warrantless Government Surveillance’ of Sensitive Health Data
More than three dozen members of Congress have signed a letter pressuring the Biden administration to close a loophole in federal privacy law that allows law enforcement to obtain access to abortion records and other sensitive health data without a warrant.
Lawmakers writing the letter say additional protections are needed to strengthen the Health Insurance Portability and Accountability Act (HIPAA) to protect abortion seekers and other Americans from “warrantless government surveillance.”
Biden’s Department of Health and Human Services previously proposed adding a new rule to HIPAA that would prohibit doctors and other healthcare providers from disclosing patients’ protected health records following the Supreme Court’s overturning of Roe v. Wade.
In their letter, Sens. Ron Wyden, Patty Murray, and Rep. Sara Jacobs say the proposed expansions are “woefully insufficient” and urge the Biden administration to go a step further and ensure that all patients’ protected health information receives the same level of protection as text messages, calls, and location data.
In-N-Out Burger Bans Employees From Wearing Masks in Five States
Starting August 14, In-N-Out Burger will prohibit employees in five states from wearing masks unless they have a valid medical note.
Locations in Arizona, Colorado, Texas, Nevada and Utah will prohibit masks. The popular burger chain noted in its guidelines that those who wear a mask for medical reasons may only wear company-approved N95 masks. Valid doctors’ notes must include the medical diagnosis, the reason for exemption and the estimated duration if applicable.
The memo was posted to social media from the In-N-Out Burger Associate Notifications email list. The company says the new policy is meant to promote “clear and effective” interaction between customers and employees.
Employees who don’t follow the new policy could face disciplinary action, up to and including termination, depending on the severity and frequency of the violation.
The HPV Vaccine Is a ‘Lifesaver,’ but Too Many Kids Aren’t Getting It. It’s Time for a Mandate.
The Philadelphia Inquirer reported:
The most recent vaccination data show that the number of teens getting the vaccine has slowed, or even declined, over the last few years, and is much worse than other routine and required childhood vaccines.
In New Jersey, for instance, the rate of the first HPV vaccine dose has decreased since 2020, and only slightly more than half of N.J. teens are fully vaccinated against HPV. Pennsylvania’s rate — 69% — isn’t much better.
If New Jersey and Pennsylvania were to create an HPV vaccine school mandate, they wouldn’t be the first: Virginia, Hawaii, Rhode Island, and the District of Columbia have already done so.
So what’s stopping other jurisdictions from moving forward with new regulations? For one, there is general resistance to childhood vaccination among a large segment of the population, which hasn’t been helped by the COVID-19 pandemic. Requirements for COVID-19 vaccination among adults have caused a backlash on mandates for any population, including school-age children.
B.C. Maintains COVID Vaccine Mandates for Healthcare Workers
B.C.’s Ministry of Health confirms that the COVID-19 vaccine requirement for healthcare workers in the public system and care homes stands, a clarification it’s making in the wake of confusion over a bureaucratic change.
On Friday, the office of the Provincial Health Officer published a notice that regulators and colleges of healthcare professionals are not required to get the vaccination status of nurses, dentists, psychologists, doctors and others, but that doesn’t change the fact the workers must be vaccinated.
British Columbia is one of the few provinces to maintain a requirement for publicly employed healthcare professionals to be vaccinated against the virus, though it dropped the requirement for other public employees.
HBO Dismissed From Class Action Lawsuit Over Sharing Subscribers’ Viewing History With Facebook
The Hollywood Reporter reported:
HBO won’t have to face a class action lawsuit accusing it of sharing subscribers’ personal viewing history with Facebook in violation of a federal data privacy law.
In a notice to the court, Max subscribers who brought the suit moved to dismiss it on Tuesday “without prejudice,” meaning they can refile or alter the claims. The plaintiffs dropped their case after a federal judge dismissed a similar suit against Scripps Network brought by the same firm that sued HBO on behalf of subscribers.
The legal challenges all center on allegations that companies’ use of Meta’s Pixel tool, which allows advertisers to track users on websites to measure the effectiveness of ads and serve targeted ads, violates the Video Privacy Protection Act. The law carries damages of up to $2,500 per class member and allows consumers to sue for disclosure of information about their watching habits even without sustaining an injury.
Max subscribers Angel McDaniel and Constance Simon alleged in a suit filed last year that HBO discloses to Facebook the content they watch on top of other personally identifiable information without consent. Meta hasn’t been named in any of the complaints.
How to Make Big Tech Pay … Literally
A lifetime ago, I worked as a federal affairs manager at the center-right organization Americans for Tax Reform. My area of focus was technology policy, advocating for as much of a hands-off approach as possible by opposing net neutrality, and supporting online privacy and personal data protection.
It wasn’t the most exciting job in the world, but I believe it was important and still is. I may not work in that arena anymore, but I still follow the issues because today, everyone has to. When it comes to the digital world, the fight for freedom and privacy is never-ending, and all the concepts are just as important today as they were in 2007.
Those “free” apps, websites and services everyone loves are never actually free. Online, if you are not the paying customer, you are the product. Those apps are fishing nets for data — your data. That data is then refined into gold for the companies that collect it.
Being the product brings with it an ever-increasing attempt to get more information from you, which creates a never-ending kabuki dance between Big Tech companies and the public. One thing the internet pioneered was the endlessly long, overly lawyered, confusing and always changing user agreements. When was the last time you read one of those? If you’re like me, it’s never.
Apple Spikes to All-Time High, Gains $60 Billion in Seconds on Report It’s Working on Own ChatGPT Tool
At a time when absolutely everyone (and their kitchen sink, cat and uncle) is working on their own version of ChatGPT — because who wouldn’t want to be the next “biggest and best” AI frontrunner leading to an avalanche of term sheets from idiot investors such as SoftBank — moments ago Apple, already the world’s largest company by market cap, just spiked 2%, gaining more than $60 billion in market cap in seconds, after Bloomberg reported that Apple is “quietly” (really? quietly?) working on artificial intelligence tools that could challenge those from (Microsoft‘s) OpenAI, Google and others … even though the company has “yet to devise a clear strategy for releasing the technology to consumers.”
Apple is doing to OpenAI what Facebook is doing to Twitter: copying and pasting with zero value added and zero creativity. Yet this is good enough to push the stock up $60 billion.
And this is where we are in the market: a place where what is obvious not only passes as news — because it would be news if Apple was not working on its own version of ChatGPT as it has now become obvious even to 5-year-olds that AI is this generation’s blockchain/3D printer/cannabis etc — but results in ridiculous market cap gains for some, and losses for others (such as the value stocks which form the pair trade with tech).
AI to Predict Your Health Later in Life — All at the Press of a Button
Thanks to artificial intelligence, we will soon be able to predict our risk of developing serious health conditions later in life, at the press of a button.
Researchers from Edith Cowan University’s (ECU) School of Science and School of Medical and Health Sciences have collaborated to develop software that can analyze scans much, much faster: roughly 60,000 images in a single day.
Researcher and Heart Foundation Future Leader Fellow Associate Professor Joshua Lewis said this significant boost in efficiency will be crucial for the widespread use of abdominal aortic calcification (AAC) assessment in research and for helping people avoid developing health problems later in life.
Teladoc Expands Microsoft Tie-Up to Document Patient Visits With AI
Teladoc Health (TDOC.N) is expanding a partnership with Microsoft (MSFT.O) to use the tech giant’s artificial intelligence services to automate clinical documentation on the telehealth platform, lifting its shares 6% in premarket trade.
The integration, which pairs Microsoft’s AI services with technology from OpenAI, owner of the viral chatbot ChatGPT, will help ease the burden on healthcare staff during virtual exams, Teladoc said on Tuesday.
The companies have been collaborating since the height of the COVID-19 pandemic in 2021 when Teladoc integrated its Solo virtual healthcare platform into Microsoft Teams.
The use of AI is being actively discussed by hospitals and other healthcare providers that suffered from attrition caused by pandemic fatigue. Industries across the board have been looking at integrating AI into their businesses after the launch of OpenAI’s ChatGPT in November fueled interest in the breakthrough technology.
Jim Jordan Considers Holding Zuckerberg in Contempt of Congress + More
Jim Jordan Considers Holding Zuckerberg in Contempt of Congress
House Judiciary Committee Chairman Jim Jordan (R-Ohio) is considering holding Meta CEO Mark Zuckerberg in contempt of Congress, a source familiar with the situation confirmed to The Hill Monday. Fox Business was the first to report on Jordan’s potential move, with sources telling the news outlet Meta has not provided any internal communications on its censorship processes.
Zuckerberg was among five tech company heads who received subpoenas in February from the House Judiciary Panel to turn over “documents and communications relating to the federal government’s reported collusion with Big Tech to suppress free speech,” along with any documents related to their content moderation measures, the committee said at the time.
“Given that Meta has censored First Amendment-protected speech as a result of government agencies’ requests and demands in the past, the Committee is concerned about potential First Amendment violations that have occurred or will occur on the Threads platform,” Jordan wrote.
According to sources with direct knowledge of the situation, Jordan is considering holding Zuckerberg in contempt of Congress this month, because the documentation Meta has provided so far under the committee’s original subpoena has been insufficient.
Rival U.S. Lawmakers Mobilize to Stop Police From Buying Phone Data
United States lawmakers are moving with uncommon speed to close a loophole in federal law that police and intelligence agencies use to collect sensitive information on U.S. citizens — up to and including their physical whereabouts — all without the need for a warrant.
The Federal Bureau of Investigation (FBI) and the Defense Intelligence Agency are among several government entities known to have solicited private data brokers to access information for which a court order is generally required. A growing number of lawmakers have come to view the practice as an end run around the U.S. Constitution’s Fourth Amendment guarantees against unreasonable government searches and seizures.
“This unconstitutional mass government surveillance must end,” Warren Davidson, a Republican congressman from Ohio, says.
Members of the House Judiciary Committee, led by Ohio’s Jim Jordan, a Republican, will hold a markup hearing tomorrow to consider a Davidson bill aimed at restricting purchases of Americans’ data without a subpoena, court order, or warrant. If passed into law, the legislation’s restrictions would apply to federal agencies as well as state and local police departments. Known as the Fourth Amendment Is Not For Sale Act, the bill is cosponsored by four Republicans and four Democrats, including the committee’s ranking member, Jerry Nadler, a Democrat, who first introduced it alongside California Democrat Zoe Lofgren in 2021.
Notably, the bill’s protections extend to data obtained from a person’s account or device even if hacked by a third party, or when disclosure is referenced by a company’s terms of service. The bill’s sponsors note this would effectively prohibit the government from doing business with companies such as Clearview AI, which has admitted to scraping billions of photos from social media to fuel a facial recognition tool that’s been widely tested by local police departments.
BofA and Others Share Records With FBI Without a Warrant ‘All the Time,’ Director Wray Admits
Among the doublespeak and other bombshells disclosed by FBI Director Christopher Wray during testimony to the House Judiciary Committee oversight hearing, one particularly disturbing detail he released was that the FBI “regularly obtains innocent Americans’ personal data from companies with the intent of potentially charging them with crimes,” the Federalist reported last week.
At a time when many Americans are wondering why they should stick with Elon Musk at Twitter instead of joining Mark Zuckerberg over at the newly minted Threads, here’s one potential reason. Wray told Congress that Bank of America provided the agency with a “huge list” of financial records for Americans who used B of A cards around the Capitol on January 6th. That’s right: no warrants, no court hearings, just blindly turning over records when the FBI asks.
Republican Rep. Thomas Massie asked Wray at the hearing: “George Hill, former FBI supervisory intelligence analyst in the Boston field office, told us that the Bank of America, with no legal process, gave to the FBI gun purchase records with no geographical boundaries for anybody that was a Bank of America customer. Is that true?”
To which Wray replied: “A number of business community partners all the time, including financial institutions, share information with us about possible criminal activity, and my understanding is that that’s fully lawful.”
Instagram Agrees to Pay $68.5 Million in Illinois Biometric Privacy Settlement
Millions of Illinois Instagram users may be eligible for a cut of a new $68.5 million class-action biometric privacy settlement. The lawsuit alleges facial recognition technology used on the app until November 2021 violated Illinois’ biometric privacy law, which is considered the strictest in the nation.
The Instagram deal is the latest in a string of settlements by Big Tech companies over alleged violations of the Illinois Biometric Information Privacy Act. The law, passed in 2008, prohibits companies from collecting or saving biometric information, such as fingerprints, without prior consent.
Facebook, now Meta, along with Google and Snapchat parent Snap Inc. have all settled biometric privacy cases in Illinois in recent years. Facebook, whose parent company also owns Instagram, settled its case for $650 million, with individual payouts topping $400 for some class members. Google and Snap settled their cases for smaller amounts at $100 million and $35 million, respectively.
“Upon information and belief, Meta also captured its Instagram users’ protected biometrics without their informed consent and without informing users of its practice,” the complaint alleges.
Amazon’s In-Van Surveillance Footage of Delivery Drivers Is Leaking Online
An influx of videos taken from Amazon’s in-van surveillance cameras has been published on Reddit in recent weeks, sparking fresh concerns about the privacy of delivery drivers being monitored for their entire shifts.
Drivers have expressed concerns about their privacy since these cameras were installed, likening the experience to being watched by “Big Brother.” Similar surveillance systems were a bargaining point in the negotiations between UPS and the International Brotherhood of Teamsters trade union earlier this month, with the courier company now tentatively agreeing to shut off its in-vehicle cameras.
Amazon started installing the AI-enabled cameras — supplied by Netradyne Driveri — back in 2021 to analyze drivers as they operate the vehicle and deliver packages. Amazon delivery drivers were later made to sign a biometric consent form to allow the company to collect information like photographs, vehicle location, speed, acceleration, “potential traffic violations,” and “potentially risky driver behavior” or lose their jobs.
The surveillance cameras record on-device “100% of the time,” and the AI system uploads specific clips to the “secure servers” of Amazon or its partners when it detects a class of safety-related issues or opportunities to improve maps and routing. Uploads can also happen when Amazon, a DSP, or a driver makes a request.
Facebook’s Twitter Alternative Isn’t Different. It Steals Self-Worth.
It feels good until the brain fog rolls in. Being praised, finding inspiration, rejecting bad actors — social media peddles connection, esteem and instant justice. The catch is that it requires reducing complex humans to selfies and slogans.
The new social network Threads is posing as an alternative to toxic Twitter. In fact, the platform launched by Meta, the parent company of Facebook and Instagram, is just another drug feeding the same addiction. The endless scrolls fleetingly boost users’ self-worth but cumulatively erode their humanity.
No one needs more platforms. We need ways to cope with the ones we have. The fix must go beyond government regulation. To quit the craving, people need to understand the emotions that social media masks — and amplifies. Then, users must treat themselves with compassion. Again, and again.
As Fear Rises Over AI, Google and Epic Fight Stronger Regulation of the Technology in Healthcare
Big businesses poised to profit from the advance of artificial intelligence in healthcare are pushing back against newly proposed federal rules meant to increase oversight and fairness of AI tools used to help make decisions about patient care.
The opposition, which includes Google and Amazon as well as large healthcare providers, insurers, and medical software vendors, is focused on an attempt to put tighter guardrails around the use of AI by the Office of the National Coordinator for Health Information Technology.
The agency wants to require developers of electronic health records and other health software to provide users with details about the training and testing of predictive models that draw data from their systems or are hosted within them. It also wants developers to assess and publicly disclose potential risks and create a way for clinicians and other users to report problems.
Facebook to Make Its AI Free to Use, Expanding Access to Powerful Tech
Facebook will make its cutting-edge artificial intelligence technology freely available to the public to use for research and building new moneymaking products, doubling down on an “open source” approach to the tech that has garnered both praise and criticism.
Facebook’s Llama 2 is a “large language model” — a highly complex algorithm trained on billions of words scraped from the open internet. It’s Facebook’s answer to Google’s PaLM 2, which powers its AI tools, and OpenAI’s GPT-4, the tech behind ChatGPT. App developers will be able to download the model directly from Facebook or access it through cloud providers including Microsoft, Amazon and open-source AI start-up Hugging Face.
But critics say open-sourced AI models could lead to the technology being misused. Earlier this year, Meta released Llama to a select group of researchers only for the model to be leaked and later used for applications ranging from drug discovery to sexually explicit chatbots.
More Than 28,000 Convicted of COVID Rule Breaches in England and Wales
More than 28,000 people in England and Wales have been convicted of breaches of COVID-19 regulations, despite the government’s insistence that it never intended to criminalize people for minor infractions during the pandemic.
The convictions are for COVID-related offenses, such as attendance at gatherings during lockdowns or arriving at airports without the proper evidence of a coronavirus test. Almost 16,000 of the convictions — or 55% — involved people under 30.
The figures, which were obtained by the Guardian through analysis of data from the Ministry of Justice, are considerably higher than any previous estimate.
They reveal how tens of thousands of mostly young people have been severely penalized for relatively minor infractions of COVID rules that have left them with damaging fines and, in many cases, criminal records.
Norway Has Had It With Meta, Threatens $100K Fines for Data Violations
Meta’s data privacy woes in Europe continue as Norway has announced an immediate ban on “behavioral advertising” on Facebook and Instagram. Until Meta makes some big changes, it will be fined $100,000 daily for Norwegian user privacy breaches, the Norwegian Data Protection Authority, Datatilsynet, said yesterday.
“Meta tracks in detail the activity of users of its Facebook and Instagram platforms,” Datatilsynet’s press release said. “Users are profiled based on where they are, what type of content they show interest in, and what they publish, amongst others.
“These personal profiles are used for marketing purposes — so-called behavioral advertising. The Norwegian Data Protection Authority considers that the practice of Meta is illegal and is therefore imposing a temporary ban of behavioral advertising on Facebook and Instagram.”
Norway has not banned the apps. Its ban is focused on restricting data collection for behavioral advertising and starts August 4. The temporary ban could drag on for three months unless Meta takes remedial action sooner.
UN Warns Unregulated Neurotechnology Threatens ‘Freedom of Thought’ + More
UN Warns Unregulated Neurotechnology Threatens ‘Freedom of Thought’
The UN is warning against unregulated neurotechnology, including AI-driven brain chip implants, saying it poses a grave risk to people’s mental privacy. Unregulated neurotechnology could pose harmful long-term risks, the UN says, such as shaping the way a young person thinks or accessing private thoughts and emotions.
It specified its concerns centered around “unregulated neurotechnology,” and did not mention Neuralink, which received FDA approval in May to conduct microchip brain implant trials on humans.
Elon Musk, who co-founded Neuralink, has made big claims, saying the chips will cure people of lifelong health issues, allowing the blind to see and the paralyzed to walk again. But the implications of people using unregulated forms of this technology could have disastrous consequences by accessing the thoughts of those who use it, the UN said in a press release.
“Neurotechnology could help solve many health issues, but it could also access and manipulate people’s brains, and produce information about our identities, and our emotions,” UNESCO Director-General Audrey Azoulay said in the release. “It could threaten our rights to human dignity, freedom of thought, and privacy. There is an urgent need to establish a common ethical framework at the international level, as UNESCO has done for artificial intelligence.”
If the brain chips are implanted in children while they are still neurologically developing, it could disrupt the way their brain matures, making it possible to transform their minds and shape their future identity permanently.
AI Microdirectives Could Soon Be Used for Law Enforcement
Imagine a future in which AIs automatically interpret — and enforce — laws. All day and every day, you constantly receive highly personalized instructions for how to comply with the law, sent directly by your government and law enforcement. You’re told how to cross the street, how fast to drive on the way to work, and what you’re allowed to say or do online — if you’re in any situation that might have legal implications, you’re told exactly what to do, in real-time.
Imagine that the computer system formulating these personal legal directives at a mass scale is so complex that no one can explain how it reasons or works. But if you ignore a directive, the system will know, and it’ll be used as evidence in the prosecution that’s sure to follow.
In New York, A.I. systems equipped with facial recognition technology are being used by businesses to identify shoplifters. Similar A.I.-powered systems are being used by retailers in Australia and the United Kingdom to identify shoplifters and provide real-time tailored alerts to employees or security personnel. China is experimenting with even more powerful forms of automated legal enforcement and targeted surveillance.
Key Republicans Ask for Details on Threads Content Moderation
House Republicans asked Meta on Monday about content moderation on its new platform Threads, citing concerns about free speech violations.
House Judiciary Chairman Jim Jordan (R-Ohio) asked Meta, the parent company of Facebook and Instagram, to send documents about Threads’s content moderation practices to the committee by the end of July. Jordan cited a subpoena sent to Meta in February, which he said now covers material related to Threads.
Threads launched earlier this month as an alternative to Twitter, the text-based platform now under the control of Tesla and SpaceX CEO Elon Musk. Jordan wrote that the committee is “concerned about potential First Amendment violations that have occurred or will occur on the Threads platform.”
The GOP’s latest request to Meta is an extension of the panel’s investigation into tech platforms’ content moderation policies and how the companies interact with the government, specifically the Biden administration. And in addition to the House GOP’s probe, tech companies are facing courtroom hurdles limiting how they communicate with the government.
Common Sense Media, a Popular Resource for Parents, to Review AI Products’ Suitability for Kids
Common Sense, a well-known nonprofit organization devoted to consumer privacy, digital citizenship and providing media ratings for parents who want to evaluate the apps, games, podcasts, TV shows, movies, and books their children are consuming, announced this morning it will introduce another type of product to its ratings and reviews system: AI technology products.
The organization says it will build a new rating system that will assess AI products across a number of dimensions, including whether the tech takes advantage of “responsible AI practices” as well as its suitability for children.
The decision to include AI products in its lineup followed a survey the organization conducted with Impact Research, which found that 82% of parents were looking for a rating system to help them evaluate whether new AI products, like ChatGPT, are appropriate for children.
Over three-quarters of respondents (77%) also said they were interested in AI-powered products that could help children learn, but only 40% said they knew of a reliable resource they could use to learn more about AI products’ appropriateness for their kids.
With the Rise of AI, Social Media Platforms Could Face Perfect Storm of Misinformation in 2024
Experts in digital information integrity say it’s just the start of AI-generated content being used ahead of the 2024 U.S. Presidential election in ways that could confuse or mislead voters.
A new crop of AI tools offers the ability to generate compelling text and realistic images — and, increasingly, video and audio. Experts, and even some executives overseeing AI companies, say these tools risk spreading false information to mislead voters, including ahead of the 2024 U.S. election.
Social media companies bear significant responsibility for addressing such risks, experts say, as the platforms where billions of people go for information and where bad actors often go to spread false claims. But they now face a perfect storm of factors that could make it harder than ever to keep up with the next wave of election misinformation.
And given AI technology’s rapid improvement over the past year, fake images, text, audio and videos are likely to be even harder to discern by the time the U.S. election rolls around next year.
Elon Musk’s xAI and OpenAI Are Going Head-to-Head in the Race to Create AI That’s Smarter Than Humans
On Saturday, Musk said on Twitter Spaces that his new company, xAI, is “definitely in competition” with OpenAI. He was outlining plans for developing “good” advanced AI — also called superintelligence.
Referring to superintelligence as Artificial General Intelligence, or AGI, Musk said: “It really seems that at this point it looks like AGI is going to happen so there are two choices, either be a spectator or a participant. As a spectator, one can’t do much to influence the outcome.”
Over a 100-minute discussion that drew over 1.6 million listeners, Musk explained his plan for xAI to use Twitter data to train superintelligent AI that is “maximally curious” and “truth-seeking.”
The Twitter owner’s comments come mere days after OpenAI said it is creating a new team dedicated to controlling superintelligence and ensuring that this advanced AI aligns with human interests.
Appeals Court Pauses Order Limiting Biden Administration Contact With Social Media Companies + More
Appeals Court Pauses Order Limiting Biden Administration Contact With Social Media Companies
A federal appeals court Friday temporarily paused a lower court’s order limiting executive branch officials’ communications with social media companies about controversial online posts.
Biden administration lawyers had asked the 5th U.S. Circuit Court of Appeals in New Orleans to stay the preliminary injunction issued on July 4 by U.S. District Judge Terry Doughty. Doughty himself had rejected a request to put his order on hold pending appeal.
Friday’s brief 5th Circuit order put Doughty’s order on hold “until further orders of the court.” It called for arguments in the case to be scheduled on an expedited basis.
Mississippi, Under Judge’s Order, Starts Allowing Religious Exemptions for Childhood Vaccinations
Mississippi is starting the court-ordered process of letting people cite religious beliefs to seek exemptions from state-mandated vaccinations that children must receive before attending daycare or school. In April, U.S. District Judge Sul Ozerden ordered Mississippi to join most other states in allowing religious exemptions from childhood vaccinations.
His ruling came in a lawsuit filed last year by several parents who said their religious beliefs have led them to keep their children unvaccinated and out of Mississippi schools. The lawsuit, funded by the Texas-based Informed Consent Action Network, argued that Mississippi’s lack of a religious exemption for childhood vaccinations violates the U.S. Constitution.
Ozerden set a deadline of this Saturday for the state to comply with his order. The Mississippi State Department of Health website will publish information on that day about how people can seek religious exemptions, according to court papers filed on behalf of Dr. Daniel Edney, the state health officer.
Under Mississippi’s new religious exemption process, state health officials cannot question the sincerity of a person’s religious beliefs. The exemption must be granted if forms are properly filled out, Michael J. Bentley, an attorney representing the health officer, wrote.
U.S. Data Privacy: ‘Stark Imbalance’ of Protection Across States
When it comes to data privacy protection, the U.S. isn’t exactly a superpower. The American Data Privacy and Protection Act (ADPPA) could become the first comprehensive federal privacy legislation that citizens, experts, and digital rights activists have been calling for. Unfortunately, the Act is still under congressional review and unlikely to be enacted anytime soon.
Americans enjoy vastly different privacy protections depending on where they live, creating openings for data breaches and for unscrupulous companies to exploit customers’ most personal and confidential information. Especially now, one year after Roe v. Wade fell, people in the U.S. need to know how to protect the privacy of their digital lives.
“In an increasingly digital era where so much information about our lives is now online, it is vital that all states recognize the importance of digital privacy for their citizens,” said Charlotte Scott, Digital Rights Advocate at PIA.
At the bottom of the rankings, with the worst privacy protection in the U.S., are Arkansas, Mississippi, and Louisiana. Researchers observed a lack of progress in those states toward enacting new laws to protect citizens’ data and privacy.
How a Combination of COVID Lawsuits and Media Coverage Keeps Misinformation Churning
Over the course of the pandemic, lawsuits came from every direction, questioning public health policies and hospitals’ authority. Petitioners argued for care to be provided in a different way, they questioned mandates on mask and vaccine use, and they attacked restrictions on gatherings.
Even as COVID-19 wanes, lawyers representing the healthcare sector predict their days in court aren’t about to end soon. A group of litigators and media companies, among others, are eyeing policy changes and even some profits from yet more lawsuits.
Lawyers are organizing to promote their theories. Late in March, a group of them gathered in Atlanta for a debut COVID Litigation Conference to swap tips on how to build such cases. “Attention, Atlanta lawyers!” proclaimed an ad promoting the event. “Are you ready to be a part of the fastest-growing field of litigation?”
The conference was sponsored in part by the Vaccine Safety Research Foundation, which was founded on vaccine-skeptical views. The gathering promised to share legal strategies for suing federal and state public health agencies over COVID policies, as well as hospitals and pharmaceutical firms, for alleged malfeasance.
Striking SAG Actors in Disbelief Over Studios’ Dystopian AI Proposal
Hollywood is officially a Black Mirror episode come to life. That was the sentiment several members and non-members of SAG-AFTRA shared with Rolling Stone following Thursday’s announcement that the 160,000-member union would join the WGA union on the picket lines after failing to secure a new contract with movie studio and streaming service executives.
With the WGA on strike since May 2, this marks the first time since 1960 that both unions have been on strike simultaneously. One of the major points of contention for both groups has been the rapid development and implementation of AI, and fears of how it could potentially replace writers and actors.
And their concern was justified, as chief negotiator Duncan Crabtree-Ireland laid bare the Alliance of Motion Picture and Television Producers’ (AMPTP) so-called “groundbreaking AI proposal,” which holds the potential to wipe out an entire pathway to breaking into the industry, as well as a reliable source of income for many.
The reported proposal would allow background actors to be “scanned, get paid for one day’s pay” and allow the company to “own that scan of their image, their likeness, and to be able to use it for the rest of eternity in any project they want with no consent and no compensation.”
The Key to Protecting Privacy Is Locked in an Underfunded Government Agency
Amazon told its users they could delete voice recordings gathered by the Alexa smart speaker and location data gathered by the Alexa app — but Amazon actually kept some of that data for years. Even worse, Amazon kept voice recordings from children indefinitely. That’s all according to the Federal Trade Commission, which, along with the Department of Justice, recently charged Amazon for deceiving parents and violating a children’s privacy law.
The complaint against Amazon follows a slate of recent FTC enforcement actions, each targeting violations of Americans’ privacy and abuses of their data. Last August, the agency sued location-data broker Kochava for selling mobile location data that could be used to track hundreds of millions of people (a saga that is still unfolding, after a court dismissed the lawsuit and the FTC refiled).
In February, the FTC proposed a $1.5 million fine for telehealth and prescription drug company GoodRx, which the agency says shared consumers’ health data with Facebook, Google, and other companies; in March, it proposed a $7.8 million settlement with online counseling service BetterHelp, which it says secretly shared consumers’ health data — including identifiable mental health data, such as experiences with depression and thoughts of suicide — with third parties. The privacy enforcement actions keep coming.
But there’s a little-known fact: The team doing much of this work is made up of just a few dozen people. Among tech-focused government organizations, the FTC’s privacy team punches well above its weight, rolling out enforcement actions covering data about health, mental health, children, geolocation, and more. Congress, meanwhile, has stalled on passing comprehensive federal privacy legislation. While that debate continues, Congress can do something much quicker to help protect Americans’ privacy in the short term, in a way that will complement a future privacy law: give more money to the FTC’s privacy and enforcement staff.
Brevard GOP Moves to Ban COVID Vaccine, Calling It a ‘Biological Weapon’
The Brevard Republican Executive Committee has joined a growing list of Florida GOP chapters calling on Gov. Ron DeSantis to ban the COVID-19 vaccine, which it called a “biological weapon” in a resolution this week.
The nonbinding resolution was passed by a supermajority vote of committee membership Thursday. It now goes to DeSantis, Brevard County’s legislative delegation and state party leaders, joining similar motions of support from committees in more than half a dozen other counties.
“Strong and credible evidence has recently been revealed that COVID-19 and COVID-19 injections are biological and technological weapons,” the Brevard draft resolution says, citing claims that have been disproven and disputed by respected medical groups.
It calls on DeSantis to ban the sale and distribution of the vaccine “and all related vaccines,” and for Florida Attorney General Ashley Moody to seize all remaining doses in the state for safety testing, “on behalf of the preservation of the human race,” it says.