Pensacola Furniture Store Ordered to Pay $110K to Former Manager Who Refused COVID Vaccine
Pensacola News Journal reported:
Following a lawsuit by the United States Equal Employment Opportunity Commission, Pensacola store Hank’s Fine Furniture (Hank’s Furniture Inc.) agreed to a six-figure settlement with a former manager fired for refusing a mandatory COVID-19 vaccination.
Federal court records indicate a settlement was reached Monday, and federal Judge M. Casey Rodgers ordered Thursday that the store pay a former manager known as “K.M.O.” $110,000 after she refused a companywide COVID vaccine mandate based on her religious beliefs.
Rodgers also wrote that HFI cannot require proof that an employee’s or applicant’s religious objection to an employer requirement is an official tenet or endorsed teaching of his or her religion.
The furniture store must also, within 30 days, adopt, implement and disseminate a written policy to all employees that HFI “will not require any employee to violate sincerely held religious beliefs, including those pertaining to vaccinations, as a condition of his/her employment.”
Are We Witnessing the Weaponization of AI?
OpenAI recently announced the appointment of retired General Paul Nakasone to its board. This marks a major shift in the company’s alignment toward national security issues, a development that should concern us all.
Tasked with advising on safety and security, Nakasone’s influence signals a deeper integration of OpenAI’s interests with those of the U.S. government. This development is not an isolated incident but rather a continuation of a well-trodden path by tech giants like Amazon, Google, and Microsoft, which have increasingly aligned themselves with governmental and military agendas under the guise of “security” and “keeping Americans safe.”
These platforms, once lauded for their potential to democratize information and connect the world, gradually transformed into tools of surveillance and control. With OpenAI, the trajectory seems alarmingly similar.
Initially focused on cybersecurity and public safety, OpenAI’s collaborations with government agencies are poised to deepen. Advanced artificial intelligence (AI) systems, originally intended for defensive purposes, are likely to evolve into tools for mass surveillance. Under the pretext of combating terrorism and cyber threats, these systems could monitor citizens’ online activities, communications, and even predict behaviors. This encroachment into privacy will likely be justified by calls to protect national security. OpenAI will likely capitalize on its data analytics capabilities to shape public discourse. In fact, some suggest that this is already occurring.
Health System’s Tech Vulnerabilities Exposed Again
The CrowdStrike internet meltdown that wreaked havoc with some health systems’ procedures and billing on Friday could be a harbinger of future threats and disruptions to medical facilities, experts said.
Why it matters: The U.S. health system is still dealing with the fallout from the massive Change Healthcare ransomware attack and other incidents that have underscored the sector’s reliance on a few key technology companies to meet its IT needs.
Catch up quick: The outage resulted early Friday morning from a faulty software update pushed out by CrowdStrike.
The issue, which CrowdStrike said was not a malicious cyberattack, affected devices running the Microsoft Windows operating system. Users saw the dreaded “blue screen of death” and were essentially locked out of their systems until they found another way in.
With AI, Jets and Police Squadrons, Paris Is Securing the Olympics — and Worrying Critics
A year ago, the head of the Paris Olympics boldly declared that France’s capital would be “the safest place in the world” when the Games open this Friday. Tony Estanguet’s confident forecast looks less far-fetched now with squadrons of police patrolling Paris’ streets, fighter jets and soldiers primed to scramble, and imposing metal-fence security barriers erected like an iron curtain on both sides of the River Seine that will star in the opening show.
Olympic organizers also have cyberattack concerns, while rights campaigners and Games critics are worried about Paris’ use of AI-equipped surveillance technology and the broad scope and scale of Olympic security.
Campaigners for digital rights worry that Olympic surveillance cameras and AI systems could erode privacy and other freedoms, and zero in on people without fixed homes who spend a lot of time in public spaces.
NIDA Should Beware of Funding Companies That Violate People’s Privacy
In a ground-breaking settlement with the Federal Trade Commission, two online addiction and mental health treatment companies, Monument and Cerebral, admitted to deceptively and widely sharing sensitive personal and health information with third-party advertising platforms including Meta (Facebook) and Google. They aren’t alone.
Our research at the Opioid Policy Institute has found more than a dozen other online addiction treatment companies engaging in similar deceptive behavior that contradicts their claims of private, secure, or confidential services. Perhaps the most shocking aspect of these business practices is the role of federal funding for these services.
The National Institute on Drug Abuse (NIDA) is the premier drug addiction research branch of the National Institutes of Health. For decades, NIDA’s work to reduce opioid overdose deaths has been stymied by long-standing gaps in treatment, with fewer than 25% of people receiving evidence-based medications for opioid addiction. One way NIDA has been working to address addiction treatment gaps is through encouraging grant proposals for digital health.
Digital Rights Groups Rally Against UN Convention That Threatens Free Speech and Privacy
The Electronic Frontier Foundation (EFF), a digital rights group, joined by nearly two dozen other similar civil organizations, has appealed to the EU Commission (EC) regarding the developments around the UN’s Cybercrime Convention.
The groups turned to the EU mere days ahead of the finalization of the controversial text, which they criticize as a threat to free speech and privacy, because the treaty could conflict with the bloc’s own data protection framework governing personal data transfers.
“Despite the latest modifications, the revised draft fails to address our concerns and continues to risk making individuals and institutions less safe and more vulnerable to cybercrime, thereby undermining its very purpose,” the letter reads.
The EC and member states are urged to address these issues during the final negotiating session and introduce what the rights groups see as necessary changes. Alternatively, they want the EU to block the treaty from reaching the UN General Assembly for adoption.