Big Brother News Watch
EU’s Plan to Mass Surveil Private Chats Has Leaked + More
EU’s Plan to Mass Surveil Private Chats Has Leaked
The latest version of the proposed European Parliament (EP) and EU Council regulation to adopt new rules related to combating child sexual abuse has been made available online.
Despite its declared goal, the proposal, which first saw the light of day in May 2022 and is referred to by opponents as “chat control,” is in fact a highly divisive piece of draft legislation that aims to accomplish its stated objective through mass surveillance of citizens’ private communications.
Patrick Breyer, a German MEP and long-time vocal critic of the proposal, said on his blog that the text would be discussed by a law enforcement working party at the Council on Wednesday, with the target date for adoption being sometime in June.
He went on to explain that the upcoming regulation is set up in a way that will result in the end of the privacy of people’s digital communications, since the subject of content searches will be “millions” of chats and photos, including those belonging to persons who have no links to child sexual abuse.
Meta Overhauls Rules on Deepfakes, Other Altered Media
Facebook owner Meta (META.O) announced major changes to its policies on digitally created and altered media on Friday, ahead of U.S. elections poised to test its ability to police deceptive content generated by new artificial intelligence technologies.
The social media giant will start applying “Made with AI” labels in May to AI-generated videos, images and audio posted on its platforms, expanding a policy that previously addressed only a narrow slice of doctored videos, Vice President of Content Policy Monika Bickert said in a blog post.
Bickert said Meta would also apply separate and more prominent labels to digitally altered media that poses a “particularly high risk of materially deceiving the public on a matter of importance,” regardless of whether the content was created using AI or other tools.
The new approach shifts the company’s treatment of manipulated content from one focused on removing a limited set of posts toward one that keeps the content up while providing viewers with information about how it was made.
Google Incognito Data to Be Erased — but What Happens Next?
Incognito — it’s an evocative word, right? It conjures up images of disguises, trenchcoats, and undercover adventures, with Google’s “Spy Guy” Incognito icon doubling down on the imagery. Unfortunately, though, the reality of Google’s Incognito Mode doesn’t really live up to its name.
News broke earlier this week that Google will delete data records from Incognito Mode sessions as part of a lawsuit settlement. The lawsuit points the finger at the Big Tech giant, claiming that it has been collecting and storing this data probably since Incognito Mode’s initial launch.
That’s a huge amount of pilfered information, ironically collected when users sought to keep their browsing data private. Even worse, perhaps, that’s just the latest in a long string of privacy fails involving the Big Tech giant. So, is there anything we can do to ensure the harvesting ends there?
The settlement brings this most recent round of Google privacy blunders to a close — but will Google stop collecting Incognito Mode session data? The grim truth is, unfortunately, that the company could continue to monitor the habits and activity of people relying on Incognito Mode for its apparent privacy and just not tell us.
Meta Pushes Back on U.S. FTC’s Bid to Amend 2020 Privacy Settlement
Meta Platforms (META.O) has rebuffed an attempt by the U.S. Federal Trade Commission to amend a 2020 privacy settlement, noting that it had voluntarily disclosed two technical errors related to its Messenger Kids app to the agency.
Meta disclosed the bugs in July 2019, the Facebook parent said in a filing on Thursday, adding it had spent $5.5 billion on its privacy program and related privacy initiatives.
At issue is the existing 2020 Facebook privacy settlement, which the FTC has said it wants to tighten to ban profiting from minors’ data and expand curbs on facial recognition technology.
The agency has accused Meta of misleading parents about protections for children.
Italy Considers Law Against Sharenting to Protect Children’s Privacy
Parents in Italy may have to think twice before posting images and videos of their kids on social media.
On March 21, 2024, a two-party coalition presented a draft bill to the Chamber of Deputies (Camera dei Deputati) to protect children’s privacy online and their right to their own image.
Echoing a recent French law, the proposal aims to regulate a growing digital issue known as sharenting — a contraction of share and parenting that refers to the practice of oversharing content portraying children on social media platforms. Legislators said they seek to open up this debate and mitigate the security risks and psychological impacts the trend has on youngsters.
New Zealand’s Privacy Watchdog Investigating Facial Recognition, Promises Tougher Regulation
New Zealand’s privacy watchdog wants tougher regulation covering the use of biometrics and AI technology such as facial recognition.
New legislation may also be on the table. Privacy Commissioner Michael Webster promised to publish a draft biometrics code this autumn, according to Radio New Zealand (RNZ).
The move comes amid a record 79% surge in privacy complaints in the last financial year. One of the most high-profile cases involves facial recognition technology (FRT) trials conducted by grocery cooperative Foodstuffs, which plans to implement the technology in 25 stores over a period of six months to combat retail crime.
On Thursday, Webster launched an investigation into Foodstuffs’ facial recognition trial, examining its compliance with the country’s Privacy Act.
Two Years Later, Ontario and BC Medics Still Need Vaccination Proof + More
CARPAY: Two Years Later, Ontario and BC Medics Still Need Vaccination Proof
For some healthcare professionals, the need for proof of vaccination remains a job requirement. Years after our governments first closed schools and locked down our society and economy in March 2020, some governments in Canada continue to wage a fanatical war against COVID-19. And as of April 2024, thousands of doctors, nurses and other healthcare workers in BC are still prevented from returning to work because of their personal medical decision in 2021 not to get injected with the COVID vaccine.
Today, hospitals in Ontario refuse to hire qualified nurses who chose not to get injected in 2021. Meanwhile, the BC and Ontario governments complain publicly about a shortage of healthcare workers. Is it ideology? Or is it pure vindictiveness that would cause the BC and Ontario governments to prevent qualified healthcare professionals from working in 2024? Perhaps both? Where is the science?
Absent any evidence the COVID-19 vaccine stopped viral spread, thousands of Canadians were nevertheless forced into unemployment over a legitimate personal decision not to get injected with a brand-new vaccine that in the fall of 2021 was still in clinical trials.
Terminated employees were denied EI benefits. Students were kicked out of universities and colleges. Millions of Canadians were denied their right to participate in sports, eat in restaurants, enjoy movie theatres, use gyms, visit their elderly parents in nursing homes and travel outside of Canada.
What is inexcusable today is the vindictive, ideological insistence by the Ontario and BC governments that doctors and nurses cannot return to work, in April 2024, over exercising their Charter right to bodily autonomy.
COVID Subcommittee Chair Asks Top Science Journal Editors to Testify on Relationship With Federal Government
Rep. Brad Wenstrup (R-Ohio), chair of the House Select Subcommittee on the Coronavirus Pandemic, issued letters to the editors of three major science journals on Tuesday, asking them to testify on the relationship between their publications and the federal government.
Wenstrup sent letters to the editors-in-chief of the journals The Lancet, Science and Nature requesting their testimony for a hearing on April 16. The hearing will be titled “Academic Malpractice: Examining the Relationship Between Scientific Journals, the Government, and Peer Review.”
In his letters, Wenstrup stated that the hearing would be to examine “whether these journals granted the federal government inappropriate access into the scientific review or publishing process.”
These journals were in contact with top White House health officials like Anthony Fauci and Francis Collins, according to Freedom of Information Act requests, Wenstrup wrote. He did not cite any specific reports or studies in his letters to the editors-in-chief.
A search for research articles containing the term “COVID-19” on the websites of the three journals returns nearly 19,000 results.
Ministry of Truth: Hawaii Lawmakers Call for Set Standards for Ethical News Sources
Lawmakers in the U.S. state of Hawaii are trying to get a journalist association there to come up with and adhere to a new “process” that would make sure their sources are “ethical and objective.”
This week, the resolution passed the Judiciary Committee with no votes against and no abstentions, and is now headed for adoption by the Senate. The resolution was introduced by Senator Chris Lee and explained as a way to “help” the public understand who might be spreading “misinformation.”
But, not everyone shares his stance, with one obvious point of criticism being that government bodies shouldn’t be the ones with the power to “anoint” one news source as reputable over another.
That, opponents of such trends in general would say, brings a society closer to having a “ministry of truth” than to flourishing democratic institutions.
A New Book Has Amplified Fierce Debate Around Teens, Mental Health and Smartphones
A new book has embroiled the academic community in a heated debate over whether spending time on smartphones affects young people’s mental health and, if so, how.
Social psychologist Jonathan Haidt’s “The Anxious Generation,” published last week, argues that the smartphone-driven “great rewiring of childhood” is causing an “epidemic of mental illness.” He suggests four ways to combat this: no smartphones before high school, no social media before age 16, no phones in schools, and prioritizing real-world play and independence.
“I call smartphones ‘experience blockers,’ because once you give the phone to a child, it’s going to take up every moment that is not nailed down to something else,” Haidt told TODAY.com, adding, “It’s basically the loss of childhood in the real world.”
Researcher Jean Twenge, author of “Generations” and “iGen,” said there’s a “reasonably robust” consensus among academics that smartphones and social media are at least partially linked to the rise in teen depression, self-harm and loneliness.
NYC’s Government Chatbot Is Lying About City Laws and Regulations
If you follow generative AI news at all, you’re probably familiar with LLM chatbots’ tendency to “confabulate” incorrect information while presenting that information as authoritatively true. That tendency seems poised to cause some serious problems now that a chatbot run by the New York City government is making up incorrect answers to some important questions of local law and municipal policy.
A new report from The Markup and local nonprofit news site The City found the MyCity chatbot giving dangerously wrong information about some pretty basic city policies. To cite just one example, the bot said that NYC buildings “are not required to accept Section 8 vouchers,” when an NYC government info page says clearly that Section 8 housing subsidies are one of many lawful sources of income that landlords are required to accept without discrimination.
The Markup also received incorrect information in response to chatbot queries regarding worker pay and work-hour regulations, as well as industry-specific information like funeral home pricing.
Further testing from Bluesky user Kathryn Tewson shows the MyCity chatbot giving some dangerously wrong answers regarding the treatment of workplace whistleblowers, as well as some hilariously bad answers regarding the need to pay rent.
As International Travel Grows, so Does U.S. Use of Technology. A Look at How It’s Used at Airports
The De Staerckes, a Belgian family of four, were on their fourth trip to the United States. They had been dreading the long line at passport control when they entered the country but had heard about a new app they could use to ease their way and decided to give it a shot. Within minutes, they had bypassed the long line at Washington Dulles International Airport and were waiting for their luggage.
As travel continues to boom following coronavirus pandemic-related slumps, U.S. Customs and Border Protection (CBP) is expanding the use of technology like the Mobile Passport Control app the De Staercke family used in an effort to process the ever-growing number of passengers traveling internationally. And with events like a rare solar eclipse, the Olympics in Paris, and summer holidays still driving international travel, those numbers don’t look set to drop anytime soon.
Marc Calixte, the top CBP official at Dulles, said the airport could open so-called E-Gates by the end of summer. Passengers using Global Entry would use the app and, instead of stopping at an officer’s booth, go to a gate where their photo is taken and matched to their passport; assuming no red flags arise, the gates open and they pass out of the customs and passport control area and are on their way.
Further on the horizon, Blackmer said, the agency is exploring a concept called smart queuing, where the app assigns passengers to certain lines depending on the information they have entered, such as whether they have goods to declare.
The Next Pandemic Is Coming. Will We Be Ready?
In March, officials from 194 countries came together to agree on a global plan to deal with a threat known as “Disease X”. The ominous code name refers to the as-yet unknown illness expected to one day ravage the world in a repeat of COVID-19 — or perhaps inflict even worse damage.
This fear has now driven nine rounds of painstaking international negotiations on the text of the world’s first pandemic treaty, which must be nailed down before the World Health Organization’s annual decision-making assembly meets in May. The accord is aimed at helping governments, institutions and populations avoid the mistakes of the COVID crisis — but getting there is causing deep divisions.
The accord has revived criticisms from people who are suspicious of multilateral institutions. They question the WHO’s fitness for purpose and point to concerns about its pandemic performance, such as the time it took to fully embrace the crucial point that COVID-19 spreads through airborne transmission. In its defense, the WHO says its thinking evolved with the evidence and that it always advised people to be cautious.
Such critiques have already shaped the treaty. It has a special clause listing powers it will not confer upon the WHO, such as “to ban or accept travelers, impose vaccination mandates or therapeutic or diagnostic measures, or implement lockdowns.”
Judge Dismisses Scott Jensen’s First Amendment Lawsuit Against the State Board of Medical Practice
Former Republican gubernatorial candidate Scott Jensen’s discrimination lawsuit against the state Board of Medical Practice has been dismissed by a federal judge. Jensen failed to provide any examples of other physicians being treated differently when they were targets of complaints, Judge Jerry Blackwell wrote in his order from last week. He also failed to show that the board investigations had impeded his free speech, the judge said.
The former senator and candidate, who practices family medicine, noted that Blackwell dismissed the case without prejudice, meaning that Jensen can refile with additional information — something Jensen said is already in progress. “My life has been turned upside down,” he said.
In his federal lawsuit, Jensen claimed the complaints and board inquiries placed a “cloud of constant uncertainty” over his gubernatorial campaign, according to court documents. He said the inquiries amounted to “weaponization of a government agency” and an “ideologically driven, politicized government censorship apparatus which retaliated against its opponent based on the content of the message he espoused.”
By the fall of 2020, two claims had been filed and dismissed by the board. In the following years, Jensen continued to criticize the government’s handling of COVID-19 mandates and vaccine requirements. He urged civil disobedience against mask and vaccine policies. He vowed to reshape the board if elected. Jensen lost to Gov. Tim Walz.
Children’s Privacy Must Be a Priority on Social Media, Says the UK
Social media and video-sharing platforms need to make children’s privacy online their priority, urged the U.K.’s data protection body.
The Information Commissioner’s Office (ICO) has set out its strategy for the upcoming year to help service providers better address potential privacy and security risks for children across their platforms. This focuses on default privacy settings, geolocation data, targeted ads, recommender algorithms, and parental consent for children under 13.
“Children’s privacy must not be traded in the chase for profit,” said John Edwards, U.K. Information Commissioner, in an official announcement. “How companies design their online services and use children’s personal information have a significant impact on what young people see and experience in the digital world.”
It isn’t enough, in fact, for parents to protect their kids’ digital lives with VPN services and other security software. Even when parental controls are active, children can still be exposed to serious data harm or other threats by simply accessing a social media or video-sharing app.
For instance, these platforms are infamous for heavily tracking users’ location data. This can expose everyone, but even more so the youngest, to real physical threats.
Amazon Fresh Kills ‘Just Walk Out’ Shopping Tech — It Never Really Worked
Amazon is giving up on the cashier-less “Just Walk Out” technology at its Amazon Fresh grocery stores. The Information reports that new stores will be built without computer-vision-powered surveillance technology, and “the majority” of existing stores will have the tech removed. In the early days, Amazon’s ambitions included selling Just Walk Out to other brick-and-mortar stores. The problem was that the technology never really worked.
As it says on the tin, Just Walk Out was supposed to let customers grab what they wanted from a store and just leave, skipping any kind of checkout process. Amazon wanted to track what customers took with them purely via AI-powered video surveillance; shoppers would simply scan their phones at the door and be billed later via their Amazon accounts.
A May 2023 report from The Information revealed the myriad tech problems Amazon was still having with the idea six years after the initial announcement. The report said that “Amazon had more than 1,000 people in India working on Just Walk Out as of mid-2022 whose jobs included manually reviewing transactions and labeling images from videos to train Just Walk Out’s machine learning model.”
The WHO’s Power Grab + More
The WHO’s Power Grab
The response to COVID was the greatest mistake in the history of the public health profession, but the officials responsible for it are determined to do even worse. With the support of the Biden administration, the World Health Organization (WHO) is seeking unprecedented powers to impose its policies on the United States and the rest of the world during the next pandemic.
It was bad enough that America and other countries voluntarily followed WHO bureaucrats’ disastrous pandemic advice instead of heeding the scientists who had presciently warned, long before 2020, that lockdowns, school closures, and mandates for masks and vaccines would be futile, destructive, and unethical. It was bad enough that U.S. officials and the corporate media parroted the WHO’s false claims and ludicrous praise of China’s response. But now the WHO wants new authority to make its bureaucrats’ whims mandatory — and to censor those who disagree with their version of “the science.”
The WHO hopes to begin this power grab in May at its annual assembly in Geneva, where members will vote on proposed changes in international health regulations and a new treaty governing pandemics. Pamela Hamamoto, the State Department official representing the U.S. in negotiations, has already declared that America is committed to signing a pandemic treaty that will “build a stronger global health architecture,” which is precisely what we don’t need.
If we learned anything from the pandemic, it was the folly of entrusting narrow-minded public health officials with wide-ranging powers. The countries that fared best, like Sweden, were the ones that ignored the advice of the WHO, and the U.S. states that fared best, like Florida, were the ones that defied the White House Coronavirus Task Force and the Centers for Disease Control. This wasn’t a new lesson. Previous research has shown that giving national leaders new powers to respond to a natural disaster typically leads to more fatalities and economic damage.
Rutgers University Lifts COVID Vaccine Requirement for Students, Staff
Rutgers University lifted its COVID-19 vaccine requirement for students, staff and faculty members. The university made the announcement on its website on Monday.
The decision comes after Republican state Sen. Declan O’Scanlon called for the requirement to end and for university officials to resign over their decision to continue requiring the vaccine.
“I’m glad Rutgers decided to join the rest of the enlightened world by finally lifting its COVID-19 vaccine requirement. This is something I have been calling for since last year. Having said that, Rutgers doesn’t deserve any additional praise. In fact, it deserves to be harshly criticized for not following the science,” O’Scanlon wrote in a statement.
Rutgers officials also said the university won’t require face coverings, but noted that coverings “are welcomed.”
Supreme Court Is Asked to Resolve Split Decision in Social Media Censorship Lawsuit
As swiftly as the pandemic, and more importantly the radical restrictions on people’s everyday lives that came with it, descended on the world, it all seemed to vanish just as quickly into thin air.
But the consequences, particularly related to the stifling of speech, live on in a number of legal battles being fought now to prove that both government(s) and social media were wrong to introduce mass censorship because of alleged COVID misinformation.
Now the New Civil Liberties Alliance (NCLA) has gotten involved in one of these cases by asking the Supreme Court to make a decision in the Changizi, Senger, and Kotzin v. HHS (United States Department of Health and Human Services) lawsuit.
The plaintiffs here claim that their First Amendment rights were violated when the HHS and the U.S. surgeon general — government actors — went to Twitter, a private tech company, with the “request” to have their voices silenced for opposing the government’s COVID mandates of the era.
Now, in light of the discovery process in a major related case before the Supreme Court, Murthy v. Missouri, NCLA believes there is reason to review Changizi et al. v. HHS as well, specifically to ascertain whether the district court (later upheld by the Sixth Circuit) was right to dismiss the case without allowing discovery.
User Privacy Must Come First With Biometrics
The rapid rise and expansion of artificial intelligence (AI) use cases in recent years has led companies to sharply increase experimentation with, and adoption of, facial recognition and other biometric technology in their consumer-facing products and services. Apple pioneered this when it introduced Face ID, allowing users to unlock their iPhones with a simple scan of their face, transitioning the use of biometric data from innovation to normalization.
Now, biometric data is a common form of personal currency, a firewall entirely unique to the individual. Use cases have expanded to airports with biometric boarding, to mobile banking and e-commerce to facilitate and authenticate transactions, and even to law enforcement, where various branches use it for surveillance purposes.
The benefits of AI-powered facial recognition technology are off the charts, with the potential for dramatic increases in efficiency, security and ease of use across industries. But with the upside comes an equally compelling downside, as organizations need to consider the privacy risks and concerns associated with collecting and using biometric data at scale.
Threat to individual privacy and personal rights: With facial recognition deployed at this scale in public places, users and citizens will soon be unable to go virtually anywhere in public without surveillance, posing a major threat to privacy at a time when many already feel vulnerable.
User privacy needs to be prioritized when handling biometric data. This information is so sensitive and personal that any innovation it can drive must take a backseat to privacy, as the harms of poorly implemented facial recognition technology outweigh the benefits.
Facebook Let Netflix Peek Into User DMs, Explosive Court Docs Claim
The social media giant Meta allegedly allowed Netflix to access Facebook users’ direct messages for nearly a decade, breaking anti-competition and privacy rules, explosive court documents claim. The court documents, which were unsealed last week, are part of a major antitrust lawsuit filed by U.S. citizens Maximilian Klein and Sarah Grabert, who claim Netflix and Facebook “enjoyed a special relationship” that allowed Netflix to better tailor its ads with Facebook.
Facebook received millions of dollars in ad revenue from Netflix as part of these close ties, guaranteeing ad spending of $150 million in 2017, the lawsuit claims.
In 2018, the New York Times published a report citing hundreds of pages of Facebook documents, alleging Facebook had authorized Spotify and Netflix to access users’ DMs. The publication reported that the connections helped Facebook gain explosive growth and bolstered its ad revenue streams.
Meta has already been fined for sharing users’ information without permission. In 2022, Ireland fined Meta $284 million after data about more than half a billion users was leaked online.
U.S. Defense Official Had ‘Havana Syndrome’ Symptoms During a 2023 NATO Summit, the Pentagon Confirms
A senior Defense Department official who attended last year’s NATO summit in Vilnius, Lithuania, had symptoms similar to those reported by U.S. officials who have experienced “Havana syndrome,” the Pentagon confirmed Monday.
Havana syndrome is still under investigation but includes a string of health problems dating back to 2016, when officials working at the U.S. Embassy in Havana reported sudden unexplained head pressure, head or ear pain, or dizziness.
The injuries to U.S. government personnel or their families were part of a “60 Minutes” report Sunday that suggested Russia is behind the incidents, one of which took place during the 2023 NATO summit in Vilnius.
The Pentagon’s healthcare system has established a registry for employees or dependents to report such incidents. In March, however, a five-year study by the National Institutes of Health found no brain injuries or degeneration among U.S. diplomats and other government employees who had Havana syndrome symptoms.
U.S., Britain Announce Partnership on AI Safety, Testing
The United States and Britain on Monday announced a new partnership on the science of artificial intelligence safety, amid growing concerns about upcoming next-generation versions of the technology.
Commerce Secretary Gina Raimondo and British Technology Secretary Michelle Donelan signed a memorandum of understanding in Washington to jointly develop advanced AI model testing, following commitments announced at an AI Safety Summit in Bletchley Park in November.
Britain and the United States are among the countries establishing government-led AI safety institutes.
Britain said in October its institute would examine and test new types of AI, while the United States said in November it was launching its own safety institute to evaluate risks from so-called frontier AI models and is now working with 200 companies and entities.
Poland Launches Investigation Into Pegasus Spyware Use by Government
Following allegations made by Poland’s current Prime Minister in February 2024, the Polish government has formally launched an investigation into the use of Pegasus spyware by the previous administration.
Former officials who were involved in the use of the spyware will likely face criminal charges, with the victims potentially able to claim financial compensation and be involved in criminal proceedings.
Pegasus is phone-based spyware that covertly hijacks a device, providing full access to apps and files while also turning the device into a 24/7 tracking and listening tool.
A 2021 data leak, accessed by the Guardian, showed that thousands of telephone numbers spread across several countries had been targeted by the Pegasus spyware, with a number of media outlets targeted by governments in Eastern Europe, most notably Hungary under Viktor Orbán.
Big Tech Is the Big Tobacco of Today + More
Big Tech Is the Big Tobacco of Today
Cigarette and tobacco companies, with their calculated strategy of addiction by design, ensnared me and millions of others. They kept us hooked and spent a fortune opposing policies that could save lives. Today, we face a new breed of addiction peddlers. Big Tech companies that own Facebook, Instagram, Discord, YouTube, Snapchat, Twitter, TikTok and others are the cigarette and tobacco companies of our children’s generation. Cigarettes and tobacco have had little redeeming societal value.
Social media companies dismiss mounting evidence linking their platforms to deteriorating youth mental health and increases in suicidal ideation and suicide among our young people. They deflect responsibility with hollow promises of content moderation and public relations gimmicks like safety ratings.
All the while, they fight against the real solution to saving kids’ lives: modifying their platforms and their business models. Today’s Big Tech is our kids’ Big Tobacco, and the Big Lie is that there is no proof that social media is harming our children’s mental health.
The Big Truth from parents is that social media is having a devastating impact on their kids’ mental health, and it has resulted in increases in suicidal ideation and suicide.
Supreme Court Rejects Case From Fired Worker Denied Jobless Benefits After Refusing Vaccine
The Supreme Court on Monday rejected the appeal of a Minnesota woman who said she was wrongly denied unemployment benefits after being fired for refusing to be vaccinated for COVID-19 because of her religious beliefs.
The Minnesota Department of Employment and Economic Development determined she wasn’t eligible for benefits because her reasons for refusing the vaccine were based less on religion and more on a lack of trust that the vaccine was effective.
The case shows that the vaccine debate continues to smolder after the pandemic and after the Supreme Court in 2022 halted enforcement of a Biden administration vaccine-or-testing mandate for large employers but declined to hear a challenge to the administration’s COVID-19 vaccine mandate for healthcare facilities that receive federal funding.
After refusing to get vaccinated, Tina Goede was fired in 2022 from her job as an account sales manager for the pharmaceutical company AstraZeneca. Her position had required her to meet with customers in hospitals and clinics, some of which required proof of vaccination.
Google Will Delete Billions of Browsing Records to Settle Privacy Lawsuit: Court Filing
Google will destroy users’ browsing data to settle a $5 billion privacy lawsuit about its “incognito” browsing, according to federal court filings. The 2020 class action lawsuit accused the search engine of collecting millions of users’ data without their knowledge while they used incognito mode. The suit alleged that Google was secretly amassing data from users when they thought their browsing was private.
The settlement was first announced last December, but the specific details of the settlement were revealed Monday in new court filings.
Now, Google has agreed to delete billions of data records that are older than nine months, the filing states. As part of the settlement, Google also agreed to inform users that it collects data in incognito mode and make it so third-party trackers are turned off by default when using the feature.
Previously, Google had used third-party cookies to collect users’ data even when they were on non-Google sites. Google had known for years that the marketing and branding of its incognito mode was potentially misleading, the lawsuit alleged.
AT&T Says a Data Breach Leaked Millions of Customers’ Information Online. Were You Affected?
The theft of sensitive information belonging to millions of AT&T’s current and former customers has been recently discovered online, the telecommunications giant said this weekend.
In a Saturday announcement addressing the data breach, AT&T said that a dataset found on the “dark web” contains information including some Social Security numbers and passcodes for about 7.6 million current account holders and 65.4 million former account holders.
Whether the data “originated from AT&T or one of its vendors” is still unknown, the Dallas-based company noted — adding that it had launched an investigation into the incident. AT&T has also begun notifying customers whose personal information was compromised.
Russian Military Intelligence Unit May Be Linked to ‘Havana Syndrome,’ Insider Reports
The mysterious “Havana syndrome” ailment that has afflicted U.S. diplomats and spies across the world may be linked to energy weapons wielded by members of a Russian military intelligence sabotage unit, the Insider media group reported.
A U.S. intelligence investigation whose findings were released last year found that it was “very unlikely” a foreign adversary was responsible for the ailment, first reported by U.S. embassy officials in the Cuban capital Havana in 2016.
But Insider, a Russia-focused investigative media group based in Riga, Latvia, reported that members of a Russian military intelligence (GRU) unit known as 29155 had been placed at the scene of reported health incidents involving U.S. personnel.
The year-long Insider investigation in collaboration with 60 Minutes and Germany’s Der Spiegel also reported that senior members of Unit 29155 received awards and promotions for work related to the development of “non-lethal acoustic weapons.”
Teens’ Latest Social Media Trend? Self-Diagnosing Their Mental Health Issues
Teenagers are increasingly using social media to self-diagnose their mental health issues, alarming parents and advocates who say actual care should be easier to access.
A poll by EdWeek Research Center released this week found that 55% of students use social media to self-diagnose, and 65% of teachers say they’ve seen the phenomenon in their classrooms.
And with their amateur diagnoses in hand, teenagers might not only fail to understand their actual problems, but they could pursue solutions — or even medications — that aren’t right for them.
A recent Pew Research study found 95% of teenagers have a smartphone, and around 60% use social media platforms such as TikTok.
I Tried the New Google. Its Answers Are Worse.
Have you heard about the new Google? They “supercharged” it with artificial intelligence. Somehow, that also made it dumber. With the regular old Google, I can ask, “What’s Mark Zuckerberg’s net worth?” and a reasonable answer pops up: “169.8 billion USD.”
Now let’s ask the same question with the “experimental” new version of Google search. Its AI responds: Zuckerberg’s net worth is “$46.24 per hour, or $96,169 per year. This is equivalent to $8,014 per month, $1,849 per week, and $230.6 million per day.”
Google acting dumb matters because its AI is headed to your searches sooner or later. The company has already been testing this new Google — dubbed Search Generative Experience, or SGE — with volunteers for nearly 11 months, and recently started showing AI answers in the main Google results even for people who have not opted into the test.
The new Google can do some useful things. But as you’ll see, it sometimes also makes up facts, misinterprets questions, delivers out-of-date information and just generally blathers on. Even worse, researchers are finding that AI often elevates lower-quality sites as reliable sources of information.
Normally, I wouldn’t review a product that isn’t finished. But this test of Google’s future has been going on for nearly a year, and the choices being made now will influence how billions of people get information. At stake is also a core idea behind the current AI frenzy: that the tech can replace the need to research things ourselves by just giving us answers. If a company with the money and computing power of Google can’t make it work, who can?