Big Brother News Watch
Inside Biden’s Secret Surveillance Court + More
Inside Biden’s Secret Surveillance Court
At an undetermined date, in an undisclosed location, the Biden administration began operating a secretive new court to protect Europeans’ privacy rights under U.S. law.
Officially known as the Data Protection Review Court, it was authorized in an October 2022 executive order to fix a collision of European and American law that had been blocking the lucrative flow of consumer data between American and European companies for three years.
The court’s eight judges were named last November, including former U.S. Attorney General Eric Holder. Its existence has allowed companies to resume the transatlantic data trade with the blessing of EU officials.
The details get blurry after that. And critics worry it will tie the hands of U.S. intelligence agencies with an unusual power: It can make binding decisions about the surveillance practices of federal agencies, which won’t be able to challenge those decisions.
The court’s creation is also raising fears within U.S. circles that Europeans could get certain privacy protections that American citizens lack. U.S. residents who suspect they are under improper surveillance cannot go to the Data Protection Review Court. Under U.S. law, they can go to a federal court — but only if they can show a concrete wrong or harm that gives them legal standing, which presents a Catch-22 since they can’t prove what they don’t know.
Children on Instagram and Facebook Were Frequent Targets of Sexual Harassment, State Says
The Wall Street Journal reported:
Children using Instagram and Facebook have been frequent targets of sexual harassment, according to a 2021 internal Meta Platforms presentation that estimated that 100,000 minors each day received photos of adult genitalia or other sexually abusive content.
That finding is among newly unredacted material about the company’s child-safety policies in a lawsuit filed last month by New Mexico that alleges Meta’s platforms recommend sexual content to underage users and promote underage accounts to predatory adult users.
In one 2021 internal document described in the now unredacted material, Meta employees noted that one of its recommendation algorithms, called “People You May Know,” was known among employees to connect child users with potential predators. The New Mexico lawsuit says the finding had been flagged to executives several years earlier, and that they had rejected a staff recommendation that the company adjust the design of the algorithm, known internally as PYMK, to stop it from recommending minors to adults.
New Mexico alleges that Meta has failed to address widespread predation on its platform or limit design features that recommended children to adults with malicious intentions. Instead of publicly acknowledging internal findings such as the 100,000 child-a-day scale of harassment on its platforms, the suit alleges, Meta falsely assured the public that its platforms were safe.
Much of the internal discussion described in the newly unredacted material focused on Instagram. In an internal email in 2020, employees reported that the prevalence of “sex talk” to minors was 38 times greater on Instagram than on Facebook Messenger in the U.S. and urged the company to enact more safeguards on the platform, according to documents cited in the lawsuit.
Mother Whose Child Died in TikTok Challenge Urges U.S. Court to Revive Lawsuit
A U.S. appeals court on Wednesday wrestled with whether the video-based social media platform TikTok could be sued for causing a 10-year-old girl’s death by promoting a deadly “blackout challenge” that encouraged people to choke themselves.
Members of a three-judge panel of the Philadelphia-based 3rd U.S. Circuit Court of Appeals noted during oral arguments that a key federal law typically shields internet companies like TikTok from lawsuits for content posted by users.
But some judges questioned whether Congress in adopting Section 230 of the Communications Decency Act in 1996 could have imagined the growth of platforms like TikTok that do not just host content but recommend it to users using complex algorithms.
Tawainna Anderson sued TikTok and its Chinese parent company ByteDance after her daughter Nylah in 2021 attempted the blackout challenge using a purse strap hung in her mother’s closet. She lost consciousness, suffered severe injuries, and died five days later. Anderson’s lawyer, Jeffrey Goodman, told the court that while Section 230 provides TikTok some legal protection, it does not bar claims that its product was defective and that its algorithm pushed videos about the blackout challenge to the child.
‘Fundamentally Against Their Safety’: The Social Media Insiders Fearing for Their Kids
Parents working for tech companies have a first-hand look at how the industry works — and the threats it poses to child safety.
Arturo Bejar would not have let his daughter use Instagram at the age of 14 if he’d known then what he knows now. Bejar left Facebook in 2015, after spending six years making it easier for users to report when they had problems on the platform. But it wasn’t until after his departure that he witnessed what he described in recent congressional testimony as the “true level of harm” the products his former employer built are inflicting on children and teens, his own included.
Bejar discovered his then 14-year-old daughter and her friends were routinely subjected to unwanted sexual advances, harassment and misogyny on Instagram, according to his testimony.
But it wasn’t his daughter’s experience on Instagram alone that convinced Bejar that the social network is unsafe for kids younger than 16; it was the company’s meager response to his concerns. Ultimately, he concluded, companies like Meta will need to be “compelled by regulators and policymakers to be transparent about these harms and what they are doing to address them.”
Iowa Sues TikTok, Alleging App Misleads Parents About Inappropriate Content
Iowa sued TikTok on Wednesday, accusing the video-based social media app of misrepresenting the prevalence of inappropriate content on the platform to avoid parental controls.
The lawsuit alleges that TikTok falsely claims there is only infrequent or mild sexual content and nudity, profanity or crude humor, mature and suggestive themes, and alcohol, tobacco or drug use references on the platform to obtain a “12+” rating in Apple’s App Store.
“TikTok knows and intends to evade the parental controls on Apple devices by rating its app ‘12+,’” the complaint reads. “If TikTok correctly rated its app, it would receive a ‘17+’ age rating, and parental restrictions on phones would prevent many kids from downloading it.”
“TikTok has kept parents in the dark,” Iowa Attorney General Brenna Bird (R) said in a statement. “It’s time we shine a light on TikTok for exposing young children to graphic materials such as sexual content, self-harm, illegal drug use, and worse.”
Watch Out Windows 11 Users: Microsoft May Be Sharing Your Outlook Emails Without You Knowing — Here’s How to Stop It
It looks like Microsoft’s penchant for collecting its users’ data may land it in more trouble, with a worrying new report suggesting that the new Outlook for Windows app shares more information from users’ emails than people may know.
This is particularly concerning because most people check their email daily, whether to keep up with friends and family or to send important documents and information at work. With the Outlook for Windows app now the default email program in Windows 11, the discovery could affect a lot of people.
MSPoweruser reports that the team behind ProtonMail, an end-to-end encrypted email service and competitor to Microsoft Outlook, has discovered the worrying scale of user data being collected by Outlook for Windows, which reportedly includes your emails, contacts, browsing history, and possibly even location data.
ProtonMail’s blog post goes so far as to call Outlook for Windows “a surveillance tool for targeted advertising.” A harsh comment, certainly, but people who downloaded the new Outlook for Windows app have encountered a disclaimer explaining how Microsoft and hundreds of third parties will be helping themselves to their data.
EU Set to Allow Draconian Use of Facial Recognition Tech, Say Lawmakers
Last-minute tweaks to the European Union’s Artificial Intelligence Act will allow law enforcement to use facial recognition technology on recorded video footage without a judge’s approval — going further than what was agreed by the three EU institutions, according to European lawmaker Svenja Hahn.
The German member of the European Parliament said the final text of the bloc’s new rules on artificial intelligence, obtained by POLITICO, was “an attack on civil rights” and could enable “irresponsible and disproportionate use of biometric identification technology, as we otherwise only know from authoritarian states such as China.”
The wording also made it to the full legal text, which the Spanish Council presidency put together on December 22. The current presidency of the EU Council, held by Belgium, is working with Parliament to finalize bits of interpretative text known as recitals.
The Davos Elite Embraced AI in 2023. Now They Fear It.
ChatGPT was the breakout star of last year’s World Economic Forum, as the nascent chatbot’s ability to code, draft emails and write speeches captured the imaginations of the leaders gathered in this posh ski town.
But this year, tremendous excitement over the nearly limitless economic potential of the technology is coupled with a more clear-eyed assessment of its risks. Heads of state, billionaires and CEOs appear aligned in their anxieties, as they warn that the burgeoning technology might supercharge misinformation, displace jobs and deepen the economic gap between wealthy and poor nations.
In contrast to far-off fears of the technology ending humanity, a spotlight is on concrete hazards borne out last year by a flood of AI-generated fakes and the automation of jobs in copywriting and customer service. The debate has taken on new urgency amid global efforts to regulate the swiftly evolving technology.
China-Made Drones Pose Significant Risk to U.S. Data, Security Agencies Say
China-made drones pose a significant risk to American data, critical infrastructure and national security, two federal security agencies said this week in an official cybersecurity guidance urging U.S. professional and hobby users to transition to safer alternatives.
The use of Chinese-manufactured unmanned aircraft systems — UAS, more commonly known as drones — in critical infrastructure “risks exposing sensitive information to PRC authorities, jeopardizing U.S. national security, economic security, and public health and safety,” the Cybersecurity and Infrastructure Security Agency and the Federal Bureau of Investigation said on Wednesday.
The warning comes amid a deepening technological and national security competition between the United States and China, with concerns growing among lawmakers and officials over a range of technologies that are made in China or by Chinese companies with U.S. branches, are widely sold in the U.S., and even sit at the heart of critical infrastructure systems.
“Central to this strategy is the acquisition and collection of data — which the PRC views as a strategic resource and growing arena of geopolitical competition,” said the CISA and the FBI, using the acronym for the People’s Republic of China. All drones collect information and could have vulnerabilities that compromise networks, thus enabling data theft by companies or governments, the guidance said.
Bill Gates Hopes AI Can Reduce ‘Polarization,’ Save ‘Democracy,’ Ignores Censorship Implications + More
Bill Gates Hopes AI Can Reduce ‘Polarization,’ Save ‘Democracy,’ Ignores Censorship Implications
The notion that whoever controls and shapes AI could wield significant influence over large swathes of society may prove one of the most alarming and prominent issues of the next few years.
In a recent episode of “Unconfuse Me with Bill Gates,” Sam Altman, the CEO of OpenAI, and tech billionaire Bill Gates delved into the potential of artificial intelligence (AI) as a tool for maintaining democracy and promoting world peace, a controversial premise.
The discussion notably omits a critical aspect: the influence of the programmers’ own beliefs and principles on the AI’s functioning. The designers and developers of AI systems inherently embed their ideas about democracy, free speech, and governance into the AI’s algorithms. This raises significant concerns about the impact of these personal biases on the AI’s neutrality and its ability to make fair, unbiased decisions.
The prospect of AI systems being programmed with particular ideologies could have profound implications for free speech. If an AI is designed to favor certain political or social viewpoints (excluding those it decides are “polarizing”), it could potentially suppress opposing perspectives, leading to a form of digital censorship.
Exclusive: Pentagon Faces Questions for Funding Top Chinese AI Scientist
U.S. lawmakers are demanding answers from the Department of Defense as to why it ignored signs that a scientist who got tens of millions of dollars in federal research grants was for years transferring potentially sensitive research on advanced artificial intelligence to China, Newsweek reports exclusively.
The chairs of two House committees and three subcommittees also asked the National Science Foundation (NSF), which is a federal government agency, and the University of California Los Angeles (UCLA) why they failed to pay attention to “concerning signs” over the Chinese-born scientist Song-Chun Zhu, in similarly worded letters sent to all three institutions on Wednesday.
Newsweek revealed in November 2023 that Zhu had received over $30 million in U.S. grants to lead research into the most advanced artificial intelligence that could have major military implications.
“U.S. federal grant-providing agencies ignored numerous concerning signs while granting Mr. Zhu $30 million in grants,” the chairs of the Committee on Energy and Commerce and of the Select Committee on the Chinese Communist Party (CCP) wrote in a letter addressed to the Secretary of Defense, Lloyd J. Austin III.
EXCLUSIVE: CDC Drafted Alert for Myocarditis and COVID Vaccines, but Never Sent It
The U.S. Centers for Disease Control and Prevention (CDC) prepared to alert state and local officials to an emerging connection between heart inflammation and COVID-19 vaccines but ultimately did not send the alert, according to a new document obtained by The Epoch Times.
All four COVID-19 vaccines that are or have been available in the United States can cause heart inflammation, or myocarditis, according to studies, experts, and agencies like the CDC. The first cases were reported shortly after the vaccines became available in late 2020.
The CDC sends alerts to federal, state, and local public health officials and doctors across the nation through a system called the Health Alert Network (HAN). Messaging through the system conveys “vital health information,” according to the CDC.
“This censorship of a proposed alert in May of 2021 is just one more example of our regulatory agencies’ repeated pattern of behavior to censor any information that serves to counter the narrative that the COVID-19 vaccinations are ’safe and effective,’” Dr. Joel Wallskog, co-chair of the vaccine-injured advocacy group React19, told The Epoch Times via email.
Unsettling New Warning in Chrome Incognito Mode Reveals Ongoing Tracking
Chrome’s Incognito mode is a bit of a joke that even Google employees weren’t so hyped about. It’s now going to be even less useful for “privacy” reasons, as explained in a new disclaimer change.
An updated warning page for Incognito mode went live on Canary, a version of Chrome primarily used by developers, as first spotted by MSPowerUser on Tuesday. The new text confirms your data will be collected by websites and Google while browsing in this mode. This change has yet to hit the latest version of Chrome, but it’s likely to come soon.
Google’s update to the disclaimer stems from a 2020 lawsuit the company was hit with over the not-so-private Incognito mode. That $5 billion class-action suit alleged Google’s privacy options didn’t work as described, meaning users were continually tracked while using Chrome.
Google settled the lawsuit in late December according to Reuters, but no details of the settlement were made public, and it will still need to be approved by a judge in February.
Meta CEO Mark Zuckerberg to Be Deposed in Texas Facial Recognition Case
Meta CEO Mark Zuckerberg must take part in a deposition as part of an ongoing lawsuit in Texas involving the company’s facial recognition technology. Justice Jeff Rambin of Texas’s Sixth Court of Appeals said in a Tuesday ruling that the state court has denied Meta’s recent petition “seeking relief from an order compelling the oral deposition” of Zuckerberg at an unspecified date.
Texas Attorney General Ken Paxton filed the lawsuit in February 2022, saying at the time that Meta has been “capturing and using the biometric data of millions of Texans without properly obtaining their informed consent to do so.”
Attorneys representing Texas also said Meta violated the state’s Deceptive Trade Practices Act by “failing to disclose information — including the fact that it collects biometric identifiers — with the intent to induce Facebook users in Texas into using Facebook, which such users would not have done had the information been disclosed.”
In Tuesday’s ruling, the court noted the state of Texas’s claim that Zuckerberg has “had unique personal knowledge of discoverable information” relevant to its lawsuit, which alleges that Meta violated state laws on the collection of biometric data and deceptive trade practices.
The Cops Are Watching You
Who watches the watchmen? All of us, if we’re smart. In the age of surveillance, that means monitoring how and where the snoops put us under scrutiny. Among the people and organizations doing such important work is the Electronic Frontier Foundation, which recently updated one of its countersurveillance tools.
“The Electronic Frontier Foundation (EFF) today unveiled its new Street Level Surveillance hub, a standalone website featuring expanded and updated content on various technologies that law enforcement agencies commonly use to invade Americans’ privacy,” the group announced on January 10.
The Street Level Surveillance hub integrates closely with EFF’s already established Atlas of Surveillance. Users can search the Atlas for jurisdictions to see what surveillance tools are currently in use in their hometowns or in places they’re visiting.
Unsurprisingly, Washington, DC, is closely monitored by the powers that be. Residents and visitors in the nation’s capital are scrutinized by automated license plate readers, FBI face recognition scans of driver’s license photos, a registry of private security cameras, gunshot detection microphones (yes, they can overhear conversations), cell-site simulators that pinpoint the locations of phones and their users, and more. The Atlas lists the surveillance tools used in the city and links to more information on them — including the extensive write-ups on the Street Level Surveillance hub.
OpenAI Won’t Let Politicians Use Its Tech for Campaigning, for Now
Artificial intelligence company OpenAI laid out its plans and policies to try to stop people from using its technology to spread disinformation and lies about elections, as billions of people in some of the world’s biggest democracies head to the polls this year.
The company, which makes the popular ChatGPT chatbot and the DALL-E image generator and provides AI technology to many companies, including Microsoft, said in a Monday blog post that it wouldn’t allow people to use its tech to build applications for political campaigns and lobbying, to discourage people from voting, or to spread misinformation about the voting process. OpenAI said it would also begin putting embedded watermarks — a tool to detect AI-created photographs — into images made with its DALL-E image generator “early this year.”
“We work to anticipate and prevent relevant abuse — such as misleading ‘deepfakes,’ scaled influence operations, or chatbots impersonating candidates,” OpenAI said in the blog post.
If You’re in the EU, You Can Now Decide How Much Data to Share With Google
If you are in the EU, you can take back more agency over your digital privacy even when using notorious data-hungry platforms.
Google now allows users to decide how much information they want to share (or not) with the provider, as they can opt to “unlink” certain services from one another. The move comes as the big tech giant gets ready to comply with new data-sharing rules introduced by the Digital Markets Act (DMA).
Approved in November 2022, the new legislation is set to be officially enforced on March 6, 2024. That’s when the choices you select for your Google account will also take effect. These services include Google Search, YouTube, Ad services, Google Play, Chrome, Google Shopping and Google Maps.
You can decide exactly how much data you’re comfortable sharing with the big tech firm. As Google explains, “you can choose to keep all these services linked, choose to have none of these services linked, or choose which of these individual services you want to keep linked.”
The TSA Plans Big Digital ID Push in 2024 + More
The TSA Plans Big Digital ID Push in 2024
The U.S.’s leading transportation security organization, the Transportation Security Administration (TSA), is taking significant steps toward a more digital future. And, of course, that means more surveillance and tracking. The plan is that, by the end of 2024, many of the agency’s operational objectives will encompass a digital identity component, a move that suggests an enduring commitment to streamlining traveler experiences with technology, even though it undermines privacy.
In a four-part action plan released by the TSA, the agency plans to extend its mobile driver’s license initiative and more widely utilize facial recognition technology in airports. This includes scaling up its current pilot program testing digital identities and mobile licenses — used at TSA checkpoints — to at least nine states. It follows a previous announcement in May that disclosed the TSA’s examination of potential digital license and ID implementations across 25 domestic airports.
In parallel with these digital ID efforts, the TSA is also committing to expanding the use of facial identification systems under its PreCheck service, a program aimed at preemptively assessing threats and facilitating a quicker airport security process for enrolled travelers. The service is somewhat controversial, as it gives the agency deeper access to data and information about individuals and their lives, some of it beyond what travelers believe the agency can access.
The aim is to double the number of airports equipped with this technology, slated to increase from five in the past year to ten by the end of this year. Similarly, the number of airlines engaging with PreCheck is set to grow from two to three.
‘The Tide Has Turned’: Why Parents Are Suing U.S. Social Media Firms After Their Children’s Death
The night of June 23, 2020, passed by like any other for 16-year-old Carson Bride. The teen had just gotten a new job at a pizza restaurant, his mother, Kristin Bride, said, and the family had been celebrating at home in Lake Oswego, Oregon. He wrote his future work schedule on the kitchen calendar after dinner, said goodnight, and went to his room for bed. But the next morning, Kristin says, the family woke to “complete shock and horror”: Carson had died by suicide.
Kristin soon discovered that in the days leading up to his death, her son had received hundreds of harassing messages on Yolo — a third-party app that at the time was integrated into Snapchat and allowed users to communicate anonymously. Search history on Carson’s phone revealed some of his final hours online were spent desperately researching how to find who was behind the harassment and how to put an end to it.
After Carson’s death and the harassment Kristin says contributed to it, she tried to take action to prevent such a tragedy from striking again, but found herself running into walls. She says she contacted Yolo four times only to be ignored until receiving a single automated response email. In May 2021 she filed suit against Snapchat and the two anonymous messaging apps it hosted, in a case that is partially ongoing. Days after the suit was filed, Snapchat removed Yolo and LMK, the other app, from the platform, and a year later the company banned all apps with anonymous messaging features.
Kristin Bride’s lawsuit is one of hundreds filed in the U.S. against social media firms in the past two years by family members of children who have been affected by online harms. Lawyers and experts expect that number to increase in the coming year as legal strategies to fight the companies evolve and cases gain momentum.
States Get Serious About Limiting Kids’ Social Media Exposure
An increasing number of states are moving to require social media companies to create child-safe versions of their sites as Washington struggles with how to shield kids.
The states are moving because they believe social media is contributing to increasing rates of mental illness among children, and because Congress hasn’t. There’s bipartisan support on Capitol Hill to do more, but lawmakers there can’t agree on whether a national privacy standard should override state laws.
Pressure to act is rising. In 2021, more than 40 percent of high school students felt so sad or hopeless over a two-week period that they stopped keeping up with their regular pastimes, according to the Centers for Disease Control and Prevention’s most recent Youth Risk Behavior Survey. The survey also said that 30 percent of teen girls seriously considered suicide, up from 19 percent 10 years ago.
Experts are concerned that social media companies are contributing to the problem — and profiting from it.
Rand Paul Says Fauci Should ‘Go to Prison’ Over COVID ‘Dishonesty’
Sen. Rand Paul (R-Ky.) said that the former U.S. chief medical adviser, Dr. Anthony Fauci, should “go to prison” over his “dishonesty” in handling the COVID-19 pandemic and lying to Congress.
“For his dishonesty, frankly, he should go to prison,” Paul said during a Sunday interview with radio host John Catsimatidis on “The Cats Roundtable” on WABC 770 AM. “If you lie to Congress, and you’re dishonest, and you won’t accept responsibility. For his mistake in judgment, he should just be pilloried. He should never be accepted.”
He added, “History should judge him as a deficient person who made one of the worst decisions in public health history — in the entire history of the world.”
The Kentucky Republican, who believes the virus came from a lab in China, accused Fauci of directly contributing to the deaths of “somewhere between 10 and 20 million” with his decision to “fund dangerous research — gain-of-function research, where you allow viruses to be combined.”
A Facial-Recognition Tour of New York
We’re being watched. But when, and by whom? Kashmir Hill, the author of the new book “Your Face Belongs to Us,” took a walk around midtown the other day, to check out a few businesses that routinely capture visitors’ biometric data. She wore a red coat and white boots, and her hair was a faded purple.
First up: Macy’s Herald Square. “Let’s see if Macy’s is still collecting face-recognition data,” she said. Businesses that do so are required by city law to post signs alerting visitors. She’d noticed, earlier, that the store’s signs were “very affixed to their walls.” One in an entrance vestibule, below an inflatable reindeer, stated that Macy’s “collects, retains, converts, stores, or shares customers’ biometric identifier information.”
Macy’s has used Clearview AI, one of the subjects of Hill’s book. (Popular Google searches involving the firm include “Is Clearview AI banned in the U.S.?,” “Does Clearview AI have my photo?,” and “Does the F.B.I. use Clearview AI?”) A 2020 data breach at Clearview, which was founded, in 2017, by two men who met at the Manhattan Institute, helped reveal that Madison Square Garden and thousands of law-enforcement agencies had used the technology, too.
Hill’s next stop was the Moynihan Train Hall, in Penn Station. On the way, she noticed an N.Y.P.D. security camera on a street-light pole. “There are some things we allow businesses and companies to do that we’re pretty uncomfortable seeing government actors do,” she said. “If the government scraped all our photos and created this massive face-recognition database, we’d probably say that seems unconstitutional. But a private company does it and the government just buys from them.”
Meta’s Latest Attempt to Spy on Your Online Behaviors
Meta’s newest tool makes it very easy for the company to track you. The social media giant recently introduced a new feature called Link History to the Facebook app for iPhone and Android. Facebook’s parent company claims the setting is a tool for users to keep all of their browser history in one spot. But is there more here than meets the eye? Facebook’s latest feature raises plenty of privacy concerns and worries about Meta’s information collection.
Link History is a list of websites you’ve visited in the Facebook Mobile Browser within the last 30 days. Meta’s Link History setting collects the links you’ve clicked on within the Facebook app. This is limited to links you accessed within Facebook’s browser, which automatically pops up when you click on a link in the Facebook app. You can then view all the links you’ve clicked on and revisit them; they will reopen in Facebook’s browser.
Mark Zuckerberg’s $47 Billion Metaverse Bet Will Take at Least a Decade to Be ‘Fully Realized,’ Says Meta Exec
Meta is still investing “significantly” in the metaverse despite losses of nearly $50 billion, according to an executive.
The head of Meta’s global business group, Nicola Mendelsohn, said it will take a “good decade” to reach the company’s “fully realized vision.”
She made the comments in a panel session at the World Economic Forum in Davos on Tuesday, adding that Meta was investing in both AI and hardware for the metaverse.
The tech giant has lost a cumulative $47 billion in its Reality Labs division since 2019, a previous Business Insider analysis of regulatory filings found. The division contains Meta’s VR and metaverse operations.
COVID Mask Mandate Reinstated at National Park
A mask mandate has been reinstated at Sandy Hook National Park following an uptick in COVID hospitalizations in the area.
Visitors will be required to wear a mask inside buildings at the New Jersey national park. Masks are mandatory in the Sandy Hook Visitors Center as well as any other building where events or tours are held.
The decision to reinstate the mask mandate was made after the Centers for Disease Control and Prevention (CDC) reported that COVID hospital admissions were considered high in Monmouth County, where the park is located, and neighboring Ocean County. According to CDC data for the week ending January 6, Monmouth County and Ocean County have each seen 250 new hospital admissions of confirmed COVID cases.
As COVID cases rise, some New Jersey locations are requiring people to wear masks again. Besides Sandy Hook National Park, major hospitals in the state have reinstated mask mandates as respiratory illnesses, including COVID, the flu, and respiratory syncytial virus (RSV) are on the rise.
Transportation Department’s Vaccine Mandates Were ‘Unique, Aggressive’
A 2021 memo obtained by the Justice Centre for Constitutional Freedoms says Canada’s Transportation Department called its own vaccination mandate “aggressive” and “unique in the world,” reports Blacklock’s Reporter. This goes against the department’s public claims that the mandate just “followed the recommendations of public health experts.”
At the time, the cabinet claimed its mandate was recommended by scientists, but Canada’s Public Health Agency never recommended vaccine mandates.
The Justice Centre for Constitutional Freedoms called the mandates unlawful, but the Federal Court of Appeal dismissed the challenge on Nov. 9, 2023, because the mandates had expired in 2022.
However, the Centre said in a statement: “The federal government can impose these same travel restrictions on Canadians again without notice.”
Jordan Subpoenas Biden Spy Chief for Big Tech Collusion Investigation + More
Jordan Subpoenas Biden Spy Chief for Big Tech Collusion Investigation
House Judiciary Committee Chairman Jim Jordan (R-OH) subpoenaed Director of National Intelligence Avril Haines, the Biden administration’s spy chief, on Thursday as part of an investigation into alleged government collusion with Big Tech companies, The Daily Wire can confirm.
The Office of the Director of National Intelligence (ODNI) has so far provided a “woefully inadequate” response to multiple requests by the judiciary panel and its Weaponization subcommittee for communications with “private companies, and other third-party groups such as nonprofit organizations, in addition to other information,” Jordan wrote in a cover letter for the subpoena.
Jordan contended that it is “necessary” for Congress to “gauge the extent to which ODNI officials have coerced, pressured, worked with, or relied upon social media and other tech companies to censor speech.” He stressed that the scope of the inquiry includes ODNI, and that the result could be legislation imposing “new statutory limits” on the Executive Branch’s ability to work with companies to restrict content and deplatform users.
A Week Into 2024 and Big Tech Has Earned Enough to Pay Off All 2023 Fines
2023 surely was an eventful year in tech. To cite just a few key moments: generative AI became mainstream thanks to software like ChatGPT; we had to say goodbye to the iconic blue bird while welcoming Twitter‘s new name (I know very well the pain of writing ‘X, formerly known as Twitter’ over the past six months); and big tech companies received record GDPR fines for data abuses, totaling more than $3 billion.
“What’s clear is that these fines, though they appear to be a huge amount of money, in reality are just a drop in the ocean when it comes to the revenues that the tech giants are making. In other words, they aren’t a deterrent at all,” Jurgita Miseviciute, Head of Public Policy & Government Affairs at Proton, told me.
Researchers at Proton have calculated that Alphabet (Google‘s parent company) needs only a bit more than a day of earnings to pay off its $941 million in fines. Just a few hours of earnings are enough for Amazon and Apple to cover their data protection sanctions of $111.7 million and $186.4 million respectively.
Meanwhile, the biggest data abuse perpetrator, Meta, which got a record $1.3 billion fine for its (mis)handling of EU user data in May last year, could accumulate all the necessary money in just about five working days.
Zuckerberg: King of the Metaverse Review — It Will Make You Even More Terrified of the Internet
Is Mark Zuckerberg — the Facebook ‘dictator’ — really evil? Nobody in this two-hour tell-all seems to know or care. But it will make you question every website you dangerously rely on.
Google’s unofficial company motto, at least until it was restructured into Alphabet Inc in 2015, was “Don’t be evil”. That is not a normal thing to have to say. It should have been a warning to us all that big tech’s default position might, in fact … be evil?
But, y’know — what ya gonna do? That is the question, and its merely rhetorical nature is underscored at every turn by the two-hour documentary Zuckerberg: King of the Metaverse. Mark Zuckerberg, of course, is the inventor (in his Harvard dorm in 2004 at the age of 20) of Facebook, the social media platform that now connects 49% of the global population, and he is its CEO. Or perhaps, as one of the many contributors to the film puts it, “its dictator”. He is personally worth about $100 billion — probably the greatest self-made fortune in history.
In a way, it doesn’t matter. What matters is the creation, not the man. It is clear now what it does, what it can do and what it will continue to do if not regulated — perhaps by people who understand how the internet works and why the greatest experiment ever carried out on a global population might benefit from a weather eye being kept on it. The relationship status of people v the metaverse? It’s complicated. Dangerously so.
School Software Breach Reveals Private Data on Millions of Users
Experts have uncovered a significant data breach involving a non-password-protected database containing more than four million records, totalling around 827GB, concerning private school data.
Cybersecurity researcher Jeremiah Fowler said the breach at Texas-based school security company Raptor Technologies includes sensitive school safety records and personally identifiable information relating to students, parents, and staff. While reviewing a sample of the documents, Fowler discovered school layouts, information about malfunctioning cameras and security gaps, background checks, student health information, court-ordered protection orders, and more.
Raptor Technologies has since taken action to secure the files, but how long they were leaked for and whether anybody else had gained access to them remains unconfirmed.
Fowler outlined hypothetical scenarios of misuse, emphasizing the huge risks associated with the exposed information that could prove to be seriously harmful to those involved, including schoolchildren and minors.
Your Medical Data Is Code Blue
Until last November, I had never heard of Perry Johnson and Associates. But they had heard of me. In fact, without my knowledge, they had information about me that even my closest friends and relatives might not know.
Because the company provides “transcription and dictation” services to Northwell Health, a medical provider that has treated me in the past, they had access to what they refer to as “certain files containing my health information as well as other personal data.”
This might have included my name, birth date, address, and medical record number, and information about my medical condition — including admission diagnosis, operative reports, physical exams, laboratory and diagnostic results, and medical history, which could include family medical history, surgical history, social history, medications, allergies, and/or other observational information.
The medical information of nearly 10 million people would be an invaluable resource to drug marketers, insurance companies, and manufacturers of bogus medical devices. And unlike personal finance information, there’s no way to make that information moot. You can get a new credit card or a new bank account, but you can’t get a new medical history.
Regulators Are Finally Catching Up With Big Tech
In the United States, in the absence of federal privacy legislation, regulators have already started to repurpose laws and rules they do have at their disposal to address some of the most egregious examples of Big Tech playing fast and loose with our rights and personal data. In 2023, the U.S. Federal Trade Commission (FTC) continued to expand the regulatory heft of consumer protection regulations.
It took on the problem of dark patterns — deceptive design used by apps and websites to trick users into doing something that they didn’t intend to, like buying or subscribing to something — with a half-billion-dollar fine against Fortnite maker Epic Games.
The FTC also issued massive fines to Amazon for significant breaches of privacy through Alexa and Ring doorbell devices. There are no signs that, in 2024, the FTC will slow down, with rules in the pipeline to govern commercial surveillance and digital security. In 2024, we’ll see regulators in other fields and other parts of the world follow suit, bolstered by the FTC’s successes.
Amazon’s Cashierless Checkout Is Coming to Hospitals
Amazon is pitching its cashierless checkout technology to hospitals and other healthcare facilities.
The company on Thursday said the latest version of its Just Walk Out system lets healthcare employees pay for items at on-site cafes using their work badge. Hospital visitors will also be able to shop at JWO-enabled stores using credit cards and debit cards, as well as mobile wallets. It’s rolling out the tech at St. Joseph’s/Candler Hospital in Savannah, Georgia, Amazon said in a blog post.
Amazon’s JWO technology allows shoppers to enter a store by scanning an app and exit without needing to stand in a checkout line. Cameras and sensors track what items shoppers select and charge them when they leave. Some newer iterations of the technology remove the need for ceiling-mounted cameras, and instead use radio-frequency identification, or RFID, tags to keep track of which items are taken off the shelf.
Johns Hopkins Medicine Brings Back Mandatory Masking at All Facilities in Maryland
Johns Hopkins Medicine will resume universal mandatory masking at its facilities, the hospital system announced on Thursday. The policy update will take effect on Jan. 12.
The mandatory masking is for patients, visitors and employees regardless of vaccination status at all Johns Hopkins Medicine locations throughout Maryland because of an increase in hospitalizations from COVID, the flu and RSV.
Earlier this week, LifeBridge Health and the University of Maryland Medical System reinstated masking at their medical facilities.
U.K. Communications Regulator Forms 350-Person Team to Enforce U.K.’s New Online Censorship Law
The authorities in the U.K. are “thinking of the children” (but really of online censorship, say critics) and, in doing so, thanks to the Online Safety Act, are dipping their toes into the “revolving door” practice long established in the U.S.
In the U.K. there is evidence of this flow going in one direction: from private Big Tech corporations to government jobs. Reports say that to implement the controversial law, which considerably restricts online speech, the regulator tasked with enforcing it, Ofcom, has hired some 350 new staff, including people from tech giants.
Those who long pushed for the act’s adoption and continue to justify it, along with the new ex-Big Tech hires, like to frame and sell the legislation as necessary to protect children’s well-being online. This framing is also the easiest way to shield it from criticism, as few people are willing to argue against a case positioned this way.
Many have argued against it nonetheless. The gist of their opposition to the act, and to nebulous terms like “legal but harmful content” that must be suppressed, is that one of its provisions, forcing messaging apps to scan user content (with child sexual abuse always mentioned first as a target, but not the only one), poses a serious threat to encryption and therefore to the online safety of everyone, including children.