Big Brother News Watch

Sep 25, 2023

Russell Brand Accuses Government of Bypassing Judicial Process to Censor Him on Social Media + More

Russell Brand Accuses Government of Bypassing Judicial Process to Censor Him on Social Media

Sky News reported:

Russell Brand has accused the government of trying to “bypass” the judicial system after his YouTube channel was demonetised in the wake of sexual abuse allegations against him.

In a livestream video on Rumble the comedian also accused the “legacy media” of being in “lockstep” with each other to “support a state agenda” and “stamp on independent media voices.”

It comes after four women made allegations of rape, sexual assault and abuse against the star between 2006 and 2013 as part of an investigation by The Times, The Sunday Times and Channel 4’s Dispatches.

The 48-year-old denies the allegations.

ChatGPT Can Now Talk Back to You With an Eerily Human-Like Voice

Business Insider reported:

OpenAI is introducing a new feature to ChatGPT that could make the artificial intelligence (AI) tool feel even more human: the ability to talk to you.

The AI company announced Monday that, over the next two weeks, paying users of ChatGPT will be able to start interacting with the popular chatbot by voice so they can “engage in a back-and-forth conversation.”

The feature, enabled by a new text-to-speech model, allows users to choose from five different voices — named Juniper, Sky, Cove, Ember and Breeze — developed by work done with professional voice actors, the company said.

In a review of the new feature, the Wall Street Journal’s Joanna Stern described the voices as eerily human. In demos, the voices sound responsive and smooth, unlike the occasionally stilted responses given by smartphone assistants.

OpenAI warned that although the new voice technology, which creates “synthetic voices from just a few seconds of real speech,” offers a new tool for creativity, the feature can present risks such as “the potential for malicious actors to impersonate public figures or commit fraud.”

Supreme Court Considers Limits on White House Contacts With Social Media

Ars Technica reported:

The Supreme Court on Friday extended a stay of a lower-court order that would limit the Biden administration’s contacts with social media firms, giving justices a few more days to consider whether to block the ruling entirely. The court could rule by the middle of this week on the Biden administration motion in a case in which the states of Missouri and Louisiana allege that speech related to COVID-19 and other topics was illegally suppressed at the behest of government officials.

A stay issued Sept. 14 was scheduled to expire on Friday, but Justice Samuel Alito ordered that it be extended until Wednesday, Sept. 27, at 11:59 p.m. ET. Alito is the justice assigned to the 5th Circuit, the circuit in which an appeals court ruled that the White House and U.S. Federal Bureau of Investigation likely violated the First Amendment by coercing social media platforms into moderating content and changing their moderation policies.

While most of the original injunction’s restrictions were eliminated, the Biden administration asked the Supreme Court to block the one surviving prohibition. Under the recently revised injunction, Biden administration officials would be barred from taking any action to directly or indirectly “coerce or significantly encourage social-media companies to remove, delete, suppress, or reduce, including through altering their algorithms, posted social-media content containing protected free speech.”

Cash Will Be No Refuge Under CBDCs

ZeroHedge reported:

The world is headed toward Central Bank Digital Currencies (CBDCs) and everybody knows it, even the people who don’t want them (which at the moment looks to be most people).

But the policy-makers have decreed it be so, and CBDCs provide such a compelling opportunity for surveillance and social control that they are irresistible. That the fiat currency system is in the process of imploding makes it an imperative.

A tweet drew my attention to the timeline for Australia going cashless, and while that timeline doesn’t coincide with the launch of their CBDC, the RBA is diligently headed there (as are nearly all central banks globally).

One area of focus in The Bitcoin Capitalist is that we track all the national CBDC deployments and the myriad supranational policies and aspirations that go into them (we call it “Eye on EvilCoin”, for the Mr. Robot fans out there).

With this tweet, I became more intrigued by the call-to-action itself, because I see this a lot: the idea that the way to resist the CBDC is to keep using cash. This is not only wrong-headed, it’s self-defeating.

Facial Recognition Technology Jailed a Man for Days. His Lawsuit Joins Others From Black Plaintiffs

AP News reported:

Randal Quran Reid was driving to his mother’s home the day after Thanksgiving last year when police pulled him over and arrested him on the side of a busy Georgia interstate.

He was wanted for crimes in Louisiana, they told him, before taking him to jail. Reid, who prefers to be identified as Quran, would spend the next several days locked up, trying to figure out how he could be a suspect in a state he says he had never visited.

A lawsuit filed this month blames the misuse of facial recognition technology by a sheriff’s detective in Jefferson Parish, Louisiana, for his ordeal.

“I was confused and I was angry because I didn’t know what was going on,” Quran told The Associated Press. “They couldn’t give me any information outside of, ‘You’ve got to wait for Louisiana to come take you,’ and there was no timeline on that.”

Quran, 29, is among at least five Black plaintiffs who have filed lawsuits against law enforcement in recent years, saying they were misidentified by facial recognition technology and then wrongly arrested. Three of those lawsuits, including one by a woman who was eight months pregnant and accused of a carjacking, are against Detroit police.

Your Boss’s Spyware Could Train AI to Replace You

Wired reported:

You’ve probably heard the story: A young buck comes into a new job full of confidence, and the weathered older worker has to show them the ropes — only to find out they’ll be unemployed once the new employee is up to speed. This has been happening among humans for a long time — but it may soon start happening between humans and artificial intelligence (AI).

Countless headlines over the years have warned that automation isn’t just coming for blue-collar jobs, but that AI would threaten scores of white-collar jobs as well. AI tools are becoming capable of automating tasks and sometimes entire jobs in the corporate world, especially when those jobs are repetitive and rely on processing data. This could affect everyone from workers at banks and insurance companies to paralegals and beyond.

Carl Frey, an economist at Oxford University, coauthored a landmark study in 2013 that claimed AI could threaten nearly 50% of U.S. jobs in the coming decades. Frey says that he doesn’t think new AI tools like ChatGPT are going to automate jobs in this way because they still require human involvement and are often unreliable.

Still, many of the underlying factors that were outlined in that paper remain pertinent today. Considering the rapid pace at which AI is advancing, it’s hard to predict how it could soon be utilized and what it will be capable of.

Experts Disagree Over Threat Posed but Artificial Intelligence Cannot Be Ignored

The Guardian reported:

For some AI experts, a watershed moment in artificial intelligence (AI) development is not far away. And the global artificial intelligence safety summit, to be held at Bletchley Park in Buckinghamshire in November, therefore cannot come soon enough.

Ian Hogarth, the chair of the U.K. taskforce charged with scrutinising the safety of cutting-edge AI, raised concerns before he took the job this year about artificial general intelligence (AGI), or “God-like” AI.

Definitions of AGI vary but broadly it refers to an AI system that can perform a task at a human, or above human, level — and could evade our control.

Max Tegmark, the scientist behind a headline-grabbing letter this year calling for a pause in large AI experiments, told the Guardian that tech professionals in California believe AGI is close.

“A lot of people here think that we’re going to get to God-like artificial general intelligence in maybe three years. Some think maybe two years.”

He added: “Some think it’s going to take a longer time and won’t happen until 2030.” Which doesn’t seem very far away either.

Sep 22, 2023

What Big Tech Knows About Your Body + More

What Big Tech Knows About Your Body

The Atlantic reported:

If you were seeking online therapy from 2017 to 2021 — and a lot of people were — chances are good that you found your way to BetterHelp, which today describes itself as the world’s largest online therapy purveyor, with more than 2 million users. Once you were there, after a few clicks, you would have completed a form — an intake questionnaire, not unlike the paper one you’d fill out at any therapist’s office: Are you new to therapy? Are you taking any medications? Having problems with intimacy? Experiencing overwhelming sadness? Thinking of hurting yourself? BetterHelp would have asked you if you were religious, if you were LGBTQ, if you were a teenager. These questions were just meant to match you with the best counselor for your needs, small text would have assured you. Your information would remain private.

Except BetterHelp isn’t exactly a therapist’s office, and your information may not have been completely private. In fact, according to a complaint brought by federal regulators, for years, BetterHelp was sharing user data — including email addresses, IP addresses, and questionnaire answers — with third parties, including Facebook and Snapchat, for the purposes of targeting ads for its services.

It was also, according to the Federal Trade Commission, poorly regulating what those third parties did with users’ data once they got them. In July, the company finalized a settlement with the FTC and agreed to refund $7.8 million to consumers whose privacy, regulators claimed, had been compromised. (In a statement, BetterHelp admitted no wrongdoing and described the alleged sharing of user information as an “industry-standard practice.”)

All of this information is valuable to advertisers and to the tech companies that sell ad space and targeting to them. It’s valuable precisely because it’s intimate: More than perhaps anything else, our health guides our behavior. And the more these companies know, the more easily they can influence us. Over the past year or so, reporting has found evidence of a Meta tracking tool collecting patient information from hospital websites, and apps from Drugs.com and WebMD sharing search terms such as herpes and depression, plus identifying information about users, with advertisers.

Amazon’s Generative-AI-Powered Alexa Is as Big a Privacy Red Flag as Old Alexa

Ars Technica reported:

Amazon is trying to make Alexa simpler and more intuitive for users through the use of a new large language model (LLM). During its annual hardware event on Wednesday, Amazon demoed the generative AI-powered Alexa that users can soon preview on Echo devices. But in all its talk of new features and a generative-AI-fueled future, Amazon barely acknowledged the longstanding elephant in the room: privacy.

Amazon’s devices event featured a new Echo Show 8, updated Ring devices, and new Fire TV sticks. But most interesting was a look at how the company is trying to navigate generative AI hype and the uncertainty around the future of voice assistants. Amazon said users will be able to start previewing Alexa’s new features via any Echo device, including the original, in a few weeks.

One development with an immediately noticeable impact is Alexa learning to listen without the user needing to say “Alexa” first. A device will be able to use its camera, a user’s pre-created visual ID, and a previous setup with Alexa to determine when someone is speaking to it.

All this points to an Alexa that listens and watches with more intent than ever. But Amazon’s presentation didn’t detail any new privacy or security capabilities to make sure this new power isn’t used maliciously or in a way that users don’t agree with.

COVID Surge Shouldn’t Close Schools, Says Biden Education Secretary: ‘I Worry About Government Overreach’

The Hill reported:

Education Secretary Miguel Cardona says schools should not be shutting down due to surges in COVID-19 and expressed worry about government overreach.

“I worry about government overreach, sending down edicts that will lead to school closures because either folks are afraid to go in or are infected and can’t go,” Cardona told The Associated Press in an interview.

Despite the new wave of COVID-19 cases, “schools should be open, period,” Cardona said, according to the AP.

Cardona told the AP that in-person instruction “should not be sacrificed for ideology” and that school closures harmed community relationships.

AI Might Be Listening During Your Next Health Appointment

Axios reported:

Your doctor or therapist might not be the only one listening in during your next visit. Artificial intelligence may be tuning in as well.

Why it matters: Healthcare is racing to incorporate generative AI and natural language processing to help wrangle patient information, provide reliable care summaries and flag health risks. But the efforts come with quality and privacy concerns that people developing these tools acknowledge.

Driving the news: On Thursday, digital health company Hint Health announced a product in collaboration with OpenAI that will allow doctors to record an appointment, automatically transcribe the notes from it and generate a summary that can be embedded directly in the patient’s medical record.

Between the lines: It’s also among a growing number of AI applications interacting directly with patients.

What to watch: The use of AI in patient encounters raises a number of privacy concerns, as well as worries about the accuracy of the data and potential biases.

Americans Deeply Dissatisfied With Government and Both Parties: Study

Newsweek reported:

Public trust in the federal government has reached a historic low, and most Americans say they’re deeply unhappy with both major political parties and their choices in 2024 presidential candidates, according to a new report by the Pew Research Center.

Americans’ approval of Congress, the Supreme Court and other political institutions has been declining for years, driven by a sharp rise in political polarization as Democrats and Republicans increasingly view the opposite party with skepticism or outright disgust.

But the public’s trust in the American political system has hit a low not seen in several decades, the new Pew study found. Just 16% of U.S. adults said they trust the federal government, the lowest trust level in nearly 70 years of polling, according to Pew.

A Flood of New AI Products Just Arrived — Whether We’re Ready or Not

The Washington Post reported:

Big Tech launched multiple new artificial intelligence products this week, capable of reading emails and documents or conversing in a personal way. But even in their public unveilings, these new tools were already making mistakes — inventing information or getting basic facts confused — a sign that the tech giants are rushing out their latest developments before they are fully ready.

Google said its Bard chatbot can summarize files from Gmail and Google Docs, but users showed it falsely making up emails that were never sent. OpenAI heralded its new Dall-E 3 image generator, but people on social media soon pointed out that the images in the official demos missed some requested details. And Amazon announced a new conversational mode for Alexa, but the device repeatedly messed up in a demo for The Washington Post, including recommending a museum in the wrong part of the country.

Spurred by a hypercompetitive race to dominate the revolutionary “generative” AI technology that can write humanlike text and produce realistic-looking images, the tech giants are fast-tracking their products to consumers. Getting more people to use them generates the data needed to make them better, an incentive to push the tools out to as many people as they can. But many experts — and even tech executives themselves — have cautioned against the dangers of releasing largely new and untested technology.

The Great AI ‘Pause’ That Wasn’t

Axios reported:

The organizers of a high-profile open letter last March calling for a “pause” in work on advanced artificial intelligence lost that battle, but they could be winning a longer-term fight to persuade the world to slow AI down.

The big picture: Almost exactly six months after the Future of Life Institute’s letter — signed by Elon Musk, Steve Wozniak and more than 1,000 others — called for a six-month moratorium on advanced AI, the work is still charging ahead. But the ensuing massive debate deepened public unease with the technology.

Between the lines: In recent months, the AI conversation around the world has intensely focused on the social, political and economic risks associated with generative AI, and voters have been vocal in telling pollsters about their AI concerns.

Driving the news: The British government is gathering a who’s who of deep thinkers on AI safety at a global summit Nov. 1-2. The event is “aimed specifically at frontier AI,” U.K. Deputy Prime Minister Oliver Dowden told a conference in Washington on Thursday afternoon.

Sep 21, 2023

Mask Mandate Update: Full List of States With Some Restrictions in Place + More

Mask Mandate Update: Full List of States With Some Restrictions in Place

Newsweek reported:

New mask mandates have been imposed in healthcare facilities and other places in at least three states in recent days. While it doesn’t seem that widespread mask mandates will make a return, some businesses, schools and hospitals have temporarily required masks in response to reported COVID-19 cases in recent weeks.

Officials in three Bay Area counties — Contra Costa, Sonoma and San Mateo — announced that staff in healthcare facilities will be required to wear masks. The order will remain in effect through April 30.

The Cincinnati Children’s Hospital Medical Center announced that all staff will be required to wear masks on the premises beginning September 25.

Windsor Terrace Middle School in New York City’s Brooklyn borough has temporarily reinstated a mask mandate. The measure was prompted by a spike in cases among sixth-grade students at the school, Chalkbeat reported.

UN Deadlocked Over Regulating AI

Axios reported:

If you think the U.S. Congress is moving slowly on AI regulation, you’ll be waiting much longer for a global AI regulator or treaty.

The big picture: That’s the message out of the UN General Assembly in New York this week, as political leaders, tech companies and civil society gather to debate global challenges.

Why it matters: A plurality of AI experts surveyed by Axios support global guardrails for AI.

But in the absence of UN leadership, no organization has asserted authority over the AI safety debates led by groups ranging from the G-7 to the World Economic Forum and the Organization for Economic Cooperation and Development.

Driving the news: The leaders of four of the five permanent members of the UN Security Council skipped this week’s debate — only Biden showed up.

A downbeat UN Secretary-General António Guterres this week called for “some global entity” with AI monitoring and regulatory capacity and warned that “governments alone will not be able to tame” AI. But in a CNN interview, he admitted the UN “has no power at all” to bring superpowers together and warned that the world is headed towards a “great fracture.”

Your Face Belongs to Us by Kashmir Hill Review — Nowhere to Hide

The Guardian reported:

In the past few years powerful “machine learning” and cloud computing, allied to the growth of smartphones, selfies and social media, have made a facial recognition system able to identify anyone as inevitable as the atomic bomb was after the splitting of the uranium atom in 1938.

Just as that breakthrough led to a cascade with an obvious endpoint, so the preconditions for facial recognition — masses of pictures online and rapidly improving algorithms for determining what makes a face unique — have been there waiting for whoever was willing to ignore the socially controversial effects.

Overall, the problem is that we can’t figure out if pervasive, immediate facial recognition is a good or bad thing. Might it find kidnapped children? Hit-and-run drivers? Burglars? Save us embarrassment at social occasions? Certainly. Would it be abused by people looking to harm and harass, and by governments and police in authoritarian or democratic states?

Again, certainly. More importantly, can it be stopped? It’s hard to see how, and the New York Times journalist Kashmir Hill, author of Your Face Belongs to Us: The Secretive Startup Dismantling Your Privacy — not unreasonably — doesn’t offer any suggestions. The bomb is out of the bay. The question now is where it lands.

DeSantis Says He Won’t Support COVID Vaccine Funding if Elected President

CNN Politics reported:

Ron DeSantis on Wednesday said if elected president, he would not pay for further coronavirus vaccines for Americans.

“Certainly, we’re not going to fund them,” the Florida Republican governor said during a wide-ranging interview with ABC News recorded Wednesday from Midland, Texas, where he announced his domestic energy policy.

The comment comes as DeSantis has ramped up his attacks in recent weeks on former President Donald Trump, the front-runner for the GOP presidential nomination, over his administration’s response to the COVID-19 pandemic. As a presidential candidate, DeSantis has regularly warned that mandates and restrictions would return if the government is given the opportunity.

As some limited, local mask mandates have returned, DeSantis held a roundtable last week on the new COVID-19 shots from Pfizer/BioNTech and Moderna, where his surgeon general recommended that people under 65 not receive them.

The EU’s Quest to Fix the Internet Could Become a Privacy and Security Nightmare

TechRadar reported:

The European Union is notorious for its commitment to regulate the internet, for better or worse. The GDPR has been echoed by nations worldwide as the blueprint for protecting citizens’ digital privacy.

Since August 25, the Digital Markets Act and Digital Services Act have introduced new obligations for digital services. At the same time, the Chat Control proposal is gathering a lot of criticism for its attack on encryption in the name of online safety.

Among these highly debated pieces of legislation, one proposal may have flown under the radar: the revision of the EU’s digital identity law (eIDAS). The process started in October 2020 and is currently in trilogue negotiations as lawmakers seek to “fix” web security among member states. However, experts warn of unintended consequences: greater surveillance, censorship, and false security instead.

Australia to Hold Independent Inquiry Into Handling of COVID Pandemic

Reuters reported:

Australia’s center-left Labor government on Thursday said it would hold an independent inquiry into the handling of the COVID-19 pandemic to better prepare for future health crises.

Australia closed its international borders and locked down cities among other pandemic restrictions that helped keep infections and deaths far below levels in other comparable developed economies such as the United States and Britain.

A three-member panel, which includes an epidemiologist, public service expert and economist, will conduct the inquiry, Prime Minister Anthony Albanese told a media conference.

The opposition also criticized Albanese’s government for excluding from the inquiry state-level restrictions, such as the stop-start lockdowns by the Victoria government of Melbourne, which endured a total of 262 days in lockdown, one of the longest in the world.

Sep 20, 2023

Biden Administration Tried to Censor This Stanford Doctor, but He Won in Court + More

The Biden Administration Tried to Censor This Stanford Doctor, but He Won in Court

New York Post reported:

A federal court of appeals ruled earlier this month that the White House, surgeon general, CDC and FBI “likely violated the First Amendment” by exerting a pressure campaign on social media companies to censor COVID-19 skeptics — including Stanford epidemiologist Dr. Jay Bhattacharya.

“I think this ruling is akin to the second Enlightenment,” Bhattacharya told The Post. “It’s a ruling that says there’s a democracy of ideas. The issue is not whether the ideas are wrong or right. The question is who gets to control what ideas are expressed in the public square?”

The court ordered that the Biden administration and other federal agencies “shall take no actions, formal or informal, directly or indirectly” to coerce social media companies “to remove, delete, suppress or reduce” free speech.

Bhattacharya, a professor of medicine, economics and health research policy at Stanford University, co-authored the Great Barrington Declaration in the fall of 2020 with professors from Harvard and Oxford. The epidemiologists advocated for “focused protection” — safeguarding the most vulnerable Americans while cautiously allowing others to function as normally as possible — rather than broad pandemic lockdowns.

Meta Encryption Plan Will Let Child Abusers ‘Hide in the Dark,’ Says U.K. Campaign

The Guardian reported:

Mark Zuckerberg’s plan to roll out encrypted messaging on his platforms will let child abusers “hide in the dark”, according to a government campaign urging the tech billionaire to halt the move.

The Facebook founder has been under pressure from ministers over plans to automatically encrypt communications on his Messenger service later this year, with Instagram expected to follow soon after.

On Wednesday the Home Office launched a new campaign, including a statement from an abuse survivor, urging Zuckerberg’s Meta to halt its plans until it has safety plans in place to detect child abuse activity within encrypted messages.

A video to be distributed on social media features a message from one survivor, Rhiannon-Faye McDonald, who addresses her concerns to Mark Zuckerberg. “Your plans will let abusers hide in the dark,” she says as she urges the Meta CEO to “take responsibility.” McDonald, 33, was groomed online and sexually abused at the age of 13, although she did not encounter her abuser on Meta platforms.

Biden Admin Awards Over $4 Million in Grants to Programs That Target ‘Misinformation’

Reclaim the Net reported:

Since the start of September, the Biden administration’s National Science Foundation (NSF) and State Department have awarded grants totaling more than $4 million to programs, studies, and other initiatives that target “misinformation” — a term that the Biden admin has used to demand censorship of content that challenges the federal government’s COVID narrative.

These awards were granted as the Biden admin faces a major lawsuit for pressuring Big Tech to censor content that it deems to be misinformation.

An appeals court recently stated that the Biden regime violated the First Amendment when pushing social media platforms to censor and in an Independence Day ruling on this case, a judge described the Biden admin’s actions as “Orwellian.” The Supreme Court is now considering whether to hear the case.

This Ex-Googler Helped Launch the Gen AI Boom. Now He Wants to Reinvent Vaccines

VentureBeat reported:

Former Google AI researcher Jakob Uszkoreit was one of the eight co-authors of the seminal 2017 paper “Attention is All You Need,” which introduced the Transformers architecture that went on to underpin ChatGPT and most other large language models (LLMs).

The fact that he is the only one of the cohort that transitioned into biotech — co-founding Inceptive, which recently raised $100 million from investors like Nvidia and Andreessen Horowitz — is no surprise, Uszkoreit told VentureBeat in a recent interview.

The Palo Alto-based Inceptive, which was founded in 2021 by Uszkoreit and Stanford University’s Rhiju Das to create “biological software” using Transformers, has built an AI software platform that designs unique molecules made of mRNA, which Pfizer and BioNTech used to make their COVID-19 vaccines. Essentially, the company designs mRNAs with neural networks, tests the molecules, and licenses them to pharmaceutical companies that put them through clinical trials.

G20 Leaders Plot CBDCs and Digital IDs Worldwide

Reclaim the Net reported:

In a monumental step toward a digitized future, the convocation of the 20 largest world economies, famously known as the G20, has concluded with a commitment to usher in digital currencies and digital IDs across their territories.

This decision, however, has sparked major anxieties given its potential as a mechanism through which governments can keep tabs on their citizens’ spending habits and stifle opposition. The announcement came from a recent meeting held in New Delhi, under the mantle of India’s presidency.

Voices from across the globe have raised alarms over the potential grooming of cryptocurrencies through government-aided regulation, which could subsequently lead to the replacement of these decentralized digital currencies with state-controlled Central Bank Digital Currencies (CBDCs) that could override privacy and security attributes.

The idea of extensive monitoring of cryptocurrencies has ruffled feathers, with apprehensive individuals arguing that this might grant governments the master keys to manipulate social credit scores and control the monetary spending of citizens.

UN to Discuss How to Better Control the World at Annual General Assembly

ZeroHedge reported:

The UN’s charter outlines a grand mission statement of benevolent purpose, with its supposed root mission being the pursuit of global peace and security. It is therefore ironic that the institution relies on a host of fabricated crisis events and ongoing conflicts in order to remain relevant.

As UN Secretary-General António Guterres argues: “The UN is not a Vanity Fair, it is a political body.” And really, that is the problem. There is no use for the UN other than to act as a foil for the eventual imposition of a faceless and unaccountable world government.

The organization will always strive for more centralization as long as it exists; it does not care about peace, it cares about power. Thus, every new crisis event is seen as an opportunity for these people, not as a threat that needs to be solved.

While think tanks like the WEF and summits like Davos are designed to keep political and financial elites informed on the overall agenda ahead, the UN is more of a vehicle for public engagement and implementation. They are the “governing body” that is supposed to give legitimacy to the globalist obsession with world government. They are the friendly faces of the beast, and they come with many gifts and promises of justice and equity.

Everything We Know About Neuralink’s Brain Implant Trial

Wired reported:

Elon Musk’s brain implant company Neuralink has announced it is one step closer to putting brain implants in people.

Today, the company stated that it will begin recruiting patients with paralysis to test its experimental brain implant and that it has received approval from a hospital institutional review board.

Such boards are independent committees assembled to monitor biomedical research involving human subjects and flag any concerns to investigators. Neuralink is dubbing this “the PRIME Study,” an acronym for Precise Robotically Implanted Brain-Computer Interface.

Neuralink is one of a handful of companies developing a brain-computer interface, or BCI, a system that collects brain signals, analyzes them, and translates them into commands to control an external device. In May, the company said on X, formerly Twitter, that it had received approval from the Food and Drug Administration to conduct its first-in-human clinical study, but didn’t provide further details at the time.

This World-Class Airport Will Soon Go Passport-Free

CNN Travel reported:

Traveling through one of the world’s best airports is set to get even smoother next year. Starting in 2024, officials say Singapore’s Changi Airport will introduce automated immigration clearance, which will allow passengers to depart the city-state without passports, using only biometric data.

Biometric technology, along with facial recognition software, is already in use to some extent in Changi Airport at automated lanes at immigration checkpoints.

Biometrics will be used to create a “single token of authentication” that will be employed at various automated touch points — from bag drops to immigration clearance and boarding — eliminating the need for physical travel documents like boarding passes and passports.

Seamless travel has been catching on around the world and biometric identification could soon be the future of travel, observers say.

Several Bay Area Health Departments Issue New Mask Mandates, Amid Rising COVID Cases

KTVU FOX 2 reported:

Health officials in several Bay Area counties have issued new mask mandates as COVID-19 cases continued to rise and in preparation for the upcoming respiratory virus season.

This week, Contra Costa, Sonoma, and San Mateo counties all issued mask orders for healthcare personnel in hospitals and other patient care facilities. All three orders were set to go into effect on Nov. 1 and last through April 30.

The rule would only be applicable to healthcare workers in these settings and would not affect patients or visitors of healthcare facilities, said Anna Roth, director of Contra Costa Health Services.