World Economic Forum Pushes for Interoperability of Centralized Currency to Ensure Global ‘Success’
The World Economic Forum (WEF) is riding hard for central bank digital currencies (CBDCs). That, in and of itself, gives pause to critically-minded observers. But it's worth keeping up with how the WEF carries out this campaign aimed at the broadest possible adoption of CBDCs.
At this point, the "elevator pitch" pushed by this informal gathering of the world's most influential globalist elites has shifted from simply advocating in favor of this massively controversial form of money.
Now, the WEF wants to pretend that adopting, or planning to adopt, CBDCs is more or less a done deal, and to move on to the technical nitty-gritty. And yet, even while shifting the narrative this way, it is also pushing for some decidedly political policy decisions to be made by governments and regulators.
One of these is CBDC "interoperability," presented as a necessary precondition to making this centralized currency, government-controlled and tied to people's identities, a success.
Hundreds of Families Urge Schumer to Pass Children’s Online Safety Bill
Hundreds of parent advocates urged Senate Majority Leader Chuck Schumer (D-N.Y.) to pass the Kids Online Safety Act in a letter and full-page Wall Street Journal ad published Thursday.
The call to action builds on pressure from parents at last week’s Senate Judiciary Committee hearing with the CEOs of Meta, TikTok, Discord, Snap and X, the company formerly known as Twitter.
“We have paid the ultimate price for Congress’s failure to regulate social media. Our children have died from social media harms,” the parents wrote in the letter.
"Platforms will never make meaningful changes unless Congress forces them to. The urgency of this matter cannot be overstated. If the status quo continues, more children will die from preventable causes and from social media platforms' greed," they added.
London Underground Is Testing Real-Time AI Surveillance Tools to Spot Crime
Thousands of people using the London Underground had their movements, behavior, and body language watched by AI surveillance software designed to see if they were committing crimes or were in unsafe situations, new documents obtained by WIRED reveal. The machine learning software was combined with live CCTV footage to detect aggressive behavior and the brandishing of guns or knives, and to spot people falling onto Tube tracks or dodging fares.
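The documents don't describe the system's internals, but a trial like this typically runs a computer vision model over live camera frames and flags labeled events for a human operator to review. Below is a minimal, purely illustrative Python sketch of that loop; the `detector` interface, the label names, and the confidence threshold are all assumptions made for illustration, not details from TfL's system.

```python
# Illustrative sketch of a CCTV event-flagging loop. The detector interface,
# labels, and threshold are hypothetical; TfL's actual system is not public.

import cv2  # OpenCV, for reading frames from a video stream

FLAGGED_LABELS = {"weapon_brandished", "person_on_track", "fare_evasion", "aggression"}
CONFIDENCE_THRESHOLD = 0.8  # arbitrary illustrative cutoff

def review_stream(video_source: str, detector) -> None:
    """Scan a video stream and surface flagged events for human review."""
    capture = cv2.VideoCapture(video_source)
    frame_index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break  # end of stream
        # `detector` stands in for whatever vision model is deployed; it is
        # assumed to return (label, confidence) pairs for the frame.
        for label, confidence in detector.detect(frame):
            if label in FLAGGED_LABELS and confidence >= CONFIDENCE_THRESHOLD:
                # A real deployment would route this to an operator's queue
                # rather than acting automatically.
                print(f"frame {frame_index}: flagged '{label}' ({confidence:.2f})")
        frame_index += 1
    capture.release()
```

Even in this toy form, the expansion risk privacy experts describe below is visible in the code: broadening the system is as simple as adding entries to the flagged-label set or swapping in a more capable detector.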
Documents sent to WIRED in response to a Freedom of Information Act request detail how Transport for London (TfL) used a wide range of computer vision algorithms to track people's behavior while they were at the station. This is the first time the full details of the trial have been reported; they follow TfL's announcement in December that it will expand its use of AI to detect fare dodging to more stations across the British capital.
Privacy experts who reviewed the documents question the accuracy of object detection algorithms. They also say it is not clear how many people knew about the trial, and warn that such surveillance systems could easily be expanded in the future to include more sophisticated detection systems or face recognition software that attempts to identify specific individuals.
“While this trial did not involve facial recognition, the use of AI in a public space to identify behaviors, analyze body language, and infer protected characteristics raises many of the same scientific, ethical, legal, and societal questions raised by facial recognition technologies,” says Michael Birtwistle, associate director at the independent research institute the Ada Lovelace Institute.
Don’t Blame Zuckerberg: Why More Tech Regulation Would Lead to More Tech Censorship
There are many serious problems with social media today, and its outsized and still-growing grip on our culture and daily lives makes solving them a matter of paramount importance. The senators and the top critics of these platforms have identified some legitimate concerns. But so often these days we begin to slide toward the wrong solutions to real problems, in a direction that gives more power to those who feel it slipping away every time a random person can go "viral" or accrue a significant audience.
In this case, the boogeyman of the internet has become Section 230, part of a 1996 law that gives online service providers immunity from being sued over what users post on these platforms. Politicians on both sides of the aisle would like to gain leverage over these companies to push further regulation. And thousands of lawyers are surely salivating over what they'll get to do if these platforms lose their immunity.
Mark Zuckerberg and his management of his powerful collection of social media platforms, from Facebook to Instagram to WhatsApp, are not beyond criticism. He himself acknowledged in the hearing, as he has for years, that steps need to be taken to continue policing the vast amount of content that appears on the platforms.
But the short-sighted approach of removing Section 230 as a salve for the internet outrage du jour will backfire, because more tech regulation and fewer protections will surely lead to more tech censorship. We've seen the insidious ways these companies can deplatform and chill the speech of those deemed unacceptable. The Twitter Files, the Hunter Biden laptop and so much more: Americans would lose their ability to converse freely if the platforms become as liable as their users are.
Leading AI Companies Join New U.S. Safety Consortium: Biden Administration
Leading artificial intelligence (AI) companies joined a new safety consortium to play a part in supporting the safe development of generative AI, the Biden administration announced Thursday.
Microsoft, Alphabet’s Google, Apple, Facebook-parent Meta Platforms, OpenAI and others have joined the AI Safety Institute Consortium, a coalition focusing on the safe development and deployment of generative AI.
The newly formed consortium also includes government agencies, academic institutions and other companies like Northrop Grumman, BP, Qualcomm and Mastercard.
The group will be under the umbrella of the U.S. AI Safety Institute and will work toward the goals unveiled in President Biden's executive order, issued in late October, which focused on ensuring the safety of AI development while preserving data privacy. It will work on developing guidelines for "red-teaming, capability evaluations, risk management, safety and security and watermarking synthetic content."
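The consortium's guidelines are not yet published. As a concrete point of reference, one widely discussed technique for watermarking synthetic text biases generation toward a pseudorandom "green list" of tokens and later tests for that bias (in the spirit of Kirchenbauer et al., 2023). The sketch below shows only the detection side, with a toy hash-based green list; it illustrates the general idea and is not the institute's method.

```python
# Toy sketch of green-list watermark detection for generated text.
# A watermarking generator would bias sampling toward each step's green
# list; this detector measures how often the text lands on those lists.

import hashlib

def green_list(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    """Deterministically select a 'green' subset of the vocabulary,
    seeded by the previous token."""
    greens = set()
    for word in vocab:
        digest = hashlib.sha256((prev_token + "|" + word).encode()).digest()
        if digest[0] < int(256 * fraction):  # keep ~`fraction` of the vocab
            greens.add(word)
    return greens

def green_score(tokens: list[str], vocab: list[str]) -> float:
    """Fraction of tokens that fall on their step's green list.
    Unwatermarked text scores near `fraction`; watermarked text scores
    well above it, which is the statistical signal a detector tests for."""
    hits = sum(tokens[i] in green_list(tokens[i - 1], vocab)
               for i in range(1, len(tokens)))
    return hits / max(1, len(tokens) - 1)
```

Watermarking images and audio uses different machinery, but the consortium's task is the same in each case: agreeing on schemes that are detectable, hard to strip, and consistent across vendors.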
Kyrie Irving Suggests NYC Mayor, Vaccine Mandate Were to Blame for Disappointing Run With Nets
Kyrie Irving made a bizarre revelation in his return to Brooklyn on Tuesday night when frustrated fans sitting courtside questioned why the Dallas Mavericks guard did not perform at the same level while still playing for the Nets last season.
His response was New York City Mayor Eric Adams.
The comment seemingly points to Irving's turbulent 2021-2022 season, when he was ineligible to play in most of Brooklyn's home games because of his refusal to comply with New York City's COVID-19 vaccine mandate.
Florida Gov. DeSantis Blasts Colleges That Still Have COVID Vax Mandate
The COVID-19 pandemic officially ended a year ago this week when the U.S. Department of Health and Human Services gave governors a 90-day warning that the federal order authorizing emergency action to fight the virus was expiring.
Yet at least 68 colleges across the U.S. still mandate COVID vaccines for students, according to an activist group that seeks to repeal the directives.
In response, Florida Gov. Ron DeSantis called the mandates "ridiculous" and reminded college students and the public at large that Florida rejected such orders last year.
The Problem With Social Media Is That It Exists at All
The world would be a better place without social media.
I’m not talking about teenage suicide. This is not meant to channel the fury of Republican Sens. Lindsey Graham (S.C.) and Josh Hawley (Mo.) at the chief executives of TikTok, Meta, X (formerly Twitter) and the like for turning a profit off platforms where teens drive themselves to despair.
I make no claims as to whether TikTok might be addictive. Nor is this about the “harmful image exploitation” online and the proliferation of child sex abuse materials on social media that Democratic Sen. Amy Klobuchar (Minn.) wants to stop. It’s not about cracking down on the online illegal drug business.
This is a standard, no-frills proposition from the comparatively staid land of economics: All things considered, social media platforms detract from human welfare.
Several scholars have toyed with this hypothesis. But a group of economists from the University of Chicago, the University of California at Berkeley, Bocconi University in Milan and the University of Cologne come pretty close to nailing it. Basically, they measured what people would pay for these platforms not to exist. It turns out, people would pay a lot.
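To see the logic of that measurement in miniature, here is a back-of-the-envelope sketch with invented numbers: if each user values keeping their own account while the platform exists, but would also pay for the platform to disappear for everyone, then the platform's existence is a net welfare loss even though access looks individually valuable.

```python
# Toy welfare arithmetic with made-up numbers, illustrating the distinction
# between valuing access to a platform and valuing its existence.

users = 1_000_000

# Hypothetical survey-style estimates, in dollars per user:
wtp_keep_own_access = 50.0  # value of your own account, given the platform exists
wtp_platform_gone = 20.0    # what you'd pay for NOBODY to have the platform

# Conventional consumer surplus: add up individual access values.
naive_surplus = users * wtp_keep_own_access

# Welfare effect of existence: positive payments to make it vanish mean
# existence itself is a net loss, despite the positive access value.
welfare_of_existence = -users * wtp_platform_gone

print(f"Naive consumer surplus: ${naive_surplus:,.0f}")        # $50,000,000
print(f"Welfare from existence: ${welfare_of_existence:,.0f}") # $-20,000,000
```

The gap between the two numbers is the point: access can look individually valuable even when the platform's existence is collectively costly.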