
Big Brother News Watch

Feb 14, 2024

Over 70% of Service Members Say They Felt ‘Coerced’ Into Taking COVID Vaccine: Survey + More

Over 70% of Service Members Say They Felt ‘Coerced’ Into Taking COVID Vaccine: Survey

ZeroHedge reported:

Over 70% of individuals serving in the U.S. military who responded to an Epoch Times survey said they felt “coerced” into taking the COVID-19 vaccine and/or booster after the Pentagon issued a 2021 mandate to do so.

The survey, conducted last fall, spanned all branches of the military and included both enlisted and officer ranks. The average length of service was around 16 years.

One 20-year Army combat veteran told the outlet that he opposed the mandate. “I’m not a lab rat and neither are the people I work with,” he said, adding “While holding out [from taking the vaccines], I was forced to wear a mask and was often singled out for being unvaccinated.”

A majority of survey participants said they were “coerced” into receiving a vaccine and/or boosters. Nearly 95% of those who objected to the mandate said they faced reprisals, including verbal threats of punitive legal action, loss of promotion, and exclusion from career-enhancing schools.

ChatGPT Will Soon Be Able to Remember Your Conversations

Insider reported:

ChatGPT is getting a new feature: memory. OpenAI said Tuesday that it is testing a feature that will allow ChatGPT to remember things users discuss with the chatbot in future conversations.

The changes should make the chatbot, which amassed 100 million users within two months of its launch in late 2022, more conversational.

OpenAI said that it was taking steps to ensure ChatGPT won’t automatically log what it calls “sensitive information,” such as information about your health.

However, like everything users feed into the chatbot, the company said it may use your memories to train future versions of the AI model — so be careful what you type into it.

Harvard University Receives ‘Lifetime Censorship Award’ From Free Speech Group: ‘No One Is Safe From the Possibility of Censorship’

Boston Herald reported:

A chaotic and controversial stretch at Harvard University has led to a dubious distinction from a free speech watchdog group.

The Cambridge campus has received the “Lifetime Censorship Award” from the Foundation for Individual Rights and Expression. This comes after Harvard came in last on FIRE’s College Free Speech Rankings — achieving a worst-ever score last year.

The free speech group also noted that Harvard has punished faculty and students for their speech. Harvard joins Georgetown University, Yale University, Syracuse University, Rensselaer Polytechnic Institute, and DePaul University on FIRE’s list of Lifetime Censorship Award recipients.

“This year’s list goes to show that no one is safe from the possibility of censorship,” said FIRE President and CEO Greg Lukianoff. “Americans of all ages and professions are being pushed into a corner when trying to express themselves freely: Shut up or be shut up. Censorship is an abuse of authority and a poor substitute for honest dialogue, and FIRE is here to fight it every step of the way.”

FBI Reveals Controversial Spy Tool Foiled Terror Plot as Congress Debates Overhaul

Politico reported:

The FBI revealed it used a controversial foreign surveillance tool to foil a terrorist plot on U.S. soil last year, part of a series of last-minute disclosures it hopes will sway Congress as lawmakers debate overhauling the measure later this week.

The bureau shared three newly declassified instances with POLITICO in which its access to data collected under the digital spying authority — codified in Section 702 of the Foreign Intelligence Surveillance Act — allowed it to protect national security, including one in which it thwarted a “potentially imminent terrorist attack” against U.S. critical infrastructure last year.

The House is expected to vote as early as Thursday on whether to approve a major change to the foreign surveillance authority, which has faced backlash because it also sweeps in data from Americans. That change would require bureau analysts to acquire a warrant or court order before searching a database of emails, texts and other digital communications of foreigners for information on U.S. citizens.

The proposal has support from lawmakers in both parties, and the FBI is on a campaign to sway those who are undecided or willing to reconsider.

Second Circuit Set to Hear Oral Arguments in Rumble’s Free Speech Case Against New York’s Online Censorship Law

Reclaim the Net reported:

The upcoming hearing on February 16 before the United States Court of Appeals for the Second Circuit represents a pivotal moment in the fight for free speech online.

This court session will address the significant concerns raised by Eugene Volokh, a renowned First Amendment professor and legal blogger, alongside social media platforms Rumble and Locals. These parties have joined forces with the Foundation for Individual Rights and Expression (FIRE), a leading national group advocating for free speech, to challenge a contentious New York law.

Rumble’s Chairman and CEO, Chris Pavlovski, expressed a strong stance against the law, emphasizing Rumble’s dedication to safeguarding free speech.

Pavlovski remarked, “New York’s law commands social media platforms to crack down on a variety of forms of protected speech, and fighting to defend that speech is the very reason Rumble exists. We cannot let activist governments continue to chip away at the freedom of expression, which is one of the most basic of all human rights. We are grateful to our partners in this fight, Locals, Eugene Volokh, and FIRE, for helping to carry the torch of freedom with us.”

Google’s Gemini AI Keeps Your Conversations for up to 3 Years (Even If You Delete Them)

Gizmodo reported:

Have you got a secret you don’t want anyone to know? Don’t tell any of humanity’s fancy new AI-powered assistants because the companies behind these new tools are probably keeping your data a lot longer than you think.

Google’s Gemini, the AI assistant formerly known as Bard, has received rave reviews, with many people hailing it as head and shoulders above OpenAI’s ChatGPT. But if you plan on using Gemini, it might be a good idea to give the privacy policy a quick read-through.

Not only does Google explicitly warn users not to give Gemini any sensitive information they wouldn’t want a human reviewer to read, but Google is also retaining many of your questions to help make their tools better. In fact, everything you tell Gemini might be kept by the company for up to three years — even if you delete all your information from the app.

Microsoft and OpenAI Say Hacking Groups Are Using AI as Part of Cyberattack Efforts

Yahoo!Finance reported:

Microsoft (MSFT) and OpenAI released a report on Wednesday saying that hacking groups from China, Iran, North Korea, and Russia are increasingly probing the use of AI large language models (LLMs) to improve their chances of successfully launching cyberattacks.

According to the report, the state-affiliated groups are using AI to understand everything from satellite technology to how to develop malicious code that can evade detection by cybersecurity software.

“Cybercrime groups, nation-state threat actors, and other adversaries are exploring and testing different AI technologies as they emerge, in an attempt to understand potential value to their operations and the security controls they may need to circumvent,” the companies said in the report.

The EU’s Online Content Rulebook Isn’t Ready for Primetime

Politico reported:

“The time of big online platforms behaving like they are ‘too big to care’ has come to an end,” said Thierry Breton, whose role includes overseeing the new social media standards that come into full force on Feb. 17. “We now have clear rules.”

Despite Breton’s public confidence in the European Union’s landmark digital content playbook, internal power struggles, strained resources and limited expertise threaten to hamstring the legislation — with the European Parliament election mere months away.

EU politicians still hope the rules will thwart digital election interference, quell widespread political misinformation and show Europeans that Brussels can stand up to Big Tech. As things stand, though, the rulebook may have more bark than bite.

Feb 13, 2024

A Backroom Deal Looms Over High-Stakes U.S. Surveillance Fight + More

A Backroom Deal Looms Over a High-Stakes U.S. Surveillance Fight

WIRED reported:

Twice in the past decade, legislation limiting the United States government’s domestic surveillance powers sailed through the U.S. House of Representatives. Attached to bills that would ultimately become law, both of these pro-privacy amendments were killed off in the final hours of consideration — erased each time in secret meetings held among a select group of congressional power brokers. Capitol Hill sources familiar with ongoing negotiations over a top U.S. surveillance program fear House leaders may once again scrap popular civil-liberty-focused reforms.

Last week, House members became aware that closed-door discussions were ongoing at the highest levels concerning the latest pro-privacy reforms to gain widespread legislative support. Public reporting on the discussions, first disclosed by Politico, set off a firestorm of speculation over whether another deal may have been quietly struck to prolong a domestic surveillance program no longer assumed to have the support of a majority of Congress.

Sources with knowledge of ongoing negotiations over the future of Section 702 — a controversial but pivotal U.S. foreign surveillance program — say a host of pro-privacy reforms, including new warrant requirements for obtaining commercially available data, have gained serious traction among an anomalous coalition of progressives and conservatives otherwise at odds on most matters. WIRED granted these sources anonymity because they were not authorized to speak publicly about ongoing negotiations.

A source with knowledge of the 702 fight tells WIRED that last week House Speaker Mike Johnson and House Majority Leader Steve Scalise met privately about drafting a new bill to reauthorize the program — an attempt to somehow merge existing bills introduced separately in December by the House Judiciary and Intelligence committees. The history of genuine privacy legislation being killed off in these closed-door sessions immediately sparked concerns among reformers.

Google Vows to Use AI Models and Work With EU Anti-‘Disinformation’ Groups and Global ‘Fact-Checking’ Groups to Censor ‘Misinformation,’ ‘Hate’

Reclaim the Net reported:

Does the European Parliament need the “support” of a tech behemoth like Google for its elections? Google certainly thinks so, as does the EU.

And Google is doing it the best way it knows how: by manipulating information. A blog post on the giant’s site calls this “surfacing high-quality information to voters.”

Working with the EU’s various “anti-disinformation” groups and “fact-checkers” from around the world to facilitate censorship is also part of the promised “support package,” while the targets of this censorship will be the usual list of online bogeymen (as designated by Google and/or governments), real or imagined: manipulated media, hate, harassment, misinformation.

All this will have to be done at scale, Google notes, hence the promise of bringing in more AI, including large language models (LLMs), than ever.

U.S. Judge Blocks Ohio Law Restricting Children’s Use of Social Media

Reuters reported:

A federal judge on Monday prevented Ohio from implementing a new law that requires social media companies, including Meta Platforms’ (META.O) Instagram and ByteDance’s TikTok, to obtain parental consent before allowing children under 16 to use their platforms.

Chief U.S. District Judge Algenon Marbley in Columbus agreed with the tech industry trade group NetChoice that the law violated minors’ free speech rights under the U.S. Constitution’s First Amendment.

It marked the latest court decision blocking a state’s law designed to protect young people online as federal and state lawmakers look for ways to address rising concerns about the dangers posed by social media to the mental health of children.

Ohio Governor Mike DeWine, a Republican, called the ruling disappointing. He cited “overwhelming evidence that social media has a negative effect on the mental health of minors, including increases in depression and suicide-related behavior.”

Major Companies Are Reportedly Using This AI Tool to Track Slack and Teams Messages From More Than 3 Million Employees. Privacy Experts Are Alarmed.

Insider reported:

Aware, a software startup, is using AI to read employee messages sent across business communication platforms like Slack, Microsoft Teams, and Workplace by Meta. Its purpose: to monitor employee behavior in an attempt to understand risk.

Some of the biggest American companies — including Starbucks, Chevron, T-Mobile, Walmart, and Delta — use Aware to assess up to 20 billion individual messages across more than 3 million employees, the company said, per CNBC.

But even though workplace surveillance is nothing new, some experts have expressed concerns that using nascent AI technology to track employees can lead to faulty decision-making — and a privacy nightmare.
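For readers curious how this kind of message scanning works in broad strokes, the sketch below shows the general pattern the report describes: collect messages from a workplace platform, score each one, and flag those above a risk threshold. It is purely illustrative and is not Aware’s actual system; the Message class, RISK_TERMS list, and risk_score() heuristic are hypothetical stand-ins for whatever proprietary models such a tool would use.

```python
# Illustrative sketch of the pattern described above: scan workplace
# messages and flag the ones a scorer rates as risky. NOT Aware's
# actual system; the scorer below is a toy keyword heuristic.
from dataclasses import dataclass

@dataclass
class Message:
    author: str
    channel: str
    text: str

RISK_TERMS = {"harass", "threat", "leak", "insider"}  # assumed toy list

def risk_score(msg: Message) -> float:
    """Hypothetical scorer: fraction of risk terms present in the text."""
    text = msg.text.lower()
    hits = sum(term in text for term in RISK_TERMS)
    return hits / len(RISK_TERMS)

def flag_messages(messages: list[Message], threshold: float = 0.25) -> list[Message]:
    """Return messages whose score meets the alert threshold."""
    return [m for m in messages if risk_score(m) >= threshold]

if __name__ == "__main__":
    sample = [
        Message("alice", "#general", "Lunch at noon?"),
        Message("bob", "#ops", "If this leaks there will be an insider review."),
    ]
    for m in flag_messages(sample):
        print(f"FLAGGED [{m.channel}] {m.author}: {m.text}")
```

In a real deployment the keyword heuristic would be replaced by a trained classifier or an LLM call, which is exactly where the accuracy and privacy concerns raised by the experts come in.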

Brianna Ghey’s Mother Warns Tech Bosses More Children Will Die Without Action

The Guardian reported:

The mother of Brianna Ghey has called for her murder to be a “tipping point” in how society views “the mess” of the internet, warning that a generation of anxious young people will grow up lacking resilience.

Esther Ghey said technology companies had a “moral responsibility” to restrict access to harmful online content. She supports a total ban on social media access for under-16s — a move currently under debate in certain legislatures, including Florida in the U.S.

Talking to the Guardian, the 37-year-old food technologist said tech bosses were also culpable when it came to the wave of anxiety and mental health problems affecting children, which she said had led to “a complete lack of resilience in young people.”

She said tech companies should reflect not just on Brianna’s murder, but also on “the amount of young people that have taken their own lives” as a result of their harmful experiences online.

‘Behind the Times’: Washington Tries to Catch Up With AI’s Use in Healthcare

KFF Health News reported:

Lawmakers and regulators in Washington are starting to puzzle over how to regulate artificial intelligence in healthcare — and the AI industry thinks there’s a good chance they’ll mess it up.

“It’s an incredibly daunting problem,” said Bob Wachter, the chair of the Department of Medicine at the University of California-San Francisco. “There’s a risk we come in with guns blazing and overregulate.”

Already, AI’s impact on healthcare is widespread. The Food and Drug Administration has approved some 692 AI products. Algorithms are helping to schedule patients, determine staffing levels in emergency rooms, and even transcribe and summarize clinical visits to save physicians’ time. They’re starting to help radiologists read MRIs and X-rays. Wachter said he sometimes informally consults a version of GPT-4, a large language model from the company OpenAI, for complex cases.

Fertility Tracker Glow Fixes Bug That Exposed Users’ Personal Data

TechCrunch reported:

A bug in the online forum for the fertility tracking app Glow exposed the personal data of around 25 million users, according to a security researcher.

The bug exposed users’ first and last names, self-reported age groups (such as 13-18, 19-25, and 26 and older), self-described locations, the app’s unique user identifier (within Glow’s software platform) and any user-uploaded images, such as profile photos.

Security researcher Ovi Liber told TechCrunch that he found user data leaking from Glow’s developer API. Liber reported the bug to Glow in October and said Glow fixed the leak about a week later.

In a blog post published on Monday, Liber wrote that the vulnerability he found affected all of Glow’s 25 million users. Liber told TechCrunch that accessing the data was relatively easy.
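The report does not spell out the exact flaw, but leaks from this kind of developer API commonly come down to an endpoint that returns another user’s record without checking who is asking (an insecure direct object reference). The hypothetical Flask sketch below contrasts that failure mode with a fixed handler; the routes, fields, and auth logic are invented for illustration and are not Glow’s actual code.

```python
# Hypothetical illustration of how a developer API can leak profile
# data: the vulnerable handler trusts the requested user_id, while the
# fixed handler checks that the caller may see that record. NOT Glow's
# actual code; routes, fields, and auth logic are invented.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

USERS = {  # stand-in data store
    "u1": {"name": "Ada", "age_group": "26+", "location": "Berlin"},
    "u2": {"name": "Grace", "age_group": "19-25", "location": "Oslo"},
}

def caller_id() -> str:
    # Stand-in for real authentication (e.g. validating a session token).
    return request.headers.get("X-User-Id", "")

@app.get("/v1/profiles/<user_id>")          # vulnerable: no ownership check
def get_profile_vulnerable(user_id):
    profile = USERS.get(user_id) or abort(404)
    return jsonify(profile)                  # anyone can read anyone's data

@app.get("/v2/profiles/<user_id>")          # fixed: enforce authorization
def get_profile_fixed(user_id):
    if caller_id() != user_id:
        abort(403)                           # only the owner may read it
    profile = USERS.get(user_id) or abort(404)
    return jsonify(profile)
```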

EU Lawmakers Ratify Political Deal on Artificial Intelligence Rules

Reuters reported:

Two key groups of lawmakers at the European Parliament on Tuesday ratified a provisional agreement on landmark artificial intelligence rules ahead of a vote by the legislative assembly in April that will pave the way for the world’s first legislation on the technology.

Called the AI Act, the new rules aim to set the guardrails for a technology used in a broad swathe of industries, ranging from banking to cars to electronic products and airlines, as well as for security and police purposes.

The rules will also regulate foundation models or generative AI like the one built by Microsoft-backed OpenAI (MSFT.O), which are AI systems trained on large sets of data, with the ability to learn from new data to perform various tasks.

EU countries gave their backing earlier this month after France secured concessions to lighten the administrative burden on high-risk AI systems and offer better protection for business secrets. Big Tech however remained guarded, worried about the vague and general wording of some of the requirements and the impact of the law on innovation.

Feb 12, 2024

Aaron Rodgers Says He Has ‘Important Responsibility’ to Speak Out Against COVID, Vaccines + More

Aaron Rodgers Says He Has ‘Important Responsibility’ to Speak Out Against COVID, Vaccines

Fox News reported:

Aaron Rodgers has never been afraid to speak his mind, and he is not stopping any time soon. The four-time MVP found himself at the center of controversy when he said he was “immunized” from COVID-19 during the 2021 NFL season, leading people to believe he was vaccinated against the virus when he was, and is, not.

Rodgers has continued to be outspoken against vaccine mandates and says it is his responsibility to do so.

“In the end, you’re on a decision. You stand for something, you stand courageously for what you believe in, or the opposite side of that is you say nothing and you’re a coward. And I wasn’t willing to do that,” Rodgers said during his appearance on “The Joe Rogan Experience.”

Rodgers has battled back and forth with those who continue to rip his stance on COVID and vaccines. When former ESPN and MSNBC broadcaster Keith Olbermann said Rodgers suffered “Another #SuddenLisfranc due to failure to vaccinate,” making fun of his Achilles injury, Rodgers told him to get his “fifth booster.”

Big Brother Biden Wants AI to Control What You Read, Say and Think

New York Post reported:

Get ready, America: Joe “Big Brother” Biden wants to make sure you only see and hear what his minions think is appropriate. The Biden administration has been spending millions on R&D for AI-powered tools meant to sniff out “disinformation,” reports the House Subcommittee on the Weaponization of the Federal Government.

The AI would alert social media companies on what to squelch, enabling an industrial-scale version of the suppression efforts Biden’s enforcers and allies already provably engaged in around COVID, Hunter’s laptop, and other major true stories.

The money’s going to tech-development heavy hitters like MIT, the University of Wisconsin-Madison, and the University of Michigan under a program ominously named “Trust & Authenticity in Communication Systems,” part of the even more ominously named “Convergence Accelerator Program.”

It’s Biden’s aborted Ministry of Truth all over again but on steroids. Again and again, operators in the White House and multiple federal agencies used the implied threat of their regulatory and prosecutorial powers to make backroom calls to Facebook and Twitter and kill content they didn’t like.

Big Brother Is Watching in Big Apple With Sneaky New Plan to Spy on Drivers, Charge Them

Fox News reported:

New York City drivers, buckle up: Big Brother (aka the MTA) is keeping a watchful eye on you by installing cameras along city streets to track you. But why? Well, it all boils down to money, of course. The MTA is rolling out a controversial $15-per-day congestion fee for all drivers venturing south of 60th Street. They’ve even given this area of Manhattan a snazzy name: the toll congestion zone.

Now, let’s dive into the nitty-gritty. License plate readers have been strategically placed above FDR Drive at East 25th Street and on Route 9A (The West Side Highway) to keep tabs on drivers entering the congestion zone. This means that any driver who enters this zone will have to pay the fee, regardless of where they live or where they are going.
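As a back-of-the-envelope illustration of that logic (and emphatically not the MTA’s actual tolling system), the sketch below turns a stream of plate reads into at most one $15 charge per vehicle per day. The reader locations and fee come from the article; everything else is assumed.

```python
# Back-of-the-envelope sketch of the charging logic described above:
# plate reads inside the congestion zone trigger at most one $15 charge
# per vehicle per calendar day. Illustrative only, not the MTA's system.
from datetime import date

DAILY_FEE = 15.00  # dollars, per the article

def compute_charges(plate_reads):
    """plate_reads: iterable of (plate, date, reader_location) tuples."""
    charged = set()            # (plate, date) pairs already billed
    total_by_plate = {}
    for plate, day, _location in plate_reads:
        if (plate, day) not in charged:
            charged.add((plate, day))
            total_by_plate[plate] = total_by_plate.get(plate, 0.0) + DAILY_FEE
    return total_by_plate

reads = [
    ("ABC123", date(2024, 2, 12), "FDR Drive / E 25th St"),
    ("ABC123", date(2024, 2, 12), "Route 9A / West Side Hwy"),  # same day, no extra charge
    ("ABC123", date(2024, 2, 13), "FDR Drive / E 25th St"),
    ("XYZ789", date(2024, 2, 13), "Route 9A / West Side Hwy"),
]
print(compute_charges(reads))  # {'ABC123': 30.0, 'XYZ789': 15.0}
```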

As New York City’s streets come under the watch of license plate readers and surveillance cameras, drivers find themselves at the crossroads of convenience and scrutiny. The $15 congestion toll promises to fund transit improvements, but it also raises questions about fairness and future expansions. So, fellow commuters, keep your eyes on the road and a hand on your wallet, because Big Brother is definitely watching and charging you.

Amendment to Ban Vaccine Mandates on Airline Flights Passes U.S. Senate Committee

KTTN reported:

A U.S. Senate committee has passed an amendment that would outlaw COVID-19 vaccine mandates for airline passengers.

The measure was sponsored by Missouri U.S. Senator Eric Schmitt. He says the amendment would protect all Americans, regardless of vaccine status, from being forced to share private medical info with an airline before boarding a plane.

Schmitt added in a written statement that “draconian vaccine mandates” pushed by the Biden Administration and by Democrats “have no place in today’s world.” His amendment was added to the FAA reauthorization bill, which now goes before the full U.S. Senate.

After Court Victory for Freedom Convoy, Canadians Ready to Sue

The Epoch Times reported:

Several Freedom Convoy protesters, buoyed by a recent victory in Canadian federal court, said they’re preparing to sue the federal government, banks, and the police that brought the 2022 protest to a heated end. On Jan. 23, Federal Court Justice Richard Mosley issued a ruling against the federal government’s invocation of the Emergencies Act in response to the protests and blockades that gridlocked Canada’s capital Ottawa for weeks.

The government’s use of the act did “not bear the hallmarks of reasonableness — justification, transparency and intelligibility — and was not justified in relation to the relevant factual and legal constraints that were required to be taken into consideration,” Justice Mosley wrote in his ruling.

Justice Mosley’s decision was ultimately the result of court action by five plaintiffs who participated in the protest, two of whom had their bank accounts frozen.

Three of the plaintiffs — Mr. Jost, Mr. Gircys, and Mr. Cornell — said on Jan. 29 that they plan to take further legal action against “those in government, the financial institutions who froze people’s bank accounts, and the police officers who beat up and injured innocent Canadians.”

‘Existential Catastrophe’ May Loom as No Proof AI Is Controllable — Expert

Newsweek reported:

Artificial intelligence (AI) has the potential to cause an “existential catastrophe” for humanity, a researcher has warned.

Roman Yampolskiy, an associate professor of computer engineering and science at the Speed School of Engineering, University of Louisville, has conducted an extensive review of the relevant scientific literature, stating that he has found no proof AI can be controlled. And even if some partial controls are introduced, these will likely be insufficient, he argues.

As a result, the researcher is of the view that AI should not be developed without this proof. Despite the fact that AI may be one of the most important problems facing humanity, the technology remains poorly understood, poorly defined and poorly researched, according to Yampolskiy, who is an AI safety expert.

The researcher’s upcoming book, AI: Unexplainable, Unpredictable, Uncontrollable, explores the ways that AI has the potential to dramatically reshape society — perhaps not always to our advantage.

U.S. Judge Orders Elon Musk to Testify in SEC’s Twitter Probe

Reuters reported:

A federal judge ordered Elon Musk to testify again in the U.S. Securities and Exchange Commission’s probe of his $44 billion takeover of Twitter, giving the regulator and the billionaire a week to agree on a date and location for the interview.

U.S. Magistrate Judge Laurel Beeler’s order, issued on Saturday night, formalized a tentative ruling she made in December that sided with the regulator.

The SEC sued Musk in October to compel the Tesla (TSLA.O) and SpaceX CEO to testify as part of an investigation into his 2022 purchase of Twitter, the social media giant that he subsequently renamed X. Musk refused to attend an interview in September that was part of the probe, the SEC said.

Feb 09, 2024

World Economic Forum Pushes for Interoperability of Centralized Currency to Ensure Global ‘Success’ + More

World Economic Forum Pushes for Interoperability of Centralized Currency to Ensure Global ‘Success’

Reclaim the Net reported:

The World Economic Forum (WEF) is riding hard for central bank digital currencies (CBDCs). That, in and of itself, gives pause to critically-minded observers. But it’s worth keeping up with how the WEF carries out this campaign, which aims at the broadest possible CBDC adoption.

At this point, the “elevator pitch” pushed by this informal gathering of the most influential globalist elites is to move beyond simply advocating in favor of this massively controversial form of money.

Now, the WEF wants to pretend that adopting, or planning to adopt, CBDCs is more or less a done deal, and to move on to the technical nitty-gritty. And yet, even while shifting the narrative this way, it is still pushing for decidedly policy (and political) decisions that would have to be made by governments and regulators.

One of them is CBDC “interoperability,” presented as a necessary precondition for making this centralized currency (government-controlled and tied to people’s identities) successful.

Hundreds of Families Urge Schumer to Pass Children’s Online Safety Bill

The Hill reported:

Hundreds of parent advocates urged Senate Majority Leader Chuck Schumer (D-N.Y.) to pass the Kids Online Safety Act in a letter and full-page Wall Street Journal ad published Thursday.

The call to action builds on pressure from parents at last week’s Senate Judiciary Committee hearing with the CEOs of Meta, TikTok, Discord, Snap and X, the company formerly known as Twitter.

“We have paid the ultimate price for Congress’s failure to regulate social media. Our children have died from social media harms,” the parents wrote in the letter.

“Platforms will never make meaningful changes unless Congress forces them to. The urgency of this matter cannot be overstated. If the status quo continues, more children will die from preventable causes and from social media platforms’ greed,” they wrote in the letter.

London Underground Is Testing Real-Time AI Surveillance Tools to Spot Crime

WIRED reported:

Thousands of people using the London Underground had their movements, behavior, and body language watched by AI surveillance software designed to see if they were committing crimes or were in unsafe situations, new documents obtained by WIRED reveal. The machine learning software was combined with live CCTV footage to detect aggressive behavior, guns or knives being brandished, as well as looking for people falling onto tube tracks or dodging fares.
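At a very high level, that kind of system follows a simple loop: pull frames from a CCTV feed, run a detection model on each frame, and raise an alert when a watched label appears with enough confidence. The sketch below illustrates only that loop and is not TfL’s software; detect_objects(), the alert labels, and the feed name are hypothetical placeholders for whatever trained models and cameras the trial actually used.

```python
# Minimal sketch of the frame-by-frame pattern described above: read
# frames from a CCTV feed, run a detector, and alert on watched labels.
# NOT TfL's system; detect_objects() is a hypothetical stand-in.
import cv2  # pip install opencv-python

ALERT_LABELS = {"knife", "gun", "person_on_track"}  # assumed categories

def detect_objects(frame):
    """Placeholder detector: a real system would run a trained vision
    model here and return (label, confidence) pairs for the frame."""
    return []  # no detections in this stub

def monitor(feed_url: str, max_frames: int = 1000):
    cap = cv2.VideoCapture(feed_url)
    frame_idx = 0
    while cap.isOpened() and frame_idx < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        for label, confidence in detect_objects(frame):
            if label in ALERT_LABELS and confidence > 0.8:
                print(f"ALERT frame {frame_idx}: {label} ({confidence:.2f})")
        frame_idx += 1
    cap.release()

if __name__ == "__main__":
    monitor("station_camera.mp4")  # hypothetical recorded feed
```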

Documents sent to WIRED in response to a Freedom of Information Act request detail how Transport for London (TfL) used a wide range of computer vision algorithms to track people’s behavior while they were at the station. It is the first time the full details of the trial have been reported, and it follows TfL’s statement in December that it will expand its use of AI to detect fare dodging to more stations across the British capital.

Privacy experts who reviewed the documents question the accuracy of object detection algorithms. They also say it is not clear how many people knew about the trial, and warn that such surveillance systems could easily be expanded in the future to include more sophisticated detection systems or face recognition software that attempts to identify specific individuals.

“While this trial did not involve facial recognition, the use of AI in a public space to identify behaviors, analyze body language, and infer protected characteristics raises many of the same scientific, ethical, legal, and societal questions raised by facial recognition technologies,” says Michael Birtwistle, associate director at the independent research institute the Ada Lovelace Institute.

Don’t Blame Zuckerberg: Why More Tech Regulation Would Lead to More Tech Censorship

The Hill reported:

There are many serious problems with social media today — and its outsized and still-growing grip on our culture and daily lives make solving them of paramount importance. The senators and the top critics of these platforms have identified some legitimate concerns. But so often these days we begin to slide toward the wrong solutions to real problems — in a direction that gives more power to those who feel it slipping away every time a random person can go “viral” or accrue a significant audience.

In this case, the boogeyman of the internet has become Section 230, part of a 1996 law that gives online service providers immunity from being sued over what users on these platforms post. Politicians on both sides of the aisle would like to gain leverage over these companies to push further regulation. And thousands of lawyers are surely salivating about what they’ll get to do if these platforms lose their immunity.

Mark Zuckerberg and his management of his powerful collection of social media companies — from Facebook to Instagram to WhatsApp — are not without criticism. He himself acknowledged in the hearing, and has for years, that there are efforts that need to be taken to continue policing the vast amount of content that appears on the platforms.

But the short-sighted approach to removing Section 230 as a salve for the internet outrage du jour will backfire because more tech regulation and fewer protections will surely lead to more tech censorship. We’ve seen the insidious ways these companies can deplatform and chill the speech of those who are deemed unacceptable. The Twitter Files, the Hunter Biden laptop and so much more — Americans would lose their ability to converse freely if the platforms become liable like the users are.

Leading AI Companies Join New U.S. Safety Consortium: Biden Administration

The Hill reported:

Leading artificial intelligence (AI) companies joined a new safety consortium to play a part in supporting the safe development of generative AI, the Biden administration announced Thursday.

Microsoft, Alphabet’s Google, Apple, Facebook-parent Meta Platforms, OpenAI and others have joined the AI Safety Institute Consortium, a coalition focusing on the safe development and deployment of generative AI.

The newly formed consortium also includes government agencies, academic institutions and other companies like Northrop Grumman, BP, Qualcomm and Mastercard.

The group will be under the umbrella of the U.S. AI Safety Institute and will work toward achieving goals unveiled by President Biden’s executive order issued in late October that focused on ensuring the safety of AI development while preserving the privacy of data. The group will work on developing guidelines for “red-teaming, capability evaluations, risk management, safety and security and watermarking synthetic content.”

Kyrie Irving Suggests NYC Mayor, Vaccine Mandate Were to Blame for Disappointing Run With Nets

Fox News reported:

Kyrie Irving made a bizarre revelation in his return to Brooklyn on Tuesday night when frustrated fans sitting courtside questioned why the Dallas Mavericks guard did not perform at the same level while still playing for the Nets last season.

His response was New York City Mayor Eric Adams.

The comment seemingly points to Irving’s turbulent 2021-2022 season when he was ineligible to play in most of Brooklyn’s home games because of his refusal to get vaccinated against COVID-19 as mandated in New York City.

Florida Gov. DeSantis Blasts Colleges That Still Have COVID Vax Mandate

Tampa Free Press reported:

The COVID-19 pandemic officially ended a year ago this week when the U.S. Department of Health and Human Services gave governors a 90-day warning that the federal order authorizing emergency action to fight the virus was expiring.

Yet at least 68 colleges across the U.S. still mandate COVID vaccines for students, according to an activist group that seeks to repeal the directives.

In response, Florida Gov. Ron DeSantis called that “ridiculous” and reminded college kids and the public at large how Florida rejected such orders last year.

The Problem With Social Media Is That It Exists at All

The Washington Post reported:

The world would be a better place without social media.

I’m not talking about teenage suicide. This is not meant to channel the fury of Republican Sens. Lindsey Graham (S.C.) and Josh Hawley (Mo.) at the chief executives of TikTok, Meta, X (formerly Twitter) and the like for turning a profit off platforms where teens drive themselves to despair.

I make no claims as to whether TikTok might be addictive. Nor is this about the “harmful image exploitation” online and the proliferation of child sex abuse materials on social media that Democratic Sen. Amy Klobuchar (Minn.) wants to stop. It’s not about cracking down on the online illegal drug business.

This is a standard, no-frills proposition from the comparatively staid land of economics: All things considered, social media platforms detract from human welfare.

Several scholars have toyed with this hypothesis. But a group of economists from the University of Chicago, the University of California at Berkeley, Bocconi University in Milan and the University of Cologne come pretty close to nailing it. Basically, they measured what people would pay for these platforms not to exist. It turns out, people would pay a lot.