Big Brother News Watch
Social Media Fact Checkers Claim Their Work Isn’t Censorship. Here’s Why It Is. + More
Social Media Fact Checkers Claim Their Work Isn’t Censorship. Here’s Why It Is.
There’s good news and bad. The good news first: “fact-checkers” that masquerade as unbiased and accurate moderators of content — while actually serving as unreliable and bias-prone tools of censorship — are now widely enough recognized as such to trigger a reaction from some prominent actors.
The bad news: these “fact-checkers” are reacting by doubling down, casting their role as something positive and justified.
Because there are no facts to support this attitude, one of the key “fact-checkers” is hiding behind an opinion piece. But the claim is there: “Fact-checking is not censorship,” a post on Poynter wants you to believe.
According to Facebook (Meta) CEO Mark Zuckerberg, posts that get fact-checked experience a 95% drop in clicks. In other words, even if this content is not outright removed, it is made virtually invisible. That’s censorship by any other name.
Former CU School of Medicine Research Director Sues Over COVID Vaccine Mandate
A former research director at the University of Colorado School of Medicine has filed a lawsuit against the university, alleging religious discrimination after being terminated for refusing to comply with the institution’s COVID-19 vaccination policy.
Alyse Brennecke, who served as the director of clinical research in the Department of Obstetrics and Gynecology in the Division of Gynecological Oncology for six years, was fired in October 2021 amidst a dispute over the vaccine mandate. “It was jab or job,” said Brad Bergford, Brennecke’s attorney, who emphasized the alleged ultimatum faced by his client.
The university swiftly denied Brennecke’s religious exemption request and notified her that failure to provide proof of COVID-19 vaccination would result in further action, potentially leading to termination.
In a lawsuit filed on April 2, she argues the university’s denial violated rights outlined in Title VII, emphasizing that employers are required to seek accommodations for sincerely held religious beliefs.
Brennecke is seeking monetary damages for loss of pay, emotional suffering and other losses. While she is the sole plaintiff in her lawsuit, another lawsuit against the university involves at least 11 unnamed plaintiffs, including physicians, nurses and administrative staff at the Anschutz Medical Campus.
The Future of AI Will Run on Amazon, Company CEO Says
Less than two weeks after rolling back one of its most ambitious artificial intelligence projects — a cashierless checkout technology called Just Walk Out — Amazon CEO Andy Jassy said in an annual shareholder letter published Thursday that he’s confident the future of the company’s biggest breakthroughs for customers will come from generative AI.
While Amazon has widely been viewed by consumers and the market as falling behind on AI, Jassy said in his letter that he’s “optimistic that much of this world-changing AI will be built on top of AWS,” or Amazon Web Services, the company’s cloud computing business that many of the world’s digital businesses already rely on to run.
In the letter, Jassy lays out the company’s strategy on generative AI, describing how it is focused less on building consumer-facing applications to compete directly with popular tools like OpenAI’s ChatGPT and more on building the underlying “foundational” AI models and selling them to enterprise customers, which Jassy said already include Delta Air Lines, Siemens and Pfizer.
JP Morgan Chase Cashes In on Customer Data
The largest U.S. bank, and the world’s largest by market capitalization, JP Morgan Chase is moving forward with another way to monetize its clients — by giving advertisers access to their spending data for targeted ads.
One would have thought that something of the kind was already in full swing. But given the glacial speed at which giant financial institutions move when it comes to introducing new features, it’s perhaps not entirely surprising that only now is Chase allowing businesses to make ad money directly off the data belonging to the bank’s 80 million customers.
To make this possible, a platform called Chase Media Solutions is now available to brands that want to use the transaction data the bank harvests from customers to “fine-tune” campaigns, such as “personalized” offers and incentives.
House Republicans Revolt Against Spy Agency Bill, Signaling Trouble for Johnson
A small faction of House Republicans is once again blocking key legislation and posing a critical test of Speaker Mike Johnson’s ability to hold on to his gavel.
And their actions threw the House once more into chaos, as Republicans sniped among themselves and some far-right members threatened to let a key section of the Foreign Intelligence Surveillance Act — a post-9/11 expansion that strengthened the surveillance powers of U.S. intelligence services — expire on April 19.
Hard-liners had telegraphed that they would sink the procedural vote if the House Rules Committee did not include a change to the legislation reshaping how those services surveil malicious foreign actors — one ensuring that they don’t spy without a warrant on U.S. citizens swept up in the communications-gathering.
Privacy Talks Are Heating Up in Congress. Here’s What to Watch For.
Congressional negotiations over data privacy and children’s online safety took a notable step forward this week as House and Senate leaders unveiled bipartisan proposals and started ramping up their consideration of the measures.
Most notably, Sen. Maria Cantwell (D-Wash.) and Rep. Cathy McMorris Rodgers (R-Wash.) struck a deal on a comprehensive privacy bill, and House lawmakers unveiled a companion to the Kids Online Safety Act, raising the prospects that both could still move this Congress.
The House is scheduled to debate those and other tech measures at a hearing next week, and the Senate could soon follow suit. But as in years past, the same pesky friction points that have long bogged down talks may surface again.
House and Senate lawmakers for years have tussled over whether to prioritize broader privacy legislation or protections for kids, since taking up both has at times appeared unachievable.
One issue to watch: The House’s new version of the Kids Online Safety Act includes key changes that could complicate negotiations with the Senate.
DuckDuckGo Is Taking Its Privacy Fight to Data Brokers
For more than a decade, DuckDuckGo has railed against Google’s extensive online tracking. Now the privacy-focused web search and browser company has another target in its sights: the sprawling, messy web of data brokers that collect and sell your data every single day.
Today, DuckDuckGo is launching a new browser-based tool that automatically scans data broker websites for your name and address and requests that they be removed. Gabriel Weinberg, the company’s founder and CEO, says the personal-information-removal product is the first of its kind where users don’t have to submit any of their details to the tool’s owners.
The service will make the requests for information to be removed and then continually check if new records have been added, Weinberg says. “We’ve been doing it to automate it completely end-to-end, so you don’t have to do anything.”
An OpenAI Investor Says TikTok Is China’s ‘Programmable Fentanyl,’ a CCP-Controlled Tool Used to Manipulate U.S. Citizens
Billionaire and early OpenAI backer Vinod Khosla says he supports the forced divestiture of the social media platform TikTok from its Chinese parent company, ByteDance. In March, the House passed a bill to ban TikTok in the U.S. if ByteDance didn’t sell its U.S. operations to non-Chinese owners.
“Neither I nor my firm stands to gain or lose anything on the back of this bill’s outcome, but I can see how TikTok can be weaponized by a foreign adversary,” Khosla wrote in an op-ed for the Financial Times on Tuesday.
In the op-ed, Khosla accused China of perpetuating double standards since Chinese consumers use TikTok’s Chinese variant, Douyin. And unlike TikTok, Douyin limits users aged 14 and under to just 40 minutes a day on the platform.
“Spinach for Chinese kids, fentanyl — another chief export of China’s — for ours,” Khosla said. “Worse still, TikTok is a programmable fentanyl whose effects are under the control of the CCP.”
Morrison’s COVID Measures a ‘Grotesque Overreaction’ to a ‘Relatively Mild Pandemic,’ Tony Abbott Says
The former prime minister Tony Abbott has described the Morrison government’s COVID response as a “grotesque overreaction” to a “relatively mild pandemic”, adding he reluctantly got vaccinated because he “didn’t want anyone to have an excuse for keeping us locked up any longer than was absolutely necessary.”
Abbott, who also served as a health minister under the Howard government, clarified he was not opposed to vaccinations but used a feminist slogan — “my body, my choice” — to voice his opposition to vaccine mandates in a podcast hosted by Graham Hood, a former leader of the anti-vaccine mandate movement.
“And yet, that certainly wasn’t the approach that health authorities adopted at the time,” he said. It is not the first time Abbott has criticized Australia’s response to the global pandemic.
Rand Paul Claims ‘Smoking Gun’ Ties Fauci, NIH to Research With ‘Desire’ to Create COVID-Type Virus + More
Rand Paul Claims ‘Smoking Gun’ Ties Fauci, NIH to Research With ‘Desire’ to Create COVID-Type Virus
After Sen. Rand Paul, R-Ky., sent letters to 15 federal agencies requesting information on their purported newfound connections to a 2018 grant proposal that sought to experiment with a COVID-19-type microbe, the lawmaker told Fox News the development is the “smoking gun” critics had long sought.
Paul claimed the developments — to which he credited a Marine Corps whistleblower — tie the National Institutes of Health to the research and prove former National Institute of Allergy & Infectious Diseases Director Anthony Fauci was untruthful in his denials before Congress.
The lawmaker, a doctor of ophthalmology who has been investigating COVID-19’s origins since the height of the pandemic, blasted the feds for allegedly keeping the research from the public. “Yeah, we found out about this first from a brave Marine who reported that this research — was a grant proposal back in 2018 — would have allowed Wuhan Institute to create a virus very similar to what COVID-19 turned out to be,” he said.
“But we only found out about this from a whistleblower — nobody else in government ever informed us, including Anthony Fauci.” Paul has long sparred with Fauci, who joined NIH in 1968, was appointed by former President Reagan to lead the NIAID and retired in December 2022.
Jim Jordan Summons Ex-Biden Admin Officials to House Hearing on Internet Censorship
House Judiciary Committee chairman Jim Jordan invited three former Biden White House officials to appear at a May 1 public hearing to discuss their pressure on companies to censor online content — as the Supreme Court considers whether the administration violated the First Amendment by pressuring platforms such as Facebook to yank posts and videos.
Jordan (R-Ohio) invited former White House director of digital strategy Rob Flaherty, former COVID-19 coordinator Andy Slavitt and former COVID-19 digital director Clarke Humphrey after Flaherty and Slavitt flouted Jordan’s earlier subpoenas requiring their testimony in closed-door depositions.
“The Committee on the Judiciary is continuing to conduct its investigation into how and to what extent the Executive Branch has coerced and colluded with companies and other intermediaries to censor speech,” Jordan wrote to the group.
Jordan, who chairs the Subcommittee on the Weaponization of the Federal Government, issued legally binding subpoenas in November requiring that Flaherty and Slavitt appear at depositions in January, but the pair elected not to appear.
Supreme Court justices last month heard oral arguments in a lawsuit brought by Missouri and Louisiana against the Biden administration for pressuring social media companies to remove alleged misinformation, especially about COVID-19 vaccines.
House Panel to Hold Hearing on Privacy, Kids Safety Bills
The House Energy and Commerce Committee will hold a hearing next week on several technology policy bills, including a newly unveiled comprehensive data privacy bill and kids online safety bills, the committee announced late Tuesday. Wednesday’s hearing will include discussions of the American Privacy Rights Act, which was released Sunday by committee Chair Cathy McMorris Rodgers (R-Wash.) and Senate Commerce Committee Chair Maria Cantwell (D-Wash.).
The bill would set in place regulations around how companies collect and use Americans’ data, and it would preempt state laws that have been enacted in recent years in lieu of federal guidelines.
The hearing will also cover an update to the Children’s Online Privacy Protection Act (COPPA) and the Kids Online Safety Act (KOSA). House versions of the bipartisan bills, which advanced out of the Senate Commerce Committee in July, were introduced Tuesday, led by Rep. Kathy Castor (D-Fla.).
COPPA 2.0 would update privacy protections for children online, adding regulations around how data is collected and used by tech companies for users ages 16 and under. It would also ban targeted advertising practices. KOSA would add regulations that aim to mitigate concerns about the use of certain tools and features and their impact on children’s mental health.
Two Tribal Nations Sue Social Media Companies Over Native Youth Suicides
Two tribal nations are accusing social media companies of contributing to the disproportionately high rates of suicide among Native American youth.
Their lawsuit filed Tuesday in Los Angeles County court names Facebook and Instagram’s parent company Meta Platforms; Snapchat’s Snap Inc.; TikTok parent company ByteDance; and Alphabet, which owns YouTube and Google, as defendants.
“Enough is enough. Endless scrolling is rewiring our teenagers’ brains,” added Gena Kakkak, chairwoman of the Menominee Indian Tribe of Wisconsin. “We are demanding these social media corporations take responsibility for intentionally creating dangerous features that ramp up the compulsive use of social media by the youth on our Reservation.”
Their lawsuit describes “a sophisticated and intentional effort that has caused a continuing, substantial, and long-term burden to the Tribe and its members,” leaving scarce resources for education, cultural preservation and other social programs.
Anxiety and Depression Are Spiking Among Young People. No One Knows Why.
State and local governments across the country are scrambling to find new strategies to slow an epidemic of kids’ mental illness that exploded during the pandemic.
But there’s a problem: No one knows what’s causing the spike. Even after the isolation and fear COVID wrought dissipated, levels of anxiety and depression remain sky-high.
Governments are forging ahead anyway, conducting a nationwide experiment in whatever ideas seem promising. That could ultimately help determine what works and save a generation. But some who treat children worry the lack of evidence to support many of the approaches threatens to waste time and money — or could even make matters worse.
Theories are plentiful. Depending on the researcher, doctor, therapist, lawmaker or business leader who’s talking, the epidemic level of children and teens’ mental illness is caused by loneliness, social media, the opioid-fueled destruction of families, social isolation from smartphones, climate change’s existential threat, political rancor, overactive parenting, phone-induced sleep deprivation, long COVID, the decline of churches and other social institutions, bad diets or environmental toxins.
Congress Bribes Itself to Renew Dystopian FISA ‘Sham Reforms’ That Actually ‘Codify Status Quo’
Late last year, Congress elected to punt on renewing FISA — the Foreign Intelligence Surveillance Act, designed to surveil terrorists in foreign countries and since horrendously abused by the U.S. intelligence community to target Americans, including former President Donald Trump.
Now, they have nine days to come up with a permanent replacement. To that end, House Speaker Mike Johnson put forth “RISAA” — a bill backed by Ohio Rep. Mike Turner and the intelligence committee that just passed through the House Rules Committee — with a final floor vote likely to take place on Thursday.
Privacy hawks, however, point out that it’s a steaming pile of shit with no meaningful language to protect privacy rights — except for members of Congress, who gave themselves a carve-out that requires the FBI to notify and seek consent from Congress before spying on them.
What’s more, critics say the RISAA essentially codifies surveillance abuses into law. Under Section 702 of the FISA, the government is authorized to gather foreigners’ communications if they have been flagged in connection with national security matters. The communications can be gathered even if the target was speaking about, or with, Americans.
How to Stop Your Data From Being Used to Train AI
If you’ve ever posted something to the internet — a pithy tweet, a 2009 blog post, a scornful review, or a selfie on Instagram — it has most likely been slurped up and used to help train the current wave of generative AI. Large language models, like ChatGPT, and image creators are powered by vast reams of our data. And even if it’s not powering a chatbot, the data can be used for other machine-learning features.
Tech companies have scraped vast swathes of the web to gather the data they claim is needed to create generative AI — with little regard for content creators, copyright laws, or privacy. On top of this, increasingly, firms with reams of people’s posts are looking to get in on the AI gold rush by selling or licensing that information. Looking at you, Reddit.
However, as the lawsuits and investigations around generative AI and its opaque data practices pile up, there have been small moves to give people more control over what happens to what they post online. Some companies now let individuals and business customers opt out of having their content used in AI training or being sold for training purposes.
AI Companies Would Have to Fess Up on What They Use to Train AI Under Proposed Law
Rep. Adam Schiff, a Democrat from California, proposed a new bill on Tuesday that would force AI companies to disclose what data was used to train their models. And while it’s already being celebrated by major players in the entertainment industry, it’s almost certainly going to upset big AI companies like OpenAI, which is being sued by the New York Times for copyright infringement.
Officially known as the Generative AI Copyright Disclosure Act, the proposed legislation would require that any AI company submit paperwork to the U.S. Copyright Office before the release of any new generative AI system in order to explain what copyrighted works were used to build its system.
Companies that have released generative AI products like ChatGPT and image generators like Midjourney have come under fire for using copyrighted works to train their models. The AI companies argue it’s all legal under the Fair Use doctrine of U.S. copyright law, but the rights holders say it’s a violation of their intellectual property rights. And some politicians seem to agree strongly with the rights holders.
Amazon, Walmart Workers Worry About Surveillance Tech in Warehouses
Nearly half of warehouse workers who participated in a recent survey of Amazon and Walmart employees said they feel like they’re being watched at work.
Most employees said they didn’t know how their employer used that information — and roughly 40% said that the monitoring contributed to pressure to move faster, even if that meant increasing the risk of injury.
While the report showed Amazon and Walmart workers had the highest rates of concern about technology that monitored worker activity and the pressure to keep up with the pace of co-workers, it also showed that such concerns extend across the industry.
The “psychological effects” of surveillance and monitoring are “felt most viscerally and negatively” by Black, Latino and immigrant workers who face surveillance, monitoring and over-policing outside of the workplace, the authors wrote.
German Intelligence Chief Advocates for Monitoring Speech and Thought
The head of Germany’s domestic spy agency, Thomas Haldenwang, has penned an op-ed for a German newspaper and provided some insight into the way he understands freedom of expression, and more importantly, its limits.
Haldenwang, who is at the helm of the Federal Office for the Protection of the Constitution (BfV), used the article, published by the Frankfurter Allgemeine Zeitung, to defend his policy of keeping watch on citizens, which extends to things like “thought and speech patterns.”
Meanwhile, critics see this as a policy designed to advance restrictions on speech and economic freedoms, primarily aimed at political opponents. In fact, recent polls suggest that most citizens also believe that the BfV has become a political tool, an opinion said to be strongly held among supporters of every party other than, unsurprisingly, the Greens.
This can be interpreted as yet another example of authorities in a declaratively democratic country trying to find a way to restrict speech they don’t like even though it is formally legal — while remaining unwilling to legislate to outlaw it, whether for lack of political consensus or for fear of political backlash.
Exclusive: Synchron Readies Large-Scale Brain Implant Trial + More
Exclusive: Synchron, a Rival to Musk’s Neuralink, Readies Large-Scale Brain Implant Trial
Synchron Inc., a rival to Elon Musk’s Neuralink brain implant startup, is preparing to recruit patients for a large-scale clinical trial required to seek commercial approval for its device, the company’s chief executive told Reuters.
Synchron on Monday plans to launch an online registry for patients interested in joining the trial meant to include dozens of participants, and has received interest from about 120 clinical trial centers to help run the study, CEO Thomas Oxley said in an interview.
New York-based Synchron is farther along in the process of testing its brain implant than Neuralink. Both companies initially aim to help paralyzed patients type on a computer using devices that interpret brain signals. Synchron received U.S. authorization for preliminary testing in July 2021 and has implanted its device in six patients. Prior testing in four patients in Australia showed no serious adverse side effects, the company has reported.
Synchron, whose investors include billionaires Jeff Bezos and Bill Gates, and Neuralink compete in a niche of so-called brain-computer interface (BCI) devices. Such devices use electrodes that penetrate the brain or sit on its surface to provide direct communication to computers. No company has received final FDA approval to market a BCI brain implant.
Inside the House GOP’s Surveillance Law Nightmare
House Republicans are plunging headlong into another divisive debate — this time over government spy powers, a battle that pits them against each other and reveals deep-seated uncertainty about their party’s ideological direction.
Reapproving the section of the Foreign Intelligence Surveillance Act known as Section 702, which allows the intelligence community to collect and search through the communications of foreign targets without a warrant, was always going to be difficult given the sour relationship between some Republicans and the FBI. But that skepticism, which dates back to the FBI’s initial investigation into Donald Trump’s 2016 campaign, is only the start of the party’s problems on surveillance policy, according to interviews with nearly 20 GOP aides and lawmakers.
There was a time when government surveillance powers united Republicans to an unparalleled degree, particularly in the years after the Sept. 11, 2001, terrorist attacks. Even after George W. Bush left office, former President Barack Obama relied on Republicans to provide political cover during the debate over reauthorizing the wiretapping power.
Speaker Mike Johnson, while staring down an attempt to oust him, is dealing with two competing Republican factions that have battled privately for months over how much to rein in Section 702 — in particular, its ability to sift through the foreign data for information related to Americans.
Elon Musk Is Investigated by Brazil’s Top Censor After Refusing to Comply With Censorship Demands
Alexandre de Moraes, the powerful president of Brazil’s Superior Electoral Court (TSE) and a Supreme Federal Court (STF) justice, on Sunday ordered the federal police to launch a “digital militias” investigation — prompted by X owner Elon Musk’s “conduct.”
This was Moraes’ reaction to what he characterized as an “attacks and disinformation campaign” against him and the two courts. He was referring to a series of posts by Musk on Saturday and Sunday, which, among other things, called for Moraes to either resign or be impeached.
Musk was posting about restrictions imposed on accounts on X at the behest of Brazil’s authorities, and on Sunday said he would publish “everything demanded by (Moraes) and how those requests violate Brazilian law.”
Now, after Musk first revealed that Brazilian authorities are forcing X to block several popular accounts without providing any justification, and then that he would “publish everything” Moraes demanded — Moraes came back with the police investigation.
Insurers Spy on Houses via Aerial Imagery, Seeking Reasons to Cancel Coverage
Insurance companies across the country are using satellites, drones, manned airplanes and even high-altitude balloons to spy on properties they cover with homeowners policies — and using the findings to drop customers, often without giving any opportunity to address alleged shortcomings.
“We’ve seen a dramatic increase across the country in reports from consumers who’ve been dropped by their insurers on the basis of an aerial image,” United Policyholders executive director Amy Bach tells the Wall Street Journal. Reasons can range from shoddy roofing to yard clutter and undeclared trampolines.
Much of this surveillance is done via the Geospatial Insurance Consortium, which boasts of its coverage of 99% of the U.S. population.
Allstate CEO Tom Wilson framed aerial spying as a pricing issue, but many consumers are finding that companies are using it to suddenly drop their coverage altogether.
Tennessee Lawmakers Seek to Require Parental Permission Before Children Join Social Media
Tennessee’s GOP-dominant Senate on Monday unanimously signed off on legislation requiring minors to have parental consent to create social media accounts.
The bill is similar to pushes currently being made across the United States as concern grows over young people’s internet usage. Louisiana, Arkansas, Texas and Utah have all passed measures requiring parental consent for children to use social media — though Arkansas’ version is currently blocked as a federal lawsuit makes its way through court. Georgia sent a proposal to Gov. Brian Kemp for his signature or veto last month.
The Tennessee Senate approved its version without debate, though lawmakers tacked on a last-minute addition to clarify the bill only applied to social media websites. That means the House chamber must approve those changes before it can go to Gov. Bill Lee’s desk for his approval.
Public Worried by Police and Companies Sharing Biometric Data
More than half of the British public are worried about the sharing of biometric data, such as facial recognition, between police and the private sector, according to research from the Alan Turing Institute (ATI), with many expressing concern that a lack of transparency will lead to abuses.
The research, conducted alongside the Centre for Emerging Technology and Security (CETaS), revealed that 57% of the U.K. public are uncomfortable with biometric data sharing schemes between police forces and the private sector to prevent crimes like shoplifting.
The ATI said while some members of the public believed they would be more comfortable with the data sharing if there were appropriate transparency, oversight and accountability mechanisms in place, others said they would only feel comfortable if data sharing was a one-way process from commercial entities to the police — and not the other way round.
Others said they were completely opposed to any data sharing, arguing it opened up too much risk for abuse and an invasion of privacy. Beyond a focus on facial recognition, the research delved into a wider array of emerging biometric technologies, such as age estimation technology and emotion recognition systems.
CBS News Bringing Misinformation Unit to TV With Hiring of EP Melissa Mahtani (Exclusive)
The Hollywood Reporter reported:
CBS News is planning to bring its CBS News Confirmed unit to TV with the hiring of a new executive producer: Melissa Mahtani.
CBS News Confirmed, launched late last year, is focused on tackling misinformation, including the growing scourge of deepfakes and photos, videos and audio created by generative artificial intelligence. Now, Mahtani will be tasked with figuring out how best to bring its reporting to TV, digital and social platforms. Segments will run on CBS News programs leading up to the 2024 election, and a dedicated streaming show is planned for later this summer.
“CBS News Confirmed is the right initiative at the right time,” Mahtani said. “We are witnessing an onslaught of misinformation that makes it harder for people to distinguish between what is real and what is not. CBS News Confirmed will empower our viewers to be able to tell fact from fiction, sharing our own process of verification every step of the way.”
CBS News Confirmed was created to tackle a challenge that every news organization is facing: how to deal with misinformation and fake content, a problem that is becoming even more troublesome thanks to generative AI tech and the speed and ubiquity of social media. The unit uses investigative journalism, technical skills, data and other tools to filter through content, determine what is real and what isn’t, and explain it to viewers.
How Loopholes and Opt-Outs Can Tear Apart U.S. AI Policy
Last month, the White House published new rules establishing how the federal government uses artificial intelligence systems, including baseline protections for safety and civil rights.
Given AI’s well-documented potential to amplify discrimination and supercharge surveillance, among other harms, the rules are urgently needed as federal agencies race to adopt this technology.
The good news is that, for the most part, the new rules issued in a memo by the Office of Management and Budget are clear, sensible and strong. Unfortunately, they also give agencies far too much discretion to opt out of key safeguards, seriously undercutting their effectiveness.
Ultimately, however, the responsibility to enact comprehensive protections rests with Congress, which can codify these safeguards and establish independent oversight of how they are enforced. The stakes are too high, and the harms too great, to leave broad loopholes in place.
Some States Are Seeking to Restrict TikTok. That Doesn’t Mean Their Governors Aren’t Using It
POV: You’re on TikTok, and so is your governor — even as your Legislature considers banning the app from state-owned devices and networks.
Efforts to ban TikTok over security concerns about China’s influence through the platform have picked up steam in the past year in state legislatures, with an expansive ban even proposed by Congress. In Pennsylvania, forward movement on a bill that first unanimously passed the state Senate last year could send legislation to the Democratic governor’s desk imminently.
But even as the app faces scrutiny and bans, governors and state agencies — and even President Joe Biden — are still using the app to promote their initiatives and expand their reach among voters. Their target is the youth vote, or the people who largely make up the app’s U.S. user base of 170 million.
Disinformation ‘Expert’ Tells People to Only Use ‘Trusted Sources,’ Avoid ‘Doing Your Own Research’ + More
Disinformation ‘Expert’ Tells People to Only Use ‘Trusted Sources,’ Avoid ‘Doing Your Own Research’
Brianna Lyman, elections correspondent at The Federalist, recently reported on a panel discussion featuring Al Schmidt, Pennsylvania Secretary of the Commonwealth, and Beth Schwanke, Executive Director of the Pitt Disinformation Lab. Schmidt and Schwanke, speaking at a forum organized by Spotlight PA, voiced their stance on “misinformation” and “disinformation” surrounding elections.
Strikingly, Schwanke recommended that rather than conducting self-led investigations, Pennsylvanians should place their confidence in so-called “trusted” sources. These include certain institutions and media outlets that have unfortunately been tied in the past to acts of censorship.
Schwanke’s advice, interestingly, seemed to discourage individual research, questioning, and sharing of ideas. Instead, she advocated the use of sources like the Department of State, county elections offices, and, notably, media organizations such as local NPR affiliates, which she implied upheld superior journalistic standards.
Lawmakers Unveil New Bipartisan Digital Privacy Bill After Years of Impasse
A pair of bipartisan lawmakers released a new comprehensive privacy proposal on Sunday, the first sign in years that Congress could have a shot at breaking the long-standing impasse in passing such protections.
Senate Commerce Committee Chair Maria Cantwell (D-WA) and House Energy and Commerce Committee Chair Cathy McMorris Rodgers (R-WA) unveiled the American Privacy Rights Act, the most significant comprehensive data privacy proposal introduced in years. The draft bill would grant consumers new rights regarding how their information is used and moved around by large companies and data brokers while giving them the ability to sue when those rights are violated.
The legislation would require large companies to minimize the amount of data they collect on users and allow them to correct, delete, or export their data. It would give consumers the right to opt out of targeted advertising and the transfer of their information and let them opt out of the use of an algorithm to make important life decisions for them, like those related to housing, employment, education, and insurance. The bill would also mandate security protections to safeguard consumers’ private information.
40 Million Americans’ Health Data Is Stolen or Exposed Each Year. See if Your Provider Has Been Breached.
More than 40 million Americans’ medical records have been stolen or exposed so far this year because of security vulnerabilities in electronic healthcare systems, a USA TODAY analysis of Health and Human Services data found.
And the problem is steadily worsening. From 2010 to 2014, the first five years that data was collected, close to 50 million people had their medical data stolen or exposed. In the following five years, that number quadrupled. And health privacy breaches have continued to grow on the heels of the COVID-19 pandemic.
Federal law strictly prohibits medical institutions — hospitals, insurance companies and outpatient clinics — from sharing patient information, and requires that companies take steps to shield sensitive data from prying eyes.
Hackers Stole 340,000 Social Security Numbers From Government Consulting Firm
U.S. consulting firm Greylock McKinnon Associates disclosed a data breach in which hackers stole as many as 341,650 Social Security numbers. The data breach was disclosed on Friday on Maine’s government website, where the state posts data breach notifications.
GMA provides economic and litigation support to companies and to U.S. government agencies bringing civil litigation, including the U.S. Department of Justice. According to its data breach notice, GMA told affected individuals that their personal information “was obtained by the U.S. Department of Justice (“DOJ”) as part of a civil litigation matter” supported by GMA.
GMA told victims that “your personal and Medicare information was likely affected in this incident,” which includes names, dates of birth, home addresses, some medical information and health insurance information, and Medicare claim numbers, which included Social Security Numbers.
It’s unclear why it took GMA nine months to determine the extent of the breach and notify victims.
How AI Risks Creating a ‘Black Box’ at the Heart of U.S. Legal System
Artificial intelligence (AI) is playing an expanding — and often invisible — role in America’s legal system. While AI tools are being used to inform criminal investigations, there is often no way for defendants to challenge their digital accuser or even know what role it played in the case.
AI and machine learning tools are being deployed by police and prosecutors to identify faces, weapons, license plates and objects at crime scenes, survey live feeds for suspicious behavior, enhance DNA analysis, direct police to gunshots, determine how likely a defendant is to skip bail, forecast crime and process evidence, according to the National Institute of Justice.
But trade secrets laws are blocking public scrutiny of how these tools work, creating a “black box” in the criminal justice system, with no guardrails for how AI can be used and when it must be disclosed.
Currently, public officials are essentially taking private firms at their word that their technologies are as robust or nuanced as advertised, despite expanding research exposing the potential pitfalls of this approach. Take one of the most common use cases: facial recognition. Clearview AI, one of the leading contractors for law enforcement, has scraped billions of publicly available social media images of Americans’ faces to train its AI.
Pharmaceutical Companies May Be the First Targets of the Washington State My Health My Data Act
The National Law Review reported:
On April 17, 2023, the Washington State Legislature passed the “My Health My Data Act” (WMHMDA or the Act), which took effect for most companies on March 31, 2024. Unlike other modern state privacy laws that purport to regulate any collection of “personal data,” WMHMDA confers privacy protections only upon “consumer health data.” This term is defined to include any data that is linked (or linkable) to an individual and that identifies their “past, present, or future physical or mental health status.” As the statute is not intended to apply to HIPAA-regulated entities or employers, there is some confusion regarding its scope (i.e., which companies may be collecting consumer health data) as well as its requirements.
Specifically, the Act refers to data that might “identify” a consumer seeking a service to improve or learn about a consumer’s mental or physical health as an example of consumer health data. As a result, organizations that traditionally do not consider themselves to be collecting health data, such as grocery stores, newspapers, dietary supplements providers, and even fitness clubs, are uncertain whether the Act may be interpreted to apply to them to the extent that someone seeks out such companies either for information about health or to improve their health.
While courts have not yet been presented with a case under the statute, and the Office of the Washington Attorney General has provided little guidance, plaintiff law firms have already begun seeking individuals who had visited pharmaceutical company websites as well as other medical-related providers (e.g., testing companies) to serve as plaintiffs in litigation. While it remains unclear what substantive provision within the statute the law firms will allege was violated, the firms have signaled that pharmaceutical companies may be the first group of targets under the Act. As a result, pharmaceutical companies may want to ensure they are in full compliance with the Act.
Meta Is So Desperate for Data Sources to Train Its AI It Weighed Risking Copyright Lawsuits: Report
Tech giants are scrambling to find new data sources to fuel the AI arms race.
And at Meta, the issue has been so critical that executives met almost daily in March and April of last year to hash out a plan, The New York Times reported.
As AI systems become more powerful, tech companies have been forced to seek data more aggressively, which could open them up to possible copyright violations. Some have suspected OpenAI, for example, of using YouTube to train its video generator, Sora. The company’s CTO, Mira Murati, has denied those accusations.