Disinformation ‘Expert’ Tells People to Only Use ‘Trusted Sources,’ Avoid ‘Doing Your Own Research’
Brianna Lyman, elections correspondent at The Federalist, recently reported on a panel discussion featuring Al Schmidt, Pennsylvania Secretary of the Commonwealth, and Beth Schwanke, Executive Director of the Pitt Disinformation Lab. Schmidt and Schwanke, speaking at a forum organized by Spotlight PA, voiced their stance on “misinformation” and “disinformation” surrounding elections.
Strikingly, Schwanke recommended that rather than conducting self-led investigations, Pennsylvanians should place their confidence in so-called “trusted” sources — including certain institutions and media outlets that have previously been tied to acts of censorship.
Schwanke’s advice seemed to discourage individual research, questioning, and the sharing of ideas. Instead, she advocated relying on sources like the Department of State, county elections offices, and media organizations such as local NPR affiliates, which she implied upheld superior journalistic standards.
Lawmakers Unveil New Bipartisan Digital Privacy Bill After Years of Impasse
A pair of bipartisan lawmakers released a new comprehensive privacy proposal on Sunday, the first sign in years that Congress could have a shot at breaking the long-standing impasse in passing such protections.
Senate Commerce Committee Chair Maria Cantwell (D-WA) and House Energy and Commerce Committee Chair Cathy McMorris Rodgers (R-WA) unveiled the American Privacy Rights Act, the most significant comprehensive data privacy proposal introduced in years. The draft bill would grant consumers new rights regarding how their information is used and moved around by large companies and data brokers while giving them the ability to sue when those rights are violated.
The legislation would require large companies to minimize the amount of data they collect on users and allow them to correct, delete, or export their data. It would give consumers the right to opt out of targeted advertising and the transfer of their information and let them opt out of the use of an algorithm to make important life decisions for them, like those related to housing, employment, education, and insurance. The bill would also mandate security protections to safeguard consumers’ private information.
40 Million Americans’ Health Data Is Stolen or Exposed Each Year. See if Your Provider Has Been Breached.
More than 40 million Americans’ medical records have been stolen or exposed so far this year because of security vulnerabilities in electronic healthcare systems, a USA TODAY analysis of Health and Human Services data found.
And the problem is steadily worsening. From 2010 to 2014, the first five years that data was collected, close to 50 million people had their medical data stolen or exposed. In the following five years, that number quadrupled. And health privacy breaches have continued to grow on the heels of the COVID-19 pandemic.
Federal law strictly prohibits medical institutions — hospitals, insurance companies and outpatient clinics — from sharing patient information, and requires that companies take steps to shield sensitive data from prying eyes.
Hackers Stole 340,000 Social Security Numbers From Government Consulting Firm
U.S. consulting firm Greylock McKinnon Associates disclosed a data breach in which hackers stole as many as 341,650 Social Security numbers. The data breach was disclosed on Friday on Maine’s government website, where the state posts data breach notifications.
GMA provides economic and litigation support in civil cases to companies and U.S. government agencies, including the U.S. Department of Justice. According to its data breach notice, GMA told affected individuals that their personal information “was obtained by the U.S. Department of Justice (“DOJ”) as part of a civil litigation matter” supported by GMA.
GMA told victims that “your personal and Medicare information was likely affected in this incident,” which includes names, dates of birth, home addresses, some medical and health insurance information, and Medicare claim numbers, which included Social Security numbers.
It’s unclear why it took GMA nine months to determine the extent of the breach and notify victims.
How AI Risks Creating a ‘Black Box’ at the Heart of U.S. Legal System
Artificial intelligence (AI) is playing an expanding — and often invisible — role in America’s legal system. While AI tools are being used to inform criminal investigations, there is often no way for defendants to challenge their digital accuser or even know what role it played in the case.
AI and machine learning tools are being deployed by police and prosecutors to identify faces, weapons, license plates and objects at crime scenes, survey live feeds for suspicious behavior, enhance DNA analysis, direct police to gunshots, determine how likely a defendant is to skip bail, forecast crime and process evidence, according to the National Institute of Justice.
But trade secrets laws are blocking public scrutiny of how these tools work, creating a “black box” in the criminal justice system, with no guardrails for how AI can be used and when it must be disclosed.
Currently, public officials are essentially taking private firms at their word that their technologies are as robust or nuanced as advertised, despite expanding research exposing the potential pitfalls of this approach. Take one of the most common use cases: facial recognition. Clearview AI, one of the leading contractors for law enforcement, has scraped billions of publicly available social media photos of Americans’ faces to train its AI.
Pharmaceutical Companies May Be the First Targets of the Washington State My Health My Data Act
The National Law Review reported:
On April 17, 2023, the Washington State Legislature passed the “My Health My Data Act” (WMHMDA or the Act), which took effect for most companies on March 31, 2024. Unlike other modern state privacy laws that purport to regulate any collection of “personal data,” WMHMDA confers privacy protections only upon “consumer health data.” This term is defined to include any data that is linked (or linkable) to an individual and that identifies their “past, present, or future physical or mental health status.” As the statute is not intended to apply to HIPAA-regulated entities or employers, there is some confusion regarding its scope (i.e., which companies may be collecting consumer health data) as well as its requirements.
Specifically, the Act refers to data that might “identify” a consumer seeking a service to improve or learn about a consumer’s mental or physical health as an example of consumer health data. As a result, organizations that traditionally do not consider themselves to be collecting health data, such as grocery stores, newspapers, dietary supplements providers, and even fitness clubs, are uncertain whether the Act may be interpreted to apply to them to the extent that someone seeks out such companies either for information about health or to improve their health.
While courts have not yet been presented with a case under the statute, and the Office of the Washington Attorney General has provided little guidance, plaintiff law firms have already begun seeking individuals who had visited pharmaceutical company websites as well as other medical-related providers (e.g., testing companies) to serve as plaintiffs in litigation. While it remains unclear what substantive provision within the statute the law firms will allege was violated, the firms have signaled that pharmaceutical companies may be the first group of targets under the Act. As a result, pharmaceutical companies may want to ensure they are in full compliance with the Act.
Meta Is so Desperate for Data Sources to Train Its AI It Weighed Risking Copyright Lawsuits: Report
Tech giants are scrambling to find new data sources to fuel the AI arms race.
And at Meta, the issue has been so critical that executives met almost daily in March and April of last year to hash out a plan, The New York Times reported.
As AI systems become more powerful, tech companies have been forced to seek data more aggressively, which could open them up to possible copyright violations. Some have suspected OpenAI, for example, of using YouTube to train its video generator, Sora. The company’s CTO, Mira Murati, has denied those accusations.