
The video-sharing platform Rumble and a constitutional law scholar are among those suing New York over the state’s new “Online Hate Speech Law,” claiming the law’s language is so broad it could put bloggers at risk of financial ruin merely for sharing opinions the state disfavors.

The nonprofit Foundation for Individual Rights and Expression (FIRE) last week filed the lawsuit in the U.S. District Court for the Southern District of New York on behalf of three plaintiffs: online video-sharing platform Rumble, its “Locals” subscription platform and First Amendment scholar Eugene Volokh, publisher of “The Volokh Conspiracy” blog.

The lawsuit names New York State Attorney General Letitia James as the sole defendant.

According to the plaintiffs, the legislation — which took effect Dec. 3 — will oblige online platforms to target and censor speech that is protected by the First Amendment of the U.S. Constitution.

In a statement, FIRE said, “The law is titled ‘Social media networks; hateful conduct prohibited,’ but it actually targets speech the state doesn’t like — even if that speech is fully protected by the First Amendment.”

According to the legislation, online platforms are required to “provide and maintain mechanisms for reporting hateful conduct on their platform,” and are subject to fines of up to $1,000 per day for non-compliance.

The plaintiffs are asking the court to declare the new legislation in violation of the First and 14th Amendments of the U.S. Constitution, which protect free speech and due process, respectively.

They also seek a permanent injunction against enforcement of the law, a declaration that the new law violates Section 230 of the Communications Decency Act and attorney’s fees and costs.

According to the lawsuit, the plaintiffs:

“Seek to promote free and open debate on their platforms because they believe in the free marketplace of ideas. They publish all manner of speech and do not believe that the speech targeted by the Online Hate Speech Law should be chilled, prohibited, or removed as a result of a government edict.

“They do not want to parrot the state’s message or be required to reply to every complaint of alleged ‘hate speech.’”

It is also possible that, in attempting to comply with the provisions of New York's new law, social media platforms will make fundamental changes to their platforms and policies that affect users everywhere, not just in the one state where the legislation is in force.

Alternatively, some websites may choose to "geofence" or "geoblock," a practice in which IP (internet protocol) addresses from particular geographic regions, such as states or countries, are denied access by a website or online service provider.

This is already an increasingly common practice among the official online services of many U.S. states, one that appears to have evaded wider attention despite being routine in certain industries.
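Geoblocking of this kind is typically implemented by checking an incoming request's IP address against a geolocation database and rejecting matches before the page is served. The sketch below is illustrative only, not any platform's actual implementation: it uses Python's standard `ipaddress` module, and the address ranges shown are documentation-reserved example blocks standing in for a real commercial geolocation lookup (such as a MaxMind-style database).

```python
import ipaddress

# Hypothetical example: CIDR blocks treated as "New York" addresses.
# These are RFC 5737 documentation-reserved ranges, used here purely
# for illustration; real geoblocking relies on a geolocation database.
BLOCKED_REGIONS = {
    "NY": [
        ipaddress.ip_network("203.0.113.0/24"),
        ipaddress.ip_network("198.51.100.0/25"),
    ],
}

def is_geoblocked(client_ip: str, blocked=BLOCKED_REGIONS) -> bool:
    """Return True if the client's IP falls inside any blocked region."""
    addr = ipaddress.ip_address(client_ip)
    return any(
        addr in net
        for networks in blocked.values()
        for net in networks
    )
```

In practice this check usually happens at the CDN or load-balancer layer rather than in application code, so an entire service can be withheld from a region with a single rule.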

Alex Pattakos, Ph.D., co-founder of the Global Meaning Institute and contributing writer for Psychology Today, was permanently banned by social media platform LinkedIn, which is owned by Microsoft.

Pattakos told The Defender why he’s concerned about censorship and its potential expansion:

“My recent experience with the ‘legacy’ social media platforms has been unprecedented. In this regard, I have had my posts deemed ‘misinformation’ and censored by online moderators and so-called ‘fact-checkers’ on numerous occasions.

“Most recently, I was suspended permanently from LinkedIn for sharing information and empirical evidence that challenged the ‘mainstream’ narrative on a subject of major concern. It was disheartening to have an unknown, yet obviously biased, moderator restrict my freedom of expression in this way.”

For Pattakos, censorship of content on social media platforms, whether by the platforms themselves or by the government, represents “a direct assault” on democracy and freedom rather than protecting those ideals.

“As a subject matter expert in the disciplines of political science, existential philosophy and humanistic psychology, as well as someone who has always been committed to the scientific method and authentic dialogue, such treatment is obviously personal,” Pattakos said. “However, more importantly, it is a direct assault on democratic principles and human freedom.”

Law requires platforms to respond to ‘hateful’ content — but doesn’t define it

According to Reclaim The Net, New York's new law will require online platforms to develop policies explaining how they will respond to user-generated content that would "vilify, humiliate, or incite violence" against groups on the basis of protected classes such as gender, race or religion.

Platforms also will be obliged to create mechanisms through which users and visitors can submit complaints about “hateful content,” requiring the platforms to directly respond to such complaints or face potential investigations, subpoenas and fines levied directly by the attorney general’s office.

The legislation was passed in June and signed into law by Gov. Kathy Hochul, a Democrat, who has since been elected to a full term in office.

According to Law and Crime, the legislation was first proposed in the aftermath of a mass shooting at a Buffalo grocery store. In October, James and Hochul released a report that “details [the] shooter’s radicalization on fringe websites,” such as 4chan, and his “use of mainstream platforms to livestream violence.”

That same month, James said online platforms should be held accountable for “hateful conduct” resulting from a “lack of oversight, transparency, and accountability of these platforms” that allow “hateful and extremist views to proliferate online.”

Referring to the report, James said it represents “further proof that online radicalization and extremism is a serious threat to our communities, especially communities of color.”

“We cannot wait for another tragedy before we take action,” she added. “We must all work together to confront this crisis and protect our children and communities.”

However, Reclaim The Net argues that the language of the new law is vague, not providing a definition for such terms as “hateful content,” “humiliate,” “incite” or “vilify.”

In a statement, Rumble said this vague, broad language would, as a result, “cover constitutionally protected speech like jokes, satire, political debates and other online commentary.”

According to the lawsuit, the law:

“Hangs like the Sword of Damocles over a broad swath of online services (such as websites and apps), threatening to drop if they do not properly address speech that expresses certain state-disfavored viewpoints, as the state now mandates they must.”

The lawsuit also describes the law as a “First Amendment double whammy” that places platforms at risk of being fined despite the law’s vague language:

“In something of a First Amendment ‘double whammy,’ the Online Hate Speech Law burdens the publication of disfavored but protected speech through unconstitutionally compelled speech — forcing online services to single out ‘hate speech’ with a dedicated policy, a mandatory report & response mechanism, and obligatory direct replies to each report.

“If a service refuses, the law threatens New York Attorney General investigations, subpoenas, and daily fines of $1,000 per violation.”

FIRE described the law as "entirely subjective," saying it could target anything from "a comedian's blog entry" to most comments posted by online users "that could be considered by someone, somewhere, at some point in time, as 'humiliating' or 'vilifying' to a group based on protected class status like religion, gender, or race."

In a Dec. 1 post on his blog, Volokh wrote:

“New York politicians are slapping a speech-police badge on my chest because I run a blog.

 “I started the blog to share interesting and important legal stories, not to police readers’ speech at the government’s behest.”

Chris Pavlovski, CEO and chairman of Rumble, said:

“New York’s law would open the door for the suppression of protected speech based on the complaints of activists and bullies.

“Rumble will always celebrate freedom and support creative independence, so I’m delighted to work with FIRE to help protect lawful online expression.”

Are social media sites ‘publishers’ or ‘platforms’?

In challenging the new legislation, the plaintiffs referenced the New York attorney general’s report which calls for limiting Section 230 of the Communications Decency Act, which protects social media platforms from being held liable for the third-party content posted by their users.

Social media sites have used Section 230 to argue they are not "publishers" of content, a status that would carry legal obligations overriding the immunity conferred on them as "platforms." This is despite the fact that such platforms typically moderate the content posted on them.

While some have called for the repeal of Section 230 protections for social media platforms in response to numerous alleged instances of censorship, the plaintiffs in the lawsuit against New York’s attorney general argue in favor of the protections afforded to “platforms” and against James’ call to dilute them in the name of combating alleged “hate speech.”

According to Law and Crime, Section 230 “has vanishingly few friends today outside of Silicon Valley and free-speech activists.”

However, Democratic lawmakers from New York State argue the new law will increase safety on online platforms.

For instance, state Sen. Anna Kaplan, who sponsored the bill, said in 2021: “New Yorkers know the expression ‘if you see something, say something,’ but unfortunately many social media platforms make it impossible to speak out when you see something dangerous or harmful online.”

Broader efforts to curtail online ‘misinformation’ in New York and worldwide

New York’s Online Hate Speech Law is just one of several recent attempts by the state to police social media, according to Reclaim The Net, which cited bills proposing a ban on the online sharing of videos depicting violent crime and a proposal that would allow the state to sue platforms if they are “contributing” to the “knowing or reckless” spread of “misinformation” online.

A federal judge in October 2022 struck down provisions of a new law in New York that would have required applicants for firearms licenses in the state to turn over information about their social media accounts.

At the federal level, the Biden administration is facing a lawsuit, filed by the attorneys general of Louisiana and Missouri, alleging several First Amendment violations on the part of the U.S. government, including that federal agencies coerced social media platforms into censoring those who criticized the government’s COVID-19 policies.

And in February 2022, the U.S. House of Representatives introduced the Digital Services Oversight and Safety Act (HR 6796) “to provide for the establishment of the Bureau of Digital Services Oversight and Safety within the Federal Trade Commission, and for other purposes.”

The bill remains stalled in the House Subcommittee on Consumer Protection and Commerce.

Similar policies — and pieces of legislation — are being pursued outside of the United States.

In the U.K., the Online Safety Bill was reintroduced in Parliament, while the U.K.’s Office of Communications (Ofcom) appointed a former Google executive, Gill Whitehead, as its “online safety” head as of April 2023. Other Ofcom executives have previously worked for Amazon and Meta, Reclaim The Net reported.

The proposed legislation “will empower Ofcom to levy huge fines against Big Tech firms that fail to enforce the censorship rules in their terms of service consistently.”

Included in the proposed bill’s provisions is the criminalization of “false communications” — defined as sending “information that the person [sender] knows to be false,” with the intent of causing “psychological harm” to a “likely audience” with “no reasonable excuse.” Penalties foreseen by the legislation include up to 51 weeks in prison.

The Online Safety Bill does not clearly define the terms “false,” “knows,” “intention,” “psychological harm,” “likely audience” or “reasonable excuse.”

The proposed legislation also would require Ofcom to set up an “advisory committee on disinformation and misinformation.” It also includes generous exceptions for “large media outlets” and “recognized news publishers,” who would be immune to the “false communications” offense that, for others, would be considered a criminal act.

As previously reported by The Defender, the EU also passed similar legislation, the Digital Services Act (DSA), applicable to its 27 member states. The DSA seeks to tackle the spread of "misinformation and illegal content" and will apply "to all online intermediaries providing services in the EU," in proportion to "the nature of the services concerned" and the number of users of each platform.

According to the DSA, “very large online platforms” and “very large online search engines” — those with more than 45 million monthly active users in the EU — will be subject to the most stringent of the DSA’s requirements.

Big Tech companies will be obliged to perform annual risk assessments to ascertain the extent to which their platforms “contribute to the spread of divisive material that can affect issues like health,” and independent audits to determine the steps the companies are taking to prevent their platforms from being “abused.”

These steps come as part of a broader crackdown on the “spread of disinformation” called for by the DSA, requiring platforms to “flag hate speech, eliminate any kind of terrorist propaganda” and implement “frameworks to quickly take down illicit content.”

Regarding alleged “disinformation,” these platforms will be mandated to create a “crisis response mechanism” to combat the spread of such content, with the DSA specifically citing the conflict between Russia and Ukraine and the “manipulation” of online content that has ensued.

The U.S. State Department also is involved in efforts to combat “misinformation” and “disinformation” in other countries, through “A Declaration for the Future of the Internet,” established April 28 and signed by 56 countries and entities, including the U.S. and EU.

While the declaration is not legally binding, it sets forth “a political commitment to push rules for the internet that are underpinned by democratic values.”

What is less clear is how the declaration, and other similar laws, define "democratic values," although recent statements by global actors such as the World Economic Forum (WEF) and by social media executives offer several clues.

For instance, a recent WEF article on how the “metaverse” can be governed makes reference to how “real-world governance models” represent one possible option. The “real-world” models referred to, however, included Facebook’s “Oversight Board.”

The Oversight Board describes itself as “the largest global fact-checking network of any platform,” praising itself for “displaying warnings on over 200 million distinct pieces of content on Facebook (including re-shares) globally based on over 130,000 debunking articles written by our fact-checking partners,” just during the second quarter of 2022.

The Oversight Board also launched a pilot program it said “aims to show people more reliable information and empower them to decide what to read, trust and share.” How “reliable” is determined is not specified.

At present, the Oversight Board is also considering recommending “alternative enforcement options” to the removal of “harmful health misinformation” pertaining to COVID-19 and other issues, where instead of the outright removal of such content from Meta’s platforms, they may be “labeled,” “fact-checked” by third parties or their distribution “reduced” — a practice commonly known as shadowbanning.

The Oversight Board received a “three-year, $150 million commitment” from Meta to fund these and other initiatives.

Representatives from Big Tech and Big Media also recently expressed opinions about "democracy" in the digital realm. For instance, speaking at the Athens Democracy Forum in September, Nanna-Louise Linde, vice president of European Government Affairs for Microsoft, said, "We should make sure that we clean up our problems in the old Internet before we transfer them also to the metaverse: privacy, disinformation."

Donald Martin, media consultant and former editor of The Herald, Scotland, said that while “fake news is not new,” the current scale of it is “unprecedented.” He added: “It’s really frightening how quickly ‘fake news’ gains traction and acceptance, and that’s thanks largely to social media algorithms.”

Martin said “fake news” needs to be “debunked within about 30 minutes, before it has traction.”

Esther O’Callaghan, founder and CEO of hundo.xyz, expressed concerns over the spread of “misinformation and extreme ideas” that “actually end up being very insidious,” questioning “how do we make sure we nudge them [online users] in the direction you are talking about and not in another way?”

As previously reported by The Defender, the concept of “nudging,” arising from the field of behavioral psychology, has been employed by governments and public health officials to “encourage” certain behaviors, such as adherence to COVID-19-related restrictions.