The Supreme Court Must Decide if It Wants to Own Twitter
The Twitter Wars have arrived at the Supreme Court.
On Halloween, the Supreme Court will hear the first two in a series of five cases the justices plan to decide in their current term that ask what the government’s relationship should be with social media outlets like Facebook, YouTube, or Twitter (the social media app that Elon Musk insists on calling “X”).
These first two cases are, admittedly, the lowest-stakes of the lot, at least from the perspective of ordinary citizens who care about free speech. Together, O’Connor-Ratcliff v. Garnier and Lindke v. Freed involve three social media users who did nothing more than block someone on their Twitter or Facebook accounts. But those three users are also government officials, and when a government official blocks someone, that raises thorny First Amendment questions that are surprisingly difficult to sort out.
Two of the three other cases, meanwhile, ask whether the government may order social media sites to publish content they do not wish to publish — something that, under longstanding law, is an unambiguous violation of the First Amendment. The last case concerns whether the government may merely ask these outlets to pull down content.
When the Supreme Court closes out its term this summer, in other words, it could become the central player in the conflicts that drive the Way Too Online community: Which content, if any, should be removed from social media websites? Which users are too toxic for Twitter or Facebook? How much freedom should social media users, and especially government officials, have to censor or block people who annoy them online? And should decisions about who can post online be made by the free market, or by government officials who may have a political stake in the outcome?
How Facial-Recognition App Poses Threat to Privacy, Civil Liberties
Tech reporter Kashmir Hill has written about the intersection of privacy and technology for more than a decade, but even she was stunned when she came across a legal memo in 2019 describing a facial recognition app that could identify anyone based on a picture. She immediately saw the potential this technology had to become the stuff of dystopian nightmare, the “ultimate surveillance tool,” posing immense risks to privacy and civil liberties.
Hill recalled this incident to Jonathan Zittrain, the George Bemis Professor of International Law and director of the Berkman Klein Center for Internet & Society, as part of a conversation Wednesday at Harvard Law School about her new book, “Your Face Belongs to Us: A Secretive Startup’s Quest to End Privacy as We Know It.”
The work chronicles the story of Clearview AI, a small, secretive startup that launched an app in 2017, using a 30-billion-photo database scraped from social media platforms without users’ consent. The company, led by Australian computer engineer Hoan Ton-That, has been fined in Europe and Australia for privacy violations.
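The reporting does not describe Clearview’s internals, but the general technique behind such apps, converting each face photo into a numeric embedding and finding the nearest match in a database, is well known. Below is a minimal, hypothetical sketch using the open-source face_recognition library; the file names and the tiny two-person “database” are invented for illustration, and this is not Clearview’s actual system.

```python
# Illustrative only: a toy version of the embedding-and-lookup technique
# behind face search apps. This is NOT Clearview AI's system, and the
# file names below are hypothetical.
import face_recognition

# Build a tiny "database": one 128-dimensional face encoding per known photo.
known_files = {"alice.jpg": "Alice", "bob.jpg": "Bob"}
known_encodings, known_names = [], []
for path, name in known_files.items():
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    if encodings:  # skip photos where no face was detected
        known_encodings.append(encodings[0])
        known_names.append(name)

# Encode the query photo and find the closest database entry.
query = face_recognition.load_image_file("query.jpg")
query_encodings = face_recognition.face_encodings(query)
if query_encodings:
    distances = face_recognition.face_distance(known_encodings, query_encodings[0])
    best = distances.argmin()
    if distances[best] < 0.6:  # the library's conventional match threshold
        print(f"Closest match: {known_names[best]}")
    else:
        print("No confident match")
```

At Clearview’s reported scale of 30 billion photos, the lookup step would require an approximate nearest-neighbor index rather than a linear scan, but the privacy concern is the same: anyone’s photo becomes a search key.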
Hill spoke of the need to come up with regulations to safeguard users’ privacy and rein in social media platforms that are profiting from users’ personal information without their consent. Some states have passed laws to protect people’s right to access personal information shared on social media sites and the right to delete it, but that is not enough, she said.
The End of the Internet as We Know It? Online Safety Bill Gets Royal Assent
The long-debated and controversial Online Safety Bill finally received Royal Assent on October 26, 2023, the last step in officially making it law.
The 300-page bill promises to make “the U.K. the safest place to be online,” especially for children, by forcing tech firms to take more responsibility for the content users spread across their platforms. Tech firms, however, claim that it threatens the internet as we know it.
Deemed a “game-changing piece of legislation” by Technology Secretary Michelle Donelan, the Act drew criticism from all fronts during its six-year legislative journey. From VPN services and messaging platforms to politicians, civil society groups, industry experts, and academics, commentators fear its provisions may expand the government’s surveillance and censorship reach while curbing people’s privacy.
Digital platforms now have a “duty of care” to protect children, preventing them from accessing harmful and age-inappropriate content and enforcing age limits. Platforms must also give users an option to filter out harmful content, and parents will be entitled to obtain information about their children from tech firms. Platforms are likewise required to be transparent up front about the risks of using their services.
The aim may be lofty, yet tech experts fear the means could end up undermining online safety instead. “As the Online Safety Bill becomes law without critical legal safeguards to end-to-end encryption, the internet as we know it faces a very real threat,” Andy Yen, founder and CEO of Proton, told TechRadar right after the OSB received the long-awaited Royal Assent.
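To see what the encryption dispute is about: in an end-to-end encrypted system, only the communicating devices hold the decryption keys, so the platform in the middle relays only ciphertext it cannot read. Here is a minimal sketch of that property using the PyNaCl library; the message and variable names are invented for illustration.

```python
# Minimal sketch of end-to-end encryption with PyNaCl (illustrative only).
# The point: the relaying platform never holds a private key, so it sees
# only ciphertext and cannot inspect message content.
from nacl.public import PrivateKey, Box

# Each endpoint generates a keypair; private keys never leave the device.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Alice encrypts to Bob with her private key and Bob's public key.
sending_box = Box(alice_private, bob_private.public_key)
ciphertext = sending_box.encrypt(b"meet at noon")

# The platform relays `ciphertext` as opaque bytes. Only Bob can decrypt it.
receiving_box = Box(bob_private, alice_private.public_key)
print(receiving_box.decrypt(ciphertext))  # b'meet at noon'
```

Critics like Yen argue that any mandate to scan such messages would require weakening this model, for example by inspecting content on the device before it is encrypted.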
U.N. Chief Appoints 39-Member Panel to Advise on International Governance of Artificial Intelligence
U.N. Secretary-General António Guterres on Thursday announced the appointment of a 39-member global advisory panel to report on international governance of artificial intelligence and its risks, challenges and key opportunities.
The U.N. chief told a news conference that the gender-balanced, geographically diverse group, which spans generations, will issue preliminary recommendations by the end of the year and final recommendations by the summer of 2024. The recommendations will feed into the U.N. Summit of the Future, which world leaders will attend in September 2024.
He said: “The potential harms of AI extend to serious concerns over misinformation and disinformation; the entrenching of bias and discrimination; surveillance and invasion of privacy; fraud, and other violations of human rights.”
The U.N. said the formation of the body, with experts from government, the private sector, the research community, civil society and academia, marks a significant step in its efforts to address issues of international AI governance and will help bridge existing and emerging initiatives.
LA Council Members Call for Plan to End COVID Vaccine Mandate for City Workers
Seeking to align the city of Los Angeles with federal and county vaccination directives, six City Council members on Wednesday introduced a motion calling for a plan to end the COVID-19 vaccine mandate for all current and future city employees.
Councilwoman Traci Park and Council President Paul Krekorian authored the motion, which instructs the city administrative officer and city attorney to report on the feasibility, impact and timeline of ending the mandate. Council members Heather Hutt, Kevin de Leon, John Lee and Curren Price seconded the motion.
While COVID-19 hospitalizations in the county remain low, the city’s vaccine mandate has stayed in place, even as other public entities have rescinded or eased vaccine requirements for their workforce.
The city of Los Angeles ended its policy requiring proof of vaccination to enter public buildings in February. In September, the Los Angeles Unified School District ended its vaccination requirement for staff, including teachers.
Canadian Lawmakers Want to Punish Online Platforms for Allowing the Spread of ‘Misinformation’
The Canadian Parliament has become the latest global player in a widening tug-of-war over how to constrain the spread of “misinformation” across the digital landscape.
The House of Commons ethics committee in Ottawa, Canada’s capital, is calling for stringent repercussions for tech giants that it claims are complicit in disseminating “unverified” or “deceptive” content online.
The committee’s vice-chair, Bloc Quebecois MP Rene Villemure, emphasized the urgent need for decisive action, pointing to similar controversial legislation in the European Union, which has imposed significant online regulations to control the spread of digital falsehoods.