‘Invasive’ Google Keyword Search Warrants Get Court Greenlight. Here’s Everything You Need to Know
Colorado’s Supreme Court this week had the opportunity to hand down a historic judgment on the constitutionality of “reverse keyword search warrants,” a powerful new surveillance technique that grants law enforcement the ability to identify potential criminal suspects based on broad, far-reaching internet search results.
Police say the creative warrants have helped them crack otherwise cold cases. Critics, who include more than a dozen rights organizations and major tech companies, argue the tool’s immense scope tramples on innocent users’ privacy and runs afoul of Fourth Amendment protections against unreasonable government searches.
Civil liberties and digital rights experts speaking with Gizmodo described the court’s “confusing” decision this week to punt on the constitutionality of reverse keyword searches as a major missed opportunity, one likely to lead more police to pursue the controversial tactic, both in Colorado and beyond.
Critics fear these broad warrants, which compel Google and other tech companies to sift through their vast cornucopia of search data to sniff out users who’ve searched for specific keywords, could be weaponized against abortion seekers, political protestors, or even everyday internet users who inadvertently type a query that could someday be used against them in court.
Supreme Court Will Hear Biden Social Media Case This Term
The Supreme Court said Friday it will consider a social media censorship case brought against Biden administration officials in its next term, setting up a legal battle with resounding implications for online speech.
The high court also issued a stay of an injunction ordered by the 5th U.S. Circuit Court of Appeals, pausing its effect until the justices decide the case on its merits. Justices Samuel Alito, Clarence Thomas and Neil Gorsuch dissented from the decision to stay the order.
“At this time in the history of our country, what the Court has done, I fear, will be seen by some as giving the Government a green light to use heavy-handed tactics to skew the presentation of views on the medium that increasingly dominates the dissemination of news,” Alito wrote in his dissenting opinion. “That is most unfortunate.”
Missouri Attorney General Andrew Bailey, one of the attorneys general who brought the lawsuit, called the high court’s decision to stay the order “the worst First Amendment violation in our nation’s history.”
“We look forward to dismantling Joe Biden’s vast censorship enterprise at the nation’s highest court,” Bailey said in a statement.
The U.S. Has Failed to Pass AI Regulation. New York City Is Stepping Up
As the U.S. federal government struggles to meaningfully regulate AI — or even function — New York City is stepping into the governance gap.
The city introduced an AI Action Plan this week that Mayor Eric Adams calls a first of its kind in the nation. The set of roughly 40 policy initiatives is designed to protect residents against harm like bias or discrimination from AI. It includes the development of standards for AI purchased by city agencies and new mechanisms to gauge the risk of AI used by city departments.
New York’s AI regulation could soon expand still further. City council member Jennifer Gutiérrez, chair of the body’s technology committee, today introduced legislation that would create an Office of Algorithmic Data Integrity to oversee AI in New York.
Earlier this year, several U.S. senators suggested creating a new federal agency to regulate AI, but Gutiérrez says she’s learned that there’s no point in waiting for action in Washington, DC. “We have a unique responsibility because a lot of innovation lives here,” she says. “It’s really important for us to take the lead.”
Americans Are Concerned About AI Data Collection, New Poll Shows
Most Americans who have an awareness of emerging artificial intelligence (AI) technology are worried that companies won’t use AI tools responsibly, according to survey results released this week by Pew Research Center.
There has been an increase in public discourse about AI this year, due in part to the wide adoption of ChatGPT, a chatbot unveiled last November by the AI company OpenAI. Users communicate with ChatGPT through text, image and audio prompts. Global monthly web visits to the ChatGPT website were estimated at 1.43 billion in August, according to Reuters.
Technology leaders say AI development holds positive potential, particularly in the healthcare, drug development and transportation industries. But there is also risk and uncertainty associated with AI, as no one knows for certain what it could one day become.
Nearly half of American adults don’t want social media companies using their personal data for personalized user experiences, and 44% don’t like the idea of AI being used to identify people through voice analysis, according to the poll’s results.
Young People Are Increasingly Worried About Privacy in the AI Age
Younger consumers are more likely to exercise Data Subject Access Rights, according to a new Cisco study, which found that 42% of 18- to 24-year-olds have done so, compared with just 6% of those aged 75 and older.
The figure is rising, too, up four percentage points in 2023 compared with 2022, suggesting growing concern over data privacy. Of the 2,600 consumer participants, almost two-thirds (62%) expressed concern about how organizations could be using their personal data for AI.
Overall, the study suggests that consumer lack of trust is on the rise, and companies need to act to better inform their users. The blame isn’t entirely on large corporations, though, according to VP and Chief Privacy Officer Harvey Jang: “As governments pass laws and companies seek to build trust, consumers must also take action and use technology responsibly to protect their own privacy.”
AI Makes Hiding Your Kids’ Identity on the Internet More Important Than Ever. But It’s Also Harder to Do.
The New York Times via The Seattle Times reported:
Historically, the main criticism of parents who overshare online has been the invasion of their progeny’s privacy, but advances in artificial intelligence-based technologies present new ways for bad actors to misappropriate the online content of children.
Among the novel risks are scams featuring deepfake technology that mimic children’s voices and the possibility that a stranger could learn a child’s name and address from just a search of their photo.
Amanda Lenhart, the head of research at Common Sense Media, a nonprofit that offers media advice to parents, pointed to a recent public service campaign from Deutsche Telekom that urged more careful sharing of children’s data.
The video featured an actress portraying a 9-year-old named Ella, whose fictional parents were indiscreet about posting photos and videos of her online. Deepfake technology generated a digitally aged version of Ella who admonishes her fictional parents, telling them that her identity has been stolen, her voice has been duplicated to trick them into thinking she’s been kidnapped and a nude photo of her childhood self has been exploited.
Empty Classroom Seats Reveal ‘Long Shadow’ of COVID Chaos on Britain’s Children
While this paints a picture of chaotic decision-making and rancorous divisions at the top of government, none of it is surprising. By far the most important testimony so far — much more essential than who said what about whom on WhatsApp — came from England’s former children’s commissioner Anne Longfield, who told the inquiry children will be living under the “long shadow” of the pandemic for two decades to come.
It may seem odd to think of COVID in terms of silver linings. But I’ve often pondered how lucky we were that, unlike many pandemic-causing infectious diseases that carry the highest risk of death among the very young and very old, COVID was generally associated with mild symptoms in children.
But the government squandered this precious silver lining from the start. After the decision to close schools in March 2020, they should have been the first thing to reopen as infection rates started to fall in May that year.
Instead, they remained mostly closed as pubs and restaurants were allowed to reopen. Rishi Sunak threw almost a billion pounds at subsidizing people to eat out in August but couldn’t find the cash to put on outdoor enrichment activities over the summer for children stuck at home for months on end. Boris Johnson delayed imposing social restrictions later that year to the point that, when the government eventually acted, it was forced to take more drastic measures, again closing schools for weeks.
Stay in EU, Comply With EU Law: EU’s Digital Chief Warns X’s Musk
X owner Elon Musk will have to comply with European Union law and clamp down on illegal content on the social network if he wants to keep doing “good business” in the region, the EU’s digital chief Věra Jourová said today.
The tech mogul denied a report last week that he was considering pulling X out of Europe to avoid new requirements for digital platforms. X is used by more than 101 million people in the bloc. Under the EU’s Digital Services Act (DSA), the company must swiftly take down illegal content and ensure the network limits disinformation and cyberviolence.
Musk does “good business in [the] European Union, but it will be his decision and if he decides to stay in as well, he will have to comply with the EU law,” Jourová said.