AI Is Being Used to Give Dead, Missing Kids a Voice They Didn’t Ask For
These are some of the world’s most high-profile criminal cases involving children. These are stories of abuse, abduction, torture and murder that have long haunted the countries where the crimes occurred.
Now, some content creators are using artificial intelligence to recreate the likeness of these deceased or missing children, giving them “voices” to narrate the disturbing details of what happened to them. Experts say that while technological advances in AI bring creative opportunity, they also risk presenting misinformation and offending the victims’ loved ones. Some creators have defended their posts as a new way to raise awareness.
Despite TikTok’s attempts to remove such videos, many can still be found on the platform, some of which have generated millions of views.
Zoom Says It Won’t Use Your Calls to Train AI ‘Without Your Consent’ After Its Terms of Service Sparked Backlash and Prompted People to Talk About Ditching the Service
Zoom has responded to backlash over a part of its user agreement that seemed to say the video communications company could use customers’ meetings to train AI.
“You consent to Zoom’s access, use, collection, creation, modification, distribution, processing, sharing, maintenance, and storage of Service Generated Data for any purpose, to the extent and in the manner permitted under applicable Law,” section 10.2 says, in part. One such purpose listed in this section is “machine learning or artificial intelligence (including for the purposes of training and tuning of algorithms and models).”
Zoom users promptly bashed the site online and threatened to take their calls elsewhere.
Following the backlash, Zoom chief product officer Smita Hashim wrote in a blog post on Monday that the company added a sentence to its terms of service to clarify that “we will not use audio, video, or chat customer content to train our artificial intelligence models without your consent.”
Wisconsin Hospital Reaches $2 Million Settlement for MyChart Pixels
Becker’s Hospital Review reported:
Milwaukee-based Froedtert Health has agreed to pay a $2 million settlement after a patient-led lawsuit accused the health system of sharing patient data entered into MyChart with Facebook, Milwaukee Business Journal reported on Aug. 7.
The lawsuit alleged that Froedtert Health installed tracking code, dubbed the Meta Pixel, on its website and patient portal that “automatically transmits to Facebook every click, keystroke and intimate detail about their medical treatment,” according to the publication.
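The mechanics behind such a claim are straightforward: a tracking pixel is client-side code that serializes page events into a request to the tracker’s domain, so every parameter placed in that request reaches the tracker’s servers. The sketch below, in Python for illustration, shows that serialization; the field names and values are hypothetical, and the real Meta Pixel payload differs.

```python
from urllib.parse import urlencode

# Beacon endpoint used here for illustration; the Meta Pixel sends
# requests to a facebook.com path like this one.
TRACKER_ENDPOINT = "https://www.facebook.com/tr"

def build_beacon_url(pixel_id: str, event: str, custom_data: dict) -> str:
    """Serialize a page event into a pixel-style GET request URL.

    Every key in custom_data ends up in the query string, which is why
    any portal click or form field routed through it becomes visible
    to the tracker.
    """
    params = {
        "id": pixel_id,
        "ev": event,
        # Nest custom fields under cd[...], mirroring common pixel formats.
        **{f"cd[{key}]": value for key, value in custom_data.items()},
    }
    return f"{TRACKER_ENDPOINT}?{urlencode(params)}"

# A portal page that passes its context to the pixel would leak it:
url = build_beacon_url(
    "1234567890",  # hypothetical pixel ID
    "PageView",
    {"page": "/mychart/appointments", "dept": "oncology"},  # hypothetical fields
)
```

The resulting URL carries the page path and department name in its query string, which is the sense in which “every click and keystroke” fed to such code is transmitted off-site.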
Although the health system admitted no wrongdoing and has denied the allegations, it agreed to the $2 million settlement.
Under the settlement agreement, all persons, including employees, who logged into a MyChart patient portal between Feb. 1, 2017, and May 23, 2022, will receive a payout.
Facebook Execs Felt ‘Pressure’ From Biden White House to Censor COVID Vaccine Skepticism, Emails Show: Report
Facebook executives reportedly felt “pressure” from the Biden White House to censor skepticism toward the COVID vaccine on the platform, and even predicted such actions would backfire, according to newly surfaced emails.
Public, the Substack newsletter founded by independent journalist Michael Shellenberger, reported Tuesday on emails from the Facebook Files, the internal documents from the Meta-owned platform obtained by House Republicans.
The report showed Facebook’s Director of Strategic Response, Rosa Birch, attempting to push back on the vaccine-skeptic censorship requests, saying it would “prevent hesitant people from talking through their concerns online and reinforce the notion that there’s a cover-up.”
In an April 2021 email to Facebook CEO Mark Zuckerberg and COO Sheryl Sandberg, Birch wrote, “We are facing continued pressure from external stakeholders, including the White House and the press, to remove more COVID-19 vaccine-discouraging content.”
Majority of Americans Are Concerned About Rapidly Developing AI: Poll
Most Americans across party lines say they are concerned about rapidly developing artificial intelligence (AI) technology, according to a new poll released Wednesday.
In a survey of 1,001 registered voters in the United States, 62% of respondents said they were mostly or somewhat concerned about growth in AI, while 21% said they were mostly or somewhat excited about it, and 16% said they were “totally neutral.”
Voters across party lines said they thought AI could eventually pose a threat to the existence of the human race: 76% of all respondents agreed, including 75% of Democrats and 78% of Republicans. Seventy-two percent of voters surveyed also said they preferred slowing down the development of AI.
Survey results also showed broad policy consensus in favor of regulating the AI industry. The vast majority of respondents, at 82%, said they don’t trust tech company executives to self-regulate, and 56% of voters said they would support having a federal agency regulate the use of AI — compared to 14% who would oppose a federal agency and 30% who were unsure.
Most Girls Get Unsolicited Messages on Social Media
More than half of 11- to 15-year-old girls using Instagram and Snapchat in the United States have been contacted by strangers in a way that made them feel uncomfortable, according to a report by Common Sense Media, a nonprofit organization that reviews and provides ratings for media and technology in order to safeguard children.
Meanwhile, as Statista’s Anna Fleck reports, some 48% of teen girls in the U.S. said they had been sent unsolicited messages over a messaging app, while 46% were contacted over TikTok and 30% on YouTube.
The report also reveals that nearly half (45%) of girls who use TikTok say they feel “addicted” to the platform or use it more than intended at least weekly.
By the same measure of “addictiveness,” the share of users who reported using a platform more than intended at least weekly, the other services rank as follows: Snapchat (37%), YouTube (34%), Instagram (33%) and messaging apps (30%).
AI Is Building Highly Effective Antibodies That Humans Can’t Even Imagine
At an old biscuit factory in South London, giant mixers and industrial ovens have been replaced by robotic arms, incubators, and DNA sequencing machines. James Field and his company LabGenius aren’t making sweet treats; they’re cooking up a revolutionary, AI-powered approach to engineering new medical antibodies.
The LabGenius approach yields unexpected solutions that humans may not have thought of, and finds them more quickly: It takes just six weeks from setting up a problem to finishing the first batch, all directed by machine learning models.
LabGenius has raised $28 million from the likes of Atomico and Kindred, and is beginning to partner with pharmaceutical companies, offering its services like a consultancy. Field says the automated approach could be rolled out to other forms of drug discovery too, turning the long, “artisanal” process of drug discovery into something more streamlined.
More and More Businesses Are Blocking ChatGPT on Work Devices
Organizations are increasingly banning the use of generative AI tools such as ChatGPT, citing concerns over privacy, security and reputational damage.
In a new report published by BlackBerry, 66% of surveyed organizations said they will prohibit ChatGPT and other generative AI tools in the workplace, and 76% of IT decision-makers agreed that employers are entitled to control which software workers use for their jobs.
What’s more, 69% of the organizations implementing bans said the bans would be permanent or long term, citing the risk the tools pose to company security and privacy.