The Legal Loophole That Lets the Government Search Your Phone
Despite the U.S. ethos that you’re innocent until proven guilty in a court of law, law enforcement needs only a presumption of wrongdoing to find an excuse to search your digital devices. The tech to do this already exists, and murky legislation lets it happen, speakers from the Legal Aid Society said at DEF CON last Friday.
The problem isn’t just the state of local law; it’s embedded in the Constitution. As Diane Akerman, a digital forensics attorney at the Legal Aid Society, explained, the Fourth Amendment hasn’t been updated to account for modern problems like digital data. The Fourth Amendment is meant to protect people from “unreasonable searches and seizures” by the U.S. government. This is where we get legal protections like warrants, which require law enforcement to obtain court approval before looking for evidence in your home, car, or elsewhere.
Today, that includes your digital belongings too, from your phone to the cloud and beyond, opening legal loopholes as tech advancements outpace the law. For example, there’s no way to challenge a search warrant before it’s executed, Akerman said. For physical evidence, that makes some sense: we don’t want someone flushing evidence down a toilet.
That’s not how your social media accounts or cloud data work, though, because those digital records are much harder to scrub. So law enforcement can get a warrant to search your device, and there’s no process to litigate in advance whether the warrant is appropriate. Even when there’s reason for the warrant, Akerman and Allison Young, a digital forensics analyst at the Legal Aid Society, showed that officers can use intentionally vague language to search your entire cell phone when they know the evidence may be in only one account.
Google Is Looking Into Doling Out AI-Generated Life Advice
Recently, there’s been plenty of anxiety around companies investing in AI to replace creative types, such as professional writers. Now, the tech could be coming for life coaches.
Google’s DeepMind division is internally testing generative AI’s ability to perform “at least” 21 kinds of tasks, including giving sensitive life advice to users, per a report from the New York Times. This comes, the Times notes, after Google’s AI experts reportedly warned company executives in December about letting people become too emotionally invested in chatbots.
There would also be tools for teaching users new skills or helping people manage their money or create meal plans, per the Times report. This would be quite a change from Google’s current stance on AI, which bars its Bard chatbot from giving these kinds of advice. As the Times noted, though, Google may never actually deploy these tools to the public; they’re just in testing right now.
Depending on your disposition towards AI, you may hope it stays that way. I certainly wouldn’t judge you for that.
Microsoft Might Be Saving Your Bing Chat Conversations
Uh-oh — Microsoft might be storing information from your Bing chats. This is probably totally fine as long as you’ve never chatted about anything you wouldn’t want anyone else reading, or if you thought your Bing chats would be deleted, or if you thought you had more privacy than you actually have.
Microsoft updated its terms of service with new AI policies. Introduced on July 30 and going into effect on Sept. 30, the policy states: “As part of providing the AI services, Microsoft will process and store your inputs to the service as well as output from the service, for purposes of monitoring for and preventing abusive or harmful uses or outputs of the service.”
According to the Register’s reading of a new “AI Services” clause in Microsoft’s terms of service, Microsoft can store your conversations with Bing if you’re not an enterprise user — and we don’t know for how long.
The EU Wants to Cure Your Teen’s Smartphone Addiction
Glazed eyes. One-syllable responses. The steady tinkle of beeps and buzzes coming out of a smartphone’s speakers. Countries are now taking the first steps to rein in excessive — and potentially harmful — use of big social media platforms like Facebook, Instagram, and TikTok.
China wants to limit screen time to 40 minutes for children under eight, while the U.S. state of Utah has imposed a digital curfew for minors and requires parental consent for social media use. France has targeted manufacturers, requiring them to install a parental control system that can be activated when a device is first turned on.
The EU has its own sweeping plans. It’s taking bold steps with its Digital Services Act (DSA) that, from the end of this month, will force the biggest online platforms — TikTok, Facebook, YouTube — to open up their systems to scrutiny by the European Commission and prove that they’re doing their best to make sure their products aren’t harming kids.
The penalty for non-compliance? A hefty fine of up to six percent of companies’ global annual revenue.
NYC Bans TikTok on City-Owned Devices
New York City is banning TikTok from city-owned devices and requiring agencies to remove the app within the next 30 days.
The directive issued Wednesday comes after a review by the NYC Cyber Command, which a city official said found that TikTok “posed a security threat to the city’s technical networks.” Effective immediately, city employees are barred from downloading or using the app, or accessing TikTok’s website, on any city-owned devices.
The city cited US Office of Management and Budget guidelines discouraging TikTok’s use on government devices as well as federal legislation banning the app passed earlier this year.
For more than three years, the U.S. Congress has attempted to push through legislation banning TikTok nationwide, alleging that the app and its Chinese owner, ByteDance, can use the data it collects to spy on Americans.
As AI Shows Up in Doctors’ Offices, Most Patients Are Giving Permission as Experts Advise Caution
Artificial intelligence has been used “behind the scenes” in healthcare for decades, but with the growing popularity of new technologies such as ChatGPT, it’s now playing a bigger role in patient care — including during routine doctor’s visits.
Physicians may rely on AI to record conversations, manage documentation and create personalized treatment plans. And that raises the question of whether they must get patients’ permission first to use the technology during appointments.
In terms of HIPAA compliance with AI-generated documentation, things can get a little murky. “HIPAA does not specifically require patient consent for the use of AI — artificial intelligence wasn’t even a term when HIPAA was created, so it has some catching up to do,” said Manny Krakaris, CEO of Augmedix, a medical technology company in San Francisco.