Parents of Soldier Who Died by Suicide Sue US Over Army’s Actions After He Refused COVID-19 Shot
The parents of a 19-year-old soldier who died by suicide in 2021 say in a recently filed lawsuit that he faced months of threats, retaliation and bullying after objecting to getting the COVID-19 vaccine, court records show. The suit alleges that the Army didn’t protect Pfc. Noah Samuel-Siegel from a toxic commander and purposefully obscured the circumstances behind his death, including withholding information about his mental health.
He was stationed at Camp Humphreys, South Korea, at the time of his death. Margaret and Yeohonatan Samuel-Siegel filed the complaint against the U.S. in federal court in New Jersey on April 16. It contends that the Army intentionally and negligently caused them emotional distress. They are seeking $1 million in damages and want the military to reform how it addresses suicide prevention and post-death investigations, the couple said in a statement Wednesday.
“Noah enlisted to serve his country — not to be bullied to death,” they said. “The Army’s toxic leadership put Noah at risk. The Army then ignored protocols that could have saved him. The Army failed Noah, and then us, at every turn.” The Justice Department, which typically handles lawsuits filed against the federal government, did not immediately respond to a request for comment.
Digital IDs Take Root in US as States and Federal Agencies Chart Uncertain Paths
In January, then-President Joe Biden issued the sweeping Executive Order (EO) 14144, Strengthening and Promoting Innovation in the Nation’s Cybersecurity, which included a direct mandate for federal agencies to prioritize privacy-preserving digital identity systems. Mobile driver’s licenses and electronic identity documents were explicitly named as key components of his last-minute modernization push.
For the first time, the White House officially recognized digital identity as not just a convenience issue, but a fundamental pillar of government service delivery and cybersecurity resilience. The Trump administration’s apparent decision to maintain EO 14144 indicates an acknowledgment of the importance of strengthening cybersecurity measures, even if the broader approach to digital identity and privacy may differ.
The Trump administration, however, has neither promoted nor expanded upon the digital identity initiatives outlined in Biden’s EO. There also has been no significant public communication or policy development from the administration regarding the implementation or advancement of privacy-preserving digital identity systems. This lack of emphasis suggests that while the order remains in place, it may not be a current priority for the administration.
The Miscalculations of COVID School Closures
On June 26, 2020, three months after the coronavirus pandemic had seized the U.S., the American Academy of Pediatrics (AAP), which represents about sixty-seven thousand pediatric physicians, issued guidance on reopening schools. “The AAP strongly advocates that all policy considerations for the coming school year should start with a goal of having students physically present in school,” the statement read.
The guidelines diverged from the Centers for Disease Control and Prevention’s recommendations for in-person learning — six feet of social distancing, for example, could shrink down to three — and the AAP was frank about the deficiencies and potential harms of remote learning.
“Evidence from spring 2020 school closures points to negative impacts on learning,” it stated. “Children and adolescents also have been placed at higher risk of morbidity and mortality from physical or sexual abuse, substance use, anxiety, depression, and suicidal ideation.”
After the statement was released, the New York Times ran an interview with Dr. Sean O’Leary, a pediatrician and one of the co-authors of the AAP guidelines. He stressed that, according to data already in hand from Asia, most children did not seem to get very sick from COVID-19, if they got sick at all, and did not appear to spread the virus to other children or adults. Child-care centers that remained open during the pandemic did not contribute significantly to community spread, nor did schools in other countries that had resumed in-person classes.
U.S. Airports Are Getting Facial Recognition — Here’s What Travelers Should Know
Facial recognition technology is rapidly becoming a fixture in U.S. airports, designed to enhance security and streamline the experience for travelers.
The technology works by analyzing a traveler’s facial features and matching them against a reference image, such as the photo on their ID or in a government database. The Transportation Security Administration, or TSA, has highlighted its potential to significantly reduce wait times at security checkpoints and boarding gates, making travel more efficient for passengers.
Imagine walking through an airport without the hassle of showing your ID multiple times; that is what facial recognition promises. As with any technological advancement, however, it comes with its own challenges and considerations, so understanding how the technology works helps travelers navigate the modern airport effectively.
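For readers curious what “matching against a database” can mean in practice, the minimal sketch below illustrates one common approach: comparing numeric face embeddings with cosine similarity. This is not the TSA’s actual system; the face-recognition model that would produce the embeddings is omitted, and the gallery, threshold, and traveler names are hypothetical placeholders.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_face(probe: np.ndarray, gallery: dict, threshold: float = 0.6):
    """Return the gallery identity whose embedding best matches the probe,
    or None if no score clears the threshold (no confident match)."""
    best_id, best_score = None, -1.0
    for identity, embedding in gallery.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id if best_score >= threshold else None

# Hypothetical example: in a real deployment the embeddings would come from a
# face-recognition model applied to the live camera image and to ID photos.
rng = np.random.default_rng(0)
gallery = {"traveler_A": rng.normal(size=128), "traveler_B": rng.normal(size=128)}
probe = gallery["traveler_A"] + rng.normal(scale=0.05, size=128)  # noisy live capture
print(match_face(probe, gallery))  # expected: "traveler_A"
```

The threshold is the key design choice in a scheme like this: set it too low and strangers are matched to each other, set it too high and legitimate travelers are rejected and sent to manual ID checks.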
NYC Bets on AI Surveillance to Clean up Subways, Predict Criminal Behavior
All public transit systems are digging themselves out of the tough pandemic years, but New York’s Metropolitan Transportation Authority (MTA) has had an especially brutal stretch after a string of concerning crimes, and it’s now looking to AI to make the subways safer.
The MTA is working with AI companies to deploy technology that would analyze security footage as it’s being filmed, according to Gothamist. Without using facial recognition, it would identify “problematic behaviors” and scan for “potential trouble.” The goal is to predict which riders could be criminals and catch them before they act. “AI is the future,” says MTA Chief Security Officer Michael Kemper. “We’re working with tech companies literally right now and seeing what’s out there right now on the market, what’s feasible, what would work in the subway system.”
The system would issue automated alerts to the New York Police Department if it detects a potential issue. About 40% of the cameras on subway platforms are currently monitored in real time; an AI system could extend that real-time monitoring to every camera without requiring additional staff. That expansion would come with the risk of the AI tool falsely accusing innocent riders.
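The MTA has not described how its alerts would be generated. The rough sketch below shows, under purely hypothetical assumptions, how such a pipeline might filter detections from a video-analytics model and forward only high-confidence ones; the behavior labels, confidence threshold, and dispatch function are all invented for illustration and do not reflect any real deployment.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical behavior labels a video-analytics model might emit; the MTA has
# not published which categories its system would actually flag.
ALERT_BEHAVIORS = {"turnstile_jumping", "fight", "person_on_tracks"}

@dataclass
class DetectionEvent:
    camera_id: str
    behavior: str        # label predicted by the video model
    confidence: float    # model confidence, 0.0 to 1.0
    timestamp: datetime

def should_alert(event: DetectionEvent, min_confidence: float = 0.9) -> bool:
    """Forward only high-confidence detections of flagged behaviors.

    A high threshold reduces, but does not eliminate, the false-positive
    risk of flagging innocent riders noted above."""
    return event.behavior in ALERT_BEHAVIORS and event.confidence >= min_confidence

def dispatch_alert(event: DetectionEvent) -> None:
    # Placeholder for whatever notification channel a real system would use.
    print(f"[ALERT] {event.timestamp:%H:%M:%S} camera={event.camera_id} "
          f"behavior={event.behavior} confidence={event.confidence:.2f}")

event = DetectionEvent("platform_14_cam_3", "person_on_tracks", 0.97, datetime.now())
if should_alert(event):
    dispatch_alert(event)
```

Even in this simplified form, every judgment call, from which behaviors count as “problematic” to where the confidence cutoff sits, determines how often the system flags riders who have done nothing wrong.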