Insights

Thoughts on AI in healthcare, nursing, and the future of clinical technology.

January 2025

I'm not sure I want AI reading between the lines of my voice during the hardest conversation of my life. But here's what just happened: our team's new research, led by UPenn's Jiyoun Song, analyzed 79 phone calls about palliative care. An algorithm predicted patients' decisions with 65% accuracy, not from their words, but from vocal energy and pitch. People who said yes spoke differently than those who declined, even before they consciously knew their answer. Here's the context that makes this matter: only 14% of people with serious illness who could benefit from palliative care actually receive it. Not because they don't want it, but because we haven't figured out how to have these conversations well. The AI isn't replacing clinical judgment. It's surfacing hesitation, readiness, uncertainty that clinicians might miss in a 10-minute phone call. What makes me uncomfortable: technology detecting things about me that I haven't consciously decided yet. What makes me hopeful: clinicians getting better at hearing what patients are actually telling them, even when the words don't match.
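For readers curious what "predicting from vocal energy and pitch" can look like in practice, here is a minimal sketch of the general recipe (summary acoustic features plus a simple classifier). It is not the study's actual pipeline; the file names, labels, and feature choices are placeholders, and it assumes librosa and scikit-learn are available.

```python
# Minimal sketch, not the study's pipeline: summarize each call recording as
# coarse energy/pitch statistics, then fit a simple classifier on those features.
# File names and labels below are placeholders.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def vocal_features(path):
    """Return [mean energy, energy variability, mean pitch, pitch variability]."""
    y, sr = librosa.load(path, sr=16000)
    rms = librosa.feature.rms(y=y)[0]                # frame-level loudness (energy)
    f0 = librosa.yin(y, fmin=65, fmax=400, sr=sr)    # frame-level pitch estimate (Hz)
    return np.array([rms.mean(), rms.std(), f0.mean(), f0.std()])

# Hypothetical inputs: one recording per call, label 1 = accepted palliative care.
calls = ["call_001.wav", "call_002.wav", "call_003.wav", "call_004.wav"]
labels = np.array([1, 0, 1, 0])

X = np.vstack([vocal_features(p) for p in calls])
model = LogisticRegression(max_iter=1000).fit(X, labels)
print(model.score(X, labels))  # a real study would report accuracy on held-out calls
```

The point of the sketch is how little the words matter here: the classifier never sees a transcript, only how the voice carried the conversation.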

January 2025

A mother found her daughter's AI companion chat logs and everything changed. Her teen had been unraveling for months. Withdrawn. Anxious. The mother couldn't figure out why. Then she discovered the AI chatbot her daughter had been confiding in for hours each night. This isn't rare anymore. According to a Washington Post investigation published yesterday, a majority of teens now use AI companions for emotional support. One in three teens between 13 and 17 relies on these chatbots for social interaction and relationships. Parents are the last to know. OpenAI just launched parental controls after a California family sued them, alleging ChatGPT contributed to their 16-year-old son's death by suicide. The new features include alerts when teens appear 'in a moment of acute distress.' We're not just talking about homework help or entertainment. We're talking about identity formation. Emotional regulation. Social skills. The fundamental building blocks of human development. And we're outsourcing them to technology that didn't exist five years ago.

January 2025

I'm an associate editor at one of the top medical informatics journals. Last month, I sent 52 emails asking professors to review a scientific paper. Three said yes. That 6% response rate? That's the sound of scientific quality control breaking. Here's what most people don't see: Before any study reaches the public, volunteer experts read it for free. They catch errors. They spot bad methods. They're the reason you can trust published research. These volunteers, already overworked professors, are drowning. Why? AI lets researchers write papers 36-59% faster. Sounds great. Except my journal saw submissions triple. The reviewers? Still reading at human speed. Still unpaid. Still fitting this into nights and weekends between teaching and their own research. The 'solution' might make it worse. AI tools could help review papers, but most publishers don't offer them yet. And when they do, with AI writing papers and AI reviewing papers, can you spot the problem? A new Science study puts it bluntly: This system can't continue. We're redesigning the airplane while it's in the air and more passengers keep boarding.

January 2025

There's a career trajectory my mentors taught me years ago. Now I teach it to every PhD student and postdoc. Stage 1: Hustle and survive. Chase first-author papers. Prove you belong. Stage 2: Lead and thrive. Build teams. Learn to let go a little. Stage 3: Become the grumpy professor. Last author. Watch from the back of the room while trainees run the show. Reflecting on the past year, I analyzed my own papers this week and confirmed: I've reached Stage 3. 47 papers this year. In journals I used to cite, not publish in - npj Digital Medicine, JAMA. Two career milestones I didn't expect to hit together: 200 publications and 6,000 citations. But here's what the numbers hide: I'm the last author on 87 of those 200 papers. The work was led by 400+ collaborators across 25+ countries who trusted me enough to think together. The most productive year of my career wasn't about me working harder. It was about finally getting out of the way. Early in my career, I measured success by what I could produce. Now I measure it by what happens when I'm not in the room.

December 2024

I found this meme I posted in 2018 and had to update it! Back then, the joke was about how we all front-loaded effort on Abstract/Intro/Aims… then limped to the finish line with Discussion sections that looked like a tired stick figure. Seven years later? Those Discussion sections hit the gym. Hard. The problem? That 'muscle' is often AI-inflated extrapolation, not actual insight. I've reviewed papers where ChatGPT-assisted Discussion sections make claims the Results would never support. The confidence is Olympic-level. The evidence is… not. We went from 'more research is needed' to 'this fundamentally transforms our understanding of human cognition' real fast. The irony: Reviewers can now spot the steroid use. That unnaturally buff Discussion section is becoming its own red flag.

November 2024

I want the AI bubble to burst. Not because I'm anti-AI. Because I care about patients. In the US, healthcare has become Silicon Valley's $400 billion justification project. The Magnificent Seven tech stocks now depend on convincing investors that medicine NEEDS their AI infrastructure. So they're pushing adoption at 2.2 times the rate of any other industry. The result? 90%+ of corporate AI pilots fail to deliver ROI. 80% of healthcare AI projects never escape pilot purgatory. Yet digital health startups with any AI component command an 83% valuation premium, regardless of evidence. Here's what keeps me up at night: 'zombie algorithms.' When AI vendors fail (and 46% of digital health startups have less than 12 months of runway), they don't shut down cleanly. They become skeletal operations collecting maintenance fees while their algorithms silently degrade. AI models trained on 2024 data will progressively fail as disease patterns evolve. But hospitals remain contractually obligated and fully liable for errors. We're beta-testing unproven technologies from companies surviving on venture capital, not revenue. The dot-com crash killed 78% of internet companies. But it didn't kill the internet. It strengthened it by eliminating the garbage. Healthcare AI needs the same correction.
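To make the "zombie algorithm" risk concrete, here is a toy illustration of silent degradation: a classifier frozen on 2024-style data, left running while the underlying risk patterns drift. Everything below is synthetic; the cohort function, drift values, and AUC threshold are illustrative only, not any vendor's actual model.

```python
# Toy illustration of model drift, synthetic data only: a model trained once on
# "2024" data keeps running unchanged while the true risk relationship shifts.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def cohort(n, drift=0.0):
    """Simulated patient cohort; `drift` rotates the true risk weights over time."""
    X = rng.normal(size=(n, 5))
    w = np.array([1.0, -0.5, 0.8, 0.0, 0.3]) + drift * np.array([-1.0, 1.0, -0.8, 1.0, -0.3])
    y = (rng.random(n) < 1 / (1 + np.exp(-(X @ w)))).astype(int)
    return X, y

X_2024, y_2024 = cohort(2000)
model = LogisticRegression(max_iter=1000).fit(X_2024, y_2024)  # trained once, never updated

for year, drift in [(2025, 0.3), (2026, 0.6), (2027, 1.0)]:
    X_new, y_new = cohort(1000, drift)
    auc = roc_auc_score(y_new, model.predict_proba(X_new)[:, 1])
    note = "  <- retrain, recalibrate, or retire" if auc < 0.70 else ""
    print(f"{year}: AUC {auc:.2f}{note}")
```

Nothing in that loop crashes or throws an error. The numbers just quietly get worse, which is exactly the problem when the vendor is gone and the hospital is still contractually on the hook.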

Want more insights?

I post daily on LinkedIn about AI in healthcare, research findings, and lessons from the field.

Follow on LinkedIn