January 2025
I'm not sure I want AI reading between the lines of my voice during the hardest conversation of my life.
But here's what just happened: our team's new research, led by UPenn's Jiyoun Song, analyzed 79 phone calls about palliative care. An algorithm predicted patients' decisions with 65% accuracy, not from their words, but from vocal energy and pitch. People who said yes spoke differently from those who declined, even before they consciously knew their answer.
Here's the context that makes this matter: only 14% of people with serious illness who could benefit from palliative care actually receive it. Not because they don't want it, but because we haven't figured out how to have these conversations well.
The AI isn't replacing clinical judgment. It's surfacing hesitation, readiness, uncertainty that clinicians might miss in a 10-minute phone call.
What makes me uncomfortable: technology detecting things about me that I haven't consciously decided yet. What makes me hopeful: clinicians getting better at hearing what patients are actually telling them, even when the words don't match.