
Research.

Twenty years building AI tools that help nurses and doctors make better decisions. Two hundred and twenty papers. Thirty-plus grants.

About

Maxim Topaz

PhD · RN · MA · FAAN · FIAHSI · FACMI

Elizabeth Standish Gill Associate Professor of Nursing, Columbia University

Summary

My journey in healthcare began as an Army medic and Registered Nurse. I completed my PhD at the University of Pennsylvania as a Fulbright Fellow, focusing on AI for clinical decision support. I was among the first nurse scientists to complete a postdoc at Harvard Medical School.

I have secured over $25 million in research funding through more than 30 grants. My research focuses on developing AI-based tools for patient prioritization and risk prediction in home healthcare and hospital settings.

Current positions
Associate Professor (Tenured) · Columbia University School of Nursing · 2022–Present
Senior Research Scientist · VNS Health · 2018–Present
Affiliated Faculty · Data Science Institute, Columbia University · 2018–Present
Education
Postdoctoral Fellowship · Harvard Medical School & Brigham and Women's Hospital · Health Informatics
PhD in Nursing · University of Pennsylvania · Fulbright Fellow
Leadership
MEDINFO
Past President (2025) · World's largest international health informatics conference
National Institutes of Health
AI Review Section Co-Chair
NAIL Collaborative
Co-Director · Nursing and Artificial Intelligence Leadership
Awards & recognition
  • Stanford University Top 2% Scientists in Health Informatics
  • Fellow, American Academy of Nursing (FAAN)
  • Fellow, International Academy of Health Sciences Informatics (FIAHSI)
  • Fellow, American College of Medical Informatics (FACMI)
  • Fulbright Fellowship
220+ Publications · 20+ Years active · Top 2% Stanford world scientists
Funded research
$25M+

Total research funding · As PI, MPI, or Co-I across NIH, AHRQ, and foundation awards.

30+ grants. 220+ peer-reviewed publications. Stanford ranks Maxim Topaz in the top 2% of scientists worldwide for citation impact in the field of medical informatics.

Research focus: AI for clinical decision support, patient risk prediction in home healthcare, and reducing bias in clinical AI.

Active grants — Selected

Where the work happens.

Years · Role · Project · Funder · Award
2025–2030 · MPI · ID-STIGMA: NLP for Stigmatizing Language in Birthing Care · NIH/NICHD R01 · $2.85M
2025–2027 · Contact PI · NurseAssist-AI: AI Documentation & Decision Support in Home Healthcare · American Nurses Foundation (ANF) · $467K
2023–2027 · Contact PI · ENGAGE: Reducing Stigmatizing Language in Home Healthcare · NIH/NIMHD R01 · $2.7M
2023–2027 · Contact PI · Speech Processing for Risk: Automated Speech to Predict Hospitalizations & ED Visits · NIH/NIA R01 · $2.6M
2020–2024 · PI · Homecare-CONCERN: Risk Models for Preventable Hospitalizations · AHRQ R01 · $1.5M
2019–2024 · PI · PREVENT: Patient Prioritization in Hospital-Homecare Transition · NIH/NINR R01 · $3.4M
2024–2025 · Co-I · M3AD Study: Multimorbidity & Alzheimer's Disease Across 3 Cities · NIH/NIA R56 · $5.4M
2020–2022 · MPI · NLP for Dementia: Improving Identification of Alzheimer's in Home Healthcare · NIH/NIA R21 · $474K
2019–2023 · Contact PI · AI for Child Safety: AI-Assisted Identification of Child Abuse in Hospitals · Columbia Data Science Institute Pilot · $200K
2020–2022 · MPI · Speech Recognition Feasibility: Audio-Recorded Nurse-Patient Communication for Risk · Amazon / Columbia CAIT Industry · $174K

+ 20 additional grants including K-awards, pilot grants, and international collaborations

Tools & platforms

Working software.

Built to support research, education, and consensus-building in biomedical and health informatics.

Delphi consensus platform · Active, Round 1 open

DelphiAI

IMIA BMHI Recommendations Revision

An AI-assisted Delphi platform built for the IMIA BMHI Recommendations Task Force to revise the 2023 recommendations on AI in biomedical and health informatics education. Manages multiple Delphi rounds, collects expert input via a distraction-free card interface, and uses large language models to synthesize panel feedback between rounds and draft revised recommendation text.

  • Expert self-registration via a single shared link (no accounts, no passwords)
  • One-card-at-a-time review with speech-to-text dictation
  • AI-powered synthesis between rounds: per-item and round-level summaries
  • Auto-drafted proposed revisions for Round 2+ based on panel comments
Open DelphiAI ↗
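The between-round synthesis described above can be sketched in Python. This is a minimal, hypothetical illustration of the workflow — grouping expert comments per recommendation item and composing the prompt an LLM would receive to summarize feedback and draft a Round 2 revision. All names here (`group_comments_by_item`, `build_item_synthesis_prompt`) are illustrative, not the actual DelphiAI code.

```python
from collections import defaultdict

def group_comments_by_item(responses):
    """Group expert comments by recommendation item id.

    `responses` is a list of dicts like
    {"item_id": "R1.2", "expert": "...", "comment": "..."}.
    """
    grouped = defaultdict(list)
    for r in responses:
        grouped[r["item_id"]].append(r["comment"])
    return dict(grouped)

def build_item_synthesis_prompt(item_text, comments):
    """Compose a per-item prompt asking an LLM to summarize panel
    feedback and draft a revised recommendation for the next round."""
    bullet_comments = "\n".join(f"- {c}" for c in comments)
    return (
        "You are assisting a Delphi panel revising an education recommendation.\n"
        f"Current recommendation:\n{item_text}\n\n"
        f"Panel comments from this round:\n{bullet_comments}\n\n"
        "1) Summarize areas of agreement and disagreement.\n"
        "2) Draft a revised recommendation reflecting the panel's feedback."
    )
```

In this sketch the LLM call itself is left out; the platform would send each per-item prompt to a language model and store the draft revision for expert review in the next round.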

Interested in collaborating on a research tool? Get in touch.

Peer-reviewed

Publications.

227+ peer-reviewed articles

2026

Accelerating real-world prediction and research in Alzheimer's: The M3AD study.

Alzheimers Dement PubMed DOI

Desvarieux M, Rundek T, Ahsan H, Narvaez J, Diaz F, Malinsky D, Ruiz LM, Topaz M, Falconer T, Natarajan K, Noble J, Entwisle B, Puram D, Anand T, Chen HY, Jiang X, Gu Y, Cohen A, Terry MB, Pierce B, Andrews H, Rogalsky E, Farzana S, Gulotta G, Beard J, Landron D, Volchenboum SL, Ravaud P, Johnson J, Susser E, Rundle A, Wei Y, Tsinoremas N, Loewenstein D, Fried L, Aiello A, Mayeux R, Hripcsak G

2026

Leveraging patient and their surrogate caregiver communication with clinicians to predict palliative care decisions: A speech processing study.

Geriatr Nurs PubMed DOI

Song J, Beigi H, Davoudi A, Moon R, McDonald MV, Sridharan S, Stanley J, Bowles KH, Shang J, Stone PW, Topaz M

Why it matters: When a loved one is seriously ill, families often struggle with the question: Is it time for comfort care? This study found that by listening carefully to how patients and families talk with nurses, we can better recognize when someone is ready for palliative care—helping more people spend their final days in peace rather than in hospitals.
2026

The association between higher body weight and stigmatizing language documented in hospital birth admission notes.

Int J Obes (Lond) PubMed DOI

Harkins SE, Hazi AK, Hulchafo II, Kim Scroggins J, Topaz M, Barcelona V

Why it matters: New mothers deserve respect regardless of their weight. This study found that doctors and nurses sometimes write judgmental comments about heavier patients in their medical records—language that can follow women through their healthcare journey and affect how future providers treat them.
2026

Advancing healthcare with large language models: A scoping review of applications and future directions.

Int J Med Inform PubMed DOI

Zhang Z, Momeni Nezhad MJ, Bagher Hosseini SM, Zolnour A, Zonour Z, Hosseini SM, Topaz M, Zolnoori M

Why it matters: ChatGPT is already in your doctor's office—helping write prescriptions, summarize visits, and answer patient questions. This comprehensive review maps out exactly how these AI tools are transforming healthcare and where the technology still falls short.
2025

Identifying and Reducing Stigmatizing Language in Home Health Care With a Natural Language Processing-Based System (ENGAGE): Protocol for a Mixed Methods Study.

JMIR Res Protoc PubMed DOI

Zhang Z, Gupta P, Potts-Thompson S, Prescott L, Morrison M, Sittig S, McDonald MV, Raymond C, Taylor JY, Topaz M

Why it matters: What if software could catch the moment a nurse writes something hurtful in a patient's chart—and suggest kinder words instead? This project is building that tool, working to eliminate bias in medical records one note at a time.
2025

Patient Disability Status and the Use of Stigmatizing Language in Clinical Notes During Hospital Admission for Birth.

J Obstet Gynecol Neonatal Nurs PubMed DOI

Harkins SE, Hulchafo II, Scroggins JK, Walsh C, Didier M, Topaz M, Barcelona V

Why it matters: Mothers with disabilities deserve the same respect as anyone else in the delivery room. But this study found their medical charts often contain more negative, judgmental language—words that can shape how nurses and doctors treat them for years to come.
2025

Understanding Gender-Specific Daily Care Preferences for Person-Centered Care: A Topic Modeling Study.

Stud Health Technol Inform PubMed DOI

Woo K, Min SH, Kim A, Choi S, Alexander GL, O'Malley TA, Moen MD, Topaz M

Why it matters: Your grandfather and grandmother may need help with the same tasks—but they want that help delivered differently. By listening to what seniors actually say about their care, researchers discovered that men and women have surprisingly different preferences. One-size-fits-all care doesn't work.
2025

Voice for All: Evaluating the Accuracy and Equity of Automatic Speech Recognition Systems in Transcribing Patient Communications in Home Healthcare.

Stud Health Technol Inform PubMed DOI

Xu Z, Vergez S, Esmaeili E, Zolnour A, Briggs KA, Scroggins JK, Hosseini Ebrahimabad SF, Noble JM, Topaz M, Bakken S, Bowles KH, Spens I, Onorato N, Sridharan S, McDonald MV, Zolnoori M

Why it matters: When Siri or Alexa mishears you, it's annoying. When medical AI mishears a patient, it could be dangerous. This study tested voice recognition across different patients and accents—finding troubling gaps that could put some people at risk.
2025

Toward equitable documentation: Evaluating ChatGPT's role in identifying and rephrasing stigmatizing language in electronic health records.

Nurs Outlook PubMed DOI

Zhang Z, Scroggins JK, Harkins S, Hulchafo II, Moen H, Tadiello M, Barcelona V, Topaz M

Why it matters: Can ChatGPT spot bias that humans miss? Researchers put it to the test, asking the AI to find stigmatizing words in medical charts and suggest respectful replacements. The results show promise for making healthcare fairer—one chart at a time.
2025

Stigmatizing and Positive Language in Birth Clinical Notes Associated With Race and Ethnicity.

JAMA Netw Open PubMed DOI

Hulchafo II, Scroggins JK, Harkins SE, Moen H, Tadiello M, Cato K, Davoudi A, Goffman D, Aubey JJ, Green C, Topaz M, Barcelona V

Why it matters: A study published in JAMA Network Open found stark differences in how doctors and nurses write about Black and Hispanic mothers versus White mothers during childbirth. The biased language—often invisible to those writing it—may help explain why maternal mortality rates differ so dramatically by race.
2025

Nonlinear Relationship Between Vital Signs and Hospitalization/Emergency Department Visits Among Older Home Healthcare Patients and Critical Vital Sign Cutoff for Adverse Outcomes: Application of Generalized Additive Model.

Clin Nurs Res PubMed DOI

Min SH, Song J, Evans L, Bowles KH, McDonald MV, Chae S, Sridharan S, Barrón Y, Topaz M

Why it matters: A blood pressure of 140 might be fine for one patient and a red flag for another. This study found the exact vital sign cutoffs that signal danger for older adults at home—giving visiting nurses a better roadmap for when to act fast versus when to wait and watch.
2025

Comparing the influence of social risk factors on machine learning model performance across racial and ethnic groups in home healthcare.

Nurs Outlook PubMed DOI

Hobensack M, Davoudi A, Song J, Cato K, Bowles KH, Topaz M

Why it matters: AI that predicts hospital risk doesn't work equally well for everyone—and that's a problem. This study revealed how social factors like unstable housing or low income throw off predictions for some patients, and offers solutions to make the technology fairer.
2025

The Overlooked Dark Side of Generative AI in Nursing: An International Think Tank's Perspective.

J Nurs Scholarsh PubMed DOI

Topaz M, Peltonen LM, Michalowski M, Pruinelli L, Ronquillo CE, Zhang Z, Babic A

Why it matters: Everyone's talking about AI's promise in healthcare. But what about the risks nobody wants to mention? An international team of nursing AI experts pulls back the curtain on the dangers—from privacy violations to over-reliance on machines—and what we must do to protect patients.
2025

Symptom Burden: A Concept Analysis.

Nurs Sci Q PubMed DOI

Scharp D, Harkins SE, Topaz M

Why it matters: Researchers clarified what "symptom burden" means in healthcare—the combined weight of all symptoms a patient experiences—helping nurses better assess and address patient suffering.
2025

Building a Time-Series Model to Predict Hospitalization Risks in Home Health Care: Insights Into Development, Accuracy, and Fairness.

J Am Med Dir Assoc PubMed DOI

Topaz M, Davoudi A, Evans L, Sridharan S, Song J, Chae S, Barrón Y, Hobensack M, Scharp D, Cato K, Rossetti SC, Kapela P, Xu Z, Gupta P, Zhang Z, Mcdonald MV, Bowles KH

Why it matters: What if your grandmother's home nurse could know—days in advance—that she was heading for the hospital? This AI does exactly that, tracking subtle changes over time to spot trouble brewing. And unlike many AI tools, this one works equally well for patients of every race and background.
2025

Beyond electronic health record data: leveraging natural language processing and machine learning to uncover cognitive insights from patient-nurse verbal communications.

J Am Med Inform Assoc PubMed DOI

Zolnoori M, Zolnour A, Vergez S, Sridharan S, Spens I, Topaz M, Noble JM, Bakken S, Hirschberg J, Bowles K, Onorato N, McDonald MV

Why it matters: The earliest signs of dementia often hide in plain sight—in how someone answers a question, the words they choose, the stories they tell. AI can now detect these subtle clues in ordinary nurse-patient conversations, potentially catching memory decline years before traditional tests would.
2024

Social Determinants of Health in Digital Health Policies: an International Environmental Scan.

Yearb Med Inform PubMed DOI

Song J, Hobensack M, Sequeira L, Shin HD, Davies S, Peltonen LM, Alhuwail D, Alnomasy N, Block LJ, Chae S, Cho H, von Gerich H, Lee J, Mitchell J, Ozbay I, Lozada-Perezmitre E, Ronquillo CE, You SB, Topaz M

Why it matters: An international team reviewed how different countries' digital health policies address social factors like poverty and housing that affect health, finding many gaps that need attention.
2024

Decoding disparities: evaluating automatic speech recognition system performance in transcribing Black and White patient verbal communication with nurses in home healthcare.

JAMIA Open PubMed DOI

Zolnoori M, Vergez S, Xu Z, Esmaeili E, Zolnour A, Anne Briggs K, Scroggins JK, Hosseini Ebrahimabad SF, Noble JM, Topaz M, Bakken S, Bowles KH, Spens I, Onorato N, Sridharan S, McDonald MV

Why it matters: The AI transcribing your doctor's visit might work perfectly for some patients—and fail dangerously for others. This study uncovered significant accuracy gaps between how well voice recognition works for Black versus White patients, exposing a hidden bias baked into healthcare technology.