Medical large language models are vulnerable to data-poisoning attacks.
Alber DA, Yang Z, Alyakin A, Yang E, Rai S, Valliani AA, Zhang J, Rosenbaum GR, Amend-Thomas AK, Kurland DB, Kremer CM, Eremiev A, Negash B, Wiggan DD, Nakatsuka MA, Sangwon KL, Neifert SN, Khan HA, Save AV, Palla A, Grin EA, Hedman M, Nasir-Moin M, Liu XC, Jiang LY, Mankowski MA, Segev DL, Aphinyanaphongs Y, Riina HA, Golfinos JG, Orringer DA, Kondziolka D, Oermann EK.
Nat Med. 2025 Jan 8. doi: 10.1038/s41591-024-03445-1. Online ahead of print.
PMID: 39779928