Generative artificial intelligence (GenAI) shows potential for personalized care, psychoeducation, and even crisis prediction in mental health, yet responsible use requires careful ethical deliberation and, potentially, governance. This is the first published theme issue focused on responsible GenAI in mental health. It brings together evidence and insights on GenAI's capabilities, such as emotion recognition, therapy-session summarization, and risk assessment, while highlighting the sensitive nature of mental health data and the need for rigorous validation. Contributors discuss how bias, alignment with human values, transparency, and empathy must be carefully addressed to ensure ethically grounded, artificial intelligence-assisted care. By proposing conceptual frameworks; best practices; and regulatory approaches, including ethics of care and the preservation of socially important humanistic elements, this theme issue underscores that GenAI can complement, rather than replace, the vital role of human empathy in clinical settings. Achieving this will require ongoing collaboration among researchers, clinicians, policy makers, and technologists.
Keywords: AI ethics; artificial intelligence; digital mental health ethics; large language model; model alignment; responsible AI in medicine.
© Oren Asman, John Torous, Amir Tal. Originally published in JMIR Mental Health (https://mental.jmir.org).