Jovana Petrović, Iva Binić, Maša Vacev and Stevo Lukić
Abstract
ChatGPT is one of the most advanced and rapidly evolving
large language model-based chatbots. It performs well on tasks ranging from answering simple questions to passing complex medical examinations. While current technology cannot replace the
expertise and judgment of skilled psychiatrists, it can assist in the early detection of mental health problems, patient evaluation, differential diagnosis, psychotherapy, and the planning and conduct of medical research. Ensuring privacy and adhering to
professional, ethical, and legal standards are crucial when
processing training data. This is especially important in mental
health settings, where disclosing sensitive personal information
increases the risk of data misuse and the potential for harmful
advice. Current uses of ChatGPT in mental health care are
constrained by its design as a general-purpose chatbot rather than a
specialized psychiatric tool. Despite this, the model proves
useful for handling routine psychiatric and administrative tasks.
As GPT technology evolves, it holds significant promise for
psychiatry, including integration into diagnostics,
psychotherapy, and early detection of mental health issues. To
deploy these advancements responsibly and effectively, it is
essential to develop and refine professional ethical standards and
practice guidelines.