3rd International Cumhuriyet Artificial Intelligence Applications Conference 2024, Sivas, Turkey, 21 - 22 November 2024, pp.17, (Summary Text)
Medical AI is one of the hot topics in both medical research and clinical practice. Various studies identify privacy as a major ethical challenge for medical uses of AI. The good news is that most AI tools are designed not to replace physicians but to assist them; this reduces the ethical challenges, though it does not eliminate all of them. Researchers state that although we are far from consensus on the ethical use of medical AI, there is broad agreement on key principles. If the medical data used to train an AI system comes from a narrow sample of patients, the system can err when applied to broader groups. Other problems, on the other hand, stem from the users themselves, so the development of AI literacy is necessary: clinicians have to learn which AI tools to use for which purposes. Early versions of medical AI were useful for explanation and teaching but failed as assistants in clinical practice; this situation has been changing rapidly. Medical students are highly positive about medical AI and believe that it will complement rather than replace human doctors. There is a realistic anxiety that in certain medical specialties, especially radiology, AI will outperform human doctors. AI anxiety can also stem from the perceived difficulty of using AI. A trustworthy AI model is one solution to the ethical problems of medical AI.