Dear Editor,

We read with interest the article "Large language models for structured reporting in radiology: performance of GPT-4, ChatGPT-3.5, Perplexity and Bing" [1]. Mallio et al. observed that artificial intelligence (AI) applications in medicine are expanding rapidly. Large language models (LLMs) have recently gained popularity as useful tools in radiology and are currently being explored for the crucial task of structured reporting [1]. Mallio et al. evaluated four LLMs in terms of their structured reporting knowledge and the templates they proposed. LLMs hold considerable promise for creating structured reports in radiology, but further formal validation is needed [2].

Given how quickly AI is progressing, we believe it is critical to analyze any new technological application thoroughly. As computational tools, LLMs base their responses on publicly available information, which may come from legitimate or questionable sources. Sensitive information should therefore not be created, reviewed, or approved by AI without human assessment [2]. Data accuracy remains a significant and contentious issue, and it is equally important to consider how AI should be applied ethically. ChatGPT may generate information that appears immediately relevant even without user verification. Adequate input control measures may help prevent misuse, and the technology can still be beneficial when used appropriately. At this time, it may be necessary to establish rules for the ethical and effective use of AI. It is crucial to keep in mind that people, not AI, determine whether AI is used effectively and ethically.