Assessment of the Capability of ChatGPT-3.5 in Medical Physiology Examination in an Indian Medical School
Interdiscip J Virtual Learn Med Sci
Article 6, Volume 14, Issue 4 (Serial Number 55), March 2023, Pages 311-317 | Full Text (627.47 K)
Article Type: Original Article
DOI: 10.30476/ijvlms.2023.98496.1221
Authors
Himel Mondal1; Anup Kumar Dhanvijay1; Ayesha Juhi1; Amita Singh1; Mohammed Jaffer Pinjar1; Anita Kumari1; Swati Mittal2; Amita Kumari1; Shaikat Mondal*3
1Department of Physiology, All India Institute of Medical Sciences, Deoghar, Jharkhand, India
2Department of Physiology, Kalyan Singh Government Medical College, Bulandshahr, Uttar Pradesh, India
3Department of Physiology, Raiganj Government Medical College and Hospital, West Bengal, India
Abstract
Background: There has been increasing interest in exploring the capabilities of artificial intelligence (AI) in various fields, including education. Medical education is an area where AI could have a significant impact, especially in helping students answer their customized questions. In this study, we aimed to investigate the capability of ChatGPT, a conversational AI model, to generate answers to medical physiology examination questions in an Indian medical school.
Methods: This cross-sectional study was conducted in March 2023 at an Indian medical school in Deoghar, Jharkhand, India. The first mid-semester physiology examination was used as the reference examination. It comprised two long-essay and five short-essay questions (total 40 marks) and 20 multiple-choice questions (MCQs) (total 10 marks). We generated responses from ChatGPT (March 13 version) for both the essay and MCQ questions. The essay-type answer sheet was evaluated by five faculty members, and the average was taken as the final score. The examination scores of 125 students (all first-year medical students) were obtained from the departmental registry. The median score of the 125 students was compared with the score of ChatGPT using the Mann-Whitney U test.
Results: The median score of the 125 students on the essay-type questions was 20.5 (Q1-Q3: 18-23.5), corresponding to a median percentage of 51.25% (Q1-Q3: 45-58.75) (P=0.147). The answers generated by ChatGPT scored 21.5 (Q1-Q3: 21.5-22), corresponding to 53.75% (Q1-Q3: 53.75-55) (P=0.125). Hence, ChatGPT scored similarly to the students (P=0.4) on the essay-type questions. On the MCQ-type questions, ChatGPT answered 19 of 20 correctly (score=9.5), which was higher than the students' median score of 6 (Q1-Q3: 5-6.5) (P<0.0001).
Conclusion: ChatGPT has the potential to generate answers to medical physiology examination questions, and it is more capable at solving MCQs than essay-type questions. Although ChatGPT provided answers of sufficient quality to pass the examination, the capability to generate high-quality answers for educational purposes has yet to be demonstrated. Hence, its use in medical education for teaching and learning purposes remains to be explored.
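The Methods describe comparing the 125 students' scores against ChatGPT's faculty-graded score with a Mann-Whitney U test. The dependency-free sketch below illustrates that comparison; all score values are illustrative placeholders, not the study's raw data, and a real analysis would typically use a library routine such as scipy.stats.mannwhitneyu (which also applies tie corrections).

```python
# Minimal Mann-Whitney U test via the normal approximation (no tie
# correction). Illustrative only; the score lists below are hypothetical.
from math import sqrt
from statistics import NormalDist

def mann_whitney_u(a, b):
    """Return (U statistic, two-sided p-value) for samples a and b."""
    # U counts, over all pairs, how often a value in `a` exceeds one in `b`
    # (ties count as 0.5).
    u = sum((x > y) + 0.5 * (x == y) for x in a for y in b)
    n1, n2 = len(a), len(b)
    mu = n1 * n2 / 2                                  # mean of U under H0
    sigma = sqrt(n1 * n2 * (n1 + n2 + 1) / 12)        # std dev of U under H0
    z = (u - mu) / sigma
    p = 2 * (1 - NormalDist().cdf(abs(z)))            # two-sided p-value
    return u, p

# Hypothetical data: 125 student essay scores vs. five faculty evaluations
# of ChatGPT's answer sheet (placeholders, not the published data).
students = [18, 20, 21, 23, 19] * 25
chatgpt = [21.5, 21.5, 22.0, 21.5, 22.0]

u, p = mann_whitney_u(students, chatgpt)
print(f"U = {u}, p = {p:.3f}")
```

The normal approximation is reasonable here because one sample is large; for two small samples, an exact U distribution would be preferred.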
Keywords
Distance; Education; Artificial intelligence; ChatGPT; Physiology; Examination; Students; Medical