Authors: Yavrum, Fuat; Kocabaş, Dilara Özkoyuncu
Date Accessioned: 2026-01-24
Date Available: 2026-01-24
Publication Year: 2024
ISSN: 2587-0319
TR Dizin: https://search.trdizin.gov.tr/tr/yayin/detay/1294476
DOI: https://doi.org/10.30565/medalanya.1531790
Handle: https://hdl.handle.net/20.500.12868/3756

Abstract:
Aim: This study aimed to assess ChatGPT-3.5's performance in ophthalmology by comparing its responses to clinical case-based questions and multiple-choice questions (MCQs).
Methods: ChatGPT-3.5, an AI model developed by OpenAI, was employed. It answered 98 case-based questions from "Ophthalmology Review: A Case-Study Approach" and 643 MCQs from "Review Questions in Ophthalmology". ChatGPT's answers were compared against the books' answer keys, and statistical analysis was conducted.
Results: ChatGPT achieved an overall accuracy of 56.1% on case-based questions. Accuracy varied across categories, highest in the retina section (69.5%) and lowest in the trauma section (38.2%). On MCQs, accuracy was 53.5%, weakest in the optics section (32.6%) and highest in the pathology and uveitis sections (66.7% and 63.0%, respectively). ChatGPT performed better on case-based questions than on MCQs in the retina and pediatric ophthalmology sections.
Conclusion: ChatGPT-3.5 shows potential as a tool in ophthalmology, particularly in retina and pediatric ophthalmology. Further research is needed to evaluate the clarity and acceptability of ChatGPT's answers to open-ended questions.

Language: en
Rights: info:eu-repo/semantics/openAccess
Keywords: Artificial intelligence; Ophthalmology; ChatGPT; Large language model
Title: Is ChatGPT a Useful Tool for Ophthalmology Practice?
Type: Article
DOI: 10.30565/medalanya.1531790
Volume: 8; Issue: 3; Pages: 221-227
TR Dizin Record ID: 1294476
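
Note: The Methods state only that "statistical analysis was conducted" without naming a test. As a minimal illustrative sketch (an assumption, not the paper's method), the overall case-based vs. MCQ accuracies could be compared with a chi-square test of independence, with the correct-answer counts back-calculated from the reported percentages (56.1% of 98 ≈ 55; 53.5% of 643 ≈ 344):

```python
# Sketch only: the test choice and the correct/incorrect counts are
# assumptions derived from the abstract's reported percentages, not
# figures taken directly from the paper.
from scipy.stats import chi2_contingency

case_correct, case_total = 55, 98    # case-based: 56.1% of 98 questions
mcq_correct, mcq_total = 344, 643    # MCQ: 53.5% of 643 questions

# 2x2 contingency table: rows = question format, cols = correct/incorrect
table = [
    [case_correct, case_total - case_correct],
    [mcq_correct, mcq_total - mcq_correct],
]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.3f}")
```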