Author: Koçak, Duygu
Date accessioned: 2025-07-31
Date available: 2025-07-31
Date issued: 2025
Citation: Koçak D. Examination of ChatGPT's Performance as a Data Analysis Tool. Educ Psychol Meas. 2025 Jan 3:00131644241302721. DOI: 10.1177/00131644241302721.
ISSN: 0013-1644
eISSN: 1552-3888
DOI: https://doi.org/10.1177/00131644241302721
Handle: https://hdl.handle.net/20.500.12868/2604

Abstract: This study examines the performance of ChatGPT, developed by OpenAI and widely used as an AI-based conversational tool, as a data analysis tool through exploratory factor analysis (EFA). To this end, simulated data were generated under various data conditions, including normal distribution, response category, sample size, test length, factor loading, and measurement models. The generated data were analyzed using ChatGPT-4o twice with a 1-week interval under the same prompt, and the results were compared with those obtained using R code. In the data analysis, the Kaiser–Meyer–Olkin (KMO) value, total variance explained, and the number of factors estimated using the empirical Kaiser criterion, Hull method, and Kaiser–Guttman criterion, as well as factor loadings, were calculated. The findings obtained from ChatGPT at two different times were found to be consistent with those obtained using R. Overall, ChatGPT demonstrated good performance for steps that require only computational decisions without involving researcher judgment or theoretical evaluation (such as KMO, total variance explained, and factor loadings). However, for multidimensional structures, although the estimated number of factors was consistent across analyses, biases were observed, suggesting that researchers should exercise caution in such decisions.

Language: en
Rights: info:eu-repo/semantics/openAccess
Keywords: ChatGPT; Accuracy Estimation Percentage; Artificial Intelligence; Data Analysis; Exploratory Factor Analysis; Relative Bias
Title: Examination of ChatGPT's Performance as a Data Analysis Tool
Type: Article
DOI: 10.1177/00131644241302721
Other identifiers: 8546416713975953785213996287; Q1; WOS:001388833800001; Q2
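Two of the computational checks named in the abstract, the KMO measure of sampling adequacy and factor retention by the Kaiser–Guttman (eigenvalue greater than one) criterion, can be sketched in Python on simulated data. This is a minimal illustration only: the simulation settings below (n = 500, six items, loadings of 0.7, a single factor) are assumptions for the sketch and do not reproduce the study's actual design conditions or its R code.

```python
import numpy as np

def kmo(corr):
    """Kaiser-Meyer-Olkin measure of sampling adequacy from a correlation matrix."""
    inv = np.linalg.inv(corr)
    d = np.sqrt(np.diag(inv))
    partial = -inv / np.outer(d, d)   # partial correlations, controlling for all other items
    np.fill_diagonal(partial, 0.0)
    r = corr.copy()
    np.fill_diagonal(r, 0.0)
    r2, p2 = (r ** 2).sum(), (partial ** 2).sum()
    return r2 / (r2 + p2)

# Simulate a one-factor model (illustrative settings, not the study's design grid).
rng = np.random.default_rng(42)
n, p, loading = 500, 6, 0.7
factor = rng.standard_normal((n, 1))
noise = rng.standard_normal((n, p)) * np.sqrt(1.0 - loading ** 2)
X = factor * loading + noise

R = np.corrcoef(X, rowvar=False)
kmo_val = kmo(R)                      # close to 0.9 for this population structure

# Kaiser-Guttman criterion: retain factors whose eigenvalues exceed 1.
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]
n_factors = int((eigvals > 1.0).sum())
```

With a clean one-factor population like this, the Kaiser–Guttman rule recovers a single factor; the abstract's caution concerns multidimensional structures, where such purely computational retention rules can be biased.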