Browse by author "Yurdakul, Mustafa"
Now showing 1 - 5 of 5
Item: Abc-based weighted voting deep ensemble learning model for multiple eye disease detection (Elsevier Sci Ltd, 2024) Uyar, Kubra; Yurdakul, Mustafa; Tasdemir, Sakir
Background and objective: The eye is the unique organ that provides vision, and various disorders can cause visual impairment. Therefore, identifying eye diseases at an early stage is important so that the necessary precautions can be taken. The Convolutional Neural Network (CNN), successfully used in various image-analysis problems thanks to its automatic, data-dependent feature learning, can be employed with ensemble learning.
Methods: A novel approach that combines CNNs with the robustness of ensemble learning was designed to classify eye diseases. From a comprehensive evaluation of fifteen pre-trained CNN models on the Eye Disease Dataset (EDD), the three models with the best classification performance were identified. Instead of employing traditional ensemble methods, these CNN models were integrated through a weighted-voting mechanism, where the contribution of each model was determined by the Artificial Bee Colony (ABC) algorithm. The core innovation lies in the use of ABC, a departure from conventional methods, to derive these optimal weights. This integration and optimization process culminates in ABCEnsemble, designed to offer enhanced predictive accuracy and generalization in eye disease classification.
Results: Various optimization methods were analyzed to apply weighted voting and determine the optimized weights of the three best-performing CNN models. On EDD, ABCEnsemble achieved average accuracy of 98.84%, precision of 98.90%, recall of 98.84%, and f1-score of 98.85%.
Conclusions: The eye disease classification accuracy of 93.17% obtained with DenseNet169 was increased to 98.84% by ABCEnsemble.
The design of ABCEnsemble and the experimental findings of the proposed approach make significant contributions to the related literature.

Item: Enhanced ore classification through optimized CNN ensembles and feature fusion (Springer International Publishing, 2025) Yurdakul, Mustafa; Uyar, Kübra; Taşdemir, Şakir
Ore is a type of natural stone that contains economically valuable minerals or metals. Accurate classification of ore minerals is crucial for improving operational efficiency in mining, reducing environmental impacts, and determining market value. Traditional methods for classifying ores are often time-consuming, labor-intensive, and error-prone, so computer-aided systems offer a significant advantage in this field. In this study, various efficient Deep Learning (DL) approaches are utilized for the detection of ore types. Four experiments (transfer learning, feature extraction and classification with SVM, feature selection with optimization algorithms, and ensemble methods) are conducted, and the methods are compared in terms of classification metrics. The experimental case studies achieve high accuracy rates between 95% and 98%. The most successful method is the ensemble weighted by grid search: combining the AlexNet, VGG16, and Xception models, it achieves an overall accuracy of 98.11%, precision of 98.18%, recall of 98.11%, and f1-score of 98.11% on the publicly available Ore Images Dataset (OID). This study demonstrates that efficient DL approaches can classify ores with very high accuracy and have significant potential applications in the mining industry.
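Both ensemble papers above fuse the class probabilities of several CNNs with learned per-model weights (soft voting). A minimal sketch of that idea follows; the model names and probability values are toy placeholders, and a simple random search stands in for the ABC or grid-search weight optimization described in the abstracts:

```python
import numpy as np

def weighted_vote(probs, weights):
    """Fuse per-model class probabilities with normalized model weights (soft voting)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                      # weights sum to 1
    fused = np.tensordot(w, np.asarray(probs), axes=1)   # (models,) x (models, samples, classes)
    return fused.argmax(axis=1)                          # predicted class per sample

def search_weights(probs, labels, iters=200, seed=0):
    """Hypothetical stand-in for ABC/grid search: keep the best random weight vector."""
    rng = np.random.default_rng(seed)
    best_w, best_acc = None, -1.0
    for _ in range(iters):
        w = rng.random(len(probs))
        acc = float((weighted_vote(probs, w) == labels).mean())
        if acc > best_acc:
            best_w, best_acc = w, acc
    return best_w, best_acc

# Toy predictions from three hypothetical models, 2 samples, 3 classes.
probs = [
    [[0.7, 0.2, 0.1], [0.1, 0.6, 0.3]],  # model A
    [[0.5, 0.3, 0.2], [0.2, 0.5, 0.3]],  # model B
    [[0.2, 0.5, 0.3], [0.1, 0.2, 0.7]],  # model C
]
print(weighted_vote(probs, [0.5, 0.3, 0.2]))  # [0 1]
```

In the actual studies the weights come from ABC (first item) or grid search (second item) evaluated on validation accuracy; only the fusion step is shown faithfully here.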
© The Author(s), under exclusive licence to Springer Nature Switzerland AG 2025.

Item: MaxGlaViT: A Novel Lightweight Vision Transformer-Based Approach for Early Diagnosis of Glaucoma Stages From Fundus Images (Wiley, 2025) Yurdakul, Mustafa; Uyar, Kubra; Tasdemir, Sakir
Glaucoma is a prevalent eye disease that often progresses without symptoms and can lead to permanent vision loss if not detected early. The limited number of specialists and overcrowded clinics worldwide make early detection difficult. Deep learning-based computer-aided diagnosis (CAD) systems address this problem by enabling faster and more accurate diagnosis. In this study, we proposed MaxGlaViT, a novel Vision Transformer model based on MaxViT, to diagnose different stages of glaucoma. The architecture is constructed in three steps: (i) the Multi-Axis Vision Transformer (MaxViT) structure is scaled in terms of the number of blocks and channels, (ii) low-level feature extraction is improved by integrating an attention mechanism into the stem block, and (iii) high-level feature extraction is improved by using a modern convolutional structure. The MaxGlaViT model was tested on the HDV1 fundus image dataset and compared to a total of 80 deep learning models. The results show that MaxGlaViT, with its effective block structures, outperforms previous methods in the literature in terms of both parameter efficiency and classification accuracy. The model is particularly successful in detecting the early stages of glaucoma. MaxGlaViT is an effective solution for multistage diagnosis of glaucoma with low computational cost and high accuracy.
In this respect, it can be considered a candidate for a scalable and reliable CAD system applicable in clinical settings.

Item: ROPGCViT: A Novel Explainable Vision Transformer for Retinopathy of Prematurity Diagnosis (IEEE-Inst Electrical Electronics Engineers Inc, 2025) Yurdakul, Mustafa; Uyar, Kubra; Tasdemir, Sakir; Atabas, Irfan
Retinopathy of Prematurity (ROP) is a severe disease that occurs in premature babies due to abnormal development of the retinal vessels and can lead to permanent vision loss. Fundus images are critical in the diagnosis of ROP; however, examining them is a subjective, time-consuming, and error-prone process that requires experience, which can lead to delayed diagnosis and inaccurate evaluations. Therefore, the need for computer-aided diagnosis (CAD) systems grows day by day. Deep learning (DL) methods have high potential for analyzing such complex images. In this study, a total of 50 DL models, 25 Convolutional Neural Network (CNN) and 25 Vision Transformer (ViT) models, were tested to diagnose ROP from fundus images. Furthermore, the ROPGCViT model, based on the Global Context Vision Transformer (GCViT), was proposed. GCViT was enhanced with Squeeze-and-Excitation (SE) blocks and Residual Multilayer Perceptron (RMLP) structures to effectively learn local and global context information. With a dataset of 1099 fundus images, the performance of the model was evaluated in terms of accuracy, precision, recall, f1-score, and Cohen's kappa score. To enhance explainability, the Gradient-Weighted Class Activation Mapping (Grad-CAM) method was used to visualize the regions of the fundus images the model focused on during classification, providing insight into its decision-making process. ROPGCViT outperformed both the 50 DL models and methods in the literature with 94.69% accuracy, 94.84% precision, 94.69% recall, 94.60% f1-score, and a Cohen's kappa score of 93.10%.
Additionally, the Grad-CAM visualizations demonstrated the model's ability to focus on clinically relevant regions, enhancing trust and interpretability for experts. The proposed ROPGCViT model provides a robust solution for ROP diagnosis with high accuracy, flexibility, and generalization capacity.

Item: Webserver-Based Mobile Application for Multi-class Chestnut (Castanea sativa) Classification Using Deep Features and Attention Mechanisms (Springer, 2025) Yurdakul, Mustafa; Uyar, Kubra; Tasdemir, Sakir
Chestnut (Castanea sativa) is a nutritious food rich in fiber, vitamins C and B group, and minerals such as potassium, magnesium, and iron. Beyond nutrition, chestnuts are used in fields such as medicine, cosmetics, and energy; these characteristics make them a product in demand worldwide. Determining the market price of chestnuts requires good classification. In traditional approaches, producers classify chestnuts by external appearance; however, this is tedious, time-consuming, and prone to errors, so computer-aided systems are needed to analyze chestnut varieties. Therefore, a camera system was set up and images of chestnuts of the 'Alandız', 'Aydın', 'Simav', and 'Zonguldak' varieties were captured to create a novel dataset. Moreover, a deep-learning-based mobile application was developed to classify chestnut types. After testing 16 state-of-the-art convolutional neural network (CNN) models, the three most successful were used as feature extractors, and the extracted features were classified using Decision Tree (DT), Naive Bayes (NB), Support Vector Machine (SVM), AdaBoost, and Xtreme Gradient Boosting (XGB) algorithms. Finally, attention modules were integrated into the CNN models to further improve classification accuracy.
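Both ROPGCViT and the chestnut classifier add channel-attention modules to a CNN backbone. A minimal numpy sketch of the Squeeze-and-Excitation idea (global pooling, bottleneck MLP, sigmoid gate, per-channel rescaling) is shown below; the weights and feature map are random placeholders, not values from either paper:

```python
import numpy as np

def se_block(feature_map, w1, w2):
    """Squeeze-and-Excitation: reweight channels using globally pooled statistics."""
    # Squeeze: global average pool over the spatial dimensions -> (channels,)
    z = feature_map.mean(axis=(0, 1))
    # Excitation: bottleneck MLP (ReLU, then sigmoid gate in (0, 1) per channel)
    s = np.maximum(z @ w1, 0.0)
    gate = 1.0 / (1.0 + np.exp(-(s @ w2)))
    # Scale: per-channel recalibration of the original feature map
    return feature_map * gate

rng = np.random.default_rng(0)
x = rng.random((4, 4, 8))           # toy H x W x C feature map
w1 = rng.standard_normal((8, 2))    # reduction to C/4 channels
w2 = rng.standard_normal((2, 8))    # expansion back to C channels
y = se_block(x, w1, w2)
print(y.shape)                       # (4, 4, 8): same shape, channels rescaled
```

In the papers this block sits inside a trained network (GCViT stages, or the CNN feature extractors), so the gate weights are learned rather than random.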
The highest result, achieved by MobileNet with the attention mechanism, was accuracy of 99.65%, precision of 99.62%, recall of 99.67%, f1-score of 99.64%, kappa score of 100%, and area under the curve (AUC) of 100%. The chestnut dataset can be used in future studies for different purposes, and the proposed framework can serve as a computer-aided decision support system for experts in farming.
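The chestnut pipeline above feeds CNN deep features into classical classifiers (SVM, DT, NB, AdaBoost, XGB). The sketch below illustrates only that second stage, with a nearest-centroid classifier standing in for the SVM and hand-made 2-D "features" standing in for CNN embeddings; none of this reproduces the paper's actual models or data:

```python
import numpy as np

def fit_centroids(features, labels):
    """Fit one centroid per class in the deep-feature space."""
    classes = np.unique(labels)
    centroids = np.stack([features[labels == c].mean(axis=0) for c in classes])
    return classes, centroids

def predict(features, classes, centroids):
    """Assign each feature vector to the class of its nearest centroid."""
    d = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
    return classes[d.argmin(axis=1)]

# Toy "deep features" for two hypothetical chestnut varieties (labels 0 and 1).
feats = np.array([[0.0, 0.1], [0.1, 0.0], [1.0, 0.9], [0.9, 1.0]])
labels = np.array([0, 0, 1, 1])
classes, cents = fit_centroids(feats, labels)
print(predict(np.array([[0.05, 0.05], [0.95, 0.95]]), classes, cents))  # [0 1]
```

The design point the abstract makes is that the feature extractor and the classifier are decoupled: any of the listed classical algorithms can be swapped into this second stage without retraining the CNN.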












