
Browsing by Author "Tasdemir, Sakir"

Now showing 1 - 5 of 5
  • Item
    ABC-based weighted voting deep ensemble learning model for multiple eye disease detection
    (Elsevier Sci Ltd, 2024) Uyar, Kubra; Yurdakul, Mustafa; Tasdemir, Sakir
    Background and objective: The eye is the unique organ that provides vision, and various disorders can cause visual impairment. Early identification of eye diseases is therefore essential for taking the necessary precautions. The Convolutional Neural Network (CNN), successfully used in various image analysis problems due to its automatic, data-dependent feature learning ability, can be employed with ensemble learning. Methods: A novel approach that combines CNNs with the robustness of ensemble learning to classify eye diseases was designed. From a comprehensive evaluation of fifteen pre-trained CNN models on the Eye Disease Dataset (EDD), the three models with the best classification performance were identified. Instead of employing traditional ensemble methods, these CNN models were integrated using a weighted-voting mechanism, where the contribution of each model was determined by the Artificial Bee Colony (ABC) algorithm. The core innovation is the use of the ABC algorithm, a departure from conventional methods, to derive these optimal weights. This integration and optimization process culminates in ABCEnsemble, designed to offer enhanced predictive accuracy and generalization in eye disease classification. Results: Various optimization methods were analyzed to apply weighted voting and determine the optimized weights of the three best-performing CNN models. On EDD, ABCEnsemble obtained average performance metrics of 98.84% accuracy, 98.90% precision, 98.84% recall, and 98.85% f1-score. Conclusions: The eye disease classification accuracy of 93.17% obtained with DenseNet169 was increased to 98.84% by ABCEnsemble. The design of ABCEnsemble and the experimental findings of the proposed approach provide significant contributions to the related literature.
  • Item
    Histological tissue classification with a novel statistical filter-based convolutional neural network
    (Wiley, 2024) Unlukal, Nejat; Ulker, Erkan; Solmaz, Merve; Uyar, Kubra; Tasdemir, Sakir
    Deep networks have been of considerable interest in the literature and have enabled the solution of recent real-world applications. Due to filters that offer feature extraction, the Convolutional Neural Network (CNN) is recognized as an accurate, efficient, and trustworthy deep learning technique for image-based challenges. High-performing CNNs are computationally demanding even when they produce good results in a variety of applications, because their large number of parameters limits their reuse on low-performance central processing units. To address these limitations, we suggest a novel statistical filter-based CNN (HistStatCNN) for image classification. The convolution kernels of the designed CNN model were initialized by continuous statistical methods. The performance of the proposed filter initialization approach was evaluated on a novel histological dataset and various histopathological benchmark datasets. To prove the efficiency of statistical filters, three unique parameter sets and a mixed parameter set of statistical filters were applied to the designed CNN model for the classification task. According to the results, the accuracies of the GoogleNet, ResNet18, ResNet50, and ResNet101 models were 85.56%, 85.24%, 83.59%, and 83.79%, respectively. The accuracy was improved to 87.13% by HistStatCNN for the histological data classification task. Moreover, the performance of the proposed filter generation approach was confirmed by testing on various histopathological benchmark datasets, where it increased average accuracy rates. Experimental results validate that the proposed statistical filters enhance the performance of the network with simpler CNN models.
  • Item
    MaxGlaViT: A Novel Lightweight Vision Transformer-Based Approach for Early Diagnosis of Glaucoma Stages From Fundus Images
    (Wiley, 2025) Yurdakul, Mustafa; Uyar, Kubra; Tasdemir, Sakir
    Glaucoma is a prevalent eye disease that often progresses without symptoms and can lead to permanent vision loss if not detected early. The limited number of specialists and overcrowded clinics worldwide make it difficult to detect the disease at an early stage. Deep learning-based computer-aided diagnosis (CAD) systems are a solution to this problem, enabling faster and more accurate diagnosis. In this study, we proposed MaxGlaViT, a novel Vision Transformer model based on MaxViT, to diagnose different stages of glaucoma. The architecture of the model is constructed in three steps: (i) the Multi-Axis Vision Transformer (MaxViT) structure is scaled in terms of the number of blocks and channels, (ii) low-level feature extraction is improved by integrating an attention mechanism into the stem block, and (iii) high-level feature extraction is improved by using a modern convolutional structure. The MaxGlaViT model was tested on the HDV1 fundus image dataset and compared to a total of 80 deep learning models. The results show that MaxGlaViT, which contains effective block structures, outperforms previous methods in the literature in terms of both parameter efficiency and classification accuracy. The model is particularly successful in detecting the early stages of glaucoma. MaxGlaViT is an effective solution for multistage diagnosis of glaucoma with low computational cost and high accuracy. In this respect, it can be considered a candidate for a scalable and reliable CAD system applicable in clinical settings.
  • Item
    ROPGCViT: A Novel Explainable Vision Transformer for Retinopathy of Prematurity Diagnosis
    (IEEE, 2025) Yurdakul, Mustafa; Uyar, Kubra; Tasdemir, Sakir; Atabas, Irfan
    Retinopathy of Prematurity (ROP) is a severe disease that occurs in premature babies due to abnormal development of retinal vessels and can lead to permanent vision loss. Fundus images are critical in the diagnosis of ROP; however, examining fundus images is a subjective, time-consuming, and error-prone process that requires experience. This situation can lead to delayed diagnosis and inaccurate evaluations. Therefore, the need for computer-aided diagnosis (CAD) systems is increasing day by day. Deep learning (DL) methods have high potential in analyzing such complex images. In this study, a total of 50 DL models (25 Convolutional Neural Network (CNN) and 25 Vision Transformer (ViT) models) were tested to diagnose ROP from fundus images. Furthermore, the ROPGCViT model, based on the Global Context Vision Transformer (GCViT), was proposed. GCViT was enhanced with Squeeze-and-Excitation (SE) blocks and Residual Multilayer Perceptron (RMLP) structures to effectively learn local and global context information. With a dataset of 1099 fundus images, the performance of the model was evaluated in terms of accuracy, precision, recall, f1-score, and Cohen's kappa score. To enhance explainability, the Gradient-Weighted Class Activation Mapping (Grad-CAM) method was used to visualize the regions of fundus images the model focused on during classification, providing insights into its decision-making process. ROPGCViT outperformed both the 50 DL models and methods in the literature, with 94.69% accuracy, 94.84% precision, 94.69% recall, 94.60% f1-score, and a Cohen's kappa score of 93.10%. Additionally, the Grad-CAM visualizations demonstrated the model's ability to focus on clinically relevant regions, enhancing trust and interpretability for experts. The proposed ROPGCViT model provides a robust solution for ROP diagnosis with high accuracy, flexibility, and generalization capacity.
  • Item
    Webserver-Based Mobile Application for Multi-class Chestnut (Castanea sativa) Classification Using Deep Features and Attention Mechanisms
    (Springer, 2025) Yurdakul, Mustafa; Uyar, Kubra; Tasdemir, Sakir
    Chestnut (Castanea sativa) is a nutritious food containing fiber, vitamin C, B-group vitamins, and minerals such as potassium, magnesium, and iron. Besides being a nutritious food, chestnuts are used in various fields such as medicine, cosmetics, and energy. All these characteristics make it a product in demand worldwide. Determining the market price of chestnuts requires accurate classification. In traditional approaches, producers classify chestnuts according to their external appearance; however, this is tedious, time-consuming, and prone to errors, so computer-aided systems are needed to analyze chestnut varieties. Therefore, a camera system was set up and images of chestnuts belonging to the 'Alandız', 'Aydın', 'Simav', and 'Zonguldak' varieties were captured to create a novel dataset. Moreover, a deep learning-based mobile application was developed to classify chestnut types. After testing 16 state-of-the-art convolutional neural network (CNN) models, the three most successful models were used as feature extractors, and the extracted features were classified using Decision Tree (DT), Naive Bayes (NB), Support Vector Machine (SVM), AdaBoost, and Xtreme Gradient Boosting (XGB) algorithms. Finally, attention modules were integrated into the CNN models to enhance the classification of chestnut images. The highest result, achieved by MobileNet with an attention mechanism, was an accuracy of 99.65%, precision of 99.62%, recall of 99.67%, f1-score of 99.64%, kappa score of 100%, and area under the curve (AUC) of 100%. The chestnut dataset can be used in literature studies for different purposes, and the proposed framework can be utilized as a computer-aided decision support system for experts in farming.
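The weighted-voting step described in the first entry reduces to a small computation once each CNN's class-probability outputs and one weight per model are available: take the weighted sum of the probability arrays and pick the argmax. A minimal NumPy sketch; the function name and array shapes are illustrative assumptions, and the ABC weight search itself is not shown (the abstract does not detail it):

```python
import numpy as np

def weighted_vote(probas, weights):
    """Combine per-model class-probability arrays, each of shape
    (n_samples, n_classes), with one weight per model, then argmax.
    (Illustrative sketch; not the authors' actual implementation.)"""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()          # normalize so weights sum to 1
    stacked = np.stack(probas)                 # (n_models, n_samples, n_classes)
    combined = np.tensordot(weights, stacked, axes=1)  # weighted sum over models
    return combined.argmax(axis=1)             # predicted class per sample
```

With unequal weights, a strongly weighted model can overturn the majority; an optimizer such as ABC would search the weight vector that maximizes validation accuracy.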
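The second entry initializes convolution kernels from continuous statistical distributions instead of the framework's default initializer. The abstract does not name the exact distributions or parameter sets HistStatCNN uses, so the sketch below simply draws kernel tensors from a chosen continuous distribution; the function name, distribution choices, and scale are assumptions:

```python
import numpy as np

def statistical_filters(n_filters, in_ch, k, dist="normal", seed=0):
    """Draw conv kernels of shape (n_filters, in_ch, k, k) from a
    continuous distribution (hypothetical stand-in for the paper's
    statistical filter generation; exact distributions not given)."""
    rng = np.random.default_rng(seed)
    shape = (n_filters, in_ch, k, k)
    if dist == "normal":
        w = rng.normal(0.0, 0.05, shape)       # zero-mean Gaussian kernels
    elif dist == "uniform":
        w = rng.uniform(-0.05, 0.05, shape)    # bounded uniform kernels
    else:
        w = rng.laplace(0.0, 0.05, shape)      # heavier-tailed alternative
    return w.astype(np.float32)
```

The returned array could then be copied into a framework conv layer's weight tensor before training, which is all "statistical initialization" requires mechanically.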
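The Squeeze-and-Excitation block that the fourth entry adds to GCViT can be written compactly: average-pool each channel to a scalar, pass the result through a small bottleneck MLP, and gate the channels with a sigmoid. A dependency-free NumPy sketch under assumed shapes (the real block sits inside a trained network):

```python
import numpy as np

def se_block(x, w1, b1, w2, b2):
    """Squeeze-and-Excitation on a feature map x of shape (C, H, W).
    w1: (C//r, C), w2: (C, C//r) for reduction ratio r (illustrative)."""
    s = x.mean(axis=(1, 2))                    # squeeze: one scalar per channel
    z = np.maximum(w1 @ s + b1, 0.0)           # excitation bottleneck, ReLU
    g = 1.0 / (1.0 + np.exp(-(w2 @ z + b2)))   # sigmoid gate in (0, 1) per channel
    return x * g[:, None, None]                # rescale channels by their gate
```

Because every gate lies in (0, 1), the block can only attenuate channels, letting the network learn which channels to emphasize relative to the others.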
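Grad-CAM, used in the fourth entry for explainability, is likewise a short computation once a convolutional layer's activations and the gradients of the class score with respect to them are in hand: the channel weights are the spatially averaged gradients, and the heatmap is the ReLU of the weighted channel sum. Shapes and names below are illustrative:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap from one conv layer. Both inputs have shape
    (C, H, W); in practice they come from a forward/backward pass."""
    weights = gradients.mean(axis=(1, 2))              # (C,) channel importance
    cam = np.tensordot(weights, activations, axes=1)   # (H, W) weighted sum
    cam = np.maximum(cam, 0.0)                         # keep positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()                          # normalize to [0, 1]
    return cam
```

Upsampled to the input resolution and overlaid on the fundus image, this map is what lets clinicians check that the model attends to relevant retinal regions.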
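The fifth entry classifies deep CNN features with classical algorithms (SVM, XGBoost, and others). As a dependency-free stand-in for those classifiers, the sketch below fits a nearest-centroid classifier over feature vectors; the feature-extraction step is assumed to have already produced the (N, D) arrays, and this is explicitly not one of the classifiers the paper used:

```python
import numpy as np

def fit_centroids(features, labels):
    """Per-class mean of deep-feature vectors. features: (N, D)."""
    classes = np.unique(labels)
    centroids = np.stack([features[labels == c].mean(axis=0) for c in classes])
    return classes, centroids

def predict(features, classes, centroids):
    """Assign each feature vector to the class with the nearest centroid."""
    d = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
    return classes[d.argmin(axis=1)]
```

Swapping in an SVM or gradient-boosting classifier at this stage is a one-line change in most ML libraries, which is what makes the extract-then-classify pipeline attractive.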


This site is protected by a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


Alanya Alaaddin Keykubat Üniversitesi, Alanya, Antalya, TÜRKİYE

Powered by İdeal DSpace

DSpace software copyright © 2002-2026 LYRASIS
