
Browse by author "Kuran, Alican"

Now showing 1 - 4 of 4
  • Item
    Artificial intelligence system for automatic maxillary sinus segmentation on cone beam computed tomography images
    (Oxford Univ Press, 2024) Bayrakdar, Ibrahim Sevki; Elfayome, Nermin Sameh; Hussien, Reham Ashraf; Gulsen, Ibrahim Tevfik; Kuran, Alican; Gunes, Ihsan; Al-Badr, Alwaleed
    Objectives: The study aims to develop an artificial intelligence (AI) model based on nnU-Net v2 for automatic maxillary sinus (MS) segmentation in cone beam computed tomography (CBCT) volumes and to evaluate the performance of this model. Methods: In 101 CBCT scans, the MS was annotated using the CranioCatch labelling software (Eskisehir, Turkey). The dataset was divided into 3 parts: 80 CBCT scans for training the model, 11 CBCT scans for model validation, and 10 CBCT scans for testing the model. Model training was conducted using the nnU-Net v2 deep learning model with a learning rate of 0.00001 for 1000 epochs. The performance of the model in automatically segmenting the MS on CBCT scans was assessed by several parameters, including F1-score, accuracy, sensitivity, precision, area under the curve (AUC), Dice coefficient (DC), 95% Hausdorff distance (95% HD), and Intersection over Union (IoU) values. Results: F1-score, accuracy, sensitivity, and precision values were found to be 0.96, 0.99, 0.96, and 0.96, respectively, for the successful segmentation of the maxillary sinus in CBCT images. AUC, DC, 95% HD, and IoU values were 0.97, 0.96, 1.19, and 0.93, respectively. Conclusions: Models based on nnU-Net v2 demonstrate the ability to segment the MS autonomously and accurately in CBCT images.
  • Item
    Deep learning model for automated segmentation of sphenoid sinus and middle skull base structures in CBCT volumes using nnU-Net v2
    (Springer, 2025) Gulsen, Ibrahim Tevfik; Kuran, Alican; Evli, Cengiz; Baydar, Oguzhan; Basar, Kevser Dinc; Bilgir, Elif; Celik, Ozer
    Objective: The purpose of this study is the development of a deep learning model based on nnU-Net v2 for the automated segmentation of the sphenoid sinus and middle skull base anatomic structures in cone-beam computed tomography (CBCT) volumes, followed by an evaluation of the model's performance. Material and methods: In this retrospective study, the sphenoid sinus and surrounding anatomical structures in 99 CBCT scans were annotated using web-based labeling software. Model training was conducted using the nnU-Net v2 deep learning model with a learning rate of 0.01 for 1000 epochs. The performance of the model in automatically segmenting these anatomical structures in CBCT scans was evaluated using a series of metrics, including accuracy, precision, recall, Dice coefficient (DC), 95% Hausdorff distance (95% HD), intersection over union (IoU), and AUC. Results: The developed deep learning model demonstrated a high level of success in segmenting the sphenoid sinus, foramen rotundum, and Vidian canal. Upon evaluation of the DC values, the model demonstrated the highest ability to segment the sphenoid sinus, with a DC value of 0.96. Conclusion: The nnU-Net v2-based deep learning model achieved high segmentation performance for the sphenoid sinus, foramen rotundum, and Vidian canal within the middle skull base, with the highest DC observed for the sphenoid sinus (DC: 0.96). However, the model demonstrated limited performance in segmenting other foramina of the middle skull base, indicating the need for further optimization for these structures.
  • Item
    Deep learning-based 3D automatic segmentation of impacted canines in CBCT scans
    (Bmc, 2025) Unal, Turkan; Kuran, Alican; Gulsen, Ibrahim Tevfik; Kizilay, Fatma Nur; Gulsen, Emine; Ozudogru, Semanur; Gordeli, Kadir
    Background: Impacted canines are one of the most frequently encountered dental anomalies in maxillofacial practice. Accurate localization of these teeth is crucial for treatment planning, and Cone Beam Computed Tomography (CBCT) offers detailed 3D imaging for this purpose. However, manual segmentation on CBCT scans is time-consuming and subject to inter-observer variability. This study aimed to develop a deep learning model based on nnU-Net v2 for the automatic segmentation of impacted canines and to evaluate its performance using both classification and segmentation metrics. Methods: A total of 159 CBCT scans containing impacted canines were retrospectively collected and annotated using web-based segmentation software. Model training was performed using the nnU-Net v2 architecture with a learning rate of 0.00001 for 1000 epochs. The performance of the model was evaluated using recall and precision. In addition, segmentation performance was assessed using the Dice Similarity Coefficient (DSC), 95% Hausdorff Distance (95% HD, in mm), and Intersection over Union (IoU). Results: The nnU-Net v2 model achieved high performance in the detection and segmentation of impacted canines. The values obtained for recall and precision were 0.90 and 0.82, respectively. The segmentation metrics were also favorable, with a DSC of 0.84, a 95% HD of 7.07 mm, and an IoU of 0.74, indicating good overlap between predicted and reference segmentations. Conclusions: The results suggest that the nnU-Net v2-based deep learning model can effectively and autonomously segment impacted canines in CBCT volumes. Its strong performance highlights the potential of artificial intelligence to improve diagnostic efficiency in dentomaxillofacial radiology.
  • Item
    Development of a YOLOv8-based deep learning model for detecting and segmenting dental restorations and dental applications in panoramic radiographs of mixed dentition
    (Springernature, 2025) Kuran, Alican; Gulsen, Ibrahim Tevfik; Kizilay, Fatma Nur; Gulsen, Emine; Asar, Mustafa Enes; Ozudogru, Semanur; Unal, Turkan
    Background: The objective of this study was to develop a deep learning (DL) model for the detection and segmentation of six types of dental restorations and applications in panoramic radiographs of paediatric patients with mixed dentition. Material and methods: A total of 2,033 panoramic radiographs were labelled for six different dental restorations. The dataset was divided into three parts: 80% for training, 10% for validation, and 10% for testing. The YOLOv8 model was trained for 500 epochs with a learning rate of 0.01. The success of the model was evaluated using sensitivity, precision, and F1-score metrics. Results: The YOLOv8 multiclass DL model achieved high performance, with an overall F1-score of 0.89, supported by a sensitivity of 0.85 and a precision of 0.93. Among the evaluated restoration types, dental fillings achieved the highest F1-score of 0.97, followed by stainless steel crowns with 0.94, space maintainers with 0.93, pulpotomies with 0.90, and root canal fillings with 0.84. The lowest performance was observed in the detection of dental brackets, which reached an F1-score of only 0.46. Conclusion: YOLOv8-based DL models demonstrate a high level of success in detecting and segmenting dental restorations in panoramic radiographs of patients in the mixed dentition period.
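All four abstracts above report overlap and detection metrics: Dice coefficient (DC/DSC), Intersection over Union (IoU), precision, recall/sensitivity, and F1-score. The following sketch shows how these values relate, computed on a pair of toy binary masks. The masks are invented for illustration and are not data from any of the studies; only the precision (0.93) and sensitivity (0.85) plugged into the F1 formula are taken from the YOLOv8 abstract.

```python
import numpy as np

def dice_and_iou(pred, gt):
    """Dice coefficient and IoU (Jaccard index) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum())
    iou = inter / np.logical_or(pred, gt).sum()
    return float(dice), float(iou)

def f1_score(precision, recall):
    """F1 is the harmonic mean of precision and recall (sensitivity)."""
    return 2 * precision * recall / (precision + recall)

# Toy 2D masks standing in for CBCT voxel labels (illustrative only).
pred = np.zeros((4, 4)); pred[1:3, 1:4] = 1  # 6 predicted voxels
gt = np.zeros((4, 4)); gt[1:3, 0:3] = 1      # 6 ground-truth voxels
dice, iou = dice_and_iou(pred, gt)           # intersection = 4 voxels

# Plugging in the precision and sensitivity reported for the YOLOv8
# model (0.93 and 0.85) reproduces its overall F1-score of 0.89.
overall_f1 = round(f1_score(0.93, 0.85), 2)
```

Dice and IoU are monotonically related (Dice = 2·IoU / (1 + IoU)), which is why the DC values reported in these abstracts are consistently higher than the corresponding IoU values (e.g. IoU 0.93 maps to DC 0.96).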
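Three of the segmentation studies above also report a 95% Hausdorff distance (95% HD), a boundary-distance metric robust to outliers. As a rough sketch of the idea (not the papers' implementation, which presumably operates on full 3D voxel surfaces), the symmetric 95% HD between two surface point sets can be computed as follows; the toy points are invented for illustration.

```python
import numpy as np

def hd95(pts_a, pts_b):
    """95th-percentile symmetric Hausdorff distance between point sets.

    pts_a, pts_b: (N, d) and (M, d) arrays of surface coordinates (e.g. mm).
    """
    # All pairwise Euclidean distances via broadcasting: shape (N, M).
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    d_ab = d.min(axis=1)  # each point of A to its nearest neighbour in B
    d_ba = d.min(axis=0)  # each point of B to its nearest neighbour in A
    return float(np.percentile(np.concatenate([d_ab, d_ba]), 95))

# Toy surfaces: a line of points and the same line shifted by 1 mm.
a = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
b = a + np.array([0.0, 1.0])
shift_hd = hd95(a, b)  # every nearest-neighbour distance is 1.0 mm
```

Taking the 95th percentile instead of the maximum discards the worst 5% of boundary errors, which is why the canine study can report a DSC of 0.84 alongside a 95% HD of 7.07 mm: overall overlap is good even though some boundary points deviate.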


This site is protected under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


Alanya Alaaddin Keykubat Üniversitesi, Alanya, Antalya, TÜRKİYE

Powered by İdeal DSpace

DSpace software copyright © 2002-2026 LYRASIS
