
Yazar "Karadağ, Özge Öztimur" seçeneğine göre listele

Showing 1 - 3 of 3
  • Item
    Empirical evaluation of the effectiveness of variational autoencoders on data augmentation for the image classification problem
    (2020) Karadağ, Özge Öztimur; Çiçek, Özlem Erdaş
    In the last decade, deep learning methods have become the key solution for various machine learning problems. One major drawback of deep learning methods is that they require large datasets to achieve good generalization performance. Researchers propose data augmentation techniques for generating synthetic data to overcome this problem. Traditional methods such as flipping, rotation, etc., which are referred to as transformation-based methods in this study, are commonly used for obtaining synthetic data in the literature. These methods take an image as input and process that image to obtain a new one. On the other hand, generative models such as generative adversarial networks and autoencoders, after being trained with a set of images, learn to generate synthetic data. Recently, generative models have been commonly used for data augmentation in various domains. In this study, we evaluate the effectiveness of a generative model, the variational autoencoder (VAE), on the image classification problem. For this purpose, we train a VAE using the CIFAR-10 dataset and generate synthetic samples with this model. We evaluate the classification performance using various sized datasets and compare the classification performances on four datasets: the dataset without augmentation, the dataset augmented with VAE, and two datasets augmented with transformation-based methods. We observe that the contribution of data augmentation is sensitive to the size of the dataset and that VAE augmentation is as effective as the transformation-based augmentation methods.
  • Item
    Experimental assessment of the performance of data augmentation with generative adversarial networks in the image classification problem
    (Institute of Electrical and Electronics Engineers Inc., 2019) Karadağ, Özge Öztimur; Erdaş Çiçek, Özlem
    Deep learning algorithms have almost become a key standard for the majority of vision and machine learning problems. Despite their common usage and high performance in many applications, they have certain disadvantages. One major problem with deep learning methods is the size of the dataset to be used for training. The methods require a large dataset for adequate training. However, a large dataset may not be available for all problems. In such a case, researchers apply data augmentation methods to obtain a larger dataset from a given dataset. For the image classification problem, the conventional method for data augmentation is the application of transformation-based methods, such as flipping, rotation, blurring, etc. Recently, generative models which apply deep learning methods have also become commonly used for data augmentation. On the other hand, in the case of a too large dataset, the classifiers may overfit and end up with a lack of generalization. In this study, we explore the usage of generative adversarial networks for data augmentation in the image classification problem. We evaluate the classification performance with three types of augmentation methods. The dataset is first augmented by two conventional methods, Gaussian blurring and dropout of regions, and then by generative adversarial networks. Meanwhile, we observe the behavior of the classifier for various sized datasets with and without data augmentation. We observe that, especially for datasets of certain sizes, generative adversarial networks can be effectively used for data augmentation. © 2019 IEEE.
  • Item
    Imitating by generating: Deep generative models for imitation of interactive tasks
    (Frontiers Media SA, 2020) Butepage, Judith; Ghadirzadeh, Ali; Karadağ, Özge Öztimur; Bjorkman, Marten; Kragic, Danica
    To coordinate actions with an interaction partner requires a constant exchange of sensorimotor signals. Humans acquire these skills in infancy and early childhood mostly by imitation learning and active engagement with a skilled partner. They require the ability to predict and adapt to one's partner during an interaction. In this work we want to explore these ideas in a human-robot interaction setting in which a robot is required to learn interactive tasks from a combination of observational and kinesthetic learning. To this end, we propose a deep learning framework consisting of a number of components for (1) human and robot motion embedding, (2) motion prediction of the human partner, and (3) generation of robot joint trajectories matching the human motion. As long-term motion prediction methods often suffer from the problem of regression to the mean, our technical contribution here is a novel probabilistic latent variable model which does not predict in joint space but in latent space. To test the proposed method, we collect human-human interaction data and human-robot interaction data of four interactive tasks "hand-shake," "hand-wave," "parachute fist-bump," and "rocket fist-bump." We demonstrate experimentally the importance of predictive and adaptive components as well as low-level abstractions to successfully learn to imitate human behavior in interactive social tasks.

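The first two entries above describe the same general workflow: train a generative model on the real images, sample synthetic images from it, and compare a classifier trained on the augmented set against one trained with conventional transformation-based augmentation. The sketch below is a minimal illustration of that workflow, not the authors' code: it assumes PyTorch/torchvision, trains a small fully connected VAE per class on CIFAR-10-sized images, samples synthetic images from the decoders, and merges them with flip/rotation-augmented real data. Model sizes, hyperparameters, and the one-VAE-per-class labeling scheme are assumptions made for this sketch.

# Illustrative sketch only (PyTorch/torchvision assumed); not the papers' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset
from torchvision import datasets, transforms

class VAE(nn.Module):
    """Small fully connected VAE for 3x32x32 images with pixel values in [0, 1]."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.latent_dim = latent_dim
        self.enc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 512), nn.ReLU())
        self.mu = nn.Linear(512, latent_dim)
        self.logvar = nn.Linear(512, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                                 nn.Linear(512, 3 * 32 * 32), nn.Sigmoid())

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(z).view(-1, 3, 32, 32), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction term plus KL divergence to the standard normal prior.
    rec = F.binary_cross_entropy(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl

def train_vae(images, epochs=20, lr=1e-3):
    """Fit one VAE on a float tensor of shape (N, 3, 32, 32)."""
    vae = VAE()
    opt = torch.optim.Adam(vae.parameters(), lr=lr)
    loader = DataLoader(TensorDataset(images), batch_size=128, shuffle=True)
    for _ in range(epochs):
        for (x,) in loader:
            recon, mu, logvar = vae(x)
            loss = vae_loss(recon, x, mu, logvar)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return vae

def vae_augment(train_images, train_labels, per_class=500):
    """Draw `per_class` synthetic images from a class-specific VAE.
    Training one VAE per class is an assumption of this sketch so that the
    synthetic samples carry labels; the papers may assign labels differently."""
    synth_x, synth_y = [], []
    for c in train_labels.unique():
        vae = train_vae(train_images[train_labels == c])
        vae.eval()
        with torch.no_grad():
            z = torch.randn(per_class, vae.latent_dim)
            synth_x.append(vae.dec(z).view(-1, 3, 32, 32))
        synth_y.append(torch.full((per_class,), int(c)))
    return TensorDataset(torch.cat(synth_x), torch.cat(synth_y))

# Conventional transformation-based augmentation used as the baseline comparison.
flip_rotate = transforms.Compose([transforms.RandomHorizontalFlip(),
                                  transforms.RandomRotation(15),
                                  transforms.ToTensor()])
real_train = datasets.CIFAR10("data", train=True, download=True, transform=flip_rotate)

# Combine real and synthetic data before training any classifier, e.g.:
#   images, labels = ...  # real training set as tensors (hypothetical names)
#   augmented = ConcatDataset([real_train, vae_augment(images, labels)])

Keeping the VAE small and training it per class is only one way to attach labels to the generated samples; a conditional VAE or GAN (as in the second entry) would serve the same purpose.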