A generative model based adversarial security of deep learning and linear classifier models


Sivaslioglu S., Catak F. O., ŞAHİNBAŞ K.

Informatica (Slovenia), vol.45, no.1, pp.33-64, 2021 (ESCI)

  • Publication Type: Article / Full Article
  • Volume: 45 Issue: 1
  • Publication Date: 2021
  • DOI: 10.31449/inf.v45i1.3234
  • Journal Name: Informatica (Slovenia)
  • Journal Indexes: Emerging Sources Citation Index (ESCI), Scopus, Aerospace Database, Applied Science & Technology Source, Biotechnology Research Abstracts, Central & Eastern European Academic Source (CEEAS), Communication Abstracts, Compendex, Computer & Applied Sciences, Metadex, zbMATH, Civil Engineering Abstracts
  • Page Numbers: pp.33-64
  • Keywords: adversarial machine learning, generative models, autoencoders
  • Affiliated with İstanbul Medipol Üniversitesi: Yes

Abstract

In recent years, machine learning algorithms have been applied widely in various fields such as healthcare, transportation, and autonomous vehicles. With the rapid development of deep learning techniques, it is critical to take security concerns into account when applying these algorithms. While machine learning offers significant advantages, the issue of security is often ignored; since these algorithms have many real-world applications, security is a vital part of them. In this paper, we propose a mitigation method against adversarial attacks on machine learning models, using an autoencoder, which is a type of generative model. The main idea behind adversarial attacks on machine learning models is to produce erroneous results from trained models by manipulating them. We also present the performance of autoencoder models against various attack methods, spanning deep neural networks and traditional algorithms: non-targeted and targeted attacks on multi-class logistic regression, as well as fast gradient sign method, targeted fast gradient sign method, and basic iterative method attacks on neural networks, all on the MNIST dataset.
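To make the idea concrete, the sketch below crafts non-targeted fast gradient sign method (FGSM) adversarial examples against a simple MNIST classifier and then passes the perturbed inputs through an autoencoder before classification, so the classifier sees the reconstructed images. This is a minimal illustration and not the paper's exact experimental setup: the TensorFlow/Keras layer sizes, training epochs, and the perturbation budget epsilon are illustrative assumptions.

```python
# Minimal sketch (assumed hyperparameters, not the authors' exact configuration):
# FGSM attack on an MNIST classifier, mitigated by reconstructing inputs with an autoencoder.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

# Simple classifier standing in for the attacked model.
clf = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
clf.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
            metrics=["accuracy"])
clf.fit(x_train, y_train, epochs=3, batch_size=128, verbose=0)

# Autoencoder used as the mitigation layer: trained to reconstruct clean digits.
ae = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(784, activation="sigmoid"),
])
ae.compile(optimizer="adam", loss="mse")
ae.fit(x_train, x_train, epochs=3, batch_size=128, verbose=0)

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

def fgsm(model, x, y, epsilon=0.2):
    """Non-targeted FGSM: perturb x in the sign of the loss gradient."""
    x = tf.convert_to_tensor(x)
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = loss_fn(y, model(x))
    grad = tape.gradient(loss, x)
    return tf.clip_by_value(x + epsilon * tf.sign(grad), 0.0, 1.0)

x_adv = fgsm(clf, x_test, y_test)

# Accuracy on clean, adversarial, and autoencoder-reconstructed adversarial inputs.
print("clean:      ", clf.evaluate(x_test, y_test, verbose=0)[1])
print("adversarial:", clf.evaluate(x_adv, y_test, verbose=0)[1])
print("defended:   ", clf.evaluate(ae.predict(x_adv, verbose=0), y_test, verbose=0)[1])
```

The intent of the last step is that classification accuracy on the autoencoder-reconstructed adversarial inputs recovers part of the drop caused by the attack, which is the mitigation effect studied in the paper.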