Conditional Generative Adversarial Network-Based Image Denoising for Defending Against Adversarial Attack

Haibo Zhang, Kouichi Sakurai

Research output: Contribution to journal › Article › peer-review

3 Citations (Scopus)


Deep learning has become one of the most popular research topics today. Researchers have developed cutting-edge learning algorithms and frameworks around deep learning and applied them to a wide range of fields to solve real-world problems. However, this article focuses on the security risks associated with deep learning models, in particular adversarial attacks. Attackers can maliciously perturb input images so that a classification model is deceived into producing incorrect predictions. This paper proposes a method that pre-denoises all input images to defend against adversarial attacks, adding a purification layer in front of the classification model. The method is built on the basic architecture of Conditional Generative Adversarial Networks: it adds an image perceptual loss to the original Pix2pix algorithm to achieve more effective image recovery. Our method can restore noise-attacked images to a level close to the original image, ensuring the correctness of the classification results. Experimental results show that our approach can quickly recover noisy images, and the recovery accuracy is 20.22% higher than the previous state-of-the-art.
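The abstract describes augmenting the standard Pix2pix generator objective (adversarial term plus a weighted L1 term) with a perceptual loss. A minimal NumPy sketch of such a combined loss is shown below; the exact loss weights, the non-saturating adversarial form, and the use of squared feature distance for the perceptual term are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def l1_loss(a, b):
    # Pixel-wise L1 reconstruction term, as in the original Pix2pix objective
    return np.mean(np.abs(a - b))

def perceptual_loss(feat_a, feat_b):
    # Distance in the feature space of a pretrained network (e.g. VGG
    # activations); the squared-distance form here is an assumption
    return np.mean((feat_a - feat_b) ** 2)

def generator_loss(disc_fake, denoised, clean, feat_denoised, feat_clean,
                   lam_l1=100.0, lam_perc=1.0):
    # Total generator loss: adversarial + lam_l1 * L1 + lam_perc * perceptual.
    # The weights lam_l1 and lam_perc are illustrative defaults, not the
    # paper's reported settings.
    adv = -np.mean(np.log(disc_fake + 1e-8))  # non-saturating GAN loss
    return (adv
            + lam_l1 * l1_loss(denoised, clean)
            + lam_perc * perceptual_loss(feat_denoised, feat_clean))
```

When the denoised output and its features exactly match the clean target, only the adversarial term remains, so the reconstruction and perceptual terms contribute zero, which is a quick sanity check on the implementation.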

Original language: English
Pages (from-to): 169031-169043
Number of pages: 13
Journal: IEEE Access
Publication status: Published - 2021

All Science Journal Classification (ASJC) codes

  • General Engineering
  • General Computer Science
  • Electrical and Electronic Engineering
  • General Materials Science


