Conditional Generative Adversarial Network-Based Image Denoising for Defending Against Adversarial Attack

Haibo Zhang, Kouichi Sakurai

Research output: Contribution to journal › Article › peer-review

3 Citations (Scopus)


Deep learning has become one of the most popular research topics today. Researchers have developed cutting-edge learning algorithms and frameworks around deep learning, applying them to a wide range of fields to solve real-world problems. However, deep learning models carry security risks, such as the adversarial attacks discussed in this article. Attackers can maliciously perturb input images to deceive a classification model into producing incorrect predictions. This paper proposes a method that pre-denoises all input images to defend against adversarial attacks by adding a purification layer before the classification model. The method builds on the basic architecture of Conditional Generative Adversarial Networks, adding an image perceptual loss to the original Pix2pix algorithm to achieve more effective image recovery. Our method can restore noise-attacked images to a level close to the original image, preserving the correctness of the classification results. Experimental results show that our approach can quickly recover noisy images, with recovery accuracy 20.22% higher than the previous state-of-the-art.
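The abstract describes augmenting the standard Pix2pix generator objective (adversarial term plus pixel-level L1 reconstruction) with a perceptual loss that compares images in a feature space. The paper does not give the exact formulation here, so the sketch below is a minimal NumPy illustration of how such a combined generator loss is typically assembled; the weighting factors and the `feature_fn` stand-in (normally a pretrained network such as VGG) are assumptions, not the authors' implementation.

```python
import numpy as np

def l1_loss(fake, real):
    # Pix2pix pixel-level reconstruction term: mean absolute error
    return np.mean(np.abs(fake - real))

def perceptual_loss(fake, real, feature_fn):
    # Compare images in a feature space rather than pixel space.
    # feature_fn is a stand-in for a pretrained feature extractor
    # (e.g., intermediate VGG activations) -- an assumption here.
    return np.mean((feature_fn(fake) - feature_fn(real)) ** 2)

def generator_loss(d_fake, fake, real, feature_fn,
                   lam_l1=100.0, lam_perc=1.0):
    # Adversarial term: the generator wants D(fake) -> 1,
    # so it minimizes -log D(fake). lam_l1=100 follows the
    # common Pix2pix default; lam_perc is an assumed weight.
    adv = -np.mean(np.log(d_fake + 1e-8))
    return (adv
            + lam_l1 * l1_loss(fake, real)
            + lam_perc * perceptual_loss(fake, real, feature_fn))

# Toy usage: identical images give zero reconstruction terms,
# leaving only the adversarial penalty.
fake = np.zeros((8, 8))
real = np.zeros((8, 8))
feat = lambda x: x.mean(axis=0)          # trivial feature map stand-in
total = generator_loss(np.array([0.5]), fake, real, feat)
```

The key design point is that the L1 term keeps low-frequency content faithful to the clean target, while the perceptual term penalizes differences a classifier-like feature extractor would notice, which is what makes the recovered image safe to pass to the downstream classification model.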

Journal: IEEE Access
Publication status: Published - 2021

All Science Journal Classification (ASJC) codes

  • General Engineering
  • General Computer Science
  • Electrical and Electronic Engineering
  • General Materials Science

