TY - GEN
T1 - Towards Understanding the Space of Unrobust Features of Neural Networks
AU - Liao, Bingli
AU - Kanzaki, Takahiro
AU - Vargas, Danilo Vasconcellos
N1 - Funding Information:
This work was supported by JST ACT-I Grant Number JP-50243 and JSPS KAKENHI Grant Number JP20241216.
Publisher Copyright:
© 2021 IEEE.
PY - 2021/6/8
Y1 - 2021/6/8
N2 - Although convolutional neural networks have achieved monumental success on a variety of computer vision tasks, it remains extremely challenging to build a neural network with unquestionable reliability. Previous works have demonstrated that deep neural networks can be efficiently fooled by perturbations of the input that are imperceptible to humans, which reveals their instability under interpolation. Like human beings, an ideally trained neural network should be constrained within the desired inference space and maintain correctness under both interpolation and extrapolation. In this paper, we develop a technique to verify the correctness of convolutional neural networks when they extrapolate beyond the training data distribution by generating legitimate feature-broken images, and we show that the decision boundary of convolutional neural networks is not well formulated with respect to image features when extrapolating.
AB - Although convolutional neural networks have achieved monumental success on a variety of computer vision tasks, it remains extremely challenging to build a neural network with unquestionable reliability. Previous works have demonstrated that deep neural networks can be efficiently fooled by perturbations of the input that are imperceptible to humans, which reveals their instability under interpolation. Like human beings, an ideally trained neural network should be constrained within the desired inference space and maintain correctness under both interpolation and extrapolation. In this paper, we develop a technique to verify the correctness of convolutional neural networks when they extrapolate beyond the training data distribution by generating legitimate feature-broken images, and we show that the decision boundary of convolutional neural networks is not well formulated with respect to image features when extrapolating.
UR - http://www.scopus.com/inward/record.url?scp=85113739590&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85113739590&partnerID=8YFLogxK
U2 - 10.1109/CYBCONF51991.2021.9464137
DO - 10.1109/CYBCONF51991.2021.9464137
M3 - Conference contribution
AN - SCOPUS:85113739590
T3 - 2021 5th IEEE International Conference on Cybernetics, CYBCONF 2021
SP - 91
EP - 94
BT - 2021 5th IEEE International Conference on Cybernetics, CYBCONF 2021
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 5th IEEE International Conference on Cybernetics, CYBCONF 2021
Y2 - 8 June 2021 through 10 June 2021
ER -