TY - JOUR
T1 - DARTSRepair
T2 - Core-failure-set guided DARTS for network robustness to common corruptions
AU - Ren, Xuhong
AU - Chen, Jianlang
AU - Juefei-Xu, Felix
AU - Xue, Wanli
AU - Guo, Qing
AU - Ma, Lei
AU - Zhao, Jianjun
AU - Chen, Shengyong
N1 - Funding Information:
This work was supported in part by the National Natural Science Foundation of China (61906135, 62020106004, 92048301); and in part by the Tianjin Science and Technology Plan Project (20JCQNJC01350). This research was also supported in part by the Canada CIFAR AI Chairs Program, the Amii RAP program, the Natural Sciences and Engineering Research Council of Canada (NSERC No. RGPIN-2021-02549, No. RGPAS-2021-00034, No. DGECR-2021-00019), as well as JSPS KAKENHI Grant No. JP20H04168, No. JP21H04877, and JST-Mirai Program Grant No. JPMJMI20B8.
Publisher Copyright:
© 2022 Elsevier Ltd
PY - 2022/11
Y1 - 2022/11
AB - Network architecture search (NAS), in particular the differentiable architecture search (DARTS) method, has shown great power in learning excellent model architectures on a specific dataset of interest. In contrast to using a fixed dataset, in this work we focus on a different but important scenario for NAS: how to refine a deployed network's model architecture to enhance its robustness, guided by a few collected and misclassified examples that are degraded by some unknown real-world corruption with a specific pattern (e.g., noise, blur, etc.). To this end, we first conduct an empirical study to validate that model architectures can indeed be related to corruption patterns. Surprisingly, by adding just a few corrupted and misclassified examples (e.g., 10³ examples) to the clean training dataset (e.g., 5.0×10⁴ examples), we can refine the model architecture and enhance robustness significantly. To make this practical, the key problem of how to select proper failure examples for effective NAS guidance must be carefully investigated. We then propose a novel core-failure-set guided DARTS that embeds a K-center-greedy algorithm into DARTS to select suitable corrupted failure examples for refining the model architecture. We evaluate our method with DARTS-refined DNNs on the clean dataset as well as 15 corruptions, under the guidance of four specific real-world corruptions. Compared with state-of-the-art NAS and data-augmentation-based enhancement methods, our final method achieves higher accuracy on both the corrupted datasets and the original clean dataset. On some corruption patterns, we achieve absolute accuracy improvements of over 45%.
UR - http://www.scopus.com/inward/record.url?scp=85133979319&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85133979319&partnerID=8YFLogxK
U2 - 10.1016/j.patcog.2022.108864
DO - 10.1016/j.patcog.2022.108864
M3 - Article
AN - SCOPUS:85133979319
SN - 0031-3203
VL - 131
JO - Pattern Recognition
JF - Pattern Recognition
M1 - 108864
ER -