Auxiliary Training: Towards Accurate and Robust Models

Linfeng Zhang (Tsinghua University; Institute for interdisciplinary Information Core Technology), Muzhou Yu (Institute for interdisciplinary Information Core Technology; Xi'an Jiaotong University), Tong Chen (Tsinghua University), Zuoqiang Shi (Tsinghua University), Chenglong Bao (Tsinghua University), Kaisheng Ma (Tsinghua University)

Machine Learning mathscidoc:2206.41005

CVPR, 2020.4
The training process is crucial for deploying networks in applications with strict requirements on both accuracy and robustness. However, most existing approaches face a dilemma: model accuracy and robustness form an awkward trade-off, where improving one leads to a drop in the other. The challenge remains when we try to improve accuracy and robustness simultaneously. In this paper, we propose a novel training method that introduces auxiliary classifiers trained on corrupted samples, while clean samples are trained normally with the primary classifier. In the training stage, a novel distillation method named input-aware self distillation is proposed to help the primary classifier learn robust information from the auxiliary classifiers. Along with it, a new normalization method, selective batch normalization, is proposed to prevent the model from being negatively influenced by corrupted images. At the end of the training period, an L2-norm penalty is applied to the weights of the primary and auxiliary classifiers so that their weights become asymptotically identical. At inference time, only the primary classifier is used, so no extra computation or storage is needed. Extensive experiments on CIFAR10, CIFAR100 and ImageNet show that the proposed auxiliary training yields noticeable improvements in both accuracy and robustness. On average, auxiliary training achieves a 2.21% accuracy improvement and a 21.64% robustness improvement (measured by corruption error) over traditional training methods on CIFAR100. Code has been released on GitHub.
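The abstract combines four terms: a clean-sample loss for the primary classifier, a corrupted-sample loss for the auxiliary classifier, a distillation term from auxiliary to primary, and an L2 penalty drawing the two classifiers' weights together. A minimal NumPy sketch of such a combined objective is given below; the function names, weighting coefficients `alpha` and `beta`, and the exact form of the distillation term are illustrative assumptions, not the paper's notation or implementation.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, label):
    # Negative log-likelihood of the true class.
    p = softmax(logits)
    return -float(np.log(p[label] + 1e-12))

def kl_div(p_logits, q_logits):
    # KL(p || q) between the two softened predictions.
    p, q = softmax(p_logits), softmax(q_logits)
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))

def auxiliary_training_loss(primary_logits_clean, aux_logits_corrupt, label,
                            w_primary, w_aux, alpha=0.5, beta=1e-3):
    # Primary classifier is trained normally on clean samples.
    l_clean = cross_entropy(primary_logits_clean, label)
    # Auxiliary classifier is trained on corrupted samples.
    l_corrupt = cross_entropy(aux_logits_corrupt, label)
    # Distillation: primary classifier learns robust information
    # from the auxiliary classifier's predictions (assumed KL form).
    l_distill = kl_div(aux_logits_corrupt, primary_logits_clean)
    # L2 penalty pushing primary and auxiliary weights to coincide,
    # so only the primary classifier is needed at inference.
    l_weights = float(np.sum((w_primary - w_aux) ** 2))
    return l_clean + l_corrupt + alpha * l_distill + beta * l_weights
```

Because the weight penalty vanishes exactly when the two classifiers share weights, the auxiliary branch can be dropped after training without changing the deployed model's computation.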
@inproceedings{zhang2020auxiliary,
  title={Auxiliary Training: Towards Accurate and Robust Models},
  author={Linfeng Zhang and Muzhou Yu and Tong Chen and Zuoqiang Shi and Chenglong Bao and Kaisheng Ma},
  booktitle={CVPR},
  year={2020}
}
Linfeng Zhang, Muzhou Yu, Tong Chen, Zuoqiang Shi, Chenglong Bao, and Kaisheng Ma. Auxiliary Training: Towards Accurate and Robust Models. In CVPR, 2020.