Abstract
Object detection plays a crucial role in computer vision and has wide-ranging applications. Nevertheless, object detectors are susceptible to adversarial examples. Several methods have been proposed to improve the adversarial robustness of object detectors, but they often sacrifice prediction accuracy on clean inputs. In this paper, we propose a novel adversarial training method that integrates contrastive learning into the training process to reduce this loss of accuracy. Specifically, we attach a contrastive learning module to the feature extraction backbone of the target object detector to extract contrastive features. During training, the contrastive loss and the detection loss jointly guide the detector. Contrastive learning encourages clean and adversarial examples to cluster more tightly and to lie farther from decision boundaries in the high-level feature space, thereby increasing the cost for adversarial examples to cross those boundaries. Extensive experiments on PASCAL VOC and MS-COCO show that our method achieves significantly better defense performance.
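To make the joint objective concrete, below is a minimal PyTorch sketch of one plausible instantiation, not the paper's actual implementation: a toy detector stand-in (ToyDetector with a placeholder detection_loss), a SimCLR-style projection head on pooled backbone features, an NT-Xent contrastive loss that pairs each clean image with its adversarial view, and a training step that sums the detection and contrastive losses. All names, the single-step attack, and the hyperparameters (tau, lam, eps) are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyDetector(nn.Module):
    """Stand-in for a real detector backbone + head; only the interface matters here."""
    def __init__(self, num_classes=20, feat_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_dim, feat_dim, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.cls_head = nn.Linear(feat_dim, num_classes)

    def features(self, x):
        return self.backbone(x)  # (N, C, H', W') feature map from the backbone

    def detection_loss(self, x, labels):
        # Placeholder for a full detection loss (classification + box regression).
        pooled = self.features(x).mean(dim=(2, 3))
        return F.cross_entropy(self.cls_head(pooled), labels)

class ProjectionHead(nn.Module):
    """SimCLR-style two-layer MLP applied to globally pooled backbone features."""
    def __init__(self, in_dim=64, out_dim=32):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, in_dim), nn.ReLU(inplace=True),
                                 nn.Linear(in_dim, out_dim))

    def forward(self, feat_map):
        return F.normalize(self.mlp(feat_map.mean(dim=(2, 3))), dim=1)

def nt_xent(z1, z2, tau=0.2):
    """NT-Xent loss treating (clean, adversarial) feature pairs as positives."""
    z = torch.cat([z1, z2], dim=0)                 # (2N, D), L2-normalized rows
    sim = (z @ z.t()) / tau                        # scaled cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, -1e9)              # exclude self-similarity
    targets = torch.arange(2 * n, device=z.device).roll(n)  # positive of i is (i + n) % 2n
    return F.cross_entropy(sim, targets)

def train_step(det, head, opt, images, labels, eps=8/255, alpha=8/255, lam=0.1):
    # 1) Craft adversarial views with a single FGSM-style step on the detection loss.
    adv = images.clone().detach().requires_grad_(True)
    grad = torch.autograd.grad(det.detection_loss(adv, labels), adv)[0]
    adv = (images + (alpha * grad.sign()).clamp(-eps, eps)).clamp(0, 1).detach()

    # 2) Joint objective: detection loss on clean and adversarial views, plus a
    #    contrastive term pulling their backbone features together.
    loss = (det.detection_loss(images, labels) + det.detection_loss(adv, labels)
            + lam * nt_xent(head(det.features(images)), head(det.features(adv))))
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Smoke test on random data.
det, head = ToyDetector(), ProjectionHead()
opt = torch.optim.SGD(list(det.parameters()) + list(head.parameters()), lr=0.01)
x, y = torch.rand(4, 3, 64, 64), torch.randint(0, 20, (4,))
print(train_step(det, head, opt, x, y))
```

In this reading, the weight lam controls the trade-off the abstract describes: a larger contrastive term tightens the clean/adversarial feature clusters, while the detection terms preserve accuracy on both views.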
Acknowledgement
This work is supported in part by the National Natural Science Foundation of China under Grants No. 62101480 and No. 62162067 (Research and Application of Object Detection Based on Artificial Intelligence), in part by the Yunnan Foundational Research Project under Grants No. 202201AT070173 and No. 202201AU070034, in part by the Yunnan Province Education Department Foundation under Grant No. 2022j0008, and in part by the Yunnan Province Expert Workstation under Grant No. 202205AF150145.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Zeng, W., Gao, S., Zhou, W., Dong, Y., Wang, R. (2024). Improving the Adversarial Robustness of Object Detection with Contrastive Learning. In: Liu, Q., et al. Pattern Recognition and Computer Vision. PRCV 2023. Lecture Notes in Computer Science, vol 14433. Springer, Singapore. https://doi.org/10.1007/978-981-99-8546-3_3
DOI: https://doi.org/10.1007/978-981-99-8546-3_3
Publisher Name: Springer, Singapore
Print ISBN: 978-981-99-8545-6
Online ISBN: 978-981-99-8546-3
eBook Packages: Computer Science, Computer Science (R0)