
Causal deep learning for explainable vision-based quality inspection under visual interference

Published in: Journal of Intelligent Manufacturing

Abstract

Vision-based quality inspection is a key step in the quality control of complex industrial products. However, accurate defect recognition for complex products remains difficult, because their information-rich, structurally irregular, and widely varying patterns create strong visual interference. This paper proposes a causal deep learning method (CDLM) for explainable vision-based quality inspection under visual interference. First, a structural causal model for defect recognition of complex industrial products is constructed, and a causal intervention strategy is derived to overcome background interference. Second, a defect-guided recognition neural network (DGRNN) is built; trained with the CDLM, it achieves accurate defect recognition through feature-wise causal intervention using two sub-networks linked by a feature-difference mechanism. Finally, the causality between defect features and defective-product labels guides the DGRNN to learn defects accurately and explainably along a causal direction of optimization. Quantitative experiments show that the proposed method achieves recognition accuracies of 94.09% and 93.95% on two fabric datasets, outperforming cutting-edge inspection models. In addition, Grad-CAM visualization experiments show that the proposed method captures the causality in the data and realizes explainable defect recognition.
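The feature-difference mechanism mentioned above can be sketched minimally as follows. The sub-network names (BK-Net for background knowledge, DD-Net for defect detection) come from the paper's abbreviation list, but the element-wise subtraction and the toy feature vectors are illustrative assumptions, not the paper's actual architecture:

```python
def feature_difference(dd_features, bk_features):
    """Hypothetical feature-difference step: subtract background features
    (BK-Net) from defect-plus-background features (DD-Net) element-wise,
    as one plausible way to suppress visual interference."""
    return [d - b for d, b in zip(dd_features, bk_features)]

dd = [5, 2, 8]  # toy features from DD-Net (defect + background)
bk = [1, 2, 1]  # toy features from BK-Net (background only)
print(feature_difference(dd, bk))  # [4, 0, 7] -- background components cancel
```

In this sketch the shared background activations cancel, leaving only the defect-specific signal; the actual network presumably realizes this with learned feature maps rather than raw vectors.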



Data availability

The experimental data used in this study are from ‘ZJU-Leaper: A Benchmark Dataset for Fabric Defect Detection and a Comparative Study’ (Zhang et al., 2020a, 2020b). The DOI of that article is https://doi.org/10.1109/tai.2021.3057027, and the data are available at https://github.com/nico-zck/ZJU-Leaper-Dataset or http://www.qaas.zju.edu.cn/zju-leaper/.

Abbreviations

CDLM: Causal deep learning method

IQI: Intelligent quality inspection

DL: Deep learning

DNN: Deep neural network

CIP: Complex industrial products

CPF: Complex patterned fabrics

SCM: Structural causal model

DGRNN: Defect-guided recognition neural network

CIM: Causal intervention module

FDM: Feature difference module

BK-Net: Background knowledge network

DD-Net: Defect detection network

VQA: Visual question answering

Init_DS: Initial down-sampling

FE: Feature extraction

Conv: Convolution

BN: Batch normalization

FC: Fully-connected

SA: Spatial attention

CA: Channel attention

ACC: Accuracy

PRE: Precision

REC: Recall

AUC: Area under curve

ROC: Receiver operating characteristic

PAR: Parameter amount

FLOPs: Floating-point operations

FPS: Frames per second


Acknowledgements

This work is supported by the National Natural Science Foundation of China under grants [No. 52275478] and [No. 52375485], the Key R&D Program of Shandong Province of China under grant [No. 2021CXGC011004], the Young Elite Scientists Sponsorship Program by CAST under grant [No. 2021QNRC001], and the Key R&D Plan of Xinjiang Uyghur Autonomous Region of China under grant [No. 2022B01057-1].

Author information


Corresponding author

Correspondence to Junliang Wang.

Ethics declarations

Competing interests

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

Setting hyperparameters for DNN training is always an empirical problem. In this paper, five learning-rate optimization strategies are compared experimentally: a constant learning rate (no scheduler), cosine annealing, an exponential schedule with gamma set to 0.98, a step schedule with step size 5 and gamma 0.9, and a one-cycle schedule. The comparison results are shown in Fig. 14.

Fig. 14

Comparison of different learning rate optimization strategies

The results show that the learning-rate strategies differ only slightly in the training of CDLM, and all of them reach a decent score. Cosine annealing is chosen in this paper because it gives the DGRNN faster convergence and a more efficient fit. Moreover, the training results on the FD_2 dataset show that cosine annealing enables the DGRNN to reach higher recognition accuracy in the early training iterations. It also has shortcomings, such as some oscillation after convergence. Trying other learning-rate optimization strategies when training the DGRNN in other visual inspection scenarios is therefore encouraged.
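As an illustration, the five schedules above can be written as closed-form learning-rate functions of the epoch index. The gamma and step-size values for the exponential and step schedules are those stated in the appendix; the base learning rate, epoch count, one-cycle peak, and warm-up fraction are placeholder values, not taken from the paper:

```python
import math

# Closed-form learning-rate schedules for the five strategies compared above.
# base_lr and total_epochs are illustrative placeholders.

def constant_lr(base_lr, epoch, total_epochs):
    return base_lr  # no scheduler

def cosine_annealing(base_lr, epoch, total_epochs, eta_min=0.0):
    # eta_t = eta_min + (base_lr - eta_min) * (1 + cos(pi * t / T)) / 2
    return eta_min + 0.5 * (base_lr - eta_min) * (
        1 + math.cos(math.pi * epoch / total_epochs))

def exponential_lr(base_lr, epoch, total_epochs, gamma=0.98):
    return base_lr * gamma ** epoch  # gamma = 0.98, as in the appendix

def step_lr(base_lr, epoch, total_epochs, step_size=5, gamma=0.9):
    return base_lr * gamma ** (epoch // step_size)  # step = 5, gamma = 0.9

def one_cycle_lr(base_lr, epoch, total_epochs, max_lr=0.1, pct_warmup=0.3):
    # Simplified one-cycle: linear warm-up to max_lr, then cosine decay to 0.
    warmup = pct_warmup * total_epochs
    if epoch < warmup:
        return base_lr + (max_lr - base_lr) * epoch / warmup
    progress = (epoch - warmup) / (total_epochs - warmup)
    return max_lr * 0.5 * (1 + math.cos(math.pi * progress))

if __name__ == "__main__":
    schedules = (constant_lr, cosine_annealing, exponential_lr,
                 step_lr, one_cycle_lr)
    for epoch in (0, 10, 25, 49):
        print(epoch, [round(f(0.01, epoch, 50), 5) for f in schedules])
```

The oscillation noted for cosine annealing is visible in the shape of the curve: the learning rate stays relatively large through mid-training before dropping sharply near the end, whereas the exponential and step schedules decay monotonically from the start.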

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Liang, T., Liu, T., Wang, J. et al. Causal deep learning for explainable vision-based quality inspection under visual interference. J Intell Manuf 36, 1363–1384 (2025). https://doi.org/10.1007/s10845-023-02297-9
