
Photoplethysmographic waveform detection for determining hatching egg activity via deep neural network

  • Original Paper
  • Published in: Signal, Image and Video Processing

Abstract

Accurately classifying dead and live embryos is essential to successful vaccine development. Deep learning-based classification of heartbeat signals is considered the most effective way to determine embryo activity, but existing detection methods are generally either harmful to embryos or inefficient. In this study, the photoplethysmographic (PPG) waveform was used for embryo activity detection. The PPG technique is non-invasive and works by detecting the optical absorption intensity of blood. We rescaled the original data so that every feature is weighted equally, which lets the CNN model attend to low-intensity features rather than neglecting them. We also constructed a novel detection model capable of powerful feature extraction, built on a CNN structure and a GRU. The CNN structure serves as the basic feature extractor; a channel attention mechanism recalibrates the feature-map channels, enhancing the network's ability to extract useful features. The GRU module captures timing characteristics, compensating for the CNN's inability to extract temporal information. Validation on experimental data shows that our approach outperforms several baseline methods.
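The two ideas the abstract highlights — equal-weight rescaling of the input features and squeeze-and-excitation-style channel recalibration of a convolutional feature map — can be sketched in a few lines. This is a minimal NumPy illustration of the general techniques, not the authors' implementation; the array shapes, reduction ratio, and random weights are assumptions for demonstration only.

```python
import numpy as np

def rescale(x):
    """Min-max rescale each feature (column) to [0, 1] so that
    low-intensity features carry the same weight as high-intensity ones."""
    lo, hi = x.min(axis=0), x.max(axis=0)
    return (x - lo) / (hi - lo + 1e-8)

def channel_attention(feature_map, w1, w2):
    """Squeeze-and-excitation style recalibration of a (channels, length)
    feature map: global average pool -> bottleneck FC + ReLU -> FC +
    sigmoid gate -> per-channel rescale."""
    squeeze = feature_map.mean(axis=1)            # (C,) global average pool
    hidden = np.maximum(0.0, w1 @ squeeze)        # (C//r,) ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))   # (C,) sigmoid gate
    return feature_map * gate[:, None]            # recalibrated channels

rng = np.random.default_rng(0)
signal = rng.normal(size=(1000, 4))               # hypothetical PPG features
scaled = rescale(signal)                          # every column now in [0, 1]

fmap = rng.normal(size=(8, 250))                  # hypothetical conv output
w1 = rng.normal(size=(2, 8))                      # reduction ratio r = 4
w2 = rng.normal(size=(8, 2))
out = channel_attention(fmap, w1, w2)
```

In a full model along the lines the abstract describes, the recalibrated feature map would then be fed, time-step by time-step, into a GRU layer to capture the temporal structure of the heartbeat signal before classification.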





Acknowledgements

This work was supported by the Program for Innovative Research Team in University of Tianjin (Grant No. TD13-5034), and the Natural Science Foundation of Tianjin City (Grant No. 18JCYBJC15300).

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Zhitao Xiao.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Geng, L., Guo, Q., Xiao, Z. et al. Photoplethysmographic waveform detection for determining hatching egg activity via deep neural network. SIViP 16, 955–963 (2022). https://doi.org/10.1007/s11760-021-02040-y

