
Explainable Nonlinear Modelling of Multiple Time Series with Invertible Neural Networks

  • Conference paper
  • First Online:
Intelligent Technologies and Applications (INTAP 2021)

Abstract

A method for nonlinear topology identification is proposed, based on the assumption that a collection of time series is generated in two steps: i) a vector autoregressive (VAR) process in a latent space, and ii) a nonlinear, component-wise, monotonically increasing observation mapping. The latter mappings are assumed invertible and are modeled as shallow neural networks, so that their inverses can be evaluated numerically and their parameters can be learned with a technique inspired by deep learning. Because of the function inversion, the backpropagation step is not straightforward; this paper derives the gradients needed for training by applying implicit differentiation. While the model offers the same explainability as linear VAR processes, preliminary numerical tests show that it achieves a smaller prediction error.
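The inversion and implicit-differentiation steps described in the abstract can be sketched for a single scalar component. This is a minimal illustration, not the authors' implementation: the monotone mapping f(z) = z + a·tanh(w·z + b) (with a, w ≥ 0) and the bisection inverter are assumptions standing in for the paper's shallow-network observation mappings.

```python
import numpy as np

# Hypothetical component-wise observation mapping f(z) = z + a*tanh(w*z + b).
# With a, w >= 0 its derivative is >= 1, so f is strictly increasing and invertible.
def f(z, a=0.5, w=2.0, b=0.1):
    return z + a * np.tanh(w * z + b)

def f_prime(z, a=0.5, w=2.0, b=0.1):
    return 1.0 + a * w / np.cosh(w * z + b) ** 2  # always >= 1

def f_inverse(y, a=0.5, w=2.0, b=0.1, lo=-10.0, hi=10.0, iters=60):
    """Numerically invert f by bisection (valid because f is strictly increasing)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid, a, w, b) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Implicit differentiation: z* = f^{-1}(y) satisfies f(z*; a) = y, so
# differentiating both sides with respect to the parameter a gives
#   dz*/da = -(df/da) / (df/dz), evaluated at z = z*.
def dz_da(y, a=0.5, w=2.0, b=0.1):
    z = f_inverse(y, a, w, b)
    df_da = np.tanh(w * z + b)  # partial derivative of f with respect to a
    return -df_da / f_prime(z, a, w, b)
```

The same identity is what lets loss gradients flow through the numerical inversion during backpropagation, without ever forming an explicit inverse network.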

The work in this paper was supported by the SFI Offshore Mechatronics grant 237896/O30.

L. M. Lopez-Ramos and K. Roy—Equal contribution in terms of working hours.



Notes

  1. Notice that VAR models encode lagged interactions; other linear models, such as structural equation models (SEM) or structural VAR (SVAR), are available when interactions at a smaller time scale are required. In this paper, for the sake of simplicity, we focus on learning nonlinear VAR models; however, our algorithm designs can also accommodate the SEM and SVAR frameworks without much difficulty.
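As a toy illustration of how the support of a VAR coefficient matrix encodes the lagged interaction topology (the 2-series system and its coefficients here are assumptions chosen for the example, not taken from the paper):

```python
import numpy as np

# A stable 2-series VAR(1): the zero entry A[1, 0] means series 0 does not
# influence series 1, while A[0, 1] != 0 means series 1 drives series 0.
rng = np.random.default_rng(0)
A = np.array([[0.5, 0.3],
              [0.0, 0.6]])

T = 5000
z = np.zeros((T, 2))
for t in range(1, T):
    z[t] = A @ z[t - 1] + 0.1 * rng.standard_normal(2)

# Least-squares recovery: z[1:] is approximately z[:-1] @ A.T, so lstsq
# returns an estimate of A.T; its transpose recovers the topology.
A_hat = np.linalg.lstsq(z[:-1], z[1:], rcond=None)[0].T
```

The estimated support of `A_hat` (entries significantly different from zero) then serves as the identified topology; the nonlinear method plays the same game after mapping the observations back into the latent space.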


Acknowledgement

The authors thank Emilio Ruiz Moreno for his help in obtaining a more elegant derivation of the gradient of \(g_i(\cdot)\).

Author information

Correspondence to Luis Miguel Lopez-Ramos.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Lopez-Ramos, L.M., Roy, K., Beferull-Lozano, B. (2022). Explainable Nonlinear Modelling of Multiple Time Series with Invertible Neural Networks. In: Sanfilippo, F., Granmo, OC., Yayilgan, S.Y., Bajwa, I.S. (eds) Intelligent Technologies and Applications. INTAP 2021. Communications in Computer and Information Science, vol 1616. Springer, Cham. https://doi.org/10.1007/978-3-031-10525-8_2


  • DOI: https://doi.org/10.1007/978-3-031-10525-8_2


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-10524-1

  • Online ISBN: 978-3-031-10525-8

  • eBook Packages: Computer Science (R0)
