Abstract
A method for nonlinear topology identification is proposed, based on the assumption that a collection of time series is generated in two steps: i) a vector autoregressive (VAR) process in a latent space, and ii) a nonlinear, component-wise, monotonically increasing observation mapping. The latter mappings are assumed invertible and are modeled as shallow neural networks, so that their inverses can be evaluated numerically and their parameters can be learned using a technique inspired by deep learning. Because of the function inversion, the backpropagation step is not straightforward, and this paper explains the steps needed to calculate the gradients by applying implicit differentiation. While the model retains the explainability of linear VAR processes, preliminary numerical tests show that the prediction error is smaller.
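The abstract's core numerical idea can be sketched in isolation: a monotonically increasing, component-wise map has a numerically computable inverse, and the implicit function theorem gives the gradient of that inverse without differentiating through the inversion routine. The sketch below uses a toy monotone function as a hypothetical stand-in for the paper's shallow neural networks; the function `f`, its parameters, and the bisection tolerances are illustrative assumptions, not the paper's implementation.

```python
import math

# Toy monotone, component-wise observation map (illustrative stand-in for
# the paper's shallow neural network): f'(x) = 1 + 0.5*sech^2(x) > 0.
def f(x):
    return x + 0.5 * math.tanh(x)

def f_prime(x):
    return 1.0 + 0.5 * (1.0 - math.tanh(x) ** 2)

def f_inverse(y, lo=-50.0, hi=50.0):
    """Numerically invert the monotone map by bisection."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def f_inverse_grad(y):
    # Implicit differentiation: if y = f(x), then d f^{-1}(y)/dy = 1 / f'(x),
    # so no gradient needs to be propagated through the bisection loop.
    return 1.0 / f_prime(f_inverse(y))

y = 1.3
x = f_inverse(y)
print(abs(f(x) - y) < 1e-9)   # inversion reaches machine-level accuracy

eps = 1e-6
fd = (f_inverse(y + eps) - f_inverse(y - eps)) / (2 * eps)
print(abs(f_inverse_grad(y) - fd) < 1e-5)  # matches finite differences
```

The same reasoning extends to learned parameters: gradients with respect to the network weights follow from the implicit function theorem applied to the identity f(f^{-1}(y)) = y, which is what makes backpropagation through the inversion tractable.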
The work in this paper was supported by the SFI Offshore Mechatronics grant 237896/O30.
L. M. Lopez-Ramos and K. Roy contributed equally (in terms of working hours).
Notes
- 1.
Notice that VAR models encode lagged interactions; other linear models, such as structural equation models (SEM) or structural VAR (SVAR), are available if interactions at a smaller time scale are required. In this paper, for the sake of simplicity, we focus on learning nonlinear VAR models. However, our algorithm designs can also accommodate the SEM and SVAR frameworks without much difficulty.
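The lagged interactions that a linear VAR(P) model encodes can be sketched as follows. The coefficient matrices, nodes, and noise level below are made-up illustrative values, not parameters from the paper; entry A[p][i, j] being nonzero means node j influences node i at lag p+1, and that sparsity pattern is the directed topology to be identified.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P, T = 3, 2, 100                      # nodes, lag order, samples

# Lag coefficient matrices: A[p][i, j] != 0 <=> edge j -> i at lag p+1.
A = [0.3 * np.eye(N), np.zeros((N, N))]
A[1][0, 2] = 0.4                         # node 2 influences node 0 at lag 2

# Simulate the VAR(P) recursion y[t] = sum_p A[p] y[t-1-p] + noise.
y = np.zeros((T, N))
y[:P] = rng.standard_normal((P, N))
for t in range(P, T):
    y[t] = sum(A[p] @ y[t - 1 - p] for p in range(P))
    y[t] += 0.01 * rng.standard_normal(N)

# One-step-ahead prediction from the last P observations.
y_hat = sum(A[p] @ y[T - 1 - p] for p in range(P))
print(y_hat.shape)  # (3,)
```

An SEM would instead model instantaneous (lag-zero) dependencies, and an SVAR combines both, which is why the same coefficient-matrix machinery carries over to those frameworks.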
Acknowledgement
The authors would like to thank Emilio Ruiz Moreno for helping us arrive at a more elegant derivation of the gradient of \(g_i(\cdot )\).
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Lopez-Ramos, L.M., Roy, K., Beferull-Lozano, B. (2022). Explainable Nonlinear Modelling of Multiple Time Series with Invertible Neural Networks. In: Sanfilippo, F., Granmo, OC., Yayilgan, S.Y., Bajwa, I.S. (eds) Intelligent Technologies and Applications. INTAP 2021. Communications in Computer and Information Science, vol 1616. Springer, Cham. https://doi.org/10.1007/978-3-031-10525-8_2
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-10524-1
Online ISBN: 978-3-031-10525-8