TS-MVP: Time-Series Representation Learning by Multi-view Prototypical Contrastive Learning

  • Conference paper
Advanced Data Mining and Applications (ADMA 2023)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14180)


Abstract

IoT and wearable devices generate large amounts of time series data daily, and learning powerful representations from these rich data opens opportunities for human-computer interaction and digital services. While Masked Autoencoders (MAE) have been used for time series representation learning, contrastive learning has shown superior performance. However, existing contrastive learning methods often rely on perturbation operations that can disrupt the local and global structure of time series data, and they do not explicitly model the relationship with downstream classification tasks. In this paper, we propose a framework based on multi-view prototypical contrastive learning for learning multivariate time-series representations from unlabeled data. Our approach transforms the original data into time-based and feature-based views using a novel masking technique based on state transition probabilities, and then embeds them, together with the original data, using an encoder. Moreover, we design a novel prototype contrastive module that encourages similar outputs from the different views by using clustered soft labels generated from the original data and a set of prototypes, which helps the model develop fine-grained representations that can be effectively integrated into classification tasks. We conducted experiments on four real-world time series datasets, and the results demonstrate that our proposed TS-MVP framework outperforms previous time series representation learning methods when training a linear classifier on top of the learned features.
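
To make the prototype contrastive module described in the abstract concrete, the sketch below reconstructs the idea in PyTorch under stated assumptions: clustered soft labels are computed from the original series against a set of learnable prototypes, and the time-based and feature-based views are trained to predict those labels. This is an illustrative reconstruction, not the authors' code; the function name, the temperature tau, the prototype count, and the use of cosine similarity are assumptions rather than details given in the paper.

    # Minimal sketch of a multi-view prototypical contrastive loss
    # (illustrative reconstruction; shapes and hyperparameters are assumptions).
    import torch
    import torch.nn.functional as F

    def prototype_contrastive_loss(z_orig, z_time, z_feat, prototypes, tau=0.1):
        """z_*: (batch, dim) embeddings of the original series and its two masked views.
        prototypes: (num_prototypes, dim) learnable cluster centers."""
        # Normalize embeddings and prototypes so dot products act as cosine similarities.
        z_orig, z_time, z_feat = (F.normalize(z, dim=-1) for z in (z_orig, z_time, z_feat))
        protos = F.normalize(prototypes, dim=-1)

        # Clustered soft labels produced from the original, unperturbed series.
        with torch.no_grad():
            targets = F.softmax(z_orig @ protos.T / tau, dim=-1)

        # Each masked view is trained to reproduce the same prototype assignment.
        loss = 0.0
        for z_view in (z_time, z_feat):
            log_probs = F.log_softmax(z_view @ protos.T / tau, dim=-1)
            loss = loss - (targets * log_probs).sum(dim=-1).mean()
        return loss / 2

    # Example with random tensors: batch of 32, embedding dim 128, 16 prototypes.
    prototypes = torch.nn.Parameter(torch.randn(16, 128))
    z_o, z_t, z_f = (torch.randn(32, 128) for _ in range(3))
    print(prototype_contrastive_loss(z_o, z_t, z_f, prototypes))

Treating the original series as the source of the soft labels mirrors the abstract's point that the unperturbed data anchors the two masked views; how the views are generated from state transition probabilities is not shown here.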

Acknowledgements

We thank the editors and reviewers for their suggestions and comments. This work was supported by the National Key R&D Program of China (No. 2021YFC3340700), NSFC grants (No. 62136002 and No. 61972155), and the Shanghai Trusted Industry Internet Software Collaborative Innovation Center.

Author information

Corresponding author

Correspondence to Xiaoling Wang.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Zhong, B., Wang, P., Pan, J., Wang, X. (2023). TS-MVP: Time-Series Representation Learning by Multi-view Prototypical Contrastive Learning. In: Yang, X., et al. Advanced Data Mining and Applications. ADMA 2023. Lecture Notes in Computer Science, vol. 14180. Springer, Cham. https://doi.org/10.1007/978-3-031-46677-9_20

  • DOI: https://doi.org/10.1007/978-3-031-46677-9_20

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-46676-2

  • Online ISBN: 978-3-031-46677-9

  • eBook Packages: Computer Science, Computer Science (R0)
