
Continual Domain Adaption for Neural Machine Translation

  • Conference paper
  • Published in: Neural Information Processing (ICONIP 2023)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1965)


Abstract

Domain Neural Machine Translation (NMT) with small datasets requires continual learning to incorporate new knowledge, and catastrophic forgetting, in which the model loses old knowledge during fine-tuning, is the main challenge. In addition, most studies ignore multi-stage domain adaptation of NMT. To address these issues, we propose a multi-stage incremental framework for domain NMT based on knowledge distillation. We also analyze how the supervision signals from the gold labels and the teacher model behave within a stage. Results show that the teacher model benefits the student model only in the early epochs and harms it in the later epochs. To solve this problem, we propose two training objectives that guide the early and later phases of training separately. In the early epochs, conventional continual learning is retained to fully leverage the teacher model and integrate old knowledge. In the later epochs, a bidirectional marginal loss is used to remove the negative influence of the teacher model. Experiments show that our method outperforms multiple continual learning methods, with average improvements of 1.11 and 1.06 on two domain translation tasks.
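
The two-objective scheme described in the abstract can be pictured with a short sketch. The snippet below is a minimal PyTorch illustration, not the authors' implementation: it assumes per-token logits from the current student and a frozen previous-stage teacher, and the hinge-style margin term is only a placeholder for the paper's bidirectional marginal loss, whose exact form is not given here; kd_step, margin_step, alpha, T, and margin are illustrative names and hyperparameters.

import torch
import torch.nn.functional as F

def kd_step(student_logits, teacher_logits, gold, pad_id, alpha=0.5, T=1.0):
    # Early-epoch objective: cross-entropy on the gold labels plus a KL term
    # that pulls the student toward the frozen previous-stage teacher.
    ce = F.cross_entropy(student_logits.transpose(1, 2), gold, ignore_index=pad_id)
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    return (1.0 - alpha) * ce + alpha * kd

def margin_step(student_logits, teacher_logits, gold, pad_id, margin=1.0):
    # Later-epoch objective: keep the gold-label loss, but replace the KD term
    # with a two-sided hinge that only fires when the student's token
    # log-probability drifts outside a margin band around the teacher's.
    # This is only a stand-in for the paper's bidirectional marginal loss.
    ce = F.cross_entropy(student_logits.transpose(1, 2), gold, ignore_index=pad_id)
    mask = (gold != pad_id).float()
    s_lp = F.log_softmax(student_logits, dim=-1).gather(-1, gold.unsqueeze(-1)).squeeze(-1)
    t_lp = F.log_softmax(teacher_logits, dim=-1).gather(-1, gold.unsqueeze(-1)).squeeze(-1)
    hinge = torch.clamp((s_lp - t_lp).abs() - margin, min=0.0)
    return ce + (hinge * mask).sum() / mask.sum()

Within a stage, kd_step would be used for the first few epochs and margin_step afterwards (shapes: logits [batch, tgt_len, vocab], gold [batch, tgt_len]); the switching epoch is treated here as a hyperparameter.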


Notes

  1. https://www.datafountain.cn/special/BDCI2021/competition.

  2. https://github.com/fxsjy/jieba.

  3. https://github.com/alvations/sacremoses.

  4. https://github.com/rsennrich/subword-nmt.

  5. https://github.com/facebookresearch/fairseq.
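
The footnoted tools form a common Chinese-English NMT preprocessing and training stack: jieba for Chinese word segmentation, sacremoses for English tokenization, subword-nmt for BPE, and fairseq for model training. The sketch below shows a typical way to chain the first two; it is not the paper's actual script, the example sentences and the Chinese-English direction are assumptions, and the subword-nmt and fairseq steps (command-line tools) are only mentioned in comments.

import jieba
from sacremoses import MosesTokenizer

# Toy parallel data; the real corpus is the competition dataset in footnote 1.
zh_sentences = ["今天天气很好。"]
en_sentences = ["The weather is nice today."]

# Chinese word segmentation with jieba (space-joined tokens).
zh_tok = [" ".join(jieba.cut(s)) for s in zh_sentences]

# English tokenization with sacremoses.
mt = MosesTokenizer(lang="en")
en_tok = [mt.tokenize(s, return_str=True) for s in en_sentences]

print(zh_tok)
print(en_tok)

# The tokenized files would then typically be split into subwords with
# subword-nmt (learn-bpe / apply-bpe) and binarized and trained with fairseq;
# both are command-line tools and are omitted here.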


Author information

Corresponding author

Correspondence to Guotong Geng.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Yang, M., Zhang, H., Yu, C., Geng, G. (2024). Continual Domain Adaption for Neural Machine Translation. In: Luo, B., Cheng, L., Wu, ZG., Li, H., Li, C. (eds) Neural Information Processing. ICONIP 2023. Communications in Computer and Information Science, vol 1965. Springer, Singapore. https://doi.org/10.1007/978-981-99-8145-8_33


  • DOI: https://doi.org/10.1007/978-981-99-8145-8_33


  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-8144-1

  • Online ISBN: 978-981-99-8145-8

  • eBook Packages: Computer Science, Computer Science (R0)
