
FPLGen: A Personalized Dialogue System Based on Feature Prompt Learning

  • Conference paper

Neural Computing for Advanced Applications (NCAA 2024)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 2183)

Abstract

Humans readily identify salient characteristics in other people's conversations and respond in a way that reflects the traits already associated with a particular person. Exploiting such personality information for personalized response generation, however, remains a non-trivial task: the system must account for both the user's conversation history and their personality description, which complicates coherent model training. In this work, we design FPLGen, a dialogue generator based on feature prompt learning that draws primarily on personality description data and dialogue history to generate responses. FPLGen uses a clustering mechanism to organize personality description texts into distinct, sparsely populated categories; these categories are merged with historical contextual information and transformed via conditional variational autoencoders. The system incorporates our information enhancement and feature prompt learning strategies, enabling comprehensive dialogue synthesis. To validate the model's efficacy, we conducted experiments on the Chinese persona chat dataset. Compared with baseline models, the results show that FPLGen produces richer, more engaging personalized responses.
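The pipeline the abstract describes — clustering persona-description embeddings, merging a cluster summary with the dialogue history, and sampling a conditional-VAE latent — can be illustrated with a toy NumPy sketch. All names, dimensions, and the plain k-means routine below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy persona-sentence embeddings; in the real system these would come
# from a text encoder applied to the personality descriptions.
persona_emb = rng.normal(size=(12, 8))

def kmeans(x, k, iters=20):
    # Plain k-means: assign each embedding to its nearest centroid,
    # then recompute each centroid as the mean of its members.
    centroids = x[rng.choice(len(x), size=k, replace=False)].copy()
    for _ in range(iters):
        dists = np.linalg.norm(x[:, None] - centroids[None], axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = x[labels == j].mean(axis=0)
    return labels, centroids

labels, centroids = kmeans(persona_emb, k=3)

# Merge a summary of the persona clusters with a dialogue-history vector,
# then sample a latent via the CVAE reparameterization trick.
history_vec = rng.normal(size=8)
cond = np.concatenate([centroids[labels].mean(axis=0), history_vec])

# Stand-ins for a recognition network's outputs (mean and log-variance).
mu, log_var = cond[:8], cond[8:]
z = mu + np.exp(0.5 * log_var) * rng.normal(size=8)
print(z.shape)  # (8,)
```

In the full system, `mu` and `log_var` would be produced by learned layers conditioned on the merged representation; the reparameterization step shown is the standard trick from Kingma and Welling (2013) that keeps latent sampling differentiable.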




Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grant No. 62273163, the Taishan Scholar Foundation of Shandong Province under Grant No. tsqn202312212, the Outstanding Youth Foundation of Shandong Province under Grant No. ZR2023YQ056, the Key R&D Project of Shandong Province under Grant No. 2022CXGC010503, and the Youth Fund of the Natural Science Foundation of Shandong Province under Grant No. ZR2021QF130.

Author information

Correspondence to Ke Huang or Menghua Zhang.


Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Chu, Y., Huang, K., Li, Y., Zhu, H., Li, P., Zhang, M. (2025). FPLGen: A Personalized Dialogue System Based on Feature Prompt Learning. In: Zhang, H., Li, X., Hao, T., Meng, W., Wu, Z., He, Q. (eds) Neural Computing for Advanced Applications. NCAA 2024. Communications in Computer and Information Science, vol 2183. Springer, Singapore. https://doi.org/10.1007/978-981-97-7007-6_5


  • DOI: https://doi.org/10.1007/978-981-97-7007-6_5


  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-97-7006-9

  • Online ISBN: 978-981-97-7007-6

  • eBook Packages: Artificial Intelligence (R0)
