Abstract
This paper describes a live demo of our autonomous social gaze model for an interactive virtual character situated in the real world. We are interested in estimating which user intends to interact, in other words, which user is engaged with the virtual character. The model takes into account behavioral cues such as proximity, velocity, posture, and sound; estimates an engagement score; and drives the gaze behavior of the virtual character. Initially, we assign equal weights to these features. Using data collected in a real setting, we analyze which features have higher importance, and we find that the model with weighted features correlates better with the ground-truth data.
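The scoring scheme the abstract describes can be sketched as a weighted sum over normalized behavioral cues, with the character gazing at the highest-scoring user. The weight values and function names below are illustrative assumptions, not the paper's fitted parameters.

```python
# Hypothetical sketch of the weighted engagement score: each user's
# behavioral cues (proximity, velocity, posture, sound) are assumed
# to be normalized to [0, 1] and combined into a single score.
# These weights are illustrative, not the paper's learned values.
WEIGHTS = {"proximity": 0.4, "velocity": 0.2, "posture": 0.2, "sound": 0.2}

def engagement_score(features):
    """Weighted sum of normalized behavioral cues for one user."""
    return sum(WEIGHTS[name] * value for name, value in features.items())

def gaze_target(users):
    """Return the id of the user with the highest engagement score,
    i.e. the user the virtual character should direct its gaze at."""
    return max(users, key=lambda uid: engagement_score(users[uid]))

users = {
    "A": {"proximity": 0.9, "velocity": 0.1, "posture": 0.8, "sound": 0.3},
    "B": {"proximity": 0.2, "velocity": 0.7, "posture": 0.1, "sound": 0.6},
}
print(gaze_target(users))  # prints "A": the nearby, well-oriented user wins
```

Fitting the weights to collected interaction data, as the paper does, would replace the uniform initial assignment with feature-importance estimates.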
Copyright information
© 2017 Springer International Publishing AG
Cite this paper
van den Brink, B., Christyowidiasmoro, Yumak, Z. (2017). Social Gaze Model for an Interactive Virtual Character. In: Beskow, J., Peters, C., Castellano, G., O'Sullivan, C., Leite, I., Kopp, S. (eds) Intelligent Virtual Agents. IVA 2017. Lecture Notes in Computer Science(), vol 10498. Springer, Cham. https://doi.org/10.1007/978-3-319-67401-8_56
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-67400-1
Online ISBN: 978-3-319-67401-8