Abstract
Recent years have brought a growing number of methods in the field of eXplainable Artificial Intelligence. Surprisingly, their development is driven by model developers rather than by studies of the needs of human end users. When an analysis of needs is done at all, it usually takes the form of an A/B test rather than a study of open questions. To answer the question “What would a human operator like to ask the ML model?”, we propose a conversational system that explains the decisions of a predictive model. In this experiment, we developed a chatbot called dr_ant that talks about a machine learning model trained to predict survival odds on the Titanic. Users can ask dr_ant about different aspects of the model to understand the rationale behind its predictions. Having collected a corpus of 1000+ dialogues, we analyse the most common types of questions that users would like to ask. To our knowledge, this is the first study that uses a conversational system to collect the needs of human operators through interactive and iterative dialogue exploration of a predictive model.
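To make this question-to-explanation mapping concrete, the snippet below is a minimal, hypothetical sketch in R built on the DALEX package [4]: it routes a recognised question intent to a standard explanation function. The intent labels and the explain_intent() helper are illustrative assumptions, not part of the dr_ant implementation.

# Hypothetical sketch: route a recognised question intent to a DALEX explanation.
# The intent labels and explain_intent() are illustrative, not taken from dr_ant.
library("DALEX")

explain_intent <- function(explainer, passenger, intent) {
  switch(intent,
    why        = predict_parts(explainer, new_observation = passenger),    # "Why this prediction?"
    what_if    = predict_profile(explainer, new_observation = passenger),  # "What if a feature changes?"
    importance = model_parts(explainer),                                   # "Which variables matter overall?"
    stop("Unrecognised intent: ", intent)
  )
}

Given a DALEX explainer and a single passenger observation, a call such as explain_intent(explainer, passenger, "why") would return break-down attributions for that prediction.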
Notes
1.
2. You can download the model from the archivist [6] database with the following hook: archivist::aread("pbiecek/models/42d51") (see the sketch after these notes).
3. The source code is available at https://github.com/ModelOriented/xaibot.
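As a hedged, illustrative sketch (not taken from the paper), the R snippet below shows how the model referenced in note 2 could be retrieved and wrapped into a DALEX [4] explainer to reproduce the kind of single-prediction explanations dr_ant serves. The class of the archived object and the use of DALEX::titanic_imputed as validation data are assumptions.

# Sketch: retrieve the archived Titanic model (note 2) and build a DALEX explainer.
# Assumptions: the archived object is a classification model DALEX can handle
# out of the box, and DALEX::titanic_imputed matches its training data layout.
library("DALEX")

model <- archivist::aread("pbiecek/models/42d51")

explainer <- explain(
  model,
  data  = titanic_imputed[, colnames(titanic_imputed) != "survived"],
  y     = titanic_imputed$survived,
  label = "titanic_model"
)

# Ask about a single passenger, e.g. the first row of the validation data.
passenger <- titanic_imputed[1, colnames(titanic_imputed) != "survived"]

predict_parts(explainer, new_observation = passenger)                       # "Why this prediction?"
predict_profile(explainer, new_observation = passenger, variables = "age")  # "What if age changes?"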
References
Titanic dataset. https://www.kaggle.com/c/titanic/data
Arya, V., et al.: One explanation does not fit all: a toolkit and taxonomy of AI explainability techniques (2019)
Baniecki, H., Biecek, P.: modelStudio: interactive studio with explanations for ML predictive models. J. Open Source Softw. (2019). https://doi.org/10.21105/joss.01798
Biecek, P.: DALEX: explainers for complex predictive models in R. J. Mach. Learn. Res. 19, 1–5 (2018)
Biecek, P., Burzykowski, T.: Explanatory Model Analysis. Explore, Explain and Examine Predictive Models (2020). https://pbiecek.github.io/ema/
Biecek, P., Kosinski, M.: archivist: an R package for managing, recording and restoring data analysis results. J. Stat. Softw. 82(11), 1–28 (2017)
El-Assady, M., et al.: Towards XAI: structuring the processes of explanations (2019)
Gilpin, L.H., Bau, D., Yuan, B.Z., Bajwa, A., Specter, M., Kagal, L.: Explaining explanations: an approach to evaluating interpretability of machine learning (2018). http://arxiv.org/abs/1806.00069
Gosiewska, A., Biecek, P.: Do not trust additive explanations. arXiv e-prints (2019)
Hoover, B., Strobelt, H., Gehrmann, S.: exBERT: a visual analysis tool to explore learned representations in transformers models (2019)
Jentzsch, S., Höhn, S., Hochgeschwender, N.: Conversational interfaces for explainable AI: a human-centred approach (2019)
Kuzba, M., Baranowska, E., Biecek, P.: pyCeterisParibus: explaining machine learning models with ceteris paribus profiles in Python. J. Open Source Softw. 4(37), 1389 (2019). http://joss.theoj.org/papers/10.21105/joss.01389
Lage, I., et al.: An evaluation of the human-interpretability of explanation (2019). http://arxiv.org/abs/1902.00006
Lipton, Z.C.: The mythos of model interpretability (2016). http://arxiv.org/abs/1606.03490
Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. In: Guyon, I., et al. (eds.) Advances in Neural Information Processing Systems, vol. 30, pp. 4765–4774. Curran Associates, Inc. (2017). http://papers.nips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions.pdf
Madumal, P., Miller, T., Sonenberg, L., Vetere, F.: A grounded interaction protocol for explainable artificial intelligence. In: AAMAS (2019)
Madumal, P., Miller, T., Vetere, F., Sonenberg, L.: Towards a grounded dialog model for explainable artificial intelligence (2018). http://arxiv.org/abs/1806.08055
Miller, T.: Explanation in artificial intelligence: insights from the social sciences (2017). http://arxiv.org/abs/1706.07269
Miller, T., Howe, P., Sonenberg, L.: Explainable AI: beware of inmates running the asylum or: how I learnt to stop worrying and love the social and behavioural sciences (2017). http://arxiv.org/abs/1712.00547
Molnar, C., Casalicchio, G., Bischl, B.: Quantifying interpretability of arbitrary machine learning models through functional decomposition. arXiv e-prints (2019)
Mueller, S.T., Hoffman, R.R., Clancey, W.J., Emrey, A., Klein, G.: Explanation in human-AI systems: a literature meta-review, synopsis of key ideas and publications, and bibliography for explainable AI (2019). http://arxiv.org/abs/1902.01876
Nori, H., Jenkins, S., Koch, P., Caruana, R.: InterpretML: a unified framework for machine learning interpretability (2019). https://arxiv.org/abs/1909.09223
Pecune, F., Murali, S., Tsai, V., Matsuyama, Y., Cassell, J.: A model of social explanations for a conversational movie recommendation system (2019)
R Core Team: R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria (2019). https://www.R-project.org/
Ribeiro, M.T., Singh, S., Guestrin, C.: “Why should I trust you?”: explaining the predictions of any classifier (2016). https://doi.org/10.1145/2939672.2939778
Ribera, M., Lapedriza, À.: Can we do better explanations? A proposal of user-centered explainable AI. In: IUI Workshops (2019)
Rydelek, A.: xai2cloud: Deploys An Explainer To The Cloud (2020). https://modeloriented.github.io/xai2cloud
Scantamburlo, T., Charlesworth, A., Cristianini, N.: Machine decisions and human consequences (2018). http://arxiv.org/abs/1811.06747
Sokol, K., Flach, P.: Conversational explanations of machine learning predictions through class-contrastive counterfactual statements. In: Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence Organization, IJCAI-18, pp. 5785–5786 (2018). https://doi.org/10.24963/ijcai.2018/836
Sokol, K., Flach, P.: Glass-box: explaining AI decisions with counterfactual statements through conversation with a voice-enabled virtual assistant. In: Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence Organization, IJCAI-18, pp. 5868–5870 (2018). https://doi.org/10.24963/ijcai.2018/865
Sokol, K., Flach, P.: One explanation does not fit all. KI - Künstliche Intelligenz 34(2), 235–250 (2020). https://doi.org/10.1007/s13218-020-00637-y
Tan, H.F., Song, K., Udell, M., Sun, Y., Zhang, Y.: Why should you trust my interpretation? Understanding uncertainty in LIME predictions (2019). http://arxiv.org/abs/1904.12991
Tomsett, R., Braines, D., Harborne, D., Preece, A., Chakraborty, S.: Interpretable to whom? A role-based model for analyzing interpretable machine learning systems (2018)
Trestle Technology, LLC: plumber: an API Generator for R (2018)
Werner, C.: Explainable AI through rule-based interactive conversation. In: EDBT/ICDT Workshops (2020)
Acknowledgments
We would like to thank the three anonymous reviewers for their insightful comments and suggestions. Michał Kuźba was financially supported by the NCN Opus grant 2016/21/B/ST6/0217.
Copyright information
© 2020 Springer Nature Switzerland AG
About this paper
Cite this paper
Kuźba, M., Biecek, P. (2020). What Would You Ask the Machine Learning Model? Identification of User Needs for Model Explanations Based on Human-Model Conversations. In: Koprinska, I., et al. ECML PKDD 2020 Workshops. ECML PKDD 2020. Communications in Computer and Information Science, vol 1323. Springer, Cham. https://doi.org/10.1007/978-3-030-65965-3_30
DOI: https://doi.org/10.1007/978-3-030-65965-3_30
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-65964-6
Online ISBN: 978-3-030-65965-3
eBook Packages: Computer Science, Computer Science (R0)