What Would You Ask the Machine Learning Model? Identification of User Needs for Model Explanations Based on Human-Model Conversations

  • Conference paper
ECML PKDD 2020 Workshops (ECML PKDD 2020)

Abstract

Recently we have seen a rising number of methods in the field of eXplainable Artificial Intelligence (XAI). To our surprise, their development is driven by model developers rather than by a study of the needs of human end users. The analysis of needs, if done at all, takes the form of an A/B test rather than a study of open questions. To answer the question “What would a human operator like to ask the ML model?” we propose a conversational system that explains the decisions of a predictive model. In this experiment, we developed a chatbot called dr_ant to talk about a machine learning model trained to predict survival odds on the Titanic. People can talk with dr_ant about different aspects of the model to understand the rationale behind its predictions. Having collected a corpus of 1000+ dialogues, we analyse the most common types of questions that users would like to ask. To our knowledge, this is the first study that uses a conversational system to collect the needs of human operators through interactive and iterative dialogue exploration of a predictive model.

Notes

  1. https://github.com/mishushakov/dialogflow-web-v2.

  2. You can download the model from the [6] database with the following hook: archivist::aread("pbiecek/models/42d51"). See the sketch after these notes.

  3. The source code is available at https://github.com/ModelOriented/xaibot.
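
To make the hook in note 2 concrete, here is a minimal sketch, not taken from the paper, of how one might retrieve the model in R and query it with DALEX [4]. It assumes the archived object is a classification model compatible with DALEX and that the titanic_imputed dataset shipped with DALEX matches the model's training variables; the passenger below is purely hypothetical.

    # Retrieve the Titanic model from the authors' archivist repository (note 2)
    library("archivist")
    library("DALEX")

    model <- archivist::aread("pbiecek/models/42d51")

    # Wrap the model in a DALEX explainer; titanic_imputed ships with DALEX
    # and its 8th column (survived) is the target variable
    explainer <- DALEX::explain(model,
                                data  = titanic_imputed[, -8],
                                y     = titanic_imputed$survived,
                                label = "Titanic model")

    # Ask the model about a single, hypothetical passenger
    passenger <- data.frame(gender = "female", age = 8, class = "1st",
                            embarked = "Southampton", fare = 72,
                            sibsp = 0, parch = 0)
    predict(explainer, passenger)

    # Decompose this prediction into per-variable contributions,
    # the kind of "why" answer the chatbot aims to provide
    predict_parts(explainer, passenger)

The predict_parts() call stands in here for the kind of explanation dr_ant surfaces in conversation; any DALEX-compatible explanation function could be substituted.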

References

  1. Titanic dataset. https://www.kaggle.com/c/titanic/data

  2. Arya, V., et al.: One explanation does not fit all: a toolkit and taxonomy of AI explainability techniques (2019)

  3. Baniecki, H., Biecek, P.: modelStudio: interactive studio with explanations for ML predictive models. J. Open Source Softw. (2019). https://doi.org/10.21105/joss.01798

  4. Biecek, P.: DALEX: explainers for complex predictive models in R. J. Mach. Learn. Res. 19, 1–5 (2018)

  5. Biecek, P., Burzykowski, T.: Explanatory Model Analysis. Explore, Explain and Examine Predictive Models (2020). https://pbiecek.github.io/ema/

  6. Biecek, P., Kosinski, M.: archivist: an R package for managing, recording and restoring data analysis results. J. Stat. Softw. 82(11), 1–28 (2017)

  7. El-Assady, M., et al.: Towards XAI: structuring the processes of explanations (2019)

  8. Gilpin, L.H., Bau, D., Yuan, B.Z., Bajwa, A., Specter, M., Kagal, L.: Explaining explanations: an approach to evaluating interpretability of machine learning (2018). http://arxiv.org/abs/1806.00069

  9. Gosiewska, A., Biecek, P.: Do not trust additive explanations. arXiv e-prints (2019)

  10. Hoover, B., Strobelt, H., Gehrmann, S.: exBERT: a visual analysis tool to explore learned representations in transformer models (2019)

  11. Jentzsch, S., Höhn, S., Hochgeschwender, N.: Conversational interfaces for explainable AI: a human-centred approach (2019)

  12. Kuzba, M., Baranowska, E., Biecek, P.: pyCeterisParibus: explaining machine learning models with ceteris paribus profiles in Python. J. Open Source Softw. 4(37), 1389 (2019). http://joss.theoj.org/papers/10.21105/joss.01389

  13. Lage, I., et al.: An evaluation of the human-interpretability of explanation (2019). http://arxiv.org/abs/1902.00006

  14. Lipton, Z.C.: The mythos of model interpretability (2016). http://arxiv.org/abs/1606.03490

  15. Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. In: Guyon, I., et al. (eds.) Advances in Neural Information Processing Systems, vol. 30, pp. 4765–4774. Curran Associates, Inc. (2017). http://papers.nips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions.pdf

  16. Madumal, P., Miller, T., Sonenberg, L., Vetere, F.: A grounded interaction protocol for explainable artificial intelligence. In: AAMAS (2019)

  17. Madumal, P., Miller, T., Vetere, F., Sonenberg, L.: Towards a grounded dialog model for explainable artificial intelligence (2018). http://arxiv.org/abs/1806.08055

  18. Miller, T.: Explanation in artificial intelligence: insights from the social sciences (2017). http://arxiv.org/abs/1706.07269

  19. Miller, T., Howe, P., Sonenberg, L.: Explainable AI: beware of inmates running the asylum or: how I learnt to stop worrying and love the social and behavioural sciences (2017). http://arxiv.org/abs/1712.00547

  20. Molnar, C., Casalicchio, G., Bischl, B.: Quantifying interpretability of arbitrary machine learning models through functional decomposition. arXiv e-prints (2019)

  21. Mueller, S.T., Hoffman, R.R., Clancey, W.J., Emrey, A., Klein, G.: Explanation in human-AI systems: a literature meta-review, synopsis of key ideas and publications, and bibliography for explainable AI (2019). http://arxiv.org/abs/1902.01876

  22. Nori, H., Jenkins, S., Koch, P., Caruana, R.: InterpretML: a unified framework for machine learning interpretability (2019). https://arxiv.org/abs/1909.09223

  23. Pecune, F., Murali, S., Tsai, V., Matsuyama, Y., Cassell, J.: A model of social explanations for a conversational movie recommendation system (2019)

  24. R Core Team: R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria (2019). https://www.R-project.org/

  25. Ribeiro, M.T., Singh, S., Guestrin, C.: “Why should I trust you?”: explaining the predictions of any classifier (2016). https://doi.org/10.1145/2939672.2939778

  26. Ribera, M., Lapedriza, À.: Can we do better explanations? A proposal of user-centered explainable AI. In: IUI Workshops (2019)

  27. Rydelek, A.: xai2cloud: deploys an explainer to the cloud (2020). https://modeloriented.github.io/xai2cloud

  28. Scantamburlo, T., Charlesworth, A., Cristianini, N.: Machine decisions and human consequences (2018). http://arxiv.org/abs/1811.06747

  29. Sokol, K., Flach, P.: Conversational explanations of machine learning predictions through class-contrastive counterfactual statements. In: Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18), pp. 5785–5786 (2018). https://doi.org/10.24963/ijcai.2018/836

  30. Sokol, K., Flach, P.: Glass-box: explaining AI decisions with counterfactual statements through conversation with a voice-enabled virtual assistant. In: Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18), pp. 5868–5870 (2018). https://doi.org/10.24963/ijcai.2018/865

  31. Sokol, K., Flach, P.: One explanation does not fit all. KI - Künstliche Intelligenz 34(2), 235–250 (2020). https://doi.org/10.1007/s13218-020-00637-y

  32. Tan, H.F., Song, K., Udell, M., Sun, Y., Zhang, Y.: Why should you trust my interpretation? Understanding uncertainty in LIME predictions (2019). http://arxiv.org/abs/1904.12991

  33. Tomsett, R., Braines, D., Harborne, D., Preece, A., Chakraborty, S.: Interpretable to whom? A role-based model for analyzing interpretable machine learning systems (2018)

  34. Trestle Technology, LLC: plumber: an API Generator for R (2018)

  35. Werner, C.: Explainable AI through rule-based interactive conversation. In: EDBT/ICDT Workshops (2020)

Acknowledgments

We would like to thank the three anonymous reviewers for their insightful comments and suggestions. Michał Kuźba was financially supported by NCN Opus grant 2016/21/B/ST6/0217.

Author information

Corresponding author

Correspondence to Michał Kuźba.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Kuźba, M., Biecek, P. (2020). What Would You Ask the Machine Learning Model? Identification of User Needs for Model Explanations Based on Human-Model Conversations. In: Koprinska, I., et al. ECML PKDD 2020 Workshops. ECML PKDD 2020. Communications in Computer and Information Science, vol 1323. Springer, Cham. https://doi.org/10.1007/978-3-030-65965-3_30

  • DOI: https://doi.org/10.1007/978-3-030-65965-3_30

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-65964-6

  • Online ISBN: 978-3-030-65965-3

  • eBook Packages: Computer Science, Computer Science (R0)
