Explainable AI for Operational Research: a defining framework, methods, applications, and a research agenda

Koen W. De Bock*, Kristof Coussement, Arno De Caigny, Roman Słowiński, Bart Baesens, Robert N. Boute, Tsan Ming Choi, Dursun Delen, Mathias Kraus, Stefan Lessmann, Sebastián Maldonado, David Martens, María Óskarsdóttir, Carla Vairetti, Wouter Verbeke, Richard Weber

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

15 Scopus citations

Abstract

The ability to understand and explain the outcomes of data analysis methods in support of decision-making has become a critical requirement for many applications. In operational research, for example, data analytics has long been promoted as a way to enhance decision-making. This study proposes a comprehensive, normative framework that defines explainable artificial intelligence for operational research (XAIOR) as a reconciliation of the three subdimensions that constitute its requirements: performance, attributable, and responsible analytics. The article then offers in-depth overviews of how XAIOR can be deployed through various methods across distinct domains and applications. Finally, an agenda for future XAIOR research is defined.

Original language: English
Pages (from-to): 1-24
Number of pages: 24
Journal: European Journal of Operational Research
DOIs
State: Accepted/In press - 2023

Bibliographical note

Publisher Copyright:
© 2023 Elsevier B.V.

Keywords

  • Decision analysis
  • Explainable artificial intelligence
  • Interpretable machine learning
  • XAI
  • XAIOR
