Trustworthy enough? Evaluation of an AI decision support system for healthcare professionals - poster

Research output: Unpublished contribution to conference › Poster



Exploring end-users' needs and involving them in designing and evaluating an AI system should be a priority. Understanding the system is essential for assessing whether or not to trust it. This paper discusses a use case of a decision support system integrated into a platform for healthcare professionals (medical call operators and nurses). The system guides them with a prediction of which intervention should be taken when an accident happens to a patient at home. Our use case demonstrates the importance of human-centred evaluation methods and the potential pitfalls of mixed methods, as revealed by differences between the qualitative and quantitative results. A subjective scale combined with group interviews was used to evaluate trust in the system. The results showed that while users reported relatively high trust on the scale, the qualitative insights indicated uncertainty and a need for better explainability before users could trust the decision support system. In line with these results, we point out the need for better human-centred evaluation methods: the current subjective scale must be complemented by qualitative methods to ensure rich insights.
Original language: English
Publication status: Accepted/In press - Jun 2023
Event: The World Conference on eXplainable Artificial Intelligence - Lisboa, Portugal
Duration: 26 Jul 2023 - 28 Jul 2023


Conference: The World Conference on eXplainable Artificial Intelligence
Abbreviated title: xAI


  • explainable AI (XAI)
  • trustworthy AI
  • DSS in healthcare
  • XAI evaluation
  • explainability needs


