Exploring end-users' needs and involving them in designing and evaluating an AI system should be a priority. Understanding the system is essential for assessing whether or not to trust it. This paper discusses a use case of a decision support system integrated into a platform for healthcare professionals (medical call operators and nurses). The system guides them by predicting which intervention should be taken when an accident happens to a patient at home. Our use case demonstrates the importance of human-centred evaluation methods and the potential pitfalls of mixed methods, as revealed by differences between the qualitative and quantitative approaches. A subjective scale combined with group interviews was used to evaluate trust in the system. The results showed that while users reported relatively high trust on the scale, the qualitative insights indicated uncertainty and a need for better explainability before they could trust the decision support system. In line with these results, we point out the need for better human-centred evaluation methods, as the current subjective scale must be complemented by qualitative methods to ensure rich insights.
Original language: English
Status: Accepted/In press - Jun 2023
Event: The World Conference on eXplainable Artificial Intelligence - Lisboa, Portugal
Duration: 26 Jul 2023 - 28 Jul 2023


Conference: The World Conference on eXplainable Artificial Intelligence
Abbreviated title: xAI


Dive into the research topics of 'Trustworthy enough? Evaluation of an AI decision support system for healthcare professionals - poster'. Together they form a unique fingerprint.
