Mapping usability heuristics to ethical principles in clinical decision support design
Time: Thursday, April 15, 2:57pm - 2:58pm EDT
Emerging collective intelligence and related phenomena have been discussed and debated for many decades. Joint cognitive systems, augmented intelligence, and macrocognitive systems are a few labels used to describe the evolution of collective decision-making capacity of joint agents. In healthcare, we often see these systems applied in the form of clinical decision support (CDS) systems. The human as a component of a system is sometimes overlooked, but humans often contribute significantly to the resiliency of a system.
Hoffman and colleagues (2009) remind us that the language used to describe phenomena subsequently frames design and work. Diversity in thought may influence and be influenced by diversity in language. One challenge to interprofessional design in healthcare is communication across diverse professions. Making connections, or translating, between the language of clinicians and that of human factors professionals might help build a shared vocabulary.
In this paper, we map ethical principles to usability heuristics as a crosswalk between disciplines. We used the language of ethics for this mapping because it is included in clinical training programs, so healthcare professionals should be familiar with it. We then connect these mappings to real-life examples from different perspectives: patients, clinicians, and designers and evaluators of systems. By describing lived experiences in healthcare settings, we highlight the diversity of both thought and language while connecting these perspectives in discussing the design of healthcare systems.
Design heuristics and the ethical design of CDS systems
What value can this conversation bring to the clinician, the designer, or the patient? Reflecting on our lived experiences and reviewing literature on this topic bring us to a draft mapping of design heuristics to ethical principles for thinking about CDS systems. The four most discussed ethical principles are non-maleficence, beneficence, autonomy, and justice. Other important considerations are trust, transparency, and accountability. In addition, a CDS system must be useful, providing a clinician with actionable information, and usable, allowing the clinician to act on that information.
As a first pass, and acknowledging that there is much room for addition and improvement, we propose the mappings in Table 1. We hope these mappings may serve as a tool for discussing ethical considerations in CDS systems by providing a common language and helping evaluators and users frame design discussions. In this proposal, we present an outline format given constraints of the submission platform. Several visual representations will be provided in the presentation and paper.
Table 1. Design heuristics for ethical CDS systems
1. Justice and Equality -- Accessibility and inclusion of all relevant patient information.
2. Justice and Equality -- Accuracy and representativeness of the underlying population health data used to train the learning model.
3. Justice, Equality, and Beneficence -- CDS tool provides information the clinician needs that is: accurate, useful/relevant, and usable.
4. Autonomy (of clinicians and patients) – Flexibility in design.
5. Transparency -- Visibility of system state.
a. Show why some information is presented as output and what information is omitted.
b. Show how the system addresses co-morbidities and conflicting guidance.
c. Show the clinical guidelines used.
d. Support ability for cross-checks.
6. Non-maleficence and Trust -- Error tolerance and modularity.
a. Prevent errors.
b. Aid in error recovery—if patient information was entered incorrectly or is no longer current, how do recommendations update? Help clinician course-correct when new information becomes available.
7. Beneficence -- Information presentation aids decision-making.
a. Information format is understandable, useful, usable.
b. Help the clinician balance conflicting guidance.
8. Justice and Equality, Trust, and Non-maleficence -- Match between CDS and real world; the system should speak the users' language.
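The crosswalk in Table 1 can also be expressed as a simple data structure, for example to drive an evaluation checklist. The following is an illustrative sketch only; the mapping content mirrors Table 1, but the structure and function names are hypothetical.

```python
# Hypothetical sketch: Table 1's crosswalk between design heuristics and
# ethical principles as a Python mapping. The content mirrors Table 1;
# the data structure itself is illustrative, not part of the paper.

HEURISTIC_TO_ETHICS = {
    "Accessibility and inclusion of all relevant patient information":
        ["Justice and Equality"],
    "Accuracy and representativeness of training data":
        ["Justice and Equality"],
    "Accurate, useful/relevant, usable information":
        ["Justice and Equality", "Beneficence"],
    "Flexibility in design":
        ["Autonomy"],
    "Visibility of system state":
        ["Transparency"],
    "Error tolerance and modularity":
        ["Non-maleficence", "Trust"],
    "Information presentation aids decision-making":
        ["Beneficence"],
    "Match between CDS and real world":
        ["Justice and Equality", "Trust", "Non-maleficence"],
}

def heuristics_for(principle: str) -> list[str]:
    """Return the design heuristics that support a given ethical principle."""
    return [h for h, ps in HEURISTIC_TO_ETHICS.items() if principle in ps]
```

An evaluator could, for instance, query `heuristics_for("Transparency")` to pull the heuristics relevant to a transparency review.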
Cameron and Pimlott (2015) ask, “Could stories, poems, essays, metaphors, similes, analogies, photographs, paintings, music, and dance have the power to bring about change?” To discuss ethical considerations of emerging collective intelligence in the healthcare setting, we present narrative accounts of lived experience through the lenses of patient, healthcare professional, patient safety professional, human factors professional, and data scientist working in CDS and machine learning. In Researching Lived Experience, Max van Manen wrote, “Human science is concerned with action in that hermeneutic phenomenological reflection deepens thought and therefore radicalizes thinking and the acting that flows from it” (2016, p. 154). This reflection into lived experiences provides an opportunity to examine how CDS can be designed to be useful, usable, and ethical.
Experience as a patient
How could inaccurate data influence learning algorithms and text classification tasks? One author was notified about a critical lab test that was later discovered to be a mislabeled specimen from another patient. The other had a drug allergy recorded in the medical record when a more flexible system would have allowed the clinician to record it as an adverse drug reaction, distinguishing it from an anaphylactic reaction. How many mistakes, inaccuracies, or biased results are never identified or reconciled, and how will learning health systems manage this issue? As information is buried under added layers, this challenges respect for autonomy, that is, ensuring the system is designed to support a person's agency in making choices about their health. Design principles that align with the ethical value of respect for patient and clinician autonomy include user control and freedom, visibility of system state, and ease of recovering from error.
Experience as a healthcare professional
This narrative reflects on the ethical principle of justice and the desire for equitable care. Patient care services augmented by CDS systems can support efficient and safe diagnostic, therapeutic, and monitoring activities. In an anticoagulation clinic, we used a CDS tool for anticoagulation therapy management: the computer retrieves patient and therapeutic parameters, feeds them into an algorithm, and automatically supplies patient-specific recommendations to the healthcare professional. This assistance in sifting through the patient chart seems more efficient than collecting information manually, but what useful information might the recommendation system miss that the clinician would find through manual review? Does the decision support serve one demographic or patient care scenario better at the expense of another?
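The retrieve-parameters, apply-algorithm, surface-recommendation workflow described above might be sketched as follows. All names, thresholds, and the dosing rule itself are hypothetical placeholders for illustration, not the clinic's actual protocol and not clinical guidance.

```python
from dataclasses import dataclass

# Hypothetical sketch of the retrieve -> algorithm -> recommend pipeline
# described in the narrative. The INR thresholds and dose language are
# illustrative placeholders, NOT clinical guidance.

@dataclass
class PatientParams:
    inr: float             # latest International Normalized Ratio result
    target_low: float      # lower bound of the therapeutic range
    target_high: float     # upper bound of the therapeutic range
    weekly_dose_mg: float  # current weekly anticoagulant dose

def recommend(p: PatientParams) -> str:
    """Return a recommendation string. A real CDS would also surface the
    guideline it applied and the data it relied on (transparency), and
    flag data it could not retrieve (supporting manual cross-checks)."""
    if p.inr < p.target_low:
        return (f"Consider increasing weekly dose above "
                f"{p.weekly_dose_mg} mg; recheck INR.")
    if p.inr > p.target_high:
        return (f"Consider decreasing weekly dose below "
                f"{p.weekly_dose_mg} mg; recheck INR.")
    return "INR in range; continue current dose."
```

The sketch also makes the chapter's question concrete: anything not captured in `PatientParams`, such as a dietary change the patient mentions in conversation, is invisible to `recommend`, which is exactly the information a manual chart review might surface.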
Experience as human factors and safety professionals
One author recalls a usability evaluation of an electronic health record (EHR) in which a participant toggled a data field from empty to “Yes” when the correct response was actually “No.” When pressed, he explained that no one in his group understood what the data field meant, but they had learned that if they did not click on it to switch the value to “Yes,” the interface would not allow them to continue with the task. In actuality, a “No” response would have allowed him to continue as well, but selecting “No” required an extra click. Because the interface did not explain the purpose of the data field or the implications of the different responses, and because a single click was sufficient to continue with his task, that is where he stopped. To support the ethical principle of non-maleficence, a system should be error-tolerant and provide help and documentation.
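The failure mode in this anecdote, a binary field with no explicit "unanswered" state and asymmetric click costs, can be avoided by modeling the field as a true tri-state value and validating it with an explanation rather than a silent block. A minimal sketch, with hypothetical field and message names:

```python
from enum import Enum

# Hypothetical sketch: model an EHR data field as an explicit tri-state so
# that "not answered" is distinct from "Yes" and "No", and validation
# explains why the field matters instead of silently blocking the workflow.

class FieldValue(Enum):
    UNKNOWN = "unknown"  # default: clinician has not answered
    YES = "yes"
    NO = "no"

def validate(value: FieldValue) -> tuple[bool, str]:
    """Return (can_continue, message). Both explicit answers proceed with
    equal effort; an unanswered field yields an explanation, not a dead end."""
    if value is FieldValue.UNKNOWN:
        return (False, "Please select Yes or No. This field affects downstream "
                       "recommendations, so an explicit answer is required.")
    return (True, "")
```

Because "Yes" and "No" cost the same number of clicks and the unanswered state carries its own explanation, the path of least resistance no longer pushes the clinician toward an incorrect value.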