Symbiotic Design Application in Healthcare: Preventing Hospital Acquired Infections
Thursday, April 15, 2:00pm - 3:00pm EDT
The increasing digitization and prevalence of healthcare data have opened new possibilities for clinical decision support (CDS) tools driven by powerful computational technologies, including artificial intelligence (AI). However, these new tools pose new challenges as well. The ways in which human decision-makers and machine decision aids interact can produce outsized positive or negative [3-5] effects on the system as a whole. Because of the two-way relationship, in which both humans and machines can affect and be affected by the other, machines in these systems function as active cognitive teammates rather than passive tools. This relationship can be conceptualized as a human-machine symbiosis, where humans and machines are analogous to organisms in a working relationship. As in nature, this relationship can be mutually beneficial, beneficial to one party while doing no harm to the other, or beneficial to one party while harming the other (i.e., mutualism, commensalism, or parasitism). The goal of design then becomes a fluid symbiosis to the mutual benefit of both humans and machines.
This symbiotic design process requires a coalescence of diverse expertise and techniques, which suggests symbiosis is also needed within the teams of people designing symbiotic human-machine teams. Multidisciplinary teams must include expertise in interaction design, cognitive agent design, computational disciplines, data science disciplines, and the domain of interest. This diversity of expertise brings with it diverse and not always complementary perspectives on the goals, problems, and solutions the team is meant to jointly address. These teams must cooperate while continually cross-checking one another. Teams must develop a shared set of techniques through a fusion of user-centered design, cognitive systems engineering, and cognitive agent design, yet remain active individuals who continually act on and react to the rest of the team (rather than passive receivers of information and instructions). We therefore believe it takes symbiosis (in the team) to achieve symbiosis (in the resultant solution), and likewise, the study of symbiosis in either human-machine or human-human teams directly informs the other.
We have begun to explore the process of symbiotic design in creating an AI-enabled CDS tool for hospital-acquired infection (HAI) prevention. HAIs contribute to increased mortality, morbidity, and length of hospitalization, and cost US hospitals $28-45 billion annually. The complexity of finding and anticipating disease transmission routes among spatially and temporally distant patients makes infection prevention particularly well-suited for a symbiotic design approach that bolsters human sensemaking with computational modeling. We discuss our experiences throughout this process, including our tenets of symbiotic design, the known pitfalls we have avoided, and the challenges we have faced along the way as a symbiotic team designing a symbiotic team.
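To give a concrete (and deliberately simplified) sense of the kind of computational modeling described above, the sketch below links patients who overlapped in time at the same location and traces which patients are reachable from an index case through those shared-location contacts. All data and function names here are hypothetical illustrations, not our actual tool.

```python
from collections import defaultdict

# Hypothetical stay records: (patient, location, day_in, day_out).
stays = [
    ("A", "ICU-1", 1, 4),
    ("B", "ICU-1", 3, 6),    # overlaps A in ICU-1 on days 3-4
    ("B", "Ward-2", 6, 9),
    ("C", "Ward-2", 8, 12),  # overlaps B in Ward-2 on days 8-9
    ("D", "Ward-3", 2, 5),   # overlaps no one
]

def contact_graph(stays):
    """Add an edge between two patients whose stays in one location overlap."""
    graph = defaultdict(set)
    for i, (p1, loc1, in1, out1) in enumerate(stays):
        for p2, loc2, in2, out2 in stays[i + 1:]:
            if p1 != p2 and loc1 == loc2 and in1 <= out2 and in2 <= out1:
                graph[p1].add(p2)
                graph[p2].add(p1)
    return graph

def transmission_routes(graph, source):
    """Breadth-first search: patients reachable from an index case."""
    frontier, reached = [source], {source}
    while frontier:
        next_frontier = []
        for p in frontier:
            for q in graph[p]:
                if q not in reached:
                    reached.add(q)
                    next_frontier.append(q)
        frontier = next_frontier
    return reached - {source}

graph = contact_graph(stays)
print(sorted(transmission_routes(graph, "A")))  # B directly, C via B: ['B', 'C']
```

Even this toy version hints at why the real task exceeds unaided human sensemaking: routes can pass through patients who never shared a room with the index case, only with an intermediary.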
Central Tenets of Symbiotic Design
All adaptive systems, including our own design team, risk falling into maladaptive patterns, two of which are (1) getting stuck in outdated behaviors and (2) working at cross-purposes. Motivated by these known shortcomings, we explicitly configured our symbiotic design process around two central tenets.
Recurrent Bottom-Up Feedback
Technology-driven approaches, where design and development are guided primarily by technological limitations, are particularly vulnerable to solving the wrong problem, ultimately creating a tool that is not usable, useful, or desirable. To help ensure our symbiotic design process would instead be problem-driven, we brought in subject matter experts (SMEs) as full members of the team, scheduled recurrent feedback sessions with them, and engaged with end-users throughout the entire process, even after we felt we fully understood the domain. This was challenging because the more we understood about the domain and its problems, the more these feedback sessions appeared unnecessary; however, we continued to discover key insights long after our initial research finished, insights we likely would not have discovered had these sessions not been fixed activities in our project timeline. Each of our process artifacts, including abstraction networks, user personas, scenarios, and wireframe designs, elicited different aspects of the problem domain beyond what was possible from initial user research alone. By tethering our design team to a continual source of bottom-up research, we enabled the team to explore and adjust to what was important in the domain as it became apparent over time.
Continual Model (Re-)Alignment
The differing perspectives and responsibilities of our hierarchically organized team increased the risk of misaligned goals. Solutions advantageous to one sub-team's responsibilities may prevent other team members from meeting their own responsibilities or undermine the long-term shared goals of the whole team. To mitigate these risks, we continuously shared and revised our process artifacts in recurrent full-team meetings. Beyond sharing updates, these artifacts and meetings helped detect misalignments by eliciting each member's understanding (i.e., mental model) of the project. However, we found these standard procedures for aligning hierarchical, multidisciplinary teams were necessary but not sufficient for detecting misalignments. The abstracted information communicated at the highest levels of the organization (i.e., among team leads) obscured some of the ways in which small misalignments were causing the overall team to work at cross-purposes. These misalignments were only revealed through horizontal communication at the lower levels of the organization. We therefore needed a regular, structured communication mechanism between the people who did the work, beyond what was available between the people who coordinated the work. Again, these meetings could appear unnecessary at times, but proved vital in detecting when our team was not well-coordinated and needed realignment.
We have already seen value in symbiotic design as a process to integrate multidisciplinary teams. We will discuss in more detail the mechanisms we created, the benefits and challenges we have experienced, the potentially disastrous miscommunications that we detected and addressed early, and the implications for human-machine teams.
References
[1] Sheikh, A., Bates, D. W., Wright, A., & Cresswell, K. (Eds.). (2017). Key advances in clinical informatics: Transforming health care through health information technology. Academic Press.
[2] Woods, D. D., & Hollnagel, E. (2006). Joint cognitive systems: Patterns in cognitive systems engineering. CRC Press.
[3] Sorkin, R. D., & Woods, D. D. (1985). Systems with human monitors: A signal detection analysis. Human-Computer Interaction, 1(1), 49-75.
[4] Smith, P. J., McCoy, C. E., & Layton, C. (1997). Brittleness in the design of cooperative problem-solving systems: The effects on user performance. IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans, 27(3), 360-371.
[5] Wickens, C. D., & Dixon, S. R. (2007). The benefits of imperfect diagnostic automation: A synthesis of the literature. Theoretical Issues in Ergonomics Science, 8(3), 201-212.
[6] Watts-Perotti, J., & Woods, D. D. (2009). Cooperative advocacy: An approach for integrating diverse perspectives in anomaly response. Computer Supported Cooperative Work (CSCW), 18(2-3), 175-198.
[7] Rayo, M. F. (2017). Designing for collaborative autonomy: Updating user-centered design heuristics and evaluation methods. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 61, No. 1, pp. 1091-1095). SAGE Publications.
[8] Voidazan, S., Albu, S., Toth, R., Grigorescu, B., Rachita, A., & Moldovan, I. (2020). Healthcare associated infections—a new pathology in medical practice? International Journal of Environmental Research and Public Health, 17(3), 760.
[9] Stone, P. W. (2009). Economic burden of healthcare-associated infections: An American perspective. Expert Review of Pharmacoeconomics & Outcomes Research, 9(5), 417-422.
[10] Woods, D. D., & Branlat, M. (2011). Basic patterns in how adaptive systems fail. In Resilience Engineering in Practice (Vol. 2, pp. 1-21).
[11] Woods, D. D., & Roth, E. M. (1988). Cognitive systems engineering. In M. Helander (Ed.), Handbook of Human-Computer Interaction. Elsevier.
[12] Sanders, E. B. N. (1992). Converging perspectives: Product development research for the 1990s. Design Management Journal (Former Series), 3(4), 49-54.