Time: Tuesday, April 13, 2:00pm - 2:20pm EDT
Location: Medical and Drug Delivery Devices
Artificial Intelligence (AI) is a promising technology for improving healthcare practices and patient safety, with applications including robot-assisted surgery, automated image diagnosis, and health monitoring.
Human Factors scholars have discussed how the introduction of AI will impact the healthcare system, and how those who work in this system should adapt to this emerging class of technologies. Topics of discussion have included the acceptability and applicability of AI in various healthcare settings, the co-learning processes between people and AI systems, and how to design AI systems to best support user-system interaction.
However, to the authors' knowledge, few studies have focused on how the Human Factors engineering design perspective can benefit the development of the AI itself. In particular, developing an AI system requires effort from scientists/engineers, who are seldom considered "users" of the product yet have their own requirements for aspects of the product's usability and performance.
This presentation proposes a paradigm for an AI algorithm's interaction with multiple stakeholders, including not only patients and healthcare professionals but also the scientists and engineers who develop the algorithms. In addition, the authors present a case study of how Human Factors design methods were used to maximize the utility of collected data and minimize data collection errors and fatigue, making the AI-embedded device friendly to its users while also improving the AI's performance.
In the Human Factors/Usability engineering process for developing a medical device, it is strongly recommended to involve "users" early in the design and testing process. Examples of such involvement include collecting user needs at the beginning, performing formative usability evaluations with representative users throughout the design process, and validating the product's usability through testing at the end.
This process, however, mostly focuses on the "end users" – healthcare professionals and patients. In the development of AI-based devices, we also need to consider data collection and model development, which are critical for training high-performing models. In addition to the needs of the end users, the scientists/engineers responsible for creating the model should also be considered stakeholders (if not "users").
The authors' team is developing a device intended to inform clinical management in wound care. This device captures images of patients' wounds and identifies non-healing regions from the images with a deep-learning algorithm.
Since there is no publicly available imaging dataset suitable for training the algorithm, one of the biggest challenges confronting the team is collecting the imaging data and building the dataset. The team partners with burn care specialists at multiple hospitals to run clinical studies that collect this data.
Preliminary devices were built and used by wound care professionals (including physicians, nurses, and medical assistants) to acquire images of patients' wounds. The collected image data were then transferred to the data scientists on the team for use in algorithm training.
The design went through several iterations of refinement and improvement.
Iteration 1: The device contains the cameras and a touchscreen monitor for the user to identify the area being captured. During use of the preliminary devices, several issues emerged. One of the biggest concerned image quality – the quality of the images was sometimes too low for use in algorithm training. This happened because: 1) images were captured outside the cameras' working distance; and 2) images did not capture the intended area of the patient's body surface. The team worked with the data scientists and engineers to fully understand their needs and requirements for the imaging data and proposed a solution – adding a "guiding beam" to facilitate image acquisition. These improvements were focused on improving the quality of the data collected by the clinician users, to better meet the needs of the scientist users.
Iteration 2: A "guiding beam" was added to the cameras. This beam is projected from the cameras onto the patient's body surface and shows the area captured by the cameras. However, poor-quality images were still taken. A review of the system design, as well as the use cases, revealed several causes: 1) Since the guiding beam can be turned on/off by the user, it is easy for the user to forget to turn it back on. The user then takes images without the beam, and the cameras can be placed far from the correct working distance. 2) The guiding beam uses two dots to indicate distance – when the cameras are at the correct working distance, the two dots overlap and appear as a single dot on the target wound. However, when the cameras are close to but slightly outside the working distance, the two projected dots may still appear very close together, because the surface of the wound is usually not flat. This can mislead the user into believing the cameras are at the correct distance, resulting in unusable images.
Iteration 3: The "guiding beam" is now controlled by the device instead of manually by the user, and a color change indicates when the cameras are placed at the appropriate working distance. This iteration is currently undergoing usability testing.
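The Iteration 3 behavior could be sketched as a simple device-side check: the beam is always on, and its color depends on whether the measured distance falls within the working range. The tolerance and nominal distance below are assumed values for illustration only.

```python
# Hypothetical sketch of the Iteration 3 logic: the device, not the user,
# drives the beam, and color signals whether capture distance is acceptable.

WORKING_DISTANCE_MM = 300.0  # assumed nominal working distance
TOLERANCE_MM = 15.0          # assumed acceptable deviation

def beam_color(measured_distance_mm: float) -> str:
    """Return the beam color for the current camera-to-surface distance."""
    if abs(measured_distance_mm - WORKING_DISTANCE_MM) <= TOLERANCE_MM:
        return "green"  # in range: OK to capture
    return "red"        # out of range: reposition the cameras
```

Removing the manual on/off control eliminates the "forgot to turn it back on" failure mode, and the unambiguous color signal replaces the dot-overlap judgment that users found confusing.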
1) It is important to recognize the scientists/engineers as stakeholders – and to include their needs when considering the "user needs" in designing a device.
2) The "user needs" of the scientists/engineers can sometimes be different from what the end-user wants – for instance, the end-user might not care about generating garbage data as long as they have the desired output – but cleaning the data will always be a challenge for the developer team.
3) It is important to understand the criteria the collected data must meet. In this case, the imaging data must satisfy certain quality criteria to be used in algorithm training. When making changes to the device design, we had to make sure the captured images were not adversely affected.