

Artificial Intelligence and design practice: a new approach to user-interface design for eHealth and mHealth
Event Type: Poster Presentation
Time: Thursday, April 15, 2:54pm - 2:55pm EDT
Location: Digital Health
Description

Much information used in the clinical context is represented through visualizations: charts, imaging such as MRIs and CT scans, among others. A major challenge when designing visualizations for healthcare is assessing their usefulness, readability, and interpretation accuracy. The difficulty stems from the wide variety of information and from the highly specialized context: even when considering the same information, different agents may have different needs and different competence to interpret it. It is virtually impossible to identify users' needs and pain points directly, as the number of variables to consider would be overwhelming. Therefore, viewing visualization as a material-discursive practice, the visualizations themselves carry at least part of the information we need to identify. To understand which best practices of information visualization design are adequate for each context, we propose using the massive amount of available data as part of the solution. We are investigating ways to retrieve visualizations from across the medical literature and to use artificial intelligence to predict the classification of different visualizations.

We believe that such a tool would give development teams and researchers in design and information visualization the ability to collect and analyze large amounts of data more easily, identifying patterns and best practices to improve the quality of visualizations made available in healthcare. Through this study, researchers in health-related domains will gain the ability to learn from big data sets, benefiting from the inherent system complexity to obtain new perspectives.

Through a thematic analysis conducted during a previous stage of this study, we identified ten different clusters of visualizations found in the medical literature. These ten clusters derive from four initial clusters, illustration, industrial, information, and interface, taken singly and combined in pairs (four single clusters plus six pairwise combinations). The present study used these clusters to develop a deep neural network that predicts the class of each image. Because each image can belong to multiple classes, we built a multi-label convolutional neural network (CNN) to predict the different labels of images. A total of 1,034 images were divided into training and testing sets in a 70:30 ratio. All images were converted to a fixed size of 100×100 pixels in RGB colour format. To improve the performance of the model, additional training images were generated using the ImageDataGenerator class in Keras. The model used the initial four clusters as labels. A minimal sketch of this pipeline follows.
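The sketch below illustrates the pipeline just described, not the study's exact implementation. The 100×100 RGB input size, the four labels, the 70:30 split, and the use of Keras' ImageDataGenerator come from this abstract; the layer configuration, optimizer, augmentation parameters, and the load_dataset helper are illustrative assumptions.

```python
# Minimal sketch of a multi-label CNN for the four visualization classes.
# Sigmoid outputs with binary cross-entropy treat each label as an
# independent yes/no decision, so an image can carry several labels.
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from sklearn.model_selection import train_test_split

NUM_LABELS = 4  # illustration, industrial, information, interface

def build_model():
    model = keras.Sequential([
        layers.Input(shape=(100, 100, 3)),   # 100x100 RGB, per the abstract
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_LABELS, activation="sigmoid"),  # multi-label head
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["binary_accuracy"])
    return model

# X: (1034, 100, 100, 3) float32 images scaled to [0, 1]
# y: (1034, 4) multi-hot label matrix
# X, y = load_dataset()  # hypothetical loader for the curated image set
# X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)

# Augmentation to enlarge the training set, as described in the abstract;
# the specific transforms here are assumptions.
augmenter = ImageDataGenerator(rotation_range=15,
                               width_shift_range=0.1,
                               height_shift_range=0.1,
                               horizontal_flip=True)

# model = build_model()
# model.fit(augmenter.flow(X_train, y_train, batch_size=32),
#           epochs=20, validation_data=(X_test, y_test))
```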

The model reports precision, recall, and F1-score for each label. Our preliminary results show that labels with higher frequency achieve better outcomes. By analyzing the model output with regard to false negatives and false positives, we identified new insights that could indicate the need to reorganize the clusters. Although further analysis is required to test the viability of these insights, this supports the premise that the method can contribute to a more robust thematic analysis of visualizations.
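A per-label evaluation of this kind could be computed as sketched below, reusing the model and held-out test set from the previous sketch. The 0.5 decision threshold and the use of scikit-learn's metrics are assumptions, not details taken from the study.

```python
# Hedged sketch: per-label precision/recall/F1 and per-label error counts.
# Assumes `model`, `X_test`, `y_test` from the training sketch above;
# the 0.5 threshold on the sigmoid outputs is an assumed convention.
from sklearn.metrics import classification_report, multilabel_confusion_matrix

LABELS = ["illustration", "industrial", "information", "interface"]

def evaluate(model, X_test, y_test, threshold=0.5):
    y_prob = model.predict(X_test)              # per-label probabilities
    y_pred = (y_prob >= threshold).astype(int)  # multi-hot predictions

    # Precision, recall, and F1 for each label.
    print(classification_report(y_test, y_pred, target_names=LABELS,
                                zero_division=0))

    # One 2x2 confusion matrix per label exposes the false positives and
    # false negatives that inform the re-analysis of the clusters.
    for name, cm in zip(LABELS, multilabel_confusion_matrix(y_test, y_pred)):
        tn, fp, fn, tp = cm.ravel()
        print(f"{name}: FP={fp}, FN={fn}")
```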