LinkIt: A New Tool for Assessing Structural Knowledge and Providing Formative Feedback
Event Type
Oral Presentations
Time
Wednesday, April 14, 12:30pm - 12:50pm EDT
Location
Education and Simulation
Description
Training and skill maintenance in healthcare require substantial investments of time and money. Although training has benefited from technologies such as simulation and augmented reality, methods for assessing training have lagged. In fact, instructors often still rely on paper-and-pencil assessments such as multiple-choice and fill-in-the-blank tests. Not only are the reliability and validity of these measures unclear, but they are also inefficient: developing, taking, and grading tests all take time.

Further, these traditional assessments do not reflect current understanding in cognitive science. For example, cognitive scientists envision semantic memory as a network, with nodes representing concepts and links between nodes representing relationships. This is often referred to as structural knowledge. Indeed, experts are often distinguished from novices not only by their performance but by the fact that their knowledge networks are more organized and consistent than those of novices (Schvaneveldt et al., 1985). As a result, one can measure knowledge growth by observing how novice networks develop to approximate the networks of experts (Goldsmith, Acton, & Johnson, 1991). This enables us to compare the network representations of the student and the expert to identify what the student has mastered, what they still need to learn, their misconceptions, and areas needing additional attention. This makes instruction more specific, targeted, effective, and efficient.

Two approaches have been investigated to this end. The first, the Pathfinder algorithm (Schvaneveldt, 1990; Schvaneveldt, Durso & Dearholt, 1989), is a network-scaling method based on graph theory. Pathfinder starts with pairwise psychological distance data, such as relatedness ratings, and produces knowledge networks from them. Unfortunately, gathering pairwise relatedness ratings is time-consuming and tedious. In one common method, participants rate the relatedness of every pair of concepts in a domain, and the number of pairs adds up fast: a lesson involving 30 concepts would require 435 pairwise relatedness ratings. Slight variations on this theme (e.g., the Target Method; Tossell, Schvaneveldt & Branaghan, 2010) have failed to speed up the process.
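For readers unfamiliar with the mechanics, the rating burden grows as n(n-1)/2, and Pathfinder then prunes the resulting distance matrix. The following is our own minimal sketch of one common variant, PFNET(q = n-1, r = infinity), which keeps a link only when no indirect path has a smaller maximum single step; function names are ours and this is not code from the cited work:

```python
def num_pairwise_ratings(n_concepts):
    """Unordered concept pairs a rater must judge: n(n-1)/2."""
    return n_concepts * (n_concepts - 1) // 2

def pathfinder_inf(dist):
    """Sketch of PFNET(q = n-1, r = infinity) over a symmetric distance matrix.

    A link i-j survives only if no indirect path connects i and j with a
    smaller maximum single-step distance (the pair's minimax distance).
    """
    n = len(dist)
    minimax = [row[:] for row in dist]
    for k in range(n):                       # Floyd-Warshall with max-composition
        for i in range(n):
            for j in range(n):
                via_k = max(minimax[i][k], minimax[k][j])
                if via_k < minimax[i][j]:
                    minimax[i][j] = via_k
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if dist[i][j] <= minimax[i][j]}

# A 30-concept lesson: 435 ratings before any network can be built.
print(num_pairwise_ratings(30))  # 435

# Toy 3-concept example: the direct 0-2 link (distance 3) is pruned because
# the indirect path 0-1-2 never requires a step larger than 1.
d = [[0, 1, 3],
     [1, 0, 1],
     [3, 1, 0]]
print(sorted(pathfinder_inf(d)))  # [(0, 1), (1, 2)]
```

The pruning step is what makes Pathfinder attractive once the ratings exist; it is the rating collection itself, quantified by `num_pairwise_ratings`, that does not scale.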

The second promising approach is concept mapping (Jonassen & Marra, 1994), which asks students to draw concepts and labeled links between them. Concept maps provide more direct representations of student and expert knowledge, and they do not require tedious pairwise ratings. On the other hand, they can be difficult to score, as they tend to be qualitative.

This presentation introduces a software tool called LinkIt, which combines the straightforward simplicity of concept mapping with the quantitative metrics provided by Pathfinder and other network analyses. Inspired by TPL-KATS (Hoeft et al., 2003), LinkIt constrains concept maps to a limited set of concepts and relations. Unlike TPL-KATS, however, LinkIt scores student networks by measuring their similarity to a canonical or expert network. Further, LinkIt provides formative feedback to students by drawing dashed or dotted labeled lines to indicate missing or extraneous links. Thus, the same program accomplishes both assessment and formative feedback, which is likely to produce more effective training and assessment.
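The combination of similarity scoring and link-level feedback described above can be illustrated with a simple link-overlap index (shared links divided by all distinct links). This is our own hypothetical sketch, not LinkIt's actual implementation, and all names and example concepts are ours:

```python
# Hypothetical sketch: a network is a set of undirected, labeled links,
# each a (concept_a, concept_b, relation) triple.

def score_and_feedback(student_links, expert_links):
    """Compare a student network to an expert network.

    Returns a similarity score (shared links / all distinct links) plus
    the links to flag as feedback.
    """
    def norm(links):
        # Order-insensitive: (a, b, rel) equals (b, a, rel).
        return {(frozenset((a, b)), rel) for a, b, rel in links}

    s, e = norm(student_links), norm(expert_links)
    union = s | e
    similarity = len(s & e) / len(union) if union else 1.0
    missing = e - s       # expert links the student lacks -> dashed lines
    extraneous = s - e    # student links the expert lacks -> dotted lines
    return similarity, missing, extraneous


expert = [("sepsis", "hypotension", "causes"),
          ("hypotension", "shock", "leads to"),
          ("sepsis", "fever", "causes")]
student = [("hypotension", "sepsis", "causes"),   # matches despite node order
           ("sepsis", "fever", "treats")]         # wrong relation label

sim, missing, extra = score_and_feedback(student, expert)
print(sim)  # 0.25: one shared link out of four distinct links
```

In this toy run, the two mislabeled or absent expert links would be drawn dashed and the student's incorrect "treats" link dotted, giving the learner concrete targets for revision.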

Our presentation will demonstrate LinkIt with several different concept domains related to differential diagnosis, shock, and trauma medicine.

Hoeft, R. M., Jentsch, F. G., Harper, M. E., Evans, A. W., III, Bowers, C. A., & Salas, E. (2003). TPL-KATS -- concept map: A computerized knowledge assessment tool. Computers in Human Behavior, 19, 653-657.

Jonassen, D. H., & Marra, R. M. (1994). Concept mapping and other formalisms as mindtools for representing knowledge. ALT-J, 2(1), 50-56.

McKenzie, K., Branaghan, R., Capan, M., Kowalski, R., Weldon, D., Arnold, R., Blumenthal, J., Mayorga, M., Huddleston, J., & Miller, K. (2020). Understanding clinician mental models in sepsis diagnosis: Implications for decision-support system development. Journal of Cognitive Engineering and Decision Making. In review.

Schvaneveldt, R. W. (1990). Pathfinder associative networks: Studies in knowledge organization. Ablex Publishing.

Schvaneveldt, R. W., Durso, F. T., Goldsmith, T. E., Breen, T. J., Cooke, N. M., Tucker, R. G., & DeMaio, J. C. (1985). Measuring the structure of expertise. International Journal of Man-Machine Studies, 23, 699-728.

Tossell, C., Schvaneveldt, R. W., & Branaghan, R. J. (2010). Assessing a new method to elicit knowledge structures. Cognitive Technology, 15(2), 11-19.