Description
Background: The composition of tissue types present within a wound is a useful indicator of its healing progression and can help guide treatment. This measure is also used clinically in wound assessment tools (e.g., the Bates-Jensen Wound Assessment Tool, BWAT) to assess risk and recommend treatment. However, the identification of wound tissue types and the estimation of their relative composition are highly subjective and variable. This results in incorrect assessments being reported, with downstream impacts including inappropriate dressing selection, failure to identify wounds at risk of not healing, and failure to make appropriate referrals to specialists.

Objective: To measure inter- and intra-rater variability in manual tissue segmentation and quantification among a cohort of wound care clinicians; to determine whether an objective assessment of tissue types (i.e., their size and amount) can be achieved using a deep convolutional neural network that predicts wound tissue types, with the performance of the proposed model reported as the mean intersection over union (mIoU) between model predictions and ground-truth labels; and, finally, to compare the model's wound tissue identification with that of a cohort of wound care clinicians.

Methods: A dataset of 58 anonymized images of various types of chronic wounds from Swift Medical's Wound Database was used to conduct the inter-rater and intra-rater agreement study. The dataset was split into three subsets, with 50% overlap between subsets, to measure intra-rater agreement. Four tissue types (epithelial, granulation, slough and eschar) within the wound bed were independently labelled by five wound care clinicians using a browser-based image annotation tool, with each subset labelled at one-week intervals. Inter-rater and intra-rater agreement were then computed. Next, two separate deep convolutional neural network architectures were developed, one for wound segmentation and one for tissue segmentation, and are applied in sequence in the proposed workflow. These models were trained on 465,187 and 17,000 wound image-label pairs, respectively. This is, to our knowledge, by far the largest and most diverse reported dataset of labelled wound images used for training deep learning models for wound and wound tissue segmentation, allowing our models to be robust, unbiased with respect to skin tone, and able to generalize well to unseen data. The model architectures were designed to be lightweight and fast enough to run in near real time on mobile devices.

Results: We observed considerable variability when a cohort of wound care clinicians was tasked with labelling the different tissue types within the wound using a browser-based image annotation tool. We report poor to moderate inter-rater agreement in identifying tissue types in chronic wound images: Krippendorff's alpha for inter-rater agreement was a very poor 0.014 when identifying epithelial tissue, while granulation tissue was identified most consistently. The intra-rater ICC(3,1) (intraclass correlation coefficient), however, indicates that raters are relatively consistent when labelling the same image multiple times over a period of time. Our deep learning models achieved an mIoU of 0.8644 for wound segmentation and 0.7192 for tissue segmentation. A cohort of wound clinicians, by consensus, rated 91% of the tissue segmentation results as fair to good in terms of tissue identification and segmentation quality.
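As an illustration of the agreement statistics above, the following is a minimal sketch of how Krippendorff's alpha and ICC(3,1) can be computed. The toy ratings and the use of the open-source krippendorff and pingouin Python packages are assumptions for illustration, not the study's actual statistical tooling.

```python
# Minimal sketch of the two agreement statistics reported above.
# The toy data and the `krippendorff` / `pingouin` packages are
# illustrative assumptions; the paper does not specify its tooling.
import numpy as np
import pandas as pd
import krippendorff
import pingouin as pg

# Inter-rater agreement: Krippendorff's alpha over nominal labels.
# Rows = raters, columns = wound regions; values = tissue class
# (0 = epithelial, 1 = granulation, 2 = slough, 3 = eschar).
ratings = np.array([
    [0, 1, 1, 2, 3, 1],
    [1, 1, 1, 2, 3, 0],
    [0, 1, 2, 2, 3, 1],
    [1, 1, 1, 3, 3, 1],
    [0, 1, 1, 2, 2, 1],
])
alpha = krippendorff.alpha(reliability_data=ratings,
                           level_of_measurement="nominal")
print(f"Krippendorff's alpha: {alpha:.3f}")

# Intra-rater consistency: ICC(3,1) on repeated labelling sessions.
# The same images are labelled in week 1 and week 2; the ratings
# here are toy estimates of % granulation per image.
df = pd.DataFrame({
    "image":   [1, 2, 3, 4, 1, 2, 3, 4],
    "session": ["wk1"] * 4 + ["wk2"] * 4,
    "pct_granulation": [60, 35, 80, 10, 55, 40, 75, 15],
})
icc = pg.intraclass_corr(data=df, targets="image",
                         raters="session", ratings="pct_granulation")
print(icc.loc[icc["Type"] == "ICC3", ["Type", "ICC"]])
```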
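The mIoU figures above average the per-class intersection over union between predicted and ground-truth label maps. Below is a minimal NumPy sketch, with toy label maps standing in for real model output.

```python
# Minimal sketch of the mIoU metric: per-class intersection over
# union, averaged over the classes present in the prediction or
# the ground truth. Toy 2-D label maps stand in for real outputs.
import numpy as np

def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> float:
    ious = []
    for c in range(num_classes):
        pred_c, target_c = pred == c, target == c
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:          # class absent from both; skip it
            continue
        inter = np.logical_and(pred_c, target_c).sum()
        ious.append(inter / union)
    return float(np.mean(ious))

pred   = np.array([[0, 1, 1], [2, 2, 3], [0, 0, 3]])
target = np.array([[0, 1, 1], [2, 3, 3], [0, 0, 3]])
print(f"mIoU: {mean_iou(pred, target, num_classes=4):.4f}")
```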
Conclusions: Our inter-rater agreement study confirms that clinicians exhibit considerable variability when identifying tissue types and visually estimating their proportions within the wound bed. The proposed deep learning models provide objective tissue identification and measurement to assist clinicians in documenting wounds more accurately. Our solution runs on off-the-shelf mobile devices and was trained on the largest and most diverse chronic wound dataset reported to date, yielding a robust model in deployment. The proposed solution brings us a step closer to accurate wound documentation and may lead to improved healing outcomes when deployed at scale. (2022-02-01)
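To make the two-stage workflow from the Methods concrete: a wound-segmentation model first isolates the wound bed, and a tissue-segmentation model then classifies the pixels within it to yield a composition estimate. The sketch below uses illustrative stand-in models, since the paper's actual architectures and weights are not reproduced here.

```python
# Hedged sketch of the two-stage workflow: wound segmentation,
# then tissue segmentation restricted to the predicted wound bed.
# `wound_model` and `tissue_model` are illustrative stand-ins,
# not the paper's actual networks.
import numpy as np

def wound_model(image: np.ndarray) -> np.ndarray:
    """Stand-in: returns a binary wound mask (1 = wound bed)."""
    h, w, _ = image.shape
    mask = np.zeros((h, w), dtype=np.uint8)
    mask[h // 4: 3 * h // 4, w // 4: 3 * w // 4] = 1  # toy central region
    return mask

def tissue_model(image: np.ndarray) -> np.ndarray:
    """Stand-in: per-pixel tissue class (0 = epithelial,
    1 = granulation, 2 = slough, 3 = eschar)."""
    h, w, _ = image.shape
    return np.random.default_rng(0).integers(0, 4, size=(h, w))

def segment_wound_tissue(image: np.ndarray) -> dict:
    wound_mask = wound_model(image)    # stage 1: find the wound bed
    tissue = tissue_model(image)       # stage 2: classify tissue
    # Only pixels inside the predicted wound bed count toward the
    # estimated tissue composition.
    wound_pixels = tissue[wound_mask == 1]
    labels = ["epithelial", "granulation", "slough", "eschar"]
    total = wound_pixels.size
    return {lab: float((wound_pixels == i).sum()) / total
            for i, lab in enumerate(labels)}

image = np.zeros((128, 128, 3), dtype=np.uint8)  # toy RGB frame
print(segment_wound_tissue(image))
```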