Performance Improvement of Automated Melanoma Diagnosis System by Data Augmentation

Kana Kato, Mitsutaka Nemoto, Yuichi Kimura, Yoshio Kiyohara, Hiroshi Koga, Naoya Yamazaki, Gustav Christensen, Christian Ingvar, Kari Nielsen, Atsushi Nakamura, Takayuki Sota, Takashi Nagaoka
Vol. 9 (2020) p.62-70

Color information is an important tool for diagnosing melanoma. In this study, we used a hyperspectral imager (HSI), which can measure color information in detail, to develop an automated melanoma diagnosis system. In recent years, the effectiveness of deep learning has become widely accepted in the field of image recognition. We therefore integrated a deep convolutional neural network with transfer learning into our system, and applied data augmentation to improve its diagnostic performance. A total of 283 melanoma lesions and 336 non-melanoma lesions were used for the analysis. The data measured by the HSI, called the hyperspectral data (HSD), were converted to single-wavelength images, each averaged over ±3 nm. We used GoogLeNet pre-trained on ImageNet and transferred it to analyze the HSD. In the transfer learning, we used not only the original HSD but also an artificially augmented dataset to improve the melanoma classification performance of GoogLeNet. Since GoogLeNet requires three-channel images as input, three wavelengths were selected from the single-wavelength images and assigned to three channels in wavelength order from short to long. The sensitivity and specificity of our system were estimated by 5-fold cross-validation. The results of a combination of 530, 560, and 590 nm (combination A) and of 500, 620, and 740 nm (combination B) were compared. We also compared the diagnostic performance with and without data augmentation. All images were augmented by flipping them vertically and/or horizontally. Without data augmentation, the respective sensitivity and specificity of our system were 77.4% and 75.6% for combination A and 73.1% and 80.6% for combination B. With data augmentation, these numbers improved to 79.9% and 82.4% for combination A and 76.7% and 82.2% for combination B. From these results, we conclude that the diagnostic performance of our system was improved by data augmentation. Furthermore, our system succeeded in differentiating melanoma with a sensitivity of almost 80%.
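The preprocessing and augmentation described above can be sketched as follows. This is a minimal illustration, not the authors' code: the array shapes, function names, and the exact band-selection rule (averaging bands within ±3 nm of each chosen wavelength, then stacking three such images short-to-long as channels, and generating vertical/horizontal flips) are assumptions based on the abstract.

```python
import numpy as np

def to_three_channel(hsd, wavelengths, picks=(530, 560, 590), band=3):
    """Build a 3-channel image from a hyperspectral cube.

    hsd: array of shape (H, W, n_bands); wavelengths: 1-D array of
    band-center wavelengths in nm. Each output channel is the mean of
    all bands within +/- `band` nm of one chosen wavelength, and the
    channels are ordered from short to long wavelength (e.g. the
    abstract's combination A: 530, 560, 590 nm).
    """
    channels = []
    for wl in sorted(picks):
        mask = np.abs(wavelengths - wl) <= band
        channels.append(hsd[:, :, mask].mean(axis=2))
    return np.stack(channels, axis=2)

def flip_augment(img):
    """Return the original image plus its vertical, horizontal,
    and combined vertical+horizontal flips (4 images total)."""
    return [img,
            np.flipud(img),
            np.fliplr(img),
            np.flipud(np.fliplr(img))]
```

With this scheme each lesion image yields four training samples, a simple way to enlarge a small medical dataset without altering the color information the classifier relies on.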
