Journal of Biomedical Translational Research
Research Institute of Veterinary Medicine, Chungbuk National University
Original Article

Deep learning approach for classification of chondrocytes in rats

Jin Seok Kang1,*https://orcid.org/0000-0002-4492-3101
1Department of Biomedical Laboratory Science, Namseoul University, Cheonan 31020, Korea
*Corresponding author: Jin Seok Kang, Department of Biomedical Laboratory Science, Namseoul University, Cheonan 31020, Korea. Tel: +82-41-580-2721, E-mail: kang@nsu.ac.kr

© Research Institute of Veterinary Medicine, Chungbuk National University.

Received: Feb 05, 2022; Revised: Feb 26, 2022; Accepted: Feb 28, 2022

Abstract

The rapid development of computer vision and deep learning has enabled the automated classification and counting of microscope images, relieving pathologists of some of the burden of tedious microscopic examination of large numbers of slides for pathological lesions. Recently, these digital methods have expanded into the field of medical image analysis. In this study, the Inception-v3 deep learning model was used to classify chondrocytes from the knee joints of rats. Knee joints were extracted, fixed in neutral buffered formalin, decalcified, processed, embedded in paraffin, and stained with hematoxylin and eosin (H&E). The H&E-stained slides were converted into whole slide images (WSIs), and the images were cropped to 79 × 79 pixels. The resulting 65,797 images were divided into training (70.4%) and test (29.6%) sets (46,349 and 19,448 images, respectively). Images containing chondrocytes were then classified by Inception-v3 and the accuracy was calculated. We visualized the images containing chondrocytes in WSIs by adding colored dots to patches. When images of chondrocytes in knee joints were evaluated, the accuracy was 91.20 ± 8.43%. Therefore, the Inception-v3 deep learning model was able to distinguish chondrocytes from non-chondrocytes in the knee joints of rats with relatively high accuracy. Taken together, these results confirm that this deep learning model can classify chondrocytes, and this promising approach will provide pathologists with fast and accurate analysis of diverse tissue structures.

Keywords: rats; chondrocytes; digital pathology; classification; deep learning

INTRODUCTION

The explosive increase in the number of pathological slides requiring diagnosis has placed a burden on pathologists. Furthermore, diagnostic results have been inconsistent owing to differences in the training and field experience of researchers. This has made the development of novel tools capable of increasing accuracy and consistency a top priority [1]. In addition, the current pathological education system for biomedical researchers, who need long-term training and preclinical and/or clinical experience, cannot meet all of these demands [2].

Digital pathology is a subfield of pathology that focuses on data management based on information generated from digitized specimen slides. Its methods potentially provide greater accuracy, reproducibility, and standardization in pathology-based studies and in preclinical and clinical trials [3].

To prepare digitized slides of specimens, it is common to use a digital scanner, which generates whole slide images (WSIs). WSI technology has advanced to the point where it has replaced the glass slide as the primary means of pathology evaluation [4]. Implementation of WSI is a multifaceted and inherently multidisciplinary endeavor, requiring contributions from pathologists, technologists, and executive leadership [5]. With the availability of WSIs of tissue slides, image processing techniques have been exploited in recent decades to automate and optimize analysis. WSI-based digital pathology has been adopted to enhance time efficiency, to reduce cost [6], and even to eliminate potential human bias [7]. This approach has been applied to the quantitative assessment of bone marrow hematopoietic lineages [8], mitosis [9], and collagenous tissue [10].

Various applications incorporating AI are being developed to assist the process of pathologic diagnosis and the detection and segmentation of specific objects [11]. Among these, deep learning has achieved high performance in image classification and categorization, and its use has expanded into the field of medical image analysis [12].

The Inception-v3 model has the advantage of processing multi-scale targets at multiple levels simultaneously and collecting feature maps with different characteristics; it is therefore considered an efficient tool for classification [13]. Convolutional neural network (CNN)-based architectures such as VGGNet and ResNet have also been proposed for medical image analysis [14].

As automatic classification of specific cells is highly anticipated to reduce slide reading time and streamline cell detection, AI-based analysis is warranted for animal tissues. The purpose of this study was to classify chondrocytes in the knee joints of rats and to evaluate the accuracy of the results using the Inception-v3 deep learning model.

MATERIALS AND METHODS

Data collection

We used 15 hematoxylin and eosin (H&E)-stained slides of knee joints from normal female Sprague-Dawley (SD) rats. The slides were retrieved from historical background data.

All slides were scanned using a Pannoramic whole-slide scanner (3DHISTECH, Budapest, Hungary) at 20× magnification in the Department of Biomedical Laboratory Science of Namseoul University. Staining intensity, contrast, and thresholding were not adjusted.

Deep learning

We used the Inception-v3 model for training and testing and evaluated its classification accuracy. An overview of the approach is shown in Fig. 1. H&E-stained sections were scanned, converted, cropped, and used for supervised training of Inception-v3. Using this model, cropped images were classified as chondrocytes or non-chondrocytes. To assess the performance of Inception-v3, classification accuracies were calculated.
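The workflow above can be sketched with Keras' InceptionV3 implementation. The paper does not report its exact training configuration, so the binary sigmoid head, optimizer, and loss below are illustrative assumptions rather than the study's actual setup.

```python
# Sketch of a chondrocyte/non-chondrocyte patch classifier built on
# Keras' InceptionV3 backbone. Hyperparameters are assumptions.
import tensorflow as tf

IMG_SIZE = (79, 79)  # patch size used in this study

def build_classifier() -> tf.keras.Model:
    # InceptionV3 without its ImageNet top; a single sigmoid unit
    # scores each patch as chondrocyte (1) vs. non-chondrocyte (0).
    base = tf.keras.applications.InceptionV3(
        include_top=False, weights=None, input_shape=IMG_SIZE + (3,)
    )
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    out = tf.keras.layers.Dense(1, activation="sigmoid")(x)
    model = tf.keras.Model(base.input, out)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```

Note that 79 × 79 is just above InceptionV3's minimum input size of 75 × 75, so the patches can be fed to the backbone without upsampling.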

Fig. 1. Procedure from digitization to deep learning analysis of tissue slides. WSI, whole slide imaging.
Slide annotation

We cropped the images of knee joint samples to 79 × 79 pixels. For computational learning, the 65,797 images were randomly divided into training (70.4%) and test (29.6%) sets (46,349 and 19,448 images, respectively; Table 1).
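The cropping and splitting step can be sketched as follows. `patch_origins` is a hypothetical helper for tiling a slide image into non-overlapping 79 × 79 patches; the percentages are simply recomputed from the patch counts in Table 1.

```python
# Tiling a slide into 79 x 79 patches and verifying the split proportions.
from typing import Iterator, Tuple

PATCH = 79  # patch edge length in pixels, as used in this study

def patch_origins(width: int, height: int) -> Iterator[Tuple[int, int]]:
    """Yield top-left (x, y) coordinates of non-overlapping 79 x 79
    patches that fit entirely inside a width x height slide image."""
    for y in range(0, height - PATCH + 1, PATCH):
        for x in range(0, width - PATCH + 1, PATCH):
            yield x, y

# Recompute the reported split proportions from the counts in Table 1.
n_train, n_test = 46_349, 19_448
total = n_train + n_test              # 65,797 patches in all
train_pct = 100 * n_train / total     # ~70.4%
test_pct = 100 * n_test / total       # ~29.6%
```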

Table 1. Numbers of training and test images for chondrocytes and non-chondrocytes

Classification      Number of training data    Number of test data
Chondrocytes        2,544                      3,550
Non-chondrocytes    43,805                     15,898
Sum                 46,349                     19,448
Visualization of predicted patches

We visualized chondrocytes in WSIs by adding colored dots to patches predicted to be chondrocytes. This allowed pathologists to readily verify the classification results.
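A minimal sketch of the dot-overlay step, assuming each patch is addressed by its grid column and row. The function name is hypothetical; the colors mirror those used in Fig. 3 (blue for predicted chondrocytes, red for non-chondrocytes).

```python
# Map per-patch predictions to dot centres in slide coordinates.
from typing import Dict, List, Tuple

PATCH = 79  # patch edge length in pixels

def dot_overlay(predictions: Dict[Tuple[int, int], bool]
                ) -> List[Tuple[int, int, str]]:
    """Given predictions keyed by (column, row) on the patch grid,
    return (x, y, color) dot centres: blue for predicted chondrocytes,
    red for predicted non-chondrocytes."""
    dots = []
    for (col, row), is_chondrocyte in predictions.items():
        cx = col * PATCH + PATCH // 2  # centre of the patch in x
        cy = row * PATCH + PATCH // 2  # centre of the patch in y
        dots.append((cx, cy, "blue" if is_chondrocyte else "red"))
    return dots
```

The resulting dot list can then be drawn over the original WSI with any imaging library to produce an overlay like Fig. 3B.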

RESULTS

Collection of training and evaluation data

Both chondrocytes and non-chondrocytes were identified in the knee joint WSIs. We cropped the images as described above; representative images are shown in Fig. 2.

Fig. 2. Representative histopathological images of chondrocytes and non-chondrocytes. (A) Chondrocytes. (B) Non-chondrocytes.

Representative chondrocytes are shown in Fig. 2A. Non-chondrocytes such as bone cells, bone marrow cells and adipocytes are shown in Fig. 2B.

Inception-v3 classification of chondrocytes and non-chondrocytes

Inception-v3 evaluated square patches and predicted whether they contained chondrocytes or non-chondrocytes. The confusion matrix of chondrocytes versus non-chondrocytes showed an accuracy of 91.20 ± 8.43% (Table 2).

Table 2. Confusion matrix of chondrocytes and non-chondrocytes

Predicted class     Reference data                        No. of data
                    Chondrocytes      Non-chondrocytes
Chondrocytes        2,016             1,534               3,550
Non-chondrocytes    177               15,721              15,898
No. of data         2,193             17,255              19,448
Accuracy (%)        91.20 ± 8.43^1)

^1) Data represent mean ± S.D.
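The headline accuracy can be recomputed directly from the confusion matrix in Table 2. The per-class recall and precision below are not reported in the paper; they are included here as illustrative derived metrics.

```python
# Recompute accuracy from Table 2 (rows = predicted, columns = reference).
tp = 2_016   # predicted chondrocyte, reference chondrocyte
fp = 1_534   # predicted chondrocyte, reference non-chondrocyte
fn = 177     # predicted non-chondrocyte, reference chondrocyte
tn = 15_721  # predicted non-chondrocyte, reference non-chondrocyte

total = tp + fp + fn + tn                  # 19,448 test patches
accuracy = 100 * (tp + tn) / total         # ~91.2%, matching Table 2
recall = 100 * tp / (tp + fn)              # chondrocyte sensitivity
precision = 100 * tp / (tp + fp)           # chondrocyte precision
```

The relatively low chondrocyte precision reflects the 1,534 non-chondrocyte patches misclassified as chondrocytes, consistent with the misclassification discussed below.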

Visualization of chondrocytes and non-chondrocytes

The data for the knee joints are shown in Fig. 3. A representative original H&E image is shown in Fig. 3A. In the Inception-v3 visualization, chondrocytes are colored blue and non-chondrocytes red (Fig. 3B). Almost all chondrocytes were correctly identified as blue; however, a few non-chondrocytes were misclassified as chondrocytes.

Fig. 3. Visualization of chondrocytes and non-chondrocytes in the knee joints. (A) Original H&E image. (B) Visualized image: chondrocytes in blue (arrow) and non-chondrocytes in red (arrowhead).

DISCUSSION

In this study, we classified chondrocytes in H&E-stained knee joint slides of rats using the Inception-v3 model, with an accuracy of 91.20 ± 8.43%. We thus confirmed that this model is applicable to the classification of specific cells, making automated classification of chondrocytes and non-chondrocytes in H&E-stained slides possible.

In the image sets from the knee joints, some structures such as trabecular bony tissue and adipocytes were misclassified as chondrocytes. To reduce misclassification, tissues such as trabecular bone, adipocytes, and megakaryocytes could be excluded prior to deep learning. It has been reported that misclassification is more frequent among histologically related tissues whose morphologies are shared at higher magnification [14]. For quantification of bone marrow cellularity, cartilage, periosteal tissue, and attached skeletal muscle were excluded from the original images prior to analysis [15]. Similarly, for quantification of lung fibrosis, images containing large bronchi and vessels were manually excluded from the original images [16].

As the Inception-v3 model has shown effective classification at image sizes of 299 × 299, 151 × 151, and 79 × 79 [13], we cropped the images to 79 × 79 pixels in this study. During image processing, resizing might have reduced image resolution, resulting in poorer image quality, which may be related to lower accuracy. For higher resolution, image resizing should be minimized and an image editor capable of handling large images should be used.

In this study, the staining intensity, contrast, and threshold were not adjusted, because adjustment might introduce artifacts or processing noise. For detection of tissue components, it is important to maintain careful control of the algorithm, image size, staining procedures, and so on. Considering the time and manual labor needed for image processing, the development of a commercial, user-friendly deep learning algorithm is highly recommended. This would give pathologists opportunities to use deep learning-based applications to save time and improve reading quality [17].

Taken together, these results confirm that Inception-v3 is applicable to the classification of rat chondrocytes and non-chondrocytes in H&E-stained slides of knee joints. This promising approach will allow rapid and accurate analysis of tissue characteristics, including chondrocytes.

Conflict of Interest

No potential conflict of interest relevant to this article was reported.

Acknowledgements

I would like to thank Ms. Ju Eun Lim and Ye Rin Kim for their technical assistance. Funding for this paper was provided by Namseoul University.

Ethics Approval

Not applicable.

REFERENCES

1. Hamilton PW, Wang Y, McCullough SJ. Virtual microscopy and digital pathology in training and education. APMIS 2012;120:305-315.
2. Kuo KH, Leo JM. Optical versus virtual microscope for medical education: a systematic review. Anat Sci Educ 2019;12:678-685.
3. Pell R, Oien K, Robinson M, Pitman H, Rajpoot N, Rittscher J, Snead D, Verrill C. The use of digital pathology and image analysis in clinical trials. J Pathol Clin Res 2019;5:81-90.
4. Bradley A, Jacobsen M. Toxicologic pathology forum: opinion on considerations for the use of whole slide images in GLP pathology peer review. Toxicol Pathol 2019;47:100-107.
5. Zarella MD, Bowman D, Aeffner F, Farahani N, Xthona A, Absar SF, Parwani A, Bui M, Hartman DJ. A practical guide to whole slide imaging: a white paper from the digital pathology association. Arch Pathol Lab Med 2019;143:222-234.
6. Farahani N, Parwani AV, Pantanowitz L. Whole slide imaging in pathology: advantages, limitations, and emerging perspectives. Pathol Lab Med Int 2015;7:23-33.
7. Aeffner F, Zarella MD, Buchbinder N, Bui MM, Goodman MR, Hartman DJ, Lujan GM, Molani MA, Parwani AV, Lillard K, Turner OC, Vemuri VNP, Yuil-Valdes AG, Bowman D. Introduction to digital image analysis in whole-slide imaging: a white paper from the digital pathology association. J Pathol Inform 2019;10:9.
8. Kozlowski C, Fullerton A, Cain G, Katavolos P, Bravo J, Tarrant JM. Proof of concept for an automated image analysis method to quantify rat bone marrow hematopoietic lineages on H&E sections. Toxicol Pathol 2018;46:336-347.
9. Veta M, van Diest PJ, Willems SM, Wang H, Madabhushi A, Cruz-Roa A, Gonzalez F, Larsen AB, Vestergaard JS, Dahl AB, Ciresan DC, Schmidhuber J, Giusti A, Gambardella LM, Tek FB, Walter T, Wang CW, Kondo S, Matuszewski BJ, Precioso F, Snell V, Kittler J, de Campos TE, Khan AM, Rajpoot NM, Arkoumani E, Lacle MM, Viergever MA, Pluim JPW. Assessment of algorithms for mitosis detection in breast cancer histopathology images. Med Image Anal 2015;20:237-248.
10. Liang L, Liu M, Sun W. A deep learning approach to estimate chemically-treated collagenous tissue nonlinear anisotropic stress-strain responses from microscopy images. Acta Biomater 2017;63:227-235.
11. Komura D, Ishikawa S. Machine learning approaches for pathologic diagnosis. Virchows Arch 2019;475:131-138.
12. Brent R, Boucheron L. Deep learning to predict microscope images. Nat Methods 2018;15:868-870.
13. Szegedy C, Vanhoucke V, Ioffe S, Shlens J, Wojna Z. Rethinking the inception architecture for computer vision. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition; 2016; Las Vegas, NV. p. 2818-2826.
14. Hoefling H, Sing T, Hossain I, Boisclair J, Doelemeyer A, Flandre T, Piaia A, Romanet V, Santarossa G, Saravanan C, Sutter E, Turner O, Wuersch K, Moulin P. HistoNet: a deep learning-based model of normal histology. Toxicol Pathol 2021;49:784-797.
15. Smith MA, Westerling-Bui T, Wilcox A, Schwartz J. Screening for bone marrow cellularity changes in cynomolgus macaques in toxicology safety studies using artificial intelligence models. Toxicol Pathol 2021;49:905-911.
16. Gilhodes JC, Julé Y, Kreuz S, Stierstorfer B, Stiller D, Wollin L. Quantification of pulmonary fibrosis in a bleomycin mouse model using automated histological image analysis. PLOS ONE 2017;12:e0170561.
17. Otálora S, Schaer R, Jimenez-del-Toro O, Atzori M, Müller H. Deep learning based retrieval system for gigapixel histopathology cases and the open access literature. J Pathol Inform 2019;10.