Artificial intelligence for magnifying endoscopy, endocytoscopy, and confocal laser endomicroscopy of the colorectum


Because magnifying endoscopy is considered to be more accurate at predicting the histology of colorectal polyps than nonmagnifying endoscopy, it has been attracting a lot of attention, especially in Japan. However, use of magnifying endoscopy is not yet widespread because of its limited availability and the difficulty in interpreting the acquired images. Application of artificial intelligence (AI) is now changing this situation because it helps less-skilled endoscopists to accurately interpret magnified images. Research in this field initially focused on magnifying endoscopy with narrow-band imaging as the target of AI. Most previously published retrospective studies have reported over 90% sensitivity in differentiation of neoplastic lesions; however, automatically indicating the region of interest (ROI) of the polyps that AI should analyze has been found to be challenging. To address this practical problem, some researchers have started to adopt contact endomicroscopy as a target for AI. Contact endomicroscopy includes endocytoscopy (520-fold magnification, Olympus, Tokyo, Japan) and confocal laser endomicroscopy (1000-fold magnification, Mauna Kea, Paris, France). These forms of contact endomicroscopy provide ultramagnified images that make it unnecessary to manually select the ROI because the entire image acquired by contact endomicroscopy is the ROI of the targeted polyps. This strength of contact endomicroscopy has contributed to early implementation of this technology into clinical practice, which may change the utility of magnifying endoscopy in clinical settings and help increase its use globally in the near future.


Colorectal cancer (CRC) is a major cause of cancer-related death worldwide. Colonoscopy with complete eradication of neoplastic lesions is considered a reliable means of reducing both the incidence of and mortality from CRC. To improve the efficacy of colonoscopy, real-time prediction of the pathology of detected polyps during colonoscopy (ie, optical biopsy) is now encouraged by several endoscopy societies because it can significantly reduce the number of unnecessary polypectomies (of nonneoplastic polyps). The American Society for Gastrointestinal Endoscopy proposes the Preservation and Incorporation of Valuable Endoscopic Innovations (PIVI) thresholds for optical biopsy of diminutive polyps, in which a “diagnose-and-leave” strategy is adopted for hyperplastic polyps provided the negative predictive value (NPV) for diminutive rectosigmoid adenomas is >90% when diagnosed with high confidence using an advanced endoscopic modality. However, the accuracy of prediction of neoplastic change varies significantly with the endoscopist’s skill; this remains a substantial hurdle to implementing optical biopsy as a substitute for histopathological evaluation. Artificial intelligence (AI) is now expected to reduce such inter-rater variability in the optical diagnosis of colorectal polyps.
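The PIVI threshold above is a simple proportion. As a minimal illustration (the counts below are made up for demonstration and do not come from any cited study), NPV can be computed as:

```python
# Toy illustration of the PIVI threshold: the negative predictive value (NPV)
# for diminutive rectosigmoid adenomas must exceed 90% before a
# "diagnose-and-leave" strategy is acceptable. Counts are hypothetical.

def npv(true_negatives, false_negatives):
    """NPV = TN / (TN + FN): of polyps called non-neoplastic, the fraction
    that really are non-neoplastic (ie, no adenoma was wrongly left behind)."""
    return true_negatives / (true_negatives + false_negatives)

# eg, 94 correctly left hyperplastic polyps, 5 adenomas wrongly left behind
value = npv(true_negatives=94, false_negatives=5)
print(f"NPV = {value:.1%}")                            # NPV = 94.9%
print("meets PIVI" if value > 0.90 else "below PIVI")  # meets PIVI
```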

Because magnifying endoscopy is considered to be more accurate than nonmagnifying endoscopy at predicting the histology of colorectal polyps, it has been widely adopted in specific countries, such as Japan. However, use of magnifying endoscopy is not widespread worldwide because of its limited availability and the difficulty of interpreting the acquired magnified images. Application of AI is expected to change this situation because it can help less-skilled endoscopists to accurately interpret relatively complicated magnified images. Easier interpretation of magnified endoscopic images is also expected to raise their popularity from a relatively “niche” status to a much more prominent one on a global scale. In this review, we explore the current status of the incorporation of AI into magnifying endoscopy, including endocytoscopy and confocal laser endomicroscopy, referring mainly to physician-initiated studies that have focused on colorectal polyp recognition.

Magnifying endoscopy using narrow-band imaging

The application of AI to magnifying narrow-band imaging (NBI) (Olympus, Tokyo, Japan) is the most eagerly investigated area in this field. The first applications of AI were reported by Tischendorf et al and Gross et al, who achieved diagnostic accuracies of 85% and 93%, respectively. They used similar algorithms, based on extracting 9 vessel features (eg, length, brightness, and perimeter) from magnifying NBI images and classifying these features into a 2-class pathological prediction (ie, neoplastic or nonneoplastic) using a support vector machine, one of the classic machine learning methods. Gross et al prepared images of 434 small polyps (≤10 mm) and compared the accuracy of AI, expert endoscopists, and nonexpert endoscopists in predicting the histology of these polyps. They found that AI had an accuracy of 93%, which was comparable to that of experts and superior to that of nonexperts, supporting the contention that AI could be a powerful support for novice endoscopists. Following these studies, a research group at Hiroshima University in Japan achieved significant improvements in the further development of AI models designed for magnifying NBI images. Unlike the previous studies, they used a histogram of visual words in their algorithm to create a more robust system for image analysis. After conducting experimental, retrospective studies to assess the developed models, they prospectively evaluated their model in a clinical setting. In this prospective study, 41 patients underwent real-time AI inspection of colorectal polyps detected during colonoscopy; 88 diminutive polyps (<5 mm) were assessed with a sensitivity of 93% and a specificity of 93%. Although the authors did not elaborate on how the region of interest (ROI) was selected from the captured NBI images for AI analysis, this study was considered a milestone in this research field because it was the first published prospective study.
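The two-stage design described above (handcrafted vessel features fed to a support vector machine) can be sketched roughly as follows. This is an illustrative toy, not the pipeline from the cited studies: the feature values are fabricated, and the real systems extracted their 9 features from segmented vessels in NBI images.

```python
# Hypothetical sketch of the two-stage pipeline: handcrafted vessel features
# -> support vector machine -> neoplastic / nonneoplastic prediction.
# Feature values are fabricated toy data, not from the cited studies.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# 9 vessel features per polyp image (eg, length, brightness, perimeter).
# Two well-separated clusters stand in for the two pathology classes.
n = 200
neoplastic = rng.normal(loc=1.0, scale=0.3, size=(n, 9))
nonneoplastic = rng.normal(loc=0.0, scale=0.3, size=(n, 9))
X = np.vstack([neoplastic, nonneoplastic])
y = np.array([1] * n + [0] * n)  # 1 = neoplastic, 0 = nonneoplastic

# Standardize features, then fit an RBF-kernel SVM classifier.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)

# Predict the class for a new image's feature vector.
new_features = rng.normal(loc=1.0, scale=0.3, size=(1, 9))
print(clf.predict(new_features))  # -> [1], ie, predicted neoplastic
```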
Apart from differentiation of neoplastic change, Tamai et al explored the possibility of AI predicting deeply invasive submucosal cancers (SM-d). Endoscopic discrimination of SM-d is considered crucial because SM-d should be resected surgically owing to its potential to metastasize to lymph nodes, whereas adenomas and slightly invasive submucosal cancers can safely be resected endoscopically. The authors constructed a computer-aided diagnosis (CAD) model based on a classical hand-crafted algorithm and evaluated its performance using 121 images from 121 lesions. It achieved 84% sensitivity and 83% specificity in discriminating SM-d, which were considered excellent values given that even expert endoscopists reportedly achieve only 74% sensitivity and 69% specificity in identifying such lesions.

Recently, 2 research teams conducted retrospective studies on newly developed CAD systems based on a deep-learning (convolutional neural network) algorithm. Byrne et al assessed their model by using it to examine 125 unaltered endoscopic videos depicting 125 diminutive polyps that had been captured using near-focus endoscopy (CF-H190; Olympus). The AI model generated sufficient confidence to predict the histology of 85% of the polyps (106/125) and provided sensitivity 98%, specificity 83%, NPV 97%, and positive predictive value (PPV) 90% for identifying adenomas. This study is notable in that the developed AI showed excellent performance with video recordings, which contain a greater proportion of low-quality image frames (eg, blurred images, polyps with inadequate bowel preparation, and polyps that are far away) than static images, making it more challenging for AI to correctly identify a polyp’s histology. The researchers overcame this difficulty by adopting a credibility update mechanism whereby successive images of a polyp were comprehensively assessed by the AI, and the credibility of the prediction for each image accumulated to output a final prediction of the pathology of the targeted polyp. This technique mimics the human perceptual system, which maintains longitudinal coherence during assessment of colorectal polyps. Similarly, Chen et al developed an AI model based on a convolutional neural network and assessed it using a larger set of diminutive polyps (N = 284). Their model differentiated adenomas from hyperplastic polyps with 96% sensitivity, 78% specificity, PPV 90%, and NPV 92%. Both the studies by Byrne et al and Chen et al met the PIVI threshold required for adopting the diagnose-and-leave strategy for diminutive hyperplastic polyps.
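One plausible reading of such a credibility update mechanism is sketched below. This is an illustrative reconstruction under stated assumptions, not the published implementation: the function name, thresholds, quality gating, and the running-mean pooling are all hypothetical choices made for demonstration.

```python
# Illustrative sketch of a frame-by-frame credibility update for video-based
# optical biopsy. Hypothetical reconstruction, NOT the cited study's method.
# Each frame yields a per-frame probability that the polyp is an adenoma;
# low-quality frames are skipped, and evidence accumulates across frames
# until a confidence threshold is reached.

def accumulate_prediction(frame_probs, quality_scores,
                          quality_min=0.5, confidence=0.95):
    """Pool per-frame adenoma probabilities into one polyp-level call.

    frame_probs    : per-frame P(adenoma) from a frame classifier
    quality_scores : per-frame image-quality estimates in [0, 1]
    Returns ("adenoma" | "non-adenoma" | "no-confidence", pooled probability).
    """
    evidence = []
    for p, q in zip(frame_probs, quality_scores):
        if q < quality_min:          # drop blurred or distant frames
            continue
        evidence.append(p)
        pooled = sum(evidence) / len(evidence)  # running mean as credibility
        if pooled >= confidence:
            return "adenoma", pooled
        if pooled <= 1.0 - confidence:
            return "non-adenoma", pooled
    if evidence:
        return "no-confidence", sum(evidence) / len(evidence)
    return "no-confidence", None

# Example: mostly high-probability frames; the second frame is blurred.
label, score = accumulate_prediction(
    frame_probs=[0.97, 0.40, 0.98, 0.96],
    quality_scores=[0.9, 0.2, 0.8, 0.9],
)
print(label)  # adenoma
```

The "no-confidence" branch mirrors the reported behavior of withholding a prediction for a subset of polyps (15% in the Byrne et al study) rather than forcing a low-credibility call.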

One of the drawbacks of AI for magnifying NBI images, however, is the difficulty of automatically selecting the ROI of a detected polyp for analysis by the AI. Given that automated polyp detection has not been sufficiently accurate, manual selection of the area for AI analysis has been indispensable, preventing the realization of a fully automated system for assessing polyps. The other weak point of magnifying endoscopy is that the degree of magnification varies according to the extent to which the lever has been pushed (except for near-focus endoscopes, which provide fixed magnification), which can cause the AI to learn from inconsistently magnified training images.


Some researchers have started to adopt contact endomicroscopy as a target for AI with the aim of addressing this practical problem of magnifying endoscopy. Contact endomicroscopy, which allows in vivo contact microscopic imaging, includes endocytoscopy (520-fold magnification, CF-H290ECI, Olympus) and probe-based confocal laser endomicroscopy (p-CLE, 1000-fold magnification; Cellvizio, Mauna Kea). Whereas conventional magnifying endoscopy provides at most approximately 100-fold magnification, these contact endomicroscopes provide ultramagnified images that make it unnecessary to manually select the ROI because the whole of the image acquired by contact endomicroscopy is exactly the ROI of the targeted polyp. Figure 1 illustrates how differences in magnification affect selection of the ROI when taking optical biopsies. Endomicroscopes are also considered ideal partners for an AI system because they always provide focused, fixed-size images, which facilitates smooth image analysis by AI. Of course, it is important to bear in mind that endomicroscopy has limitations regarding availability and difficulty in capturing nonblurred, static images; however, the main strength of contact endomicroscopy, namely easier application of AI, has contributed to early clinical implementation of AI. AI designed for endocytoscopy obtained regulatory approval from the Pharmaceuticals and Medical Devices Agency, a Japanese regulatory body, in 2018, and is now commercially available (EndoBRAIN; Cybernet, Tokyo).

Aug 9, 2020 | Posted in Gastrointestinal Surgery