Research

Magnetic Resonance Imaging-derived Radiomic Signature Predicts Locoregional Failure after Organ Preservation Therapy in Patients with Hypopharyngeal Squamous Cell Carcinoma

Che-Yu Hsu, Shih-Min Lin, Ngan-Ming Tsang, Yu-Hsiang Juan, Chun-Wei Wang, Weichung Wang, and Sung-Hsin Kuo
Clinical and Translational Radiation Oncology (Accepted), August, 2020

a Division of Radiation Oncology, Department of Oncology, National Taiwan University Hospital, Taipei, Taiwan
b Department of Radiation Oncology National Taiwan University Cancer Center, National Taiwan University College of Medicine, Taipei, Taiwan
c Cancer Research Center, National Taiwan University College of Medicine, Taipei, Taiwan
d Graduate Institute of Oncology, National Taiwan University College of Medicine, Taipei, Taiwan
e Department of Radiation Oncology, Chang Gung Memorial Hospital at Linkou, Taiwan
f Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Taoyuan, Taiwan
g Department of Mathematics, National Taiwan University, Taipei, Taiwan
h School of Traditional Chinese Medicine, Chang Gung University, Taoyuan, Taiwan

Highlights:

  • The first study to develop and validate an MRI-derived radiomic signature (RS) for the prediction of 1-year locoregional failure in pure hypopharyngeal squamous cell carcinoma (HPSCC) patients who received organ preservation therapy (OPT).
  • The RS-based model (RS of 0.0326 as the cut-off value) provides a novel and convenient approach for the prediction of 1-year progression-free, laryngectomy-free, and overall survival in patients with HPSCC who received OPT.
  • The proposed RS-based model can help physicians characterize and facilitate decision-making for the clinical management of patients with locally advanced HPSCC.

Background and purpose:
To develop and validate a magnetic resonance imaging (MRI)-derived radiomic signature (RS) for the prediction of 1-year locoregional failure (LRF) in patients with hypopharyngeal squamous cell carcinoma (HPSCC) who received organ preservation therapy (OPT).

Material and methods:
A total of 800 MRI-based features of pretreatment tumors were obtained from 116 patients with HPSCC who received OPT from two independent cohorts. The least absolute shrinkage and selection operator (LASSO) regression model was used to select the features used to develop the RS. Harrell's C-index and the corrected C-index were used to evaluate the discriminative ability of the RS. The Youden index was used to select the optimal cut-point for the risk category.
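
As a rough illustration of this feature-selection and cut-point step, the sketch below uses an L1-penalized logistic regression as a stand-in for the LASSO model and picks the Youden cut-point from the ROC curve; the data, variable names, and endpoint coding are placeholders rather than the authors' code.

```python
# Hypothetical sketch: LASSO-style feature selection for a radiomic signature
# and Youden-index cut-point selection (not the authors' original code).
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_curve

# X: (n_patients, 800) radiomic feature matrix; y: 1-year locoregional failure (0/1).
rng = np.random.default_rng(0)
X = rng.normal(size=(82, 800))          # placeholder for the experimental cohort
y = rng.integers(0, 2, size=82)         # placeholder outcome labels

X_std = StandardScaler().fit_transform(X)

# L1-penalized logistic regression with cross-validation plays the role of LASSO:
# features with non-zero coefficients form the radiomic signature (RS).
lasso = LogisticRegressionCV(
    penalty="l1", solver="liblinear", Cs=20, cv=10, scoring="roc_auc"
).fit(X_std, y)
selected = np.flatnonzero(lasso.coef_.ravel())
print(f"{selected.size} features retained for the RS")

# RS for each patient = linear predictor of the fitted model.
rs = lasso.decision_function(X_std)

# Youden index: choose the cut-point maximizing sensitivity + specificity - 1.
fpr, tpr, thresholds = roc_curve(y, rs)
cut = thresholds[np.argmax(tpr - fpr)]
risk_group = (rs >= cut).astype(int)    # 1 = high risk, 0 = low risk
```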

Results:
The RS yielded a 1000-times bootstrapping-corrected C-index of 0.8036 and 0.78235 in the experimental (n = 82) and validation (n = 34) cohorts, respectively. With respect to the subgroup of patients with stage III/IV and cT4 disease, the RS also showed good predictive performance, with corrected C-indices of 0.760 and 0.754, respectively. The dichotomized risk category using an RS of 0.0326 as the cut-off value yielded a 1-year LRF predictive accuracy of 79.27%, 79.41%, 76.74%, and 71.15% in the experimental, validation, stage III/IV, and cT4a cohorts, respectively. The low-risk group was associated with significantly better progression-free, laryngectomy-free, and overall survival outcomes in the two independent institutions, the stage III/IV cohort, and the cT4a cohort.

Conclusion:
The RS-based model provides a novel and convenient approach for the prediction of 1-year LRF and survival outcomes in patients with HPSCC who received OPT.

Keywords:
Hypopharyngeal squamous cell carcinoma; organ preservation treatment; radiomics; loco-regional failure; survival

Convolutional Neural Network for the Detection of Pancreatic Cancer on CT Scans – Authors’ Reply

Wei-Chih Liao*, Amber L Simpson, and Weichung Wang
The Lancet Digital Health, 2(9), e454, September, 2020

Background:

We thank Garima Suman and colleagues for comments on our Article.1 Because segmentation was not the focus of our study, we did not store the initial segmentation and thus cannot assess variabilities between the initial and final segmentation. We agree that such information is useful and should be stored in future studies.
Because a study2 from the centre that provided the external dataset in our study (Medical Segmentation Decathlon Dataset [MSDD]) included 161 patients with pancreatic adenocarcinoma, Suman and colleagues inferred that MSDD included only 161 pancreatic adenocarcinomas. However, those 161 patients were selected from 391 patients with pancreatic adenocarcinoma undergoing resection between 2009 and 2012,2 whereas MSDD included 420 patients without information on inclusion period and treatment, and 281 patients with tumour labelling were used in our study. Given incomplete information and inconsistent numbers, we cannot exclude the possibility that some of those 281 external patients had non-pancreatic adenocarcinoma tumours, but we cannot verify this proposition. Therefore, our results of testing in the external dataset should be interpreted with caution.
We appreciate the providers of MSDD, the only public pancreatic tumour CT dataset of sufficient volume, for their tremendous efforts and generosity. On the other hand, our experience highlights the challenges posed by the paucity of public data and difficulties in verifying and using external datasets. Because MSDD was intended for a segmentation challenge, information such as outcomes and histology was not provided. When accessing MSDD we sought to request further information, and a subsequently added document3 clarified that the dataset included pancreatic adenocarcinomas, neuroendocrine tumours, and intraductal mucinous neoplasms. However, the diagnosis of each image and method of diagnosis remain unclear. Notably, imaging findings might overlap between various pancreatic tumours and even benign conditions such as chronic pancreatitis;4 therefore, in the local datasets we only included histologically or cytologically confirmed pancreatic adenocarcinomas. We understand that making such information publicly available might not be feasible given regulations on patient privacy and health data protection, which vary across regions and institutions.
We agree that transparent, carefully curated public datasets with detailed clinical information are needed to facilitate future research. Data sharing efforts are undertaken by individual investigators based on goodwill. Mitigating data paucity requires incentives for dataset providers and validated tools to facilitate data collection, processing, and de-identification. Standardising the process of dataset preparation and sharing is needed to enable precise dataset interpretation and use by external users.
W-CL and WW report grants from Taiwan Ministry of Science and Technology, during the conduct of the study. W-CL and WW have a patent pending—differentiation between pancreatic cancer and non-cancerous pancreas on contrast-enhanced CT by deep learning. AS declares no competing interests.

References:

1. Liu K-L, Wu T, Chen P-T, et al.
Deep learning to distinguish pancreatic cancer tissue from non-cancerous pancreatic tissue: a retrospective study with cross-racial external validation.
Lancet Digital Health. 2020; 2: e303-e313

2. Attiyeh MA, Chakraborty J, Doussot A, et al.
Survival prediction in pancreatic ductal adenocarcinoma by quantitative computed tomography image analysis.
Ann Surg Oncol. 2018; 25: 1034-1042

3. Simpson AL, Antonelli M, Bakas S, et al.
A large annotated medical image dataset for the development and evaluation of segmentation algorithms.
arXiv. 2019; (published online Feb 25.) (preprint)
http://arxiv.org/abs/1902.09063

4. To’o KJ, Raman SS, Yu NC, et al.
Pancreatic and peripancreatic diseases mimicking primary pancreatic neoplasia.
Radiographics. 2005; 25: 949-965

Radiomic Analysis of Magnetic Resonance Imaging Predicts Brain Metastases Velocity and Clinical Outcome After Upfront Radiosurgery

Che-Yu Hsu*, Furen Xiao, Kao-Lang Liu, Ting-Li Chen, Yueh-Chou Lee, and Weichung Wang*
Neuro-Oncology Advances (Accepted), August, 2020

Background: Brain metastasis velocity (BMV) predicts outcomes after initial distant brain failure (DBF) following upfront stereotactic radiosurgery (SRS). We developed an integrated model of clinical predictors and pre-SRS MRI-derived radiomic scores (R-scores) to identify high-BMV patients upon initial identification of brain metastases (BMs).

Methods: In total, 256 patients with BMs treated with upfront SRS alone were retrospectively included. R-scores were built from 1,246 radiomic features in two target volumes by using the Extreme Gradient Boosting algorithm to predict high-BMV groups (BMV_H), as defined by BMV ≥ 4 or leptomeningeal disease at first DBF. Two R-scores and three clinical predictors were integrated into a predictive clinico-radiomic (CR) model.
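
A minimal sketch of how such a clinico-radiomic model could be assembled, assuming XGBoost-derived R-scores from the two target volumes are combined with the three clinical predictors reported as significant later in the abstract; all variable names and data below are illustrative, not the authors' implementation.

```python
# Hypothetical sketch of the two-stage clinico-radiomic (CR) model:
# XGBoost turns radiomic features into R-scores, which are then combined
# with clinical predictors (illustrative variable names and placeholder data).
import numpy as np
import xgboost as xgb
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 256
radiomic_tv1 = rng.normal(size=(n, 1246))    # features from target volume 1
radiomic_tv2 = rng.normal(size=(n, 1246))    # features from target volume 2
bmv_high = rng.integers(0, 2, size=n)        # BMV_H (1) vs BMV_L (0)

def r_score(features, labels):
    """Train an XGBoost classifier and use its predicted probability as an R-score."""
    model = xgb.XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.05)
    model.fit(features, labels)
    return model.predict_proba(features)[:, 1]

r1 = r_score(radiomic_tv1, bmv_high)
r2 = r_score(radiomic_tv2, bmv_high)

# Clinical predictors reported as significant: number of BMs,
# perilesional edema, and extracranial progression (placeholder values here).
clinical = np.column_stack([
    rng.integers(1, 10, size=n),   # number of brain metastases
    rng.integers(0, 2, size=n),    # perilesional edema (yes/no)
    rng.integers(0, 2, size=n),    # extracranial progression (yes/no)
])

# Integrated CR model over the five predictors.
cr_inputs = np.column_stack([r1, r2, clinical])
cr_model = LogisticRegression(max_iter=1000).fit(cr_inputs, bmv_high)
predicted_bmv_h = cr_model.predict(cr_inputs)
```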

Results: The related R-scores showed significant differences between BMV_H and the low-BMV group (BMV_L), defined as BMV <4 or no DBF (P<0.001). Regression analysis identified the number of BMs, perilesional edema, and extracranial progression as significant predictors. The CR model using these five predictors achieved a bootstrapping-corrected C-index of 0.842 and 0.832 in the discovery and test sets, respectively. Overall survival (OS) after first DBF was significantly different between the CR-predicted BMV_L and BMV_H groups (median OS: 26.7 vs. 13.0 months, P=0.016). Among patients with a diagnosis-specific graded prognostic assessment (DS-GPA) of 1.5-2 or 2.5-4, the median OS after initial SRS was 33.8 and 67.8 months for CR-predicted BMV_L, compared to 13.5 and 31.0 months for CR-predicted BMV_H (P<0.001 and P<0.001), respectively.

Conclusion: Our CR model provides a novel approach showing good performance to predict BMV and clinical outcomes.

Radiomic Features Distinguish Pancreatic Cancer From Non-cancerous Pancreas

Wei-Chih Liao, Po-Ting Chen, Hui-Hsuan Yen, Dawei Chang, Kao-Lang Liu, Su-Yun Huang, Holger Roth, Ming-Shiang Wu, and Weichung Wang
Digestive Disease Week (DDW), 2020

Three Dimensional Brain Metastases Segmentation Using Coarse-to-Fine Neural Architecture Search with Boundary Loss

C Lee, C Hsu, Y Lee, P Wang, H R Roth, W Wang
Radiological Society of North America (RSNA), 2020

Automated Pancreas Segmentation Using Multi-institutional Collaborative Deep Learning

Pochuan Wang*, Chen Shen, Holger R. Roth, Dong Yang, Daguang Xu, Masahiro Oda, Kazunari Misawa, Po-Ting Chen, Kao-Lang Liu, Wei-Chih Liao, Weichung Wang, and Kensaku Mori
MICCAI Workshop on Distributed and Collaborative Learning, October, 2020

Deep Learning to Distinguish Pancreatic Cancer Tissue From Non-cancerous Pancreatic Tissue: a Retrospective Study With Cross-racial External Validation

Kao-Lang Liu, Tinghui Wu, Po-Ting Chen, Yuhsiang M. Tsai, Holger Roth, Ming-Shiang Wu, Wei-Chih Liao*, and Weichung Wang*
The Lancet Digital Health, 2(6), 303-313, June, 2020

Background: The diagnostic performance of CT for pancreatic cancer is interpreter-dependent, and approximately 40% of tumours smaller than 2 cm evade detection. Convolutional neural networks (CNNs) have shown promise in image analysis, but the networks’ potential for pancreatic cancer detection and diagnosis is unclear. We aimed to investigate whether CNN could distinguish individuals with and without pancreatic cancer on CT, compared with radiologist interpretation.

Methods: In this retrospective, diagnostic study, contrast-enhanced CT images of 370 patients with pancreatic cancer and 320 controls from a Taiwanese centre were manually labelled and randomly divided for training and validation (295 patients with pancreatic cancer and 256 controls) and testing (75 patients with pancreatic cancer and 64 controls; local test set 1). Images were preprocessed into patches, and a CNN was trained to classify patches as cancerous or non-cancerous. Individuals were classified as with or without pancreatic cancer on the basis of the proportion of patches diagnosed as cancerous by the CNN, using a cutoff determined using the training and validation set. The CNN was further tested with another local test set (101 patients with pancreatic cancers and 88 controls; local test set 2) and a US dataset (281 pancreatic cancers and 82 controls). Radiologist reports of pancreatic cancer images in the local test sets were retrieved for comparison.
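
A minimal sketch of the patch-then-patient decision rule described above, with a toy CNN standing in for the network used in the Article; the architecture, patch handling, and cutoffs are illustrative assumptions, not the study's model.

```python
# Hypothetical sketch of the patch-then-patient classification scheme:
# a small CNN scores 2-D patches, and a patient is called positive when the
# fraction of patches predicted cancerous exceeds a cutoff chosen on the
# training/validation data.
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.LazyLinear(64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, x):
        return self.classifier(self.features(x))  # one logit per patch

def classify_patient(model, patches, patch_cutoff=0.5, patient_cutoff=0.5):
    """patches: tensor (n_patches, 1, H, W) from one patient's CT."""
    model.eval()
    with torch.no_grad():
        probs = torch.sigmoid(model(patches)).squeeze(1)
    cancer_fraction = (probs > patch_cutoff).float().mean().item()
    return cancer_fraction >= patient_cutoff   # True -> classified as pancreatic cancer

# Example with random tensors standing in for preprocessed CT patches.
model = PatchCNN()
dummy_patches = torch.randn(120, 1, 50, 50)
print(classify_patient(model, dummy_patches))
```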

Findings: Between Jan 1, 2006, and Dec 31, 2018, we obtained CT images. In local test set 1, CNN-based analysis had a sensitivity of 0·973, specificity of 1·000, and accuracy of 0·986 (area under the curve [AUC] 0·997 [95% CI 0·992–1·000]). In local test set 2, CNN-based analysis had a sensitivity of 0·990, specificity of 0·989, and accuracy of 0·989 (AUC 0·999 [0·998–1·000]). In the US test set, CNN-based analysis had a sensitivity of 0·790, specificity of 0·976, and accuracy of 0·832 (AUC 0·920 [0·891–0·948]). CNN-based analysis achieved higher sensitivity than radiologists did (0·983 vs 0·929, difference 0·054 [95% CI 0·011–0·098]; p=0·014) in the two local test sets combined. CNN missed three (1·7%) of 176 pancreatic cancers (1·1–1·2 cm). Radiologists missed 12 (7%) of 168 pancreatic cancers (1·0–3·3 cm), of which 11 (92%) were correctly classified using CNN. The sensitivity of CNN for tumours smaller than 2 cm was 92·1% in the local test sets and 63·1% in the US test set.

Interpretation: CNN could accurately distinguish pancreatic cancer on CT, with acceptable generalisability to images of patients from various races and ethnicities. CNN could supplement radiologist interpretation.

Funding: Taiwan Ministry of Science and Technology

Constructing a Platform based on Deep Learning Model to Mimic the Self-Organization Process of CT Images Order for Automatically Recognizing Human Anatomy

Feng-Mao Lin, Chi-Wen Chen, Wei-Da Huang, Liangtsan Wu, Anthony Costa, Eric K. Oermann and Weichung Wang
Radiological Society of North America (RSNA), December, 2019

1. Cohesion Information Technology, TW; 2. Foxconn Health Technology Business Group, USA; 3. Department of Neurological Surgery, Icahn School of Medicine, USA; 4. Department of Mathematics, National Taiwan University, TW *Corresponding author

Purpose: To demonstrate the ability of a deep learning application to automatically identify computed tomography (CT) slice regions by major human anatomy. This application will be deployed in the National Health Insurance (NHI) system of Taiwan to classify the approximately 458 million CT images collected in 2018.

Materials and Methods: 954 and 4095 CT series were selected for training and testing, respectively. The voxel spacing had to be > 0.6 mm, and each series had to contain > 40 slices. Each image was standardized to 128 × 128 pixels. AlexNet and ResNet were trained with greyscale images and three-channel images (bone, liquid, and air), respectively. The loss function is identical to that of Ke Yan et al. (2018) and guides slice scores to increase with slice order. Linear regression was used to adjust the slice scores of any series whose r-squared was < 0.8: the series was split into 4 parts and a new slice score was estimated from the two best-fitting parts. A manually annotated lung boundary was used to find the cutoff for measuring sensitivity and specificity.

Results: AlexNet and ResNet were trained for 2 days. The r-squared of the linear regression was used to measure the linearity between the slice score and slice order. The proportion of series with r-squared < 0.8 was reduced from 4.1% to 1.7% for AlexNet and from 6.8% to 2.2% for ResNet by our error-correction approach. Fig. 1 shows that images with similar slice scores share a similar body part. Based on the lung boundary, the score variance at the lower boundary was larger than at the upper boundary. The cutoff was selected at the highest value of specificity × sensitivity. ResNet had the best prediction performance on the training and validation data (Spec. > 0.94, Sens. > 0.9). AlexNet provided the best prediction performance on the NHI validation data (Spec. > 0.91 and Sens. > 0.94). The error correction slightly improved sensitivity and specificity. Both specificity and sensitivity exceeded 0.9 on the NHI validation data for AlexNet and ResNet.
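
A rough sketch of the error-correction step described in the Materials and Methods, assuming per-slice scores are already available; the r-squared < 0.8 threshold and the four-part split follow the text, while everything else is illustrative.

```python
# Hypothetical sketch of the slice-score error correction: check how linearly
# the predicted slice scores increase with slice order and, if the fit is poor
# (r-squared < 0.8), re-estimate the trend from the two best-fitting quarters.
import numpy as np
from scipy.stats import linregress

def corrected_slice_scores(scores, r2_threshold=0.8):
    """scores: 1-D array of per-slice scores in acquisition order."""
    idx = np.arange(len(scores))
    fit = linregress(idx, scores)
    if fit.rvalue ** 2 >= r2_threshold:
        return scores  # scores already increase linearly with slice order

    # Split the series into 4 parts and keep the two with the best linear fit.
    parts = np.array_split(idx, 4)
    r2_per_part = [linregress(p, scores[p]).rvalue ** 2 for p in parts]
    best = np.concatenate([parts[i] for i in np.argsort(r2_per_part)[-2:]])

    refit = linregress(best, scores[best])
    return refit.intercept + refit.slope * idx  # corrected, strictly linear scores

# Example: noisy scores on a 60-slice series.
rng = np.random.default_rng(0)
raw = np.linspace(0, 1, 60) + rng.normal(0, 0.3, 60)
print(corrected_slice_scores(raw)[:5])
```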

Conclusion and Discussion: First, the preprocessing pipeline accelerates training and reaches lower losses with ResNet, while AlexNet is efficient during prediction. Fig. 2 shows that our error-correction process successfully adjusts the slice scores to the corresponding body parts. Since organ boundaries vary from person to person, this approach is well suited to identifying large body parts. Although we found that ResNet with error correction could provide good prediction quality from a small amount of training data, the model proposed by Ke Yan et al. (2018), trained with a large dataset, remains one of the state-of-the-art methods.

Clinical relevance / applications: The NHI collected approximately 458 million medical CT images in 2018. Our application will be deployed in one of the largest medical databases in the world. Precisely retrieving images of specific human anatomy could accelerate the development of related applications and reduce storage usage.

Acknowledgment: We appreciate that the NHI Artificial Intelligence Application Service Trial provided valuable data for our model validation in Taiwan.

The Radiological Society of North America (RSNA) is a non-profit organization with over 54,000 members from 136 countries around the world. It provides high-quality educational resources, including continuing education credits toward physicians' certification maintenance, hosts the world's largest radiology conference, and publishes two top peer-reviewed journals: Radiology and RadioGraphics. Source: RSNA 2019 Website https://rsna2019.rsna.org/

Differentiation between Pancreatic Cancer and Nontumorous Pancreas on Computed Tomography by Radiomics and Machine Learning

Po-Ting Chen, Huihsuan Yen, Dawei Chang, Wei-Chih Liao, Kao-Lang Liu, Holger R. Roth, Weichung Wang, Tinghui Wu
Radiological Society of North America (RSNA), December, 2019

1. Medical Imaging, National Taiwan University Hospital and National Taiwan University College of Medicine, TW; 2. Data Science Degree Program, National Taiwan University & Academia Sinica, TW; 3. Institute of Applied Mathematical Sciences, National Taiwan University, TW; 4. Department of Internal Medicine, National Taiwan University Hospital and National Taiwan University College of Medicine, TW; 5. NVIDIA, USA

Purpose: Pancreatic cancer (PC) is one of the most lethal cancers and the fourth leading cause of cancer death in the United States. Radiomics is a methodology that extracts quantitative statistics and features from medical images. The purpose of this study is to develop a machine learning model to differentiate PC from nontumorous pancreas (NP) on contrast-enhanced computed tomography (CT) using radiomic features.

Materials and Methods: Contrast-enhanced venous-phase CT images of 100 cases with PC and 100 controls were reviewed by an expert radiologist. The tumors and pancreas in PC cases were manually labeled by the radiologist, whereas the pancreas was segmented by a pre-trained deep learning model in some of the NP cases. Data were split into a training set (60 NP cases, 60 PC cases), a validation set (20 NP cases, 19 PC cases), and a test set (20 NP cases, 19 PC cases). The pancreas and tumors were cut into patches of 20 × 20 pixels for subsequent extraction of radiomic features. A total of 91 radiomic features were extracted and fed into an eXtreme Gradient Boosting (XGBoost) model to perform classification.
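
A small sketch of the patch-level radiomics-plus-XGBoost idea, computing by hand a handful of the first-order features named in the Results and ranking them by XGBoost importance; the full 91-feature set, patch extraction, and model settings are assumptions, not the study's code.

```python
# Hypothetical sketch: compute a few first-order radiomic features per 20x20
# patch, then rank them with XGBoost feature importances (placeholder data).
import numpy as np
from scipy.stats import skew, kurtosis
import xgboost as xgb

FEATURE_NAMES = ["median", "p10", "p90", "energy", "skewness",
                 "kurtosis", "maximum", "minimum"]

def first_order_features(patch):
    """patch: 2-D array of CT intensities inside one 20x20 patch."""
    v = patch.ravel().astype(float)
    return np.array([
        np.median(v), np.percentile(v, 10), np.percentile(v, 90),
        np.sum(v ** 2),          # energy
        skew(v), kurtosis(v), v.max(), v.min(),
    ])

# Placeholder patches standing in for labelled PC / non-tumorous pancreas patches.
rng = np.random.default_rng(0)
patches = rng.normal(40, 20, size=(500, 20, 20))
labels = rng.integers(0, 2, size=500)

X = np.stack([first_order_features(p) for p in patches])
model = xgb.XGBClassifier(n_estimators=100, max_depth=3)
model.fit(X, labels)

for name, score in sorted(zip(FEATURE_NAMES, model.feature_importances_),
                          key=lambda t: -t[1]):
    print(f"{name:10s} {score:.3f}")
```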

Results: A total of 3596 patches of PC and 19446 patches of NP were generated and used for training, and the test set included 691 patches of PC and 3889 patches of NP. For differentiation between PC and NP, the accuracy of XGBoost by patch-based analysis was 93.43%, with an area under the receiver operating characteristic (ROC) curve (AUC) of 0.94712. In patient-based analysis, the accuracy, sensitivity, specificity, and AUC were 95.12%, 0.90476, 1, and 0.95238, respectively. The top 10 features with the highest importance scores were the median, 10th percentile, energy, skewness, 90th percentile, maximum, minimum, and kurtosis from first-order statistics; dependence non-uniformity from the gray-level dependence matrix (GLDM); and cluster shade from the gray-level co-occurrence matrix (GLCM).

Conclusion: We developed a machine learning model that could differentiate between CTs of pancreas with PC and without PC, with 95.12% accuracy in patient-based analysis and 93.43% accuracy in patch-based analysis. Among the important features selected by our model, first-order statistical features have the highest importance scores, followed by higher-order features related to non-uniformity.

CLINICAL RELEVANCE/APPLICATION: This model can accurately differentiate between cancerous and nontumorous pancreas and is a potential computer-aided diagnosis tool.


Efficacy Evaluation of Optimal Patient Selection for Hypopharyngeal Cancer Organ Preservation Therapy using MRI-derived Radiomic Signature: Bi-institutional Propensity Score Matched Analysis.

Shih-Min Lin, Che-Yu Hsu, Yueh-Chou Lee, T. Li, S. Kuo, Weichung Wang
European Society for Medical Oncology (ESMO) Congress, October, 2019

1. Radiation Oncology, Chang Gung Medical Foundation – Linkou Chang Gung Memorial Hospital, TW; 2. Department of Oncology, National Taiwan University Hospital, TW; 3. Mathematics, National Taiwan University, TW; 4. Radiation Oncology, National Taiwan University Hospital, TW; 5. Department of Oncology, National Taiwan University Hospital, TW; 6. Institute of Applied Mathematical Sciences, National Taiwan University, TW

Background: Early loco-regional failure (LRF) after organ preservation therapy (OPT) varies widely among hypopharyngeal squamous cell carcinoma (HPSCC) patients. We aimed to develop and validate an MRI-derived radiomic signature (RS) for the prediction of 1-year LRF in HPSCC treated with OPT, and to investigate its efficacy between the OPT and total laryngectomy (TL) cohorts.

Methods: A total of 3912 MRI-based radiomic features (RF) of pretreatment tumors were obtained from 370 HPSCC patients, comprising OPT cohort 1 (OPT1; n = 186), OPT cohort 2 (OPT2; n = 88), and a TL cohort (TLc; n = 96). A variational autoencoder (VAE), trained with a symmetric two-layer encoder and decoder, was applied to reduce the dimensionality of the original RF to 128 VAE-RF. The least absolute shrinkage and selection operator with 10-fold cross-validation performed feature selection and constructed the RS to predict 1-year LRF events in OPT1, which was validated in OPT2 and TLc. Harrell's C-index was used to evaluate the discriminative ability of the RS. The optimal cut-point for the dichotomized RS risk category was determined via the Youden index. Pairwise propensity score matching (caliper 0.2) using pre-treatment variables (age, gender, TNM stage) was applied to compare the impact of OPT and TL under different RS risk categories.

Results: The RS yielded 1000-times bootstrapping-corrected C-indices of 0.753, 0.745, and 0.398 in OPT1, OPT2, and TLc, respectively. The dichotomized risk category using the Youden cut-point of the RS yielded a 1-year LRF predictive accuracy of 71.12%, 70.41%, and 41.74% in OPT1, OPT2, and TLc, respectively. In the RS-high-risk group, OPT was associated with poorer progression-free survival (PFS; HR: 1.752, p=0.032), while in the RS-low-risk group, OPT did not compromise PFS (HR: 0.774, p=0.416).

Conclusions: The RS-based model provides a novel approach to predict 1-year LRF and survival in patients with HPSCC who received OPT. The discrepancy in prediction performance of the MRI-derived RS in TLc also emphasizes the role of TL in the RS-high-risk group.
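
A rough sketch of the pairwise propensity-score matching step described above, assuming a caliper of 0.2 standard deviations of the logit of a propensity score estimated from age, gender, and TNM stage; the data, column names, and greedy matching strategy are illustrative, not the authors' implementation.

```python
# Hypothetical sketch of 1:1 propensity-score matching between OPT and total
# laryngectomy patients on pre-treatment covariates (placeholder data).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "age": rng.integers(40, 80, size=300),
    "male": rng.integers(0, 2, size=300),
    "tnm_stage": rng.integers(3, 5, size=300),      # stage III/IV coded as 3/4
    "treatment": rng.integers(0, 2, size=300),      # 1 = OPT, 0 = total laryngectomy
})

# Propensity of receiving OPT given pre-treatment covariates.
ps_model = LogisticRegression(max_iter=1000).fit(
    df[["age", "male", "tnm_stage"]], df["treatment"]
)
df["ps"] = ps_model.predict_proba(df[["age", "male", "tnm_stage"]])[:, 1]

# Greedy 1:1 nearest-neighbour matching within a caliper of 0.2 * SD(logit(ps)).
logit = np.log(df["ps"] / (1 - df["ps"]))
caliper = 0.2 * logit.std()
treated = df[df["treatment"] == 1].index.tolist()
controls = set(df[df["treatment"] == 0].index)

pairs = []
for t in treated:
    if not controls:
        break
    c = min(controls, key=lambda i: abs(logit[t] - logit[i]))
    if abs(logit[t] - logit[c]) <= caliper:
        pairs.append((t, c))
        controls.remove(c)

print(f"{len(pairs)} matched OPT / laryngectomy pairs")
```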

The European Society for Medical Oncology (ESMO) Congress is the appointment in Europe for clinicians, researchers, patient advocates, journalists and the pharmaceutical industry from all over the world to get together, learn about the latest advances in oncology and translate science into better cancer patient care. Source: ESMO Congress 2019 Website https://www.esmo.org/Conferences/Past-Conferences/ESMO-Congress-2019

Radiographic Phenotyping to Identify Intracranial Disseminated Recurrence in Brain metastases Treated With Radiosurgery Using Contrast-enhanced MR Imaging

Che-Yu Hsu, S. Kuo, Weichung Wang, T.W. Chen, Yueh-Chou Lee
European Society for Medical Oncology (ESMO) Congress, October, 2019

1. Radiation Oncology, National Taiwan University Hospital, TW; 2. Department of Oncology, National Taiwan University Hospital, TW; 3. Institute of Applied Mathematical Sciences, National Taiwan University, TW; 4. Department of Oncology, College of Medicine, National Taiwan University, TW; 5. Mathematics, National Taiwan University, TW

Background: Early intracranial progression (ICP) reduces the efficacy of first-line stereotactic radiosurgery (SRS) for brain metastases. We aimed to develop and validate an MRI-derived radiomic signature (RS) via a deep learning approach for the prediction of 1-year disseminated ICP (DICP; more than or equal to 3 lesions and leptomeningeal carcinomatosis) in brain metastases patients treated with SRS.

Methods: A total of 1304 MRI-based radiomic features of pretreatment tumors were obtained from 208 patients with 451 lesions who received first-line SRS between August 2008 and January 2018. A variational autoencoder (VAE), trained with a symmetric two-layer encoder and decoder and 1,649,560 trainable parameters, was applied to reduce the dimensionality of the radiomic features to 128 VAE-radiomic features. Penalized regression with 10-fold cross-validation using the least absolute shrinkage and selection operator performed feature selection and constructed the RS to predict 1-year DICP events in a training set of 150 patients, which was validated in a test set of 58 patients. Harrell's C-index was used to evaluate the discriminative ability of the RS in both sets. The correlation of VAE-radiomic features with molecular features was analyzed by Student's t-test. Survival analysis was performed using the Kaplan-Meier method.

Results: The RS yielded 1000-times bootstrapping-corrected C-indices of 0.746 and 0.747 for discrimination of 1-year DICP in the training and test cohorts, respectively. For the subgroups of patients with lung (n=175) and breast (n=23) primaries, the RS also showed good predictive performance, with C-indices of 0.735 and 0.755, respectively. EGFR-mutation (n=113) and ER (n=22) status were associated with the selected VAE-radiomic features No. 98 (p=0.035) and No. 127 (p=0.44), respectively. The dichotomized risk category using an RS of -0.769 (Youden index) as the cut-off point yielded a median overall survival of 57.7 months in the low-risk group compared to 20.5 months in the high-risk group (p < 0.01).

Conclusions: The RS model provides a novel approach to predict 1-year DICP and survival in brain metastases patients receiving SRS, and warrants integration into the GPA for optimal selection of patients for first-line SRS.
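
A minimal sketch of a variational autoencoder that maps the 1304 radiomic features to a 128-dimensional latent space, loosely following the symmetric encoder/decoder described above; the layer widths, loss weighting, and single training step shown are assumptions, not the authors' configuration.

```python
# Hypothetical sketch of a small VAE for radiomic dimensionality reduction
# (1304 input features -> 128 latent features), with placeholder data.
import torch
import torch.nn as nn

class RadiomicVAE(nn.Module):
    def __init__(self, n_features=1304, hidden=512, latent=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU(),
                                     nn.Linear(hidden, 2 * latent))
        self.decoder = nn.Sequential(nn.Linear(latent, hidden), nn.ReLU(),
                                     nn.Linear(hidden, n_features))

    def forward(self, x):
        mu, logvar = self.encoder(x).chunk(2, dim=1)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # reparameterisation
        return self.decoder(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    recon_loss = nn.functional.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + kl

# One illustrative training step on placeholder radiomic vectors.
model = RadiomicVAE()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, 1304)
recon, mu, logvar = model(x)
loss = vae_loss(recon, x, mu, logvar)
loss.backward()
optimizer.step()

# The 128-dimensional latent mean serves as the VAE-radiomic feature vector,
# which a downstream LASSO step can then use for signature construction.
vae_features = mu.detach()
```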


Severe Stenosis Detection using 2D Convolutional Recurrent Network

Chiatse Wang, Chih-Kuo Lee, Yu-Cheng Huang, Wen-Jeng Lee, Tzung-Dau Wang, Weichung Wang, Cheng-Ying Chou, Junting Chen, Weidao Lee
European Society of Cardiology (ESC) Congress, September, 2019

1. Graduate Program of Data Science, National Taiwan University and Academia Sinica, TW; 2. National Taiwan University Hospital, TW; 3. National Taiwan University, TW

Purpose: Stenosis detection is a critical step in the diagnosis of coronary artery disease (CAD). Manually detecting stenoses over the complete coronary artery tree takes about 30 minutes. Our stenosis detector can detect all stenoses greater than 70% in less than 20 seconds per patient, a substantial reduction in detection time.

Method: We developed a workflow that organizes the raw data, performs image preprocessing including a cross-sectional plane sequence generator, runs inference with trained models, and visualizes the results. The model, a recurrent neural network (RNN) following a convolutional neural network (CNN), contains 23 thousand parameters. Our model was trained on the dataset provided by the Rotterdam Coronary Artery Stenoses Detection and Quantification Evaluation Framework. To detect severe stenoses greater than 70%, we trained the model to classify whether a stenosis was greater than 50% or not; this helps to reduce the bias caused by an imbalanced dataset.
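
A minimal sketch of a 2-D convolutional recurrent classifier of the kind described above: a small CNN embeds each cross-sectional plane along the vessel and a GRU aggregates the sequence into a stenosis prediction. The layer sizes and input shapes are illustrative and do not reproduce the ~23-thousand-parameter model.

```python
# Hypothetical sketch of a CNN + RNN stenosis classifier over sequences of
# cross-sectional planes extracted along a coronary artery (placeholder data).
import torch
import torch.nn as nn

class StenosisCRNN(nn.Module):
    def __init__(self, embed_dim=32, hidden_dim=32):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(), nn.Linear(8 * 4 * 4, embed_dim), nn.ReLU(),
        )
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, planes):
        # planes: (batch, seq_len, 1, H, W) cross-sectional planes along the artery.
        b, t, c, h, w = planes.shape
        embeddings = self.cnn(planes.reshape(b * t, c, h, w)).reshape(b, t, -1)
        _, last_hidden = self.rnn(embeddings)
        return self.head(last_hidden[-1])       # logit for >50% stenosis

model = StenosisCRNN()
dummy = torch.randn(2, 48, 1, 32, 32)           # 2 artery segments, 48 planes each
prob = torch.sigmoid(model(dummy))
print(prob.shape)                                # torch.Size([2, 1])
```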

Results: The present work demonstrated the feasibility of detecting severe stenoses using a deep learning-based network. Further work will elaborate on the algorithm and incorporate it into the National Taiwan University Hospital diagnostic workflow.


Differentiation Between Pancreatic Cancer and Normal Pancreas on Computed Tomography with Artificial Intelligence

Wei-Chih Liao, Weichung Wang, Ting-Hui Wu, Kao-Lang Liu, Po-Ting Chen, Hui-Hsuan Yen, Holger R. Roth
Digestive Disease Week (DDW), May, 2019

1. Department of Internal Medicine, National Taiwan University Hospital and National Taiwan University College of Medicine, TW; 2. Institute of Applied Mathematical Sciences, National Taiwan University, TW; 3. Medical Imaging, National Taiwan University Hospital and National Taiwan University College of Medicine, TW; 4. Data Science Degree Program, National Taiwan University & Academia Sinica, TW; 5. NVIDIA, USA

Background: Computed tomography (CT) is the major modality for detection and evaluation of pancreatic cancer (PC). However, approximately one-third of PCs <2 cm are missed by CT, and differentiation between PCs and benign pancreatic lesions is often challenging. Aim: To develop and test a deep neural network to differentiate between PC and normal pancreas.

Methods: CT images of 70 patients with histologically confirmed pancreatic adenocarcinoma and 70 control subjects with a normal pancreas were extracted from the archive. Images of 10 patients and 10 controls were randomly selected as the test set, and the others were used as the training set. The pancreas and tumor were manually labeled by a radiologist experienced in pancreatic imaging and served as the ground truth. Each image was preprocessed by windowing and normalization before being cut into patches of 50 pixels in length and width. Patches in which cancer occupied >30% of the area were labeled as cancer, whereas patches that did not contain cancer and included pancreas occupying >50% of the area, from either controls or PC patients, were labeled as normal pancreas. A convolutional neural network (CNN) was trained on one TITAN V (NVIDIA) GPU to classify patches as cancer or normal using binary cross-entropy as the loss function, and its performance was assessed on the test set. For patient-based analysis, normal patches from PC patients were excluded, and a patient was classified as having PC if more than 50% of their patches were predicted as cancer by the CNN.
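
A small sketch of the patch-labelling rule stated above (a patch is "cancer" if tumour covers more than 30% of its area, "normal pancreas" if it is tumour-free and pancreas covers more than 50%, and excluded otherwise); the binary-mask representation is an assumption.

```python
# Hypothetical sketch of the 50x50 patch-labelling rule (placeholder masks).
import numpy as np

def label_patch(tumor_mask, pancreas_mask, size=50):
    """Binary masks of one 50x50 patch; returns 'cancer', 'normal', or None."""
    area = size * size
    tumor_frac = tumor_mask.sum() / area
    pancreas_frac = pancreas_mask.sum() / area
    if tumor_frac > 0.30:
        return "cancer"
    if tumor_frac == 0 and pancreas_frac > 0.50:
        return "normal"
    return None   # ambiguous patch, excluded from training

# Example with synthetic masks.
rng = np.random.default_rng(0)
tumor = (rng.random((50, 50)) > 0.6).astype(int)
pancreas = np.ones((50, 50), dtype=int)
print(label_patch(tumor, pancreas))
```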

Results: A total of 2522 patches of PC and 3808 patches of normal pancreas were generated and used for training, and the test set included 533 patches of PC and 949 patches of normal pancreas. For differentiation between PC and normal pancreas, the accuracy of the CNN by patch-based analysis was 77.1%, with an area under the receiver operating characteristic (ROC) curve (AUC) of 0.80 (Table and Figure). In patient-based analysis, the accuracy and AUC were 90% and 0.96, respectively. The computation time for analyzing the whole test set (1482 patches) was less than 5 seconds.

Conclusion: We developed a CNN that could differentiate between CTs of pancreas with PC and without PC with a 90% accuracy in patient-based analysis and 77.1% accuracy in patch-based analysis. This deep-learning model is a potential computer-aided diagnosis tool to facilitate early and accurate diagnosis of PC.

Digestive Disease Week® (DDW) is the world’s largest gathering of physicians, researchers and industry in the fields of gastroenterology, hepatology, endoscopy and gastrointestinal surgery and is recognized as one of the top 50 medical meetings by HCEA. Source: DDW Homepage https://ddw.org/