Although cancer survival has improved markedly in recent years, the gains have had less impact on patients diagnosed with non-small cell lung cancer (NSCLC), a leading cause of cancer-related death in the U.S., largely because the disease is often detected, diagnosed, and treated at a late stage. As a result, most of these patients are treated non-surgically, with radiation therapy, immunotherapy, or chemotherapy.
Tumors are living systems that continuously evolve and respond throughout cancer therapy, so a single scan at a single point in time cannot capture this important information. Medical imaging, however, is ideally positioned to noninvasively monitor quantitative changes in lesions, such as tumor size and treatment response, over time.
Qualitative evaluation relies on human interpretation using criteria such as those outlined by RECIST, whereas quantitative assessment has been performed by tedious, time-consuming manual volumetric measurements subject to human variability, such as fatigue and differences in experience. More recently, technological advances in deep learning are delivering results that are quick, efficient, and do not require manual input. These advances are particularly evident in convolutional neural networks (CNNs), which can extract features from images and find non-linear relationships in complex data, and recurrent neural networks (RNNs), which can integrate information across several points in time.
Millions of images leveraged to track tumor changes
Investigators in a retrospective proof-of-principle study sought to develop imaging biomarkers that can predict clinical outcomes free of human contribution, evaluating deep learning CNNs combined with RNNs. The ResNet CNN model was pretrained on millions of photographic images from ImageNet and then applied to the study datasets through transfer learning. For each time point, the CNN extracted features from the CT scan, and those features were fed into the RNN for longitudinal evaluation.
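The pipeline described above, per-time-point CNN features aggregated by a recurrent network, can be sketched in plain NumPy. Everything here is illustrative: the 512-dimensional feature vectors stand in for the output of a pretrained ResNet (not implemented here), the Elman-style RNN cell is a minimal stand-in for the study's recurrent model, and the weights are random rather than trained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in CNN features: one 512-dim vector per CT time point
# (the study used a ResNet pretrained on ImageNet as the extractor).
timepoints = [rng.standard_normal(512) for _ in range(4)]

# Minimal Elman-style RNN cell: h_t = tanh(W x_t + U h_{t-1} + b)
W = rng.standard_normal((64, 512)) * 0.01
U = rng.standard_normal((64, 64)) * 0.01
b = np.zeros(64)

h = np.zeros(64)
for x in timepoints:  # pretreatment scan first, then each follow-up
    h = np.tanh(W @ x + U @ h + b)

# Linear head plus sigmoid: the final hidden state, which has seen the
# whole scan series, is mapped to e.g. a survival probability.
w_out = rng.standard_normal(64) * 0.01
p_survival = 1.0 / (1.0 + np.exp(-(w_out @ h)))
print(float(p_survival))
```

In a real system the loop would run over trained weights, but the structure is the same: the hidden state accumulates longitudinal change across scans before a single prediction is made.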
Based on an analysis of a given series of CT scans spanning pretreatment through multiple follow-up points in time, the researchers' algorithm model assessed various clinical outcomes including survival, treatment response, progression, distant metastasis, locoregional recurrence, and pathologic response. While most earlier quantitative studies focused on developing imaging biomarkers from a single scan, this study demonstrated how deep learning CNNs together with RNNs can be modeled to automatically extract changes and accurately predict outcomes by using a range of time points.
Lung cancer patient datasets
Two datasets were developed from the pretreatment and posttreatment image data of 268 NSCLC patients with a total of 739 CT scans. These were used to develop CNN- and RNN-based machine learning models to analyze the value of deep learning-based biomarkers in predicting survival and other clinical endpoints.
All patients had similar diagnoses of stage III NSCLC but differed in specific disease burden, which in turn guided selection of their treatment protocol. Based on the chosen regimen, patients were assigned to two groups: those who received definitive chemoradiation were assigned to dataset A, while those treated with chemoradiation followed by resection were assigned to dataset B.
Dataset A included 179 patients with stage III NSCLC who received definitive radiation therapy and chemotherapy (carboplatin/paclitaxel or cisplatin/etoposide) between 2003 and 2014. Nearly 53 percent of patients in this group were female, ages ranged from 32 to 93, stage IIIA disease predominated (59 percent), and more than 58 percent had adenocarcinoma histology. Patients received a median radiation dose of 66 Gy and more than 31 months of follow-up. Prediction of survival and other clinical endpoints, including distant metastases, locoregional recurrence, and progression, was evaluated.
Patient parameters from dataset A were also used to develop, train, and test deep learning biomarkers. Patients in this group were randomly split two-to-one into training/tuning and test groups. Deep learning models were developed using transfer learning of CNNs and RNNs based on single seed-point tumor localization.
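The two-to-one split can be illustrated with a short snippet; the patient count matches dataset A as reported, but the IDs and random seed are placeholders:

```python
import numpy as np

rng = np.random.default_rng(42)

patient_ids = np.arange(179)  # dataset A size from the study
rng.shuffle(patient_ids)      # randomize before splitting

n_train = round(len(patient_ids) * 2 / 3)  # two-to-one split
train_ids = patient_ids[:n_train]          # training/tuning group
test_ids = patient_ids[n_train:]           # held-out test group

print(len(train_ids), len(test_ids))  # -> 119 60
```

Splitting at the patient level, rather than the scan level, keeps all of one patient's serial scans on the same side of the split, which avoids leaking test information into training.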
More than 580 serial CT scans from dataset A, acquired before treatment and at one, three, and six months after radiation therapy, were analyzed. Reflecting a realistic clinical setting, not all patients received imaging scans at all time points.
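One simple way to accommodate patients with different numbers of scans is to treat each patient's series as a variable-length sequence and skip the missing time points; the sketch below is hypothetical and the study's actual handling of missing scans may differ:

```python
# Hypothetical per-patient scan availability across the four time points:
# pretreatment and one, three, and six months after radiation therapy.
# None marks a time point at which no scan was acquired.
patients = {
    "pt01": ["pre", "1mo", None, "6mo"],
    "pt02": ["pre", None, "3mo", None],
}

# A recurrent model can simply carry its hidden state forward across
# missing time points, so each patient contributes whatever scans exist.
sequences = {}
for pid, scans in patients.items():
    sequences[pid] = [s for s in scans if s is not None]
    print(pid, len(sequences[pid]), sequences[pid])
```

This is why the authors could evaluate predictions after each added follow-up scan: the model consumes however much of the series is available rather than requiring a complete set.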
Dataset B included 89 patients with stage III NSCLC who were treated with radiotherapy and chemotherapy before undergoing resection between 2001 and 2013. All 178 CT scans evaluated from this group were taken prior to surgery, specifically, before and after radiation therapy. Patients with distant metastasis, no survival data, or more than 120 days between chemoradiation and surgery were not included in this part of the study. Patients received a median radiation dose of 54 Gy and more than 37 months of follow-up.
An additional test of pathologic response, validated at the time of surgery, was performed on dataset B to further confirm performance across a range of standard-of-care protocols. Based on surgical pathology reports, patients were categorized as responders or as having gross residual disease.
Significant prediction for adapting treatment
Despite the differences in disease burden and treatment protocol between the two patient datasets, the survival CNN models trained on data from the first group were able to predict outcomes in dataset B, including survival, distant metastasis, progression, and locoregional recurrence. This held even though dataset B patients had only a single follow-up scan, compared with up to three for patients in dataset A, leaving the survival algorithm less information to evaluate.
The deep learning models' ability to predict two-year overall survival from a single pretreatment scan was low but increased with the addition of each follow-up scan at one, three, and six months. The same pattern held for one-year survival and for metastasis-free, progression-free, and locoregional recurrence-free survival.
The CNN model also stratified patients into low- and high-risk mortality groups. Researchers identified a significant difference in two-year overall survival between the groups when two follow-up CT scans, at one and three months after completing definitive radiation treatment, were available for assessment. The same was true for locoregional recurrence. Stratifying risk for progression and distant metastasis required the third follow-up scan at six months.
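Risk stratification of this kind is commonly done by thresholding the model's continuous risk score, for example at the cohort median, before comparing survival between the resulting groups. The sketch below uses random scores purely for illustration; the study's actual cutoff may differ:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical model-predicted mortality risks for a 60-patient test cohort
risk = rng.uniform(size=60)

# Median split into low- and high-risk groups, a common prelude to a
# Kaplan-Meier comparison of survival between the two groups.
cutoff = np.median(risk)
high_risk = risk > cutoff
low_risk = ~high_risk
print(int(high_risk.sum()), int(low_risk.sum()))
```

A median split guarantees equally sized groups; the significance of the survival difference between them would then be tested with, for example, a log-rank test.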
Additionally, the survival neural network model was able to predict pathologic response in dataset B using only a single seed point at the center of the tumor, without the volumetric segmentation typically acquired through time-consuming manual contours. When the deep learning model's pathologic response prediction was compared with the conventional standard, primary tumor size, performance was comparable and the two measures were only weakly correlated. This suggested that radiographic characteristics other than tumor size were being assessed.
The CNN captures the tumor region and its immediate environment, which could supply additional information about tumor response and potential growth into normal tissue. Earlier input methods supplied algorithms with precise manual delineations that carried little data about the surrounding tissue.
By contrast, conventional clinical factors, including tumor size, stage, and grade as well as gender, age, and smoking status, showed no significant predictive ability for survival, treatment response, or other outcomes when compared with the deep learning models.
Clinical setting and workflow impact
Deep learning-based image biomarkers are inexpensive and require minimal human input, which makes these findings important: they could improve patient care through high-quality decision support. Deep learning approaches can automatically extract phenotypic changes without relying on qualitative or quantitative methods prone to human variability, such as manual contours or visual interpretation.
In practice, follow-up CT scans already exist as part of the clinical workflow and contain important data about a patient's disease that may improve outcomes. A trained neural network could generate a probable prognosis within seconds, which physicians could immediately evaluate alongside other clinical parameters to support accurate and efficient patient assessment.
Clinical outcome prediction by deep learning models could have a significant impact on precision medicine, adaptive and personalized therapy, and the assessment of patient response during clinical trials.
- Deep Learning Predicts Lung Cancer Treatment Response from Serial Medical Imaging. Clinical Cancer Research