Deep-Learning based Tumor Segmentation in Multiparametric MR
Project Description
Preclinical MR imaging is a critical component of the co-clinical research pipeline, in both academia and industry, for validating imaging biomarkers used to detect disease and assess therapeutic efficacy. To that end, T1- and T2-weighted MR images are routinely used to extract morphological and pathological information from tumor lesions, and accurate localization and delineation of tumor boundaries are vital for assessing treatment response. Manual segmentation by experts, however, is time- and labor-intensive and suffers from inter- and intra-observer variability with limited reproducibility.

To address this challenge, we developed a novel deep-learning architecture, the dense recurrent residual U-Net (D-R2UNet), optimized for automatically segmenting tumors from multiparametric MR images, thereby alleviating manual effort and circumventing observer variability in tumor delineation. We further extracted radiomic features and assessed their sensitivity to perturbations of the tumor segmentation boundary. We tested five network architectures, namely U-Net, dense U-Net, ResNet, recurrent residual U-Net (R2U-Net), and D-R2UNet, comparing each against manual delineation by experts. To mitigate bias among multiple experts, the simultaneous truth and performance level estimation (STAPLE) algorithm was applied to create consensus maps.

Multi-contrast D-R2UNet performed best, with an F1-score of 0.948. Radiomic features extracted from D-R2UNet segmentations were highly correlated with STAPLE-derived features: 67.13% of T1w and 53.15% of T2w features exhibited correlation ρ ≥ 0.9 (p ≤ 0.05). D-R2UNet-extracted features also showed better reproducibility relative to STAPLE, with 86.71% of T1w and 69.93% of T2w features found to be highly reproducible (CCC ≥ 0.9, p ≤ 0.05). Finally, 39.16% of T1w and 13.9% of T2w features were identified as insensitive to tumor boundary perturbations (Spearman correlation −0.4 ≤ ρ ≤ 0.4).
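To illustrate the consensus step, the following is a minimal sketch of binary STAPLE as an expectation-maximization loop over flattened binary rater masks. This is a simplified re-implementation for illustration only (the function name and initialization values are our own choices, not the project's code); production work would use an established implementation such as the one in ITK/SimpleITK.

```python
def staple_binary(masks, prior=0.5, n_iter=30):
    """Simplified binary STAPLE: EM estimate of a consensus probability map.

    masks  : list of flattened binary rater masks (lists of 0/1), equal length
    prior  : prior probability that a voxel is tumor
    returns: (consensus probabilities w, per-rater sensitivities p, specificities q)
    """
    J = len(masks)      # number of raters
    N = len(masks[0])   # number of voxels
    w = [0.5] * N       # consensus probability per voxel
    p = [0.9] * J       # initial sensitivity guess per rater
    q = [0.9] * J       # initial specificity guess per rater
    for _ in range(n_iter):
        # E-step: posterior probability that each voxel is tumor,
        # given the rater decisions and current (p, q) estimates
        for i in range(N):
            a, b = prior, 1.0 - prior
            for j in range(J):
                if masks[j][i] == 1:
                    a *= p[j]
                    b *= 1.0 - q[j]
                else:
                    a *= 1.0 - p[j]
                    b *= q[j]
            w[i] = a / (a + b) if a + b > 0 else 0.5
        # M-step: re-estimate each rater's sensitivity and specificity
        # from the soft consensus
        sw = sum(w)
        snw = N - sw
        for j in range(J):
            fg = sum(w[i] for i in range(N) if masks[j][i] == 1)
            bg = sum(1.0 - w[i] for i in range(N) if masks[j][i] == 0)
            p[j] = fg / sw if sw > 0 else 0.5
            q[j] = bg / snw if snw > 0 else 0.5
    return w, p, q
```

Thresholding `w` at 0.5 yields a hard consensus mask; voxels where raters disagree receive intermediate probabilities weighted by each rater's estimated performance.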
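For binary segmentation masks, the F1-score used to rank the networks is equivalent to the Dice overlap coefficient. A minimal sketch of this metric on flattened masks (the function name and mask layout are illustrative, not taken from the project's code):

```python
def f1_score(pred, ref):
    """Dice/F1 overlap between two flattened binary masks (lists of 0/1)."""
    tp = sum(1 for a, b in zip(pred, ref) if a == 1 and b == 1)  # true positives
    fp = sum(1 for a, b in zip(pred, ref) if a == 1 and b == 0)  # false positives
    fn = sum(1 for a, b in zip(pred, ref) if a == 0 and b == 1)  # false negatives
    # F1 = 2TP / (2TP + FP + FN); define perfect agreement on empty masks as 1.0
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom > 0 else 1.0
```

Applied voxel-wise against the STAPLE consensus mask, a score of 1.0 indicates perfect overlap and 0.0 indicates none.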
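The reproducibility criterion is Lin's concordance correlation coefficient (CCC), which, unlike Pearson correlation, penalizes systematic shifts between two measurements. A minimal sketch computing it from two radiomic feature vectors (function name illustrative):

```python
def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between two feature vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # population variances and covariance
    vx = sum((xi - mx) ** 2 for xi in x) / n
    vy = sum((yi - my) ** 2 for yi in y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / n
    # CCC = 2*cov / (vx + vy + (mx - my)^2); equals 1 only for exact agreement
    return 2 * cov / (vx + vy + (mx - my) ** 2)
```

A constant offset between otherwise identical feature vectors lowers the CCC even though their Pearson correlation stays at 1, which is why CCC ≥ 0.9 is the stricter reproducibility threshold used here.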