Deep-Learning framework for identifying cross-scale correlation in multiscale preclinical imaging
Project Description
High-resolution anatomical and functional diagnostic imaging techniques have become essential tools for tumor detection, diagnosis, cancer staging, and response assessment in both clinical and preclinical settings. Despite tremendous advances in in-vivo radiological modalities, substantial uncertainty remains about whether quantitative features extracted from radiological images truly represent the underlying biological processes of the tissue. Pathological analysis is regarded as the gold standard for determining the extent of malignancy by analyzing protein expression at the cellular level. The inherent challenges of pathological analysis include biopsy or surgical removal of tissue, storage of tissue specimens, and slicing and sectioning. Although radiological and pathology images are complementary in nature, they are processed in disconnected silos because of the inherent challenges of multiscale image analysis: image registration, wide variability in spatial resolution, slice orientation and positioning, and differing image signal types across modalities [63-65]. In this study we propose to design a computational framework that addresses three challenges: (1) accurate registration of multiscale imaging, i.e., ex-vivo PET/MR with corresponding IHC and H&E data; (2) validation and correlative analysis of cytometric and pathomics features extracted from H&E and IHC data against radiomics features extracted from ex-vivo PET/MR; (3) feature-space integration of in-vivo PET/MR with pathology to predict response to therapy using machine learning.
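To illustrate the kind of correlative analysis proposed in challenge (2), the sketch below computes pairwise Spearman rank correlations between a radiomics feature matrix and a pathomics feature matrix over co-registered tumor samples, and flags feature pairs with a strong, significant monotonic association. All variable names, shapes, and the synthetic data are illustrative assumptions, not the project's actual feature sets or thresholds.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical feature matrices: rows are co-registered tumor samples,
# columns are features (shapes and contents are synthetic placeholders).
n_samples = 30
radiomics = rng.normal(size=(n_samples, 5))  # e.g. ex-vivo PET/MR texture features
pathomics = rng.normal(size=(n_samples, 4))  # e.g. H&E/IHC cytometric features

# Make one pathomics feature partially track a radiomics feature,
# so the example has at least one detectable association.
pathomics[:, 0] = 0.8 * radiomics[:, 0] + 0.2 * rng.normal(size=n_samples)

# Pairwise Spearman correlation between every radiomics/pathomics feature pair
corr = np.zeros((radiomics.shape[1], pathomics.shape[1]))
pval = np.zeros_like(corr)
for i in range(radiomics.shape[1]):
    for j in range(pathomics.shape[1]):
        corr[i, j], pval[i, j] = spearmanr(radiomics[:, i], pathomics[:, j])

# Flag feature pairs with |rho| above an (illustrative) 0.7 cutoff and p < 0.05
strong = (np.abs(corr) > 0.7) & (pval < 0.05)
print("strongly correlated (radiomics, pathomics) index pairs:")
print(np.argwhere(strong))
```

In practice, a multiple-comparison correction (e.g. Benjamini-Hochberg) would be applied across the feature-pair p-values, and rank-based correlation is chosen here because radiomics and pathomics features typically live on very different scales.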