Projects

Deep-Learning framework for identifying cross-scale correlation in multiscale preclinical imaging

Developing a deep-learning-based computational pipeline that integrates multiscale preclinical imaging, consisting of in-vivo PET, autoradiography, and histopathology, to analyze biological interpretability and improve therapeutic predictions. The framework consists of three parts: (1) developing automatic registration for multiscale imaging, i.e., aligning autoradiography (ex-vivo PET) with the corresponding H&E and IHC data; (2) developing a deep-learning-based correlative analysis pipeline (CorrNet) for studying the correlation of autoradiography (ex-vivo PET) features with the corresponding histopathological (H&E/IHC) features; and (3) correlating and integrating in-vivo PET and histopathological (H&E/IHC) data in feature space by combining cytometric and radiomics features to predict therapeutic response.
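As a rough illustration of the correlative-analysis idea, the sketch below pairs two small CNN encoders with a correlation-based objective on co-registered patches. It is a minimal PyTorch sketch under assumed design choices; the names `PatchEncoder` and `correlation_loss`, the layer sizes, and the Pearson-style loss are illustrative assumptions, not the actual CorrNet architecture.

```python
# Minimal, illustrative sketch (NOT the actual CorrNet): two small CNN encoders embed
# co-registered autoradiography and histopathology patches, and a correlation-style
# objective encourages the paired feature dimensions to co-vary.
import torch
import torch.nn as nn

class PatchEncoder(nn.Module):
    """Tiny CNN that embeds a single-channel image patch into a feature vector."""
    def __init__(self, in_ch=1, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )

    def forward(self, x):
        return self.net(x)

def correlation_loss(fa, fh, eps=1e-8):
    """Approximate negative mean Pearson correlation across paired feature dimensions."""
    fa = (fa - fa.mean(0)) / (fa.std(0) + eps)
    fh = (fh - fh.mean(0)) / (fh.std(0) + eps)
    return -(fa * fh).mean()

autorad_enc, histo_enc = PatchEncoder(), PatchEncoder()
autorad_patches = torch.randn(8, 1, 64, 64)   # co-registered patch pairs (dummy data)
histo_patches = torch.randn(8, 1, 64, 64)
loss = correlation_loss(autorad_enc(autorad_patches), histo_enc(histo_patches))
loss.backward()
```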

Deep-Learning to generate Standard-Count PET from Low-Count PET

Developed a deep-learning-powered pipeline for generating quantitative Standard-Count preclinical PET (SC-PET) images from different realizations of Low-Count preclinical PET (LC-PET). To generate SC-PET images we developed a novel deep-learning architecture, the Attention-based Residual Dilated Network (ARD-Net), which uses Enhancement Attention Modules (EAM) for efficient feature learning and feature consolidation. The architecture was evaluated with a multi-objective framework comprising fidelity-based metrics, a task-based segmentation analysis, and a task-based quantification analysis, testing the robustness of the framework for quantitative and segmentation recovery at extremely low count levels.
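For intuition, the sketch below shows one plausible arrangement of a residual dilated block gated by channel attention ("enhancement attention") in PyTorch. The module names (`ChannelAttention`, `ResidualDilatedBlock`), channel counts, and dilation rates are assumptions for illustration; the actual ARD-Net/EAM configuration may differ.

```python
# Illustrative sketch only: a dilated residual block gated by squeeze-and-excitation-style
# channel attention; not the exact ARD-Net/EAM design.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Reweights feature channels using a global-average-pooled gating vector."""
    def __init__(self, ch, reduction=4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(ch, ch // reduction), nn.ReLU(),
            nn.Linear(ch // reduction, ch), nn.Sigmoid(),
        )

    def forward(self, x):
        w = self.gate(x).view(x.size(0), -1, 1, 1)
        return x * w

class ResidualDilatedBlock(nn.Module):
    """Two dilated convolutions plus an attention gate, wrapped in a residual connection."""
    def __init__(self, ch=32, dilation=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=dilation, dilation=dilation), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=dilation, dilation=dilation),
            ChannelAttention(ch),
        )

    def forward(self, x):
        return x + self.body(x)

feats = torch.randn(1, 32, 128, 128)           # feature maps from an LC-PET slice
print(ResidualDilatedBlock()(feats).shape)     # torch.Size([1, 32, 128, 128])
```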
The pipeline was also extended into a self-supervised learning framework based on the Noise2Noise (N2N) principle, in which SC-PET images are generated from Low-Count PET images alone, without the corresponding ground truth (actual SC-PET). To realize the N2N principle we implemented the N2N Multi-Block Residual Network (N2N-MBRNet), which consists of multiple residual units. [CODE]
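The core of Noise2Noise training is that the network regresses one noisy (low-count) realization of a scan onto another independent realization of the same scan, so no standard-count target is required. The sketch below is a minimal training step under that assumption; `n2n_training_step` and the stand-in convolutional model are hypothetical, not the N2N-MBRNet implementation.

```python
# Minimal Noise2Noise-style training step: input and target are two independent
# low-count realizations of the same acquisition; no standard-count ground truth is used.
import torch

def n2n_training_step(model, optimizer, lc_realization_a, lc_realization_b):
    """One optimization step mapping one LC-PET realization onto another."""
    optimizer.zero_grad()
    pred = model(lc_realization_a)                                # denoised estimate
    loss = torch.nn.functional.mse_loss(pred, lc_realization_b)   # noisy target
    loss.backward()
    optimizer.step()
    return loss.item()

model = torch.nn.Conv2d(1, 1, 3, padding=1)      # stand-in for N2N-MBRNet
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
a = torch.randn(2, 1, 128, 128)                  # dummy LC-PET realization A
b = torch.randn(2, 1, 128, 128)                  # dummy LC-PET realization B
print(n2n_training_step(model, opt, a, b))
```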

Deep-Learning based Tumor Segmentation in Multiparametric MR

Developed an end-to-end pipeline for automatic tumor segmentation and quantitative analysis of preclinical multiparametric Triple Negative Breast Cancer (TNBC) PDX MR images. The pipeline is built around a novel deep-learning architecture, the Dense Recurrent Residual U-Net (DR2U-Net), for automatic tumor segmentation. We further extracted radiomics features from the segmented maps to validate the robustness of the segmentation boundaries and establish the reproducibility of the framework. The algorithm is currently deployed on the PIXI platform for testing on multi-institutional datasets, to establish the generalizability of the model and provide high-throughput, reproducible radiomics analysis. [CODE] [PAPER]
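To give a flavor of the segmentation backbone, the sketch below shows a recurrent residual convolutional unit of the kind used in R2U-Net-style networks. The class names, channel sizes, and recurrence depth are illustrative assumptions, and the dense connectivity of the actual DR2U-Net is not reproduced here.

```python
# Illustrative sketch only: a recurrent residual convolutional unit (R2U-Net style);
# the actual DR2U-Net block and its dense wiring may differ.
import torch
import torch.nn as nn

class RecurrentConv(nn.Module):
    """Applies the same convolution t times, feeding the output back into the input."""
    def __init__(self, ch, t=2):
        super().__init__()
        self.t = t
        self.conv = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1),
                                  nn.BatchNorm2d(ch), nn.ReLU())

    def forward(self, x):
        out = self.conv(x)
        for _ in range(self.t - 1):
            out = self.conv(x + out)
        return out

class RecurrentResidualBlock(nn.Module):
    """Two recurrent conv layers wrapped in a residual (skip) connection."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, out_ch, 1)   # 1x1 conv to match channels for the skip
        self.body = nn.Sequential(RecurrentConv(out_ch), RecurrentConv(out_ch))

    def forward(self, x):
        x = self.proj(x)
        return x + self.body(x)

mp_mri = torch.randn(1, 4, 96, 96)                  # e.g. 4 multiparametric MR channels
print(RecurrentResidualBlock(4, 32)(mp_mri).shape)  # torch.Size([1, 32, 96, 96])
```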