Journal Paper

COPYRIGHT NOTICE: These materials are presented to ensure the timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author’s copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder. (*Corresponding Author; #Co-First Author)

2024

[MedIA] Lintao Zhang, Lihong Wang, Minhui Yu, Rong Wu, David C. Steffens, Guy G. Potter, Mingxia Liu*. Hybrid Representation Learning for Cognitive Diagnosis in Late-Life Depression Over 5 Years with Structural MRI. Medical Image Analysis, 94: 103135, 2024. [Code]

We describe the development of a hybrid representation learning (HRL) framework for predicting cognitive diagnosis over 5 years based on T1-weighted MRI data. Specifically, we first extract prediction-oriented MRI features via a deep neural network, and then integrate them with handcrafted MRI features via a Transformer encoder for cognitive diagnosis prediction. Two tasks are investigated in this work: (1) distinguishing cognitively normal subjects with late-life depression (LLD) from never-depressed older healthy subjects, and (2) distinguishing LLD subjects who developed cognitive impairment (CI), or even Alzheimer's disease (AD), from those who stayed cognitively normal over five years. To the best of our knowledge, this is among the first attempts to study the complex heterogeneous progression of LLD based on task-oriented and handcrafted MRI features. We validate the proposed HRL on 294 subjects with T1-weighted MRIs from two clinically harmonized studies. Experimental results suggest that the HRL outperforms several classical machine learning and state-of-the-art deep learning methods in LLD identification and prediction tasks.
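
As a rough illustration of the fusion step, the following PyTorch sketch (not the authors' released code; the feature dimensions, encoder depth, and class count are illustrative assumptions) treats the deep and handcrafted feature vectors as two tokens, passes them through a Transformer encoder, and pools them for classification.

```python
import torch
import torch.nn as nn

class HybridFusionSketch(nn.Module):
    """Toy fusion of deep and handcrafted MRI features via a Transformer encoder."""
    def __init__(self, deep_dim=128, hand_dim=64, d_model=64, num_classes=2):
        super().__init__()
        # Project both feature types into a common token space.
        self.deep_proj = nn.Linear(deep_dim, d_model)
        self.hand_proj = nn.Linear(hand_dim, d_model)
        encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                                   batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.classifier = nn.Linear(d_model, num_classes)

    def forward(self, deep_feat, hand_feat):
        # Treat each feature vector as one token: (batch, 2, d_model).
        tokens = torch.stack([self.deep_proj(deep_feat),
                              self.hand_proj(hand_feat)], dim=1)
        fused = self.encoder(tokens).mean(dim=1)   # pool over the two tokens
        return self.classifier(fused)

model = HybridFusionSketch()
logits = model(torch.randn(8, 128), torch.randn(8, 64))  # dummy batch of 8 subjects
print(logits.shape)  # torch.Size([8, 2])
```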

[IEEE TBME] Qianqian Wang, Wei Wang, Yuqi Fang, Pew-Thian Yap, Hongtu Zhu, Hong-Jun Li, Lishan Qiao, Mingxia Liu*. Leveraging Brain Modularity Prior for Interpretable Representation Learning of fMRI. IEEE Transactions on Biomedical Engineering, 2024. [Code]

We propose a Brain Modularity-constrained dynamic Representation learning (BMR) framework for interpretable fMRI analysis, consisting of three major components: dynamic graph construction, dynamic graph learning via a novel modularity-constrained graph neural network (MGNN), and prediction and biomarker detection for interpretable fMRI analysis. In particular, three core neurocognitive modules (i.e., the salience network, central executive network, and default mode network) are explicitly incorporated into the MGNN, encouraging the nodes/ROIs within the same module to share similar representations. To further enhance the discriminative ability of learned features, we also encourage the MGNN to preserve the network topology of input graphs via a graph topology reconstruction constraint. Experimental results on 534 subjects with rs-fMRI scans from two datasets validate the effectiveness of the proposed method. The identified discriminative brain ROIs and functional connectivities can be regarded as potential fMRI biomarkers to aid in clinical diagnosis.
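
A minimal sketch of the two constraints described above, assuming hypothetical node embeddings produced by the MGNN and a fixed module assignment per ROI (the loss weights and module partition here are illustrative, not the paper's settings):

```python
import torch
import torch.nn.functional as F

def modularity_constraint(node_emb, module_ids):
    """Encourage ROIs in the same module to share similar embeddings.
    node_emb: (num_rois, dim); module_ids: (num_rois,) integer module labels."""
    loss = 0.0
    for m in module_ids.unique():
        emb = node_emb[module_ids == m]
        loss = loss + ((emb - emb.mean(dim=0)) ** 2).sum(dim=1).mean()
    return loss / module_ids.unique().numel()

def topology_reconstruction(node_emb, adjacency):
    """Reconstruct the input graph from node embeddings via inner products."""
    recon = torch.sigmoid(node_emb @ node_emb.t())
    return F.mse_loss(recon, adjacency)

# Toy example: 90 ROIs, 4 hypothetical modules (e.g., SN / CEN / DMN plus "other").
emb = torch.randn(90, 32, requires_grad=True)
modules = torch.randint(0, 4, (90,))
adj = torch.rand(90, 90)
loss = modularity_constraint(emb, modules) + 0.1 * topology_reconstruction(emb, adj)
loss.backward()
```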

[Neural Networks] Yuqi Fang, Pew-Thian Yap, Weili Lin, Hongtu Zhu, Mingxia Liu*. Source-Free Unsupervised Domain Adaptation: A Survey. Neural Networks. 2024.

We provide a systematic literature review of existing source-free unsupervised domain adaptation (SFUDA) approaches from a technical perspective. Specifically, we categorize current SFUDA studies into two groups, white-box SFUDA and black-box SFUDA, and further divide them into finer subcategories based on different learning strategies. We also investigate the challenges of methods in each subcategory, discuss the advantages/disadvantages of white-box and black-box SFUDA methods, review the commonly used benchmark datasets, and summarize the popular techniques for improving the generalizability of models learned without using source data. We finally discuss several promising future directions in this field.

[PR] Hao Guan, Pew-Thian Yap, Andrea Bozoki, Mingxia Liu*. Federated Learning for Medical Image Analysis: A Survey. Pattern Recognition. 2024.

We conduct a comprehensive survey of recent developments in federated learning for medical image analysis. We first introduce the background of federated learning and then review recent advances in federated learning methods for medical image analysis. Specifically, existing methods are categorized based on three aspects of a federated learning system: the client end, the server end, and communication techniques. We also review benchmark medical image datasets and software platforms for current federated learning research, and conduct an experimental study to empirically evaluate typical federated learning methods for medical image analysis.

[Neural Networks] Linmin Wang, Qianqian Wang, Xiaochuan Wang, Yunling Ma, Limei Zhang, Mingxia Liu*. Triplet-Constrained Deep Hashing for Chest X-Ray Image Retrieval in COVID-19 Assessment. Neural Networks, 173: 106182, 2024. [Code]

A new triplet-constrained deep hashing (TCDH) framework was designed for chest radiograph retrieval to facilitate automated analysis of COVID-19, comprising (a) feature extraction and (b) image retrieval. For feature extraction, we introduce a triplet constraint and an image reconstruction task to enhance the discriminative ability of learned features, and these features are then converted into binary hash codes to capture semantic information. Specifically, the triplet constraint is designed to pull together samples within the same category and push apart samples from different categories. Additionally, an auxiliary image reconstruction task is employed during feature extraction to help effectively capture anatomical structures of images. For image retrieval, we use the learned hash codes to search for medical images. Extensive experiments on 30,386 chest X-ray images demonstrate the effectiveness of the proposed method in automated image search.
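
The triplet constraint and the hashing step can be sketched as follows (a toy PyTorch example; the encoder architecture, image size, and margin are illustrative assumptions rather than the TCDH configuration):

```python
import torch
import torch.nn as nn

# Hypothetical encoder producing continuous codes; tanh relaxes binary hashing.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(224 * 224, 256),
                        nn.ReLU(), nn.Linear(256, 64), nn.Tanh())

triplet_loss = nn.TripletMarginLoss(margin=1.0)

anchor = torch.randn(4, 1, 224, 224)    # chest X-ray images (same class as positive)
positive = torch.randn(4, 1, 224, 224)
negative = torch.randn(4, 1, 224, 224)

# Pull anchor/positive codes together, push anchor/negative codes apart.
loss = triplet_loss(encoder(anchor), encoder(positive), encoder(negative))
loss.backward()

# At retrieval time, binarize the relaxed codes and rank by Hamming distance.
with torch.no_grad():
    hash_codes = torch.sign(encoder(anchor))          # binarized codes
    query, database = hash_codes[0], hash_codes
    hamming = (query != database).sum(dim=1)          # Hamming distance ranking
    print(hamming.argsort()[:3])                      # top-3 nearest images
```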

[Neural Networks] Junhao Zhang, Qianqian Wang, Xiaochuan Wang, Lishan Qiao, Mingxia Liu*. Preserving Specificity in Federated Graph Learning for fMRI-based Neurological Disorder Identification. Neural Networks, 169: 584-596, 2024. [Code]

A specificity-aware federated graph learning (SFGL) framework is designed for rs-fMRI analysis and automated brain disorder identification, with a server and multiple clients/sites for federated model aggregation and prediction. At each client, our model comprises a shared branch and a personalized branch, where parameters of the shared branch are sent to the server while those of the personalized branch remain at each local site. This facilitates knowledge sharing among sites while preserving site specificity. In the shared branch, we employ a spatiotemporal attention graph isomorphism network and a Transformer to learn dynamic representations from fMRI. In the personalized branch, we integrate vectorized representations of demographic information and a functional connectivity network to preserve site-specific characteristics. We then fuse the fMRI representations learned from the two branches for prediction. Experimental results on two fMRI datasets with a total of 1,218 subjects suggest the effectiveness of SFGL.
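
A minimal sketch of this aggregation rule, assuming each client model names its shared-branch parameters with a hypothetical "shared." prefix (FedAvg-style averaging restricted to those keys; personalized-branch parameters never leave the site):

```python
import copy
import torch

def aggregate_shared(client_states, shared_prefix="shared."):
    """Average only shared-branch parameters across sites.
    client_states: list of state_dicts from local clients; any key that does not
    start with `shared_prefix` (the personalized branch) is kept local."""
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        if key.startswith(shared_prefix):
            avg[key] = torch.stack([s[key].float() for s in client_states]).mean(dim=0)
    return {k: v for k, v in avg.items() if k.startswith(shared_prefix)}

# Each site would then reload only the averaged shared weights, e.g.:
# model.load_state_dict(aggregate_shared(states_from_all_sites), strict=False)
```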

2023

[NatureCommunications] Yue Sun, Limei Wang, Kun Gao, Shihui Ying, Weili Lin, Kathryn L. Humphreys, Gang Li, Sijie Niu, Mingxia Liu*, Li Wang*. Self-supervised Learning with Application for Infant Cerebellum Segmentation and Analysis. Nature Communications, 14: 4717, 2023.

Accurate tissue segmentation is critical to characterize early cerebellar development in the first two postnatal years. However, challenges in tissue segmentation arising from tightly-folded cortex, low and dynamic tissue contrast, and large inter-site data heterogeneity have limited our understanding of early cerebellar development. This paper proposes an accurate self-supervised learning framework for infant cerebellum segmentation. Our results on 360+ subjects suggest the first six months exhibit the most rapid and dynamic changes, with gray matter (GM) playing a dominant role in cerebellar growth over white matter (WM). We also find both GM and WM volumes are larger in males than females, and GM and WM volumes are larger in autistic males than neurotypical males. Application of our method to a larger population will fuel more cerebellar studies, ultimately advancing our comprehension of its structure and function in neurotypical and disordered development.

[NeuroImage] Hao Guan, Mingxia Liu*. DomainATM: Domain Adaptation Toolbox for Medical Data Analysis. NeuroImage, 268: 119863, 2023.

We develop a Domain Adaptation Toolbox for Medical data analysis (DomainATM), an open-source software package designed to facilitate fast application and easy customization of domain adaptation methods for medical data analysis. DomainATM is implemented in MATLAB with a user-friendly graphical interface, and it consists of a collection of popular data adaptation algorithms that have been extensively applied to medical image analysis and computer vision. The toolbox supports fast feature-level and image-level adaptation, visualization, and performance evaluation of different adaptation methods for medical data analysis. It also enables users to develop and test their own adaptation methods through scripting, greatly enhancing its utility and extensibility. The characteristics and usage of DomainATM are presented and illustrated with three example experiments, demonstrating its effectiveness, simplicity, and flexibility.

[MedIA] Yuqi Fang, Mingliang Wang, Guy G. Potter, Mingxia Liu*. Unsupervised Cross-Domain Functional MRI Adaptation for Automated Major Depressive Disorder Identification. Medical Image Analysis, 84: 102707, 2023. [Code][API]

A novel discrepancy-based unsupervised cross-domain fMRI adaptation framework (called UFA-Net) is designed for automated brain disorder identification. UFA-Net models spatiotemporal fMRI patterns of labeled source and unlabeled target samples via an attention-guided graph convolution module, and leverages a maximum mean discrepancy constrained module for unsupervised cross-site feature alignment between the two domains. Extensive evaluation on two imaging sites shows that the proposed method outperforms several state-of-the-art methods.
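
The cross-site alignment term can be sketched as a standard maximum mean discrepancy (MMD) loss with an RBF kernel (a generic formulation; the kernel choice and bandwidth here are illustrative, not necessarily those used in UFA-Net):

```python
import torch

def mmd_rbf(source, target, sigma=1.0):
    """Squared maximum mean discrepancy with an RBF kernel.
    source, target: (n, d) and (m, d) feature matrices from the two sites."""
    def kernel(a, b):
        dist = torch.cdist(a, b) ** 2
        return torch.exp(-dist / (2 * sigma ** 2))
    return kernel(source, source).mean() + kernel(target, target).mean() \
        - 2 * kernel(source, target).mean()

src = torch.randn(32, 64)   # labeled source-site fMRI features (illustrative)
tgt = torch.randn(40, 64)   # unlabeled target-site features
alignment_loss = mmd_rbf(src, tgt)   # added to the supervised loss during training
```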

[HBM] Yuqi Fang, Guy Potter, Di Wu, Hongtu Zhu, Mingxia Liu*. Addressing Multi-Site Functional MRI Heterogeneity through Dual-Expert Collaborative Learning for Brain Disease Identification. Human Brain Mapping, 44(11): 4256-4271, 2023. 

A dual-expert fMRI harmonization (DFH) framework is designed for automated disease diagnosis, by simultaneously exploiting a single labeled source domain/site and two unlabeled target domains for mitigating cross-site data distribution differences. The DFH consists of a domain-generic student model and two domain-specific teachers that are jointly trained to perform knowledge distillation through a deep collaborative learning module. A student model with strong generalizability is finally derived, which can be well adapted to unseen target domains and analysis of other brain diseases. Comprehensive experiments on 836 subjects with rs-fMRI data from 3 different sites show the superiority of our method. The discriminative brain functional connectivities identified by our method could be regarded as potential biomarkers for fMRI-based disease diagnosis.

[IEEE JBHI] Hao Guan, Ling Yue, Pew-Thian Yap, Shifu Xiao, Andrea Bozoki, Mingxia Liu*. Attention-Guided Autoencoder for Automated Progression Prediction of Subjective Cognitive Decline with Structural MRI. IEEE Journal of Biomedical and Health Informatics, 27(6): 2980-2989, 2023.

We propose an attention-guided autoencoder model for efficient cross-domain adaptation, which facilitates knowledge transfer from AD to subjective cognitive decline (SCD). The proposed model consists of 1) a feature encoder to learn shared subspace representations of different domains, 2) an attention component for automatically finding disease-related brain regions of interest, 3) a decoding module for reconstructing the original input, and 4) a classification module for disease identification. The model is straightforward to train, requires only 5-10 seconds for training and testing on CPUs, and is suitable for medical tasks with small datasets.

[IEEE TNSRE] Yunling Ma, Qianqian Wang, Liang Cao, Long Li, Chaojun Zhang, Lishan Qiao, Mingxia Liu*. Multi-Scale Dynamic Graph Learning for Brain Disorder Detection with Functional MRI. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 31: 3501-3512, 2023.

A multi-scale dynamic graph learning (MDGL) framework is developed to capture multi-scale spatiotemporal dynamic representations of rs-fMRI data for automated brain disorder diagnosis. The MDGL consists of three components: (1) multi-scale dynamic functional connectivity network (FCN) construction using multiple brain atlases to model multi-scale topological information, (2) multi-scale dynamic graph representation learning to capture spatiotemporal information conveyed in fMRI data, and (3) multi-scale feature fusion and classification. Experimental results on two datasets show that MDGL outperforms several state-of-the-art methods.

[SciRep] Jinjian Wu, Yuqi Fang, Xin Tan, Shangyu Kang, Xiaomei Yue, Yawen Rao, Haoming Huang, Mingxia Liu, Shijun Qiu, Pew-Thian Yap. Detecting Type 2 Diabetes Mellitus Cognitive Impairment using Whole-Brain Functional Connectivity. Scientific Reports, 13(1): 3940, 2023.

Type 2 diabetes mellitus (T2DM) is closely linked to cognitive decline and alterations in brain structure and function. Resting-state functional magnetic resonance imaging (rs-fMRI) is used to diagnose neurodegenerative diseases, such as cognitive impairment (CI), Alzheimer's disease (AD), and vascular dementia (VaD). However, whether the functional connectivity (FC) of patients with T2DM and mild cognitive impairment (T2DM-MCI) is conducive to early diagnosis remains unclear. To answer this question, we analyzed the rs-fMRI data of 37 patients with T2DM-MCI, 93 patients with T2DM but no cognitive impairment (T2DM-NCI), and 69 normal controls (NC). We achieved an accuracy of 87.91% in T2DM-MCI versus T2DM-NCI classification and 80% in T2DM-NCI versus NC classification using the XGBoost model. The thalamus, angular gyrus, caudate nucleus, and paracentral lobule contributed most to the classification outcome. Our findings provide valuable knowledge to classify and predict T2DM-related CI, can help with early clinical diagnosis of T2DM-MCI, and provide a basis for future studies.
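
A minimal sketch of this kind of analysis with the XGBoost scikit-learn interface, using synthetic stand-in features (the atlas size, hyperparameters, and cross-validation setup are illustrative assumptions, not the study's exact pipeline):

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Illustrative stand-in: upper-triangle FC features for 130 subjects, 116 ROIs.
n_rois = 116
n_feat = n_rois * (n_rois - 1) // 2
X = rng.normal(size=(130, n_feat))
y = rng.integers(0, 2, size=130)          # e.g., T2DM-MCI vs. T2DM-NCI labels

clf = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1,
                    eval_metric="logloss")
# Random toy labels, so accuracy here hovers around chance; real FC features
# would be extracted from preprocessed rs-fMRI before cross-validation.
print(cross_val_score(clf, X, y, cv=5).mean())
```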

[HBM] Xiaochuan Wang, Ying Chu, Qianqian Wang, Liang Cao, Lishan Qiao, Limei Zhang, Mingxia Liu*. Unsupervised Contrastive Graph Learning for Resting-State Functional MRI Analysis and Brain Disorder Detection. Human Brain Mapping, 44(17): 5672-5692, 2023.

An unsupervised contrastive graph learning (UCGL) framework is designed for fMRI-based brain disease analysis, in which a pretext model is designed to generate informative fMRI representations using unlabeled training data, followed by model fine-tuning to perform downstream disease identification tasks. Specifically, in the pretext model, we first design a bi-level fMRI augmentation strategy to increase the sample size by augmenting blood-oxygen-level-dependent (BOLD) signals, and then employ two parallel graph convolutional networks for fMRI feature extraction in an unsupervised contrastive learning manner. This pretext model can be optimized on large-scale fMRI datasets, without requiring labeled training data. This model is further fine-tuned on to-be-analyzed fMRI data for downstream disease detection in a task-oriented learning manner. Experimental results on three fMRI datasets suggest that the UCGL outperforms several state-of-the-art approaches in automated diagnosis of three brain diseases.
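
The unsupervised contrastive objective can be sketched with a standard NT-Xent loss over two augmented views of each subject (a generic formulation; UCGL's exact augmentation and loss details may differ):

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """Contrastive loss between two augmented views of the same subjects.
    z1, z2: (batch, dim) embeddings from the two parallel encoders."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)
    sim = z @ z.t() / temperature
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool)
    sim = sim.masked_fill(mask, float("-inf"))        # drop self-similarity
    # Each view's positive is the other view of the same subject.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Toy example: embeddings of two BOLD-augmented views of 16 subjects.
loss = nt_xent(torch.randn(16, 64), torch.randn(16, 64))
```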

2022

[IEEE TPAMI] Yongsheng Pan, Mingxia Liu*, Yong Xia, Dinggang Shen. Disease-image-specific Learning for Diagnosis-oriented Neuroimage Synthesis with Incomplete Multi-Modality Data. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(10): 6839-6853, 2022.

A disease-image-specific deep learning (DSDL) framework is designed for joint neuroimage synthesis and disease diagnosis using incomplete multi-modality neuroimages. We first design a Disease-image-Specific Network (DSNet) with a spatial cosine module to implicitly model the disease-image specificity. We then develop a Feature-consistency Generative Adversarial Network (FGAN) to impute missing neuroimages, where feature maps of a synthetic image and its respective real image are encouraged to be consistent while preserving disease-image-specific information. 

[IEEE TBME] Hao Guan, Mingxia Liu*. Domain Adaptation for Medical Image Analysis: A Survey. IEEE Transactions on Biomedical Engineering, 69(3): 1173-1185, 2022.

This paper reviews the recent advances of domain adaptation methods in medical image analysis. We first present the motivation for introducing domain adaptation techniques to tackle domain heterogeneity issues. Then, we provide a review of recent domain adaptation models in various medical image analysis tasks. We also provide a brief summary of the benchmark medical image datasets that support current domain adaptation research. This survey will enable researchers to gain a better understanding of the current status, challenges, and future directions of this energetic research field.

[MedIA] Yunbi Liu, Ling Yue, Shifu Xiao, Wei Yang, Dinggang Shen, Mingxia Liu*. Assessing Clinical Progression from Subjective Cognitive Decline to Mild Cognitive Impairment with Incomplete Multi-modal Neuroimages. Medical Image Analysis, 75: 102266, 2022.

A Joint neuroimage Synthesis and Representation Learning (JSRL) framework is designed for conversion prediction of subjective cognitive decline (SCD) using incomplete multi-modal neuroimages. The JSRL contains two components: (1) a generative adversarial network to synthesize missing images and generate multi-modal features, and (2) a classification network to fuse multi-modal features for SCD conversion prediction. The two components are incorporated into a joint learning framework by sharing the same features, encouraging the effective fusion of multi-modal features for accurate prediction. A transfer learning strategy is also designed by transferring the model trained on the large-scale ADNI database (with MRI and PET acquired from 863 subjects) to a small-scale database (with MRI from 76 subjects).

[MedIA] Nan Wang, Dongren Yao, Lizhuang Ma, Mingxia Liu*. Multi-Site Clustering and Nested Feature Extraction for Identifying Autism Spectrum Disorder with Resting-State fMRI. Medical Image Analysis, 75: 102279, 2022.

A Multi-site Clustering and Nested Feature Extraction (MC-NFE) method is developed for fMRI-based autism spectrum disorder (ASD) detection. We first divide multi-site training data into ASD and healthy control (HC) groups. To model inter-site heterogeneity within each category, we use a similarity-driven multiview linear reconstruction model to learn latent representations and perform subject clustering within each group. We then design a nested singular value decomposition (SVD) method to mitigate inter-site heterogeneity and extract FC features by learning both local cluster-shared features across sites within each category and global category-shared features across the ASD and HC groups, followed by a linear support vector machine (SVM) for ASD detection. Experimental results on 21 imaging sites suggest the effectiveness of MC-NFE in ASD detection.

[MedIA] Mingliang Wang, Daoqiang Zhang, Jiashuang Huang, Mingxia Liu*, Qingshan Liu. Consistent Connectome Landscape Mining for Cross-Site Brain Disease Identification using Functional MRI. Medical Image Analysis, 82: 102591, 2022. 

A Connectome Landscape Modeling (CLM) method is designed to mine cross-site consistent connectome landscapes and extract data-driven representations of functional connectivity networks for brain disorder identification. With functional connectivity networks as input, the proposed CLM model aims to learn a weight matrix for joint cross-site consistent connectome landscape learning, network feature extraction, and disease identification. We impose a row-column overlap norm penalty on the network-based predictor to capture consistent connectome landscapes across multiple sites. To capture site-specific patterns, we introduce an L1-norm penalty in CLM. We develop an efficient algorithm based on the Alternating Direction Method of Multipliers (ADMM) to solve the proposed objective function. Experimental results on three real-world autism spectrum disorder datasets demonstrate the potential use of CLM in cross-site brain disorder analysis.

[IEEE JBHI] Zhengdong Wang, Biao Jie, Chunxiang Feng, Taochun Wang, Weixin Bian, Xintao Ding, Wen Zhou,  Mingxia Liu*. Distribution-guided Network Thresholding for Functional Connectivity Analysis in fMRI-based Brain Disorder Identification. IEEE Journal on Biomedical and Health Informatics, 26(4): 1602-1613, 2022.

A distribution-guided network thresholding (DNT) method is proposed for FC network analysis in brain disorder identification with rs-fMRI. Specifically, for each connection between a pair of brain regions, we propose to determine its specific threshold based on the distribution of connection strength (i.e., temporal correlation) between subject groups. The DNT can adaptively yield an FC-specific threshold for each connection in the FC network, and thus preserves the diversity of temporal correlation among different brain regions. Experimental results on 365 subjects suggest that DNT outperforms state-of-the-art methods in brain disorder identification with rs-fMRI data.
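
One way to picture a connection-specific threshold is the NumPy sketch below, which derives each connection's threshold from the group-wise distribution of its strength via a percentile rule (an illustrative simplification, not the exact DNT criterion):

```python
import numpy as np

def connection_specific_threshold(fc_stack, percentile=50):
    """Per-connection thresholding of FC networks.
    fc_stack: (n_subjects, n_rois, n_rois) correlation matrices.
    Each connection gets its own threshold from the distribution of its
    strength across subjects (an illustrative percentile rule)."""
    thresh = np.percentile(fc_stack, percentile, axis=0)   # (n_rois, n_rois)
    return np.where(fc_stack >= thresh, fc_stack, 0.0)

fc = np.random.rand(365, 90, 90)          # toy stack of subject FC matrices
sparse_fc = connection_specific_threshold(fc)
```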

[IEEE TCYB] Chunfeng Lian#, Mingxia Liu#, Yongsheng Pan, Dinggang Shen. Attention-Guided Hybrid Network for Dementia Diagnosis with Structural MR Images. IEEE Transactions on Cybernetics, 52(4): 1992-2003, 2022.

An attention-guided deep learning framework is proposed to extract multi-level discriminative MRI features for dementia diagnosis. Specifically, we first design a backbone fully convolutional network to localize the discriminative regions in MRIs. Using the identified disease-related regions as spatial attention guidance, we further develop a hybrid network to jointly learn and fuse multi-level (i.e., subject-specific & inter-subject-consistent, and local & global) sMRI features for constructing an automated disease diagnosis model.

[IEEE TCYB] Yu Zhang, Han Zhang, Ehsan Adeli, Xiaobo Chen, Mingxia Liu, Dinggang Shen. Multi-view Feature Learning with Multi-atlas based Functional Connectivity Networks for MCI Diagnosis. IEEE Transactions on Cybernetics, 52(7): 6822-6833, 2022.

We propose a multiview feature learning method with multi-atlas-based functional connectivity (FC) networks to improve MCI diagnosis. A three-step transformation is implemented to generate multiple individually specified atlases from the standard automated anatomical labeling template, from which a set of atlas exemplars is selected. Multiple FC networks are constructed based on these preselected atlas exemplars, providing multiple views of the FC network-based features for each subject. We then devise a multitask learning algorithm for joint feature selection from multiple FC networks, followed by an SVM classifier. Experimental results indicate that our method significantly improves MCI classification.

[IEEE TNNLS] Chunfeng Lian, Mingxia Liu*, Li Wang, Dinggang Shen. Multi-Task Weakly-Supervised Attention Network for Dementia Status Estimation with Structural MRI. IEEE Transactions on Neural Networks and Learning Systems, 33(8): 4056-4068, 2022.

A multi-task neural network is developed for joint regression of multiple clinical scores from baseline MRI scans. Three components are included: 1) a fully convolutional network for extracting MRI features, 2) a weakly-supervised block for locating informative brain locations, and 3) a multi-task regression module for jointly predicting multiple clinical scores. Experimental results demonstrate that our method produces superior performance compared with state-of-the-art methods.

[MBEC] Hui Su, Limei Zhang, Lishan Qiao, Mingxia Liu*. Estimating High-Order Brain Functional Networks by Correlation-Preserving Embedding. Medical & Biological Engineering & Computing, 60(10): 2813-2823, 2022. (Highlight Paper)

A correlation-preserving embedding (COPE) method is developed to re-code the low-order functional network (FN) prior to constructing brain functional connectivity networks. Specifically, we first use sparse representation (SR) to construct the traditional FN. Then, we embed the FN via COPE to generate a new node representation that removes potentially redundant/noisy information in the original node feature vectors while maintaining the low-order relationships. Finally, the high-order FN is estimated by SR based on the new node representation. To verify the effectiveness of the proposed scheme, we conduct experiments to identify subjects with brain disorders from normal controls. Experimental results show that the proposed scheme achieves better performance than the baseline method.

[PLOS ONE] Samuel Walton, Jacob R. Powell, Benjamin L. Brett, Weiyan Yin, Zachary Yukio Kerr, Mingxia Liu, Michael A. McCrea, Kevin M. Guskiewicz, Kelly S. Giovanello. Associations of Lifetime Concussion History and Repetitive Head Impact Exposure with Resting-State Functional Connectivity in Former Collegiate American Football Players: An NCAA 15-Year Follow-Up Study. PLOS ONE, 19(7): e0273918, 2022.

This work aimed to examine associations of lifetime concussion history (CHx) and an advanced metric of lifetime repetitive head impact exposure with resting-state functional connectivity (rsFC) across the whole brain and among large-scale functional networks (Default Mode, Dorsal Attention, and Frontoparietal Control) in former collegiate football players. Individuals who completed one or more years of varsity collegiate football were eligible to participate in this study. Neither CHx nor the Head Impact Exposure Estimate was associated with neural signatures that have been observed in studies of typical and pathological aging. While CHx and repetitive head impacts have been associated with changes in brain health in older former athletes, our preliminary results suggest that associations with rsFC may not be present in early-midlife former football players.

[arXiv] Lintao Zhang, Lihong Wang, Minhui Yu, Rong Wu, David C. Steffens, Guy G. Potter, Mingxia Liu*. Hybrid Representation Learning for Cognitive Diagnosis in Late-Life Depression Over 5 Years with Structural MRI. arXiv:2212.12810, 2022.

A hybrid representation learning (HRL) framework is developed for predicting cognitive diagnosis over 5 years based on T1-weighted MRI data. Specifically, we first extract prediction-oriented MRI features via a deep neural network, and then integrate them with handcrafted MRI features via a Transformer encoder for cognitive diagnosis prediction. Two tasks are investigated in this work, including (1) identifying cognitively normal subjects with LLD and never-depressed older healthy subjects, and (2) identifying LLD subjects who developed cognitive impairment (or even Alzheimer’s disease) and those who stayed cognitively normal over five years. To the best of our knowledge, this is among the first attempts to study the complex heterogeneous progression of LLD based on task-oriented and handcrafted MRI features. We validate the proposed HRL on 294 subjects with T1-weighted MRIs from two clinically harmonized studies. Experimental results suggest that the HRL outperforms several classical machine learning and state-of-the-art methods in LLD identification and prediction tasks.

[arXiv] Yuqi Fang, Pew-Thian Yap, Weili Lin, Hongtu Zhu, Mingxia Liu*. Source-Free Unsupervised Domain Adaptation: A Survey. arXiv:2301.00265, 2022.

We provide a systematic literature review of existing source-free unsupervised domain adaptation (SFUDA) approaches from a technical perspective. Specifically, we categorize current SFUDA studies into two groups, i.e., white-box SFUDA and black-box SFUDA, and further divide them into finer subcategories based on the different learning strategies they use. We also investigate the challenges of methods in each subcategory, discuss the advantages/disadvantages of white-box and black-box SFUDA methods, review the commonly used benchmark datasets, and summarize the popular techniques for improving the generalizability of models learned without using source data. We finally discuss several promising future directions.

2021

[MedIA] Hao Guan, Yunbi Liu, Erkun Yang, Pew-Thian Yap, Dinggang Shen, Mingxia Liu*. Multi-Site MRI Harmonization via Attention-Guided Deep Domain Adaptation for Brain Disorder Identification. Medical Image Analysis, 71: 102076, 2021.

An attention-guided deep domain adaptation framework was designed for multi-site MRI harmonization and applied to automated brain disorder identification with multi-site MRIs. This method does not need any category label information of target data, and can automatically identify discriminative regions in whole-brain MR images. It contains an MRI feature encoding module to extract feature representations of MRIs, an attention discovery module to automatically locate discriminative dementia-related regions in each whole-brain MRI scan, and a domain transfer module trained with adversarial learning for knowledge transfer between source and target domains. Experiments were performed on 2,572 subjects with T1-weighted MRIs.

[IEEE TMI] Dongren Yao, Jing Sui, Mingliang Wang, Erkun Yang, Yeerfan Jiaerken, Na Luo, Pew-Thian Yap, Mingxia Liu*, Dinggang Shen. A Mutual Multi-Scale Triplet Graph Convolutional Network for Classification of Brain Disorders using Functional or Structural Connectivity. IEEE Transactions on Medical Imaging, 40(4): 1279-1289, 2021.

A mutual multi-scale triplet graph convolutional network (GCN) was designed to analyze functional and structural connectivity for brain disorder diagnosis. We first employ several templates to construct coarse-to-fine brain connectivity networks for each subject. A triplet GCN (TGCN) is developed to learn functional/structural representations of brain connectivity networks at each scale. A template mutual learning strategy is used to train the TGCNs at different scales collaboratively for disease classification. Experimental results on 1,160 subjects with fMRI or dMRI suggest that our method outperforms state-of-the-art methods in identifying three types of brain disorders.

[IEEE TMI] Erkun Yang, Mingxia Liu*, Dongren Yao, Bing Cao, Chunfeng Lian, Pew-Thian Yap, Dinggang Shen. Deep Bayesian Hashing with Center Prior for Multi-modal Neuroimage Retrieval. IEEE Transactions on Medical Imaging, 40(2): 503-513, 2021.

A deep Bayesian hash learning framework is developed to map MRI and PET into a shared Hamming space and learn discriminative hash codes from imbalanced neuroimages. The key idea to tackle the small inter-class variation and large inter-modal discrepancy is to learn a common center representation for similar neuroimages from different modalities and encourage hash codes to be explicitly close to their corresponding center representations. Experimental results on ADNI suggest the efficacy of the proposed method.

[IEEE TMI] Shuai Wang#, Mingxia Liu#, Jun Lian, Dinggang Shen. Boundary Coding Representation for Organ Segmentation in Prostate Cancer Radiotherapy. IEEE Transactions on Medical Imaging, 40(1): 310-320, 2021.

A novel boundary coding network is developed to learn a discriminative representation for organ boundary and use it as the context information to guide the segmentation. Specifically, we design a two-stage learning strategy: 1) Boundary coding representation learning. Two sub-networks under the supervision of the dilation and erosion masks transformed from the manually delineated organ mask are first separately trained to learn the spatial-semantic context near the organ boundary. We then encode the organ boundary based on the predictions of two sub-networks and a multi-atlas based refinement strategy. 2) Organ segmentation. The boundary coding features are used to train the segmentation network.

[FNINS] Aimei Dong, Zhigang Li, Mingliang Wang, Dinggang Shen, Mingxia Liu*. High-order Laplacian Regularized Low-rank Representation for Multimodal Dementia Diagnosis. Frontiers in Neuroscience, 15: 116, 2021.

A high-order Laplacian regularized Low-Rank Representation (hLRR) method was proposed for AD diagnosis using block-wise missing multimodal data. The proposed method was evaluated on 805 subjects (with incomplete MRI, PET and CSF data) from the real Alzheimer’s Disease Neuroimaging Initiative (ADNI) cohort. Experimental results suggest the effectiveness of our method in three tasks of brain disease classification, compared with the state-of-the-art methods.

[MedIA] Mingliang Wang, Jiashuang Huang, Mingxia Liu*, Daoqiang Zhang. Modeling Dynamic Characteristics of Brain Functional Connectivity Networks using Resting-State Functional MRI. Medical Image Analysis, 71: 102063, 2021.

A temporal dynamics learning (TDL) method is designed for network-based brain disease identification using rs-fMRI data. We partition rs-fMRI time series into a sequence of segments using overlapping sliding windows and construct longitudinally ordered functional connectivity networks. To model global temporal evolution patterns of successive networks, we introduce a group-fused Lasso regularizer. The TDL can explicitly model evolving connectivity patterns of global networks over time and capture characteristics of each network defined at each segment. We evaluate TDL on 3 autism spectrum disorder databases, achieving superior results compared with state-of-the-art methods.
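
The sliding-window construction of longitudinally ordered FC networks can be sketched as follows (window length and step size are illustrative assumptions):

```python
import numpy as np

def dynamic_fc(timeseries, win_len=30, step=10):
    """Construct longitudinally ordered FC networks from rs-fMRI time series.
    timeseries: (n_timepoints, n_rois) array; returns (n_windows, n_rois, n_rois)."""
    networks = []
    for start in range(0, timeseries.shape[0] - win_len + 1, step):
        window = timeseries[start:start + win_len]
        networks.append(np.corrcoef(window.T))   # Pearson FC within the window
    return np.stack(networks)

bold = np.random.randn(170, 116)          # toy BOLD signals, 116 ROIs
print(dynamic_fc(bold).shape)             # (15, 116, 116) with these settings
```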

[MedIA] Mengting Xu, Tao Zhang, Zhongnian Li, Mingxia Liu*, Daoqiang Zhang. Towards Evaluating the Robustness of Deep Diagnostic Models by Adversarial Attack. Medical Image Analysis, 69: 101977, 2021.

We evaluate the robustness of deep diagnostic models against adversarial attacks. We performed two types of adversarial attacks on three deep diagnostic models in both single-label and multi-label classification tasks and found that these models are not reliable when attacked by adversarial examples. We explored how adversarial examples attack the models by analyzing quantitative classification results, intermediate features, discriminability of features, and correlation of estimated labels for original/clean images and their adversarial counterparts. Two defense methods are designed to handle adversarial examples. Experiments suggest that our defense methods help improve the robustness of deep diagnostic models.
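
For context, the sketch below shows a generic fast-gradient-sign (FGSM) style attack on a toy classifier; the specific attacks and diagnostic models evaluated in the paper may differ:

```python
import torch
import torch.nn as nn

def fgsm_attack(model, images, labels, epsilon=0.01):
    """Generate adversarial examples by perturbing inputs along the loss gradient sign
    (an illustrative attack, not necessarily the ones used in the paper)."""
    images = images.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(images), labels)
    loss.backward()
    adversarial = images + epsilon * images.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Toy model and data just to show the call pattern.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2))
x, y = torch.rand(4, 3, 32, 32), torch.randint(0, 2, (4,))
x_adv = fgsm_attack(model, x, y)
```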

[AIIM] Lei Sun, Yanfang Xue, Yining Zhang, Lishan Qiao, Limei Zhang, Mingxia Liu*. Estimating Sparse Functional Connectivity Networks via Hyperparameter-Free Learning Model. Artificial Intelligence in Medicine, 111: 102004, 2021.

A hyperparameter-free method was proposed for functional connectivity network (FCN) construction based on the global representation among fMRI time courses. Interestingly, the proposed method can automatically generate sparse FCNs, without using any thresholding or regularization parameters. We conducted experiments to identify subjects with mild cognitive impairment and autism spectrum disorder from normal controls based on the estimated FCNs. Experimental results on two benchmark databases suggest the efficacy of the proposed scheme.

[FNINS] Yangyang Zhang, Xiao Jiang, Lishan Qiao, Mingxia Liu*. Modularity-Guided Functional Brain Network Analysis for Early-Stage Dementia Identification. Frontiers in Neuroscience, 15: 720909, 2021.

We proposed a modular-LASSO feature selection (MLFS) framework that can explicitly model the modularity information to identify discriminative and interpretable features from functional brain networks (FBNs) for automated AD/MCI classification. The MLFS first searches the modular structure of FBNs through a signed spectral clustering algorithm and then selects discriminative features via a modularity-induced group LASSO method, followed by an SVM for classification. Experiments on 563 resting-state functional MRI scans from ADNI demonstrate that our method is superior to previous methods in AD/MCI identification and MCI conversion prediction, and helps discover discriminative brain regions and functional connectivities associated with AD.

[arXiv] Weida Li, Mingxia Liu*, Daoqiang Zhang. Geometric Interpretation of Running Nyström-Based Kernel Machines and Error Analysis. arXiv:2002.08937v2, 2021.

We develop a subspace projection approach (SPA) for running Nyström-based kernel machines with a clear geometric interpretation, making it possible to analyze approximation errors in general settings. Our analysis reveals the relationship between the low-rank linearization approach (LLA) and the Gram matrix substitution approach (GSA): the analytical forms of their approximate solutions differ in only one term. These analytical results lead to the conjecture that the SPA can provide more accurate approximate solutions than LLA and GSA.
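
A minimal NumPy sketch of the underlying Nyström approximation (a generic RBF-kernel version with randomly sampled landmarks; not the SPA formulation itself):

```python
import numpy as np

def nystrom_approximation(X, landmarks_idx, gamma=0.5):
    """Nyström low-rank approximation of an RBF Gram matrix.
    X: (n, d) data; landmarks_idx: indices of m sampled landmark points."""
    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    C = rbf(X, X[landmarks_idx])                 # (n, m) cross kernel
    W = C[landmarks_idx]                         # (m, m) landmark kernel
    return C @ np.linalg.pinv(W) @ C.T           # approx. of the full (n, n) Gram

X = np.random.randn(500, 10)
K_approx = nystrom_approximation(X, np.random.choice(500, 50, replace=False))
```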

[PR] Kelei He, Wei Zhao, Xingzhi Xie, Wen Ji, Mingxia Liu, Zhenyu Tang, Yinghuan Shi, Feng Shi, Yang Gao, Jun Liu, Junfeng Zhang, Dinggang Shen. Synergistic Learning of Lung Lobe Segmentation and Hierarchical Multi-Instance Classification for Automated Severity Assessment of COVID-19 in CT Images. Pattern Recognition, 113: 107828, 2021.

A synergistic learning framework was developed for automated severity assessment of COVID-19 in 3D CT images, by jointly performing lung lobe segmentation and multi-instance classification. Considering that only a few infection regions in a CT image are related to the severity assessment, we first represent each input image by a bag that contains a set of 2D image patches (with each one cropped from a specific slice). A multi-task multi-instance deep network is then developed to assess the severity of COVID-19 patients and segment the lung lobe simultaneously. Experiments on a real COVID-19 CT image dataset with 660+ images suggest the effectiveness of the proposed method.

2020

[IEEE TCYB] Mingxia Liu, Jun Zhang, Chunfeng Lian, Dinggang Shen. Weakly-supervised Deep Learning for Brain Disease Prognosis using MRI and Incomplete Clinical Scores. IEEE Transactions on Cybernetics, 50(7): 3381-3392, 2020.

A weakly-supervised densely connected neural network (wiseDNN) was developed for brain disease prognosis using baseline MRI data and incomplete clinical scores. Multiscale image patches (located by anatomical landmarks) from MRI were employed to extract local-to-global structural information of images, and a weakly-supervised network with a weighted loss function was developed for task-oriented feature extraction and joint prediction of multiple clinical measures.
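
The handling of incomplete clinical scores can be pictured with a simple masked regression loss, where missing scores are excluded from the objective (a generic sketch, not the exact weighted loss used in wiseDNN):

```python
import torch

def masked_score_loss(pred, target, mask):
    """Regression loss over multiple clinical scores with missing entries.
    pred, target: (batch, n_scores); mask: 1 where a score was actually acquired."""
    se = (pred - target) ** 2 * mask            # zero out missing scores
    return se.sum() / mask.sum().clamp(min=1)

pred = torch.randn(8, 3, requires_grad=True)    # e.g., three clinical measures
target = torch.randn(8, 3)
mask = (torch.rand(8, 3) > 0.3).float()         # roughly 30% of scores missing
masked_score_loss(pred, target, mask).backward()
```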

[IEEE TPAMI] Chunfeng Lian#, Mingxia Liu#, Jun Zhang, Dinggang Shen. Hierarchical Fully Convolutional Network for Joint Atrophy Localization and Alzheimer’s Disease Diagnosis using Structural MRI. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42(4): 880-893, 2020.

A hierarchical fully convolutional network was proposed to automatically identify discriminative local patches and regions in the whole brain sMRI, where multi-scale feature representations are then jointly learned and fused to construct hierarchical classification models.

[IEEE TMI] Mingliang Wang, Daoqiang Zhang, Jiashuang Huang, Pew-Thian Yap, Dinggang Shen, Mingxia Liu*. Identifying Autism Spectrum Disorder with Multi-Site fMRI via Low-Rank Domain Adaptation. IEEE Transactions on Medical Imaging, 39(3): 644-655, 2020.

A multi-site adaption framework based on low-rank representation decomposition was proposed for Autism identification based on functional MRI. The main idea is to determine a common low-rank representation for data from multiple sites, aiming to reduce differences in data distributions. With one site as a target domain and the remaining sites as source domains, data from these domains are transformed into a common space using low-rank representation. To reduce inter-domain heterogeneity, data from the source domains are linearly represented in the common space by those from the target domain.

[MedIA] Biao Jie#, Mingxia Liu#, Chunfeng Lian, Feng Shi, Dinggang Shen. Designing Weighted Correlation Kernels in Convolutional Neural Networks for Functional Connectivity based Brain Disease Diagnosis. Medical Image Analysis, 63: 101709, 2020.

A novel weighted correlation kernel is designed to measure the correlation of brain regions, in which weighting factors are learned in a data-driven manner to characterize the contributions of different time points, thus conveying richer interaction information among brain regions than the conventional Pearson correlation based method. We build a unique kernel-based convolutional neural network for learning hierarchical features for disease diagnosis using fMRI data. Specifically, we first define a layer to build dynamic FCNs using the proposed kernel. Then, we define another three layers to sequentially extract local region-specific, global network-specific, and temporal features from the constructed dynamic FCNs for classification.
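
A generic weighted Pearson correlation with learnable time-point weights gives the flavor of such a kernel (an illustrative sketch; the actual weighted correlation kernel and its parameterization may differ):

```python
import torch

def weighted_correlation(ts, logits):
    """Weighted correlation between ROI time series.
    ts: (n_rois, T) BOLD signals; logits: (T,) learnable time-point scores."""
    w = torch.softmax(logits, dim=0)                       # weights sum to 1
    mean = (ts * w).sum(dim=1, keepdim=True)
    centered = ts - mean
    cov = (centered * w) @ centered.t()                    # weighted covariance
    std = torch.sqrt(torch.diag(cov)).clamp(min=1e-8)
    return cov / (std[:, None] * std[None, :])

ts = torch.randn(90, 170)                 # toy rs-fMRI signals for 90 ROIs
logits = torch.zeros(170, requires_grad=True)
fc = weighted_correlation(ts, logits)     # uniform weights reduce to plain Pearson
fc.sum().backward()                       # the weights can be learned end-to-end
```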

[IEEE TMI] Yongsheng Pan, Mingxia Liu*, Chunfeng Lian, Yong Xia, Dinggang Shen. Spatially-Constrained Fisher Representation for Brain Disease Identification with Incomplete Multi-Modal Neuroimages. IEEE Transactions on Medical Imaging, 39(9): 2965-2975, 2020.

The incomplete data problem is unavoidable in multi-modal neuroimaging studies due to patient dropout and/or poor data quality. Conventional methods usually discard data-missing subjects, thus significantly reducing the number of training samples. We design a spatially-constrained Fisher representation framework for brain disease diagnosis. We first impute missing PET images based on their corresponding MRI scans using a hybrid generative adversarial network. We also develop a spatially-constrained Fisher representation network to extract statistical descriptors of neuroimages for disease diagnosis, assuming that these descriptors follow a Gaussian mixture model with a strong spatial constraint (i.e., images from different subjects have similar anatomical structures). Experiments suggest that our method can synthesize reasonable neuroimages and achieve promising results in AD diagnosis.

[IEEE TBME] Mingliang Wang, Chunfeng Lian, Dongren Yao, Daoqiang Zhang, Mingxia Liu*, Dinggang Shen. Spatial-Temporal Dependency Modeling and Network Hub Detection for Functional MRI Analysis via Convolutional-Recurrent Network. IEEE Transactions on Biomedical Engineering, 67(8): 2241-2252, 2020.

A unique Spatial-Temporal convolutional-recurrent neural Network (STNet) is proposed for automated prediction of AD progression and network hub detection from rs-fMRI time series. Our STNet incorporates the spatial-temporal information mining and AD-related hub detection into an end-to-end deep learning model. Specifically, we first partition rs-fMRI time series into a sequence of overlapping sliding windows. A sequence of convolutional components are then designed to capture the local-to-global spatially-dependent patterns within each sliding window, based on which we are able to identify discriminative hubs and characterize their unique contributions to disease diagnosis. A recurrent component with long short-term memory (LSTM) units is further employed to model the whole-brain temporal dependency from the spatially-dependent pattern sequences, thus capturing the temporal dynamics along time. Experimental results suggest the effectiveness of the proposed method in disease progression prediction and hub detection.

[MedIA] Jun Zhang#, Mingxia Liu#, Li Wang, Si Chen, Peng Yuan, Jianfu Li, Steve Guo-Fang Shen, Zhen Tang, Ken-Chung Chen, James J. Xia, Dinggang Shen. Context-Guided Fully Convolutional Networks for Joint Craniomaxillofacial Bone Segmentation and Landmark Digitization. Medical Image Analysis, 60: 101621, 2020.

We propose a Joint bone Segmentation and landmark Digitization (JSD) framework via context-guided fully convolutional networks (FCNs). Specifically, we first utilize displacement maps to model the spatial context information in CBCT images, where each element in the displacement map denotes the displacement from a voxel to a particular landmark. An FCN is learned to construct the mapping from the input image to its corresponding displacement maps. Using the learned displacement maps as guidance, we further develop a multi-task FCN model to perform bone segmentation and landmark digitization jointly. Experimental results suggest that our method is superior to the state-of-the-art approaches in bone segmentation and landmark digitization.
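
The displacement-map encoding of spatial context can be sketched in NumPy as follows, where each voxel stores its offset to every landmark (voxel-space coordinates and volume size are illustrative):

```python
import numpy as np

def displacement_maps(volume_shape, landmarks):
    """Per-landmark displacement maps for a 3D volume.
    Each voxel stores its (dx, dy, dz) offset to a given landmark.
    volume_shape: (D, H, W); landmarks: (L, 3) voxel coordinates."""
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in volume_shape],
                                indexing="ij"), axis=-1)       # (D, H, W, 3)
    return np.stack([lm - grid for lm in landmarks])           # (L, D, H, W, 3)

maps = displacement_maps((64, 64, 64), np.array([[10, 20, 30], [40, 40, 40]]))
print(maps.shape)   # (2, 64, 64, 64, 3)
```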

[MedIA] Tao Zhou, Kim-Han Thung, Mingxia Liu, Feng Shi, Changqing Zhang, Dinggang Shen. Multi-modal Latent Space Inducing Ensemble SVM Classifier for Early Dementia Diagnosis with Neuroimaging Data. Medical Image Analysis, 60: 101630, 2020.

We propose an early AD diagnosis framework via a novel multi-modality latent space inducing ensemble SVM classifier. Specifically, we first project the neuroimaging data from different modalities into a latent space, and then map the learned latent representations into the label space to learn multiple diversified classifiers. Finally, we obtain more reliable classification results by using an ensemble strategy. More importantly, we present a Complete Multi-modality Latent Space (CMLS) learning model for complete multi-modality data and an Incomplete Multi-modality Latent Space (IMLS) learning model for incomplete multi-modality data. Extensive experiments using the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset demonstrate that the proposed models outperform other state-of-the-art methods.

[FNINS] Li Zhang, Mingliang Wang, Mingxia Liu*, Daoqiang Zhang. A Survey on Deep Learning for Neuroimaging-based Brain Disorder Analysis. Frontiers in Neuroscience, 14: 779, 2020.

This paper reviews the applications of deep learning methods for neuroimaging-based brain disorder analysis. We provide a comprehensive overview of deep learning techniques and popular network architectures, by introducing various types of neural networks and recent developments. We then review deep learning methods for computer-aided analysis of four typical brain disorders, including Alzheimer’s disease, Parkinson’s disease, autism spectrum disorder, and schizophrenia. We further discuss the limitations of existing studies and present possible future directions.

2019

[MedIA] Mingliang Wang, Daoqiang Zhang, Dinggang Shen, Mingxia Liu*. Multi-task Exclusive Relationship Learning for Alzheimer’s Disease Progression Prediction with Longitudinal Data. Medical Image Analysis, 53: 111-122, 2019.

A multi-task exclusive relationship learning model was designed to automatically capture the intrinsic relationship among tasks at different time points for estimating clinical measures based on longitudinal imaging data. The proposed method can select the most discriminative features for different tasks and also model the intrinsic relatedness among different time points, by utilizing an exclusive lasso regularization and a relationship-inducing regularization.

[IEEE TMI] Tao Zhou#, Mingxia Liu#, Kim-Han Thung, Dinggang Shen. Latent Representation Learning for Alzheimer’s Disease Diagnosis with Incomplete Multi-modal Neuroimaging and Genetic Data. IEEE Transactions on Medical Imaging, 38 (10): 2411-2422, 2019. [#Co-first author]

A latent representation learning method was developed for multi-modality based AD diagnosis. Specifically, we use all samples with incomplete modalities to learn a latent representation space. Within this space, we use samples with complete multi-modality data to learn a common latent representation and also use samples with incomplete multi-modality data to learn an independent modality-specific latent representation. We then project the latent representations to the label space for AD diagnosis.

[IEEE TMI] Zhenghan Fang, Yong Chen, Mingxia Liu, Lei Xiang, Qian Zhang, Qian Wang, Weili Lin, Dinggang Shen. Deep Learning for Fast and Spatially-Constrained Tissue Quantification from Highly-Accelerated Data in Magnetic Resonance Fingerprinting. IEEE Transactions on Medical Imaging, 38 (10): 2364-2374, 2019.

A two-step deep learning model was designed to learn the mapping from observed signals to the desired tissue properties for quantification, i.e., 1) a feature extraction module that reduces the dimensionality of signals by extracting a low-dimensional feature vector from the high-dimensional signal evolution, and 2) a spatially-constrained quantification module that exploits the spatial information in the extracted feature maps to generate the final tissue property map.

[IEEE TBME] Tao Zhou, Kim-Han Thung, Mingxia Liu, Dinggang Shen. Brain-wide Genome-wide Association Study for Alzheimer’s Disease via Joint Projection Learning and Sparse Regression Model. IEEE Transactions on Biomedical Engineering, 66(1): 165-175, 2019.

A joint projection and sparse regression model was developed to discover the associations between the phenotypes and genotypes. To alleviate the negative influence of data heterogeneity, we first map the genotypes into an intermediate imaging-phenotype-like space. To better reveal the complex phenotype–genotype associations, we project both the mapped genotypes and the original imaging phenotypes into a diagnostic-label-guided joint feature space, where the intraclass projected points are constrained to be close to each other.

[IEEE TBME] Mingxia Liu, Jun Zhang, Ehsan Adeli, Dinggang Shen. Joint Classification and Regression via Deep Multi-task Multi-channel Learning for Alzheimer’s Disease Diagnosis. IEEE Transactions on Biomedical Engineering, 66 (5): 1195-1206, 2019.

A deep multi-task multi-channel learning framework was proposed for simultaneous brain disease classification and clinical score regression, using MRI data and demographic information of subjects. Specifically, we first identify discriminative anatomical landmarks from MR images in a data-driven manner, and then extract multiple image patches around these detected landmarks. We then propose a deep multi-task multi-channel convolutional neural network for joint classification and regression. The proposed framework can not only automatically learn discriminative features from MR images, but also explicitly incorporate the demographic information into the learning process.

[NeuroImage] Xuyun Wen, Han Zhang, Gang Li, Mingxia Liu, Weiyan Yin, Weili Lin, Jun Zhang, Dinggang Shen. First-year Development of Modules and Hubs in Infant Brain Functional Networks. NeuroImage, 185: 222-235, 2019.

A novel algorithm was designed to construct the robust, temporally consistent and modular structure augmented group-level network based on which functional modules were detected at each age. Our study reveals that the brain functional network is gradually subdivided into an increasing number of functional modules accompanied by the strengthened intra- and inter-modular connectivities. 

[PR] Yu Zhang, Han Zhang, Xiaobo Chen, Mingxia Liu, Xiaofeng Zhu, Seong-Whan Lee, Dinggang Shen. Strength and Similarity Guided Group-level Brain Functional Network Construction for MCI Diagnosis, Pattern Recognition, 88: 421-430, 2019.

A strength and similarity guided group sparse representation method was developed to exploit both BOLD signal temporal correlation-based “low-order” FC (LOFC) and intersubject LOFC-profile similarity-based “high-order” FC (HOFC) as two priors to jointly guide the GSR-based network modeling. Extensive experimental comparisons are carried out, with the rs-fMRI data from mild cognitive impairment (MCI) subjects and healthy controls, between the proposed algorithm and other state-of-the-art brain network modeling approaches.

[BIB] Bo Cheng#, Mingxia Liu#, Daoqiang Zhang, Dinggang Shen. Robust Multi-label Transfer Feature Learning for Early Diagnosis of Alzheimer’s Disease. Brain Imaging and Behavior, 13(1): 138-153, 2019.

This work proposed to transform the original binary class label of a particular subject into a multi-bit label coding vector with the aid of multiple source domains. A robust multi-label transfer feature learning model was further developed to simultaneously capture a common set of features from different domains and to identify the unrelated source domains. The evaluation was performed on ADNI with baseline magnetic resonance imaging (MRI) and cerebrospinal fluid (CSF) data.

[NeuroInformatics] Mingliang Wang, Xiaoke Hao, Jiashuang Huang, Kangcheng Wang, Li Shen, Xijia Xu, Daoqiang Zhang, Mingxia Liu. Hierarchical Structured Sparse Learning for Schizophrenia Identification. Neuroinformatics, 18(1): 43-57, 2019.

A hierarchical structured sparse learning method was proposed to sufficiently utilize the specificity and complementary structure information across four different frequency bands (from 0.01 Hz to 0.25 Hz) for schizophrenia diagnosis. The proposed method can help preserve the partial group structures among multiple frequency bands and the specific characteristics in each frequency band.

2018

[IEEE TMI] Daoqiang Zhang, Jiashuang Huang, Biao Jie, Junqiang Du, Liyang Tu, Mingxia Liu*. Ordinal Pattern: A New Network Descriptor for Brain Connectivity Networks. IEEE Transactions on Medical Imaging, 37(7): 1711-1722, 2018.

A new network descriptor (i.e., ordinal pattern that contains a sequence of weighted edges) was proposed for brain connectivity network analysis. Compared with previous network properties, our ordinal patterns can take advantage of the weight information of edges and also explicitly model the ordinal relationship of weighted edges in brain connectivity networks. An ordinal pattern based learning framework was further designed for brain disease diagnosis using resting-state fMRI data.

[MedIA] Mingxia Liu, Jun Zhang, Ehsan Adeli, Dinggang Shen. Landmark-based Deep Multi-instance Learning for Brain Disease Diagnosis. Medical Image Analysis, 43: 157-168, 2018.

In conventional Magnetic Resonance (MR) image based methods, two stages are often involved to capture brain structural information for disease diagnosis, i.e., 1) manually partitioning each MR image into a number of regions-of-interest (ROIs), and 2) extracting pre-defined features from each ROI for diagnosis with a certain classifier. However, these pre-defined features often limit diagnostic performance, due to challenges in 1) defining the ROIs and 2) extracting effective disease-related features. In this work, a landmark-based deep multi-instance learning (LDMIL) framework was proposed for brain disease diagnosis. A data-driven learning approach was used to discover disease-related anatomical landmarks in the brain MR images, along with their nearby image patches. LDMIL then learns an end-to-end MR image classifier that captures both the local structural information conveyed by image patches and the global structural information derived from all detected landmarks.

[IEEE JBHI] Mingxia Liu, Jun Zhang, Dong Nie, Pew-Thian Yap, Dinggang Shen. Anatomical Landmark-based Deep Feature Representation for MR Images in Brain Disease Diagnosis. IEEE Journal of Biomedical and Health Informatics, 22(5): 1476-1485, 2018.

An anatomical landmark based deep feature learning framework was designed to extract patch-based representation from MRI for automatic diagnosis of Alzheimer’s disease (AD). We first identify discriminative anatomical landmarks from MR images in a data-driven manner, and then propose a convolutional neural network for patch-based deep feature learning.

[IEEE JBHI] Mingxia Liu, Yue Gao, Pew-Thian Yap, Dinggang Shen. Multi-hypergraph Learning for Incomplete Multi-modality Data. IEEE Journal of Biomedical and Health Informatics. 22(4): 1197-1208, 2018. (Featured Article)

A multi-hypergraph learning method was proposed for dealing with incomplete multimodality data. Specifically, we first construct multiple hypergraphs to represent the high-order relationships among subjects by dividing them into several groups according to the availability of their data modalities. A hypergraph regularized transductive learning method is then applied to these groups for automatic diagnosis of brain diseases.

[IEEE TIP] Biao Jie#, Mingxia Liu#, Daoqiang Zhang, Dinggang Shen. Sub-network Kernels for Connectivity Networks in Brain Disease Classification. IEEE Transactions on Image Processing, 27(5): 2340-2353, 2018.

A unique sub-network kernel was proposed for measuring the similarity between a pair of brain networks and then applied to brain disease classification. Different from existing graph kernels, the proposed sub-network kernel not only takes into account the inherent characteristics of brain networks, but also captures multi-level (from local to global) topological properties of nodes in brain networks, which are essential for defining the similarity measure of brain networks. We perform extensive experiments on subjects with baseline functional magnetic resonance imaging (fMRI) data obtained from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database. Experimental results demonstrate that the proposed method outperforms several state-of-the-art graph-based methods in MCI classification.

[MedIA] Biao Jie, Mingxia Liu, Dinggang Shen. Integration of Temporal and Spatial Properties of Dynamic Connectivity Networks for Automatic Diagnosis of Brain Disease. Medical Image Analysis, 47: 81-94, 2018.

This work defined a new measure to characterize the spatial variability of dynamic connectivity networks (DCNs) and proposed a novel learning framework to integrate both the temporal and spatial variabilities of DCNs for automatic brain disease diagnosis. Specifically, we first construct DCNs from rs-fMRI time series over successive non-overlapping time windows. The spatial variability of a specific brain region is then characterized by computing the correlation of the functional sequences (i.e., the changing profile of functional connectivity between a pair of brain regions across all time windows) associated with that region.
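The window-wise construction of DCNs and the per-region functional sequences can be sketched directly with NumPy; the window length and function names below are illustrative, and the exact spatial-variability measure used in the paper may differ.

```python
import numpy as np

def dynamic_connectivity(ts, win_len=30):
    """ts: (num_timepoints, num_rois) rs-fMRI time series.
    Returns one Pearson-correlation FC matrix per non-overlapping window."""
    T, R = ts.shape
    n_win = T // win_len
    return np.stack([np.corrcoef(ts[w * win_len:(w + 1) * win_len].T)
                     for w in range(n_win)])        # shape: (n_win, R, R)

def spatial_variability(dcn, roi):
    """Correlations among the functional sequences associated with one ROI across windows
    (the ROI's own constant self-connectivity column is dropped)."""
    seq = np.delete(dcn[:, roi, :], roi, axis=1)     # (n_win, R-1) functional sequences
    return np.corrcoef(seq.T)                        # (R-1, R-1)
```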

[IEEE TCBB] Wei Shao, Mingxia Liu, Ying-Ying Xu, Hong-Bin Shen, and Daoqiang Zhang. An Organelle Correlation-guided Feature Selection Approach for Classifying Multi-label Subcellular Bio-Images. IEEE Transactions on Computational Biology and Bioinformatics, 15(3): 828-838, 2018.

This work proposed to capture structural correlation among different cellular compartments and designed an organelle structural correlation regularized feature selection method. We formulated the multi-label classification problem by adopting a group-sparsity regularizer to select common subsets of relevant features from different cellular compartments. We also added a cell structural correlation regularized Laplacian term, which utilizes the prior biological structural information to capture the intrinsic dependency among different cellular compartments.

[MedIA] Chunfeng Lian, Jun Zhang, Mingxia Liu, Xiaopeng Zong, Weili Lin, Dinggang Shen. Multi-channel Multi-scale Fully Convolutional Network for 3D Perivascular Spaces Segmentation in 7T MR Images, Medical Image Analysis, 46: 106-117, 2018.

A novel fully convolutional network (FCN), requiring neither hand-crafted features nor pre-defined ROIs, was proposed for efficient segmentation of perivascular spaces (PVSs). Particularly, the original T2-weighted 7T magnetic resonance (MR) images are first filtered via a non-local Haar-transform-based line singularity representation method to enhance thin tubular structures. Both the original and enhanced MR images are used as multi-channel inputs, complementarily providing detailed image information and enhanced tubular structural information for the localization of PVSs. Multi-scale features are then automatically learned to characterize the spatial associations between PVSs and adjacent brain tissues. 

[HBM] Li Wang, Gang Li, Ehsan Adeli, Mingxia Liu, Zhengwang Wu, Yu Meng, Weili Lin, Dinggang Shen. Anatomy-guided Joint Tissue Segmentation and Topological Correction for 6-month Infant Brain MRI with Risk of Autism. Human Brain Mapping, 39(6): 2609-2623, 2018.

An anatomy-guided joint tissue segmentation and topological correction framework was designed for isointense infant MRI. We use a signed distance map with respect to the outer cortical surface as anatomical prior knowledge, and incorporate this prior into the proposed framework to guide segmentation in ambiguous regions. Experimental results on the National Database for Autism Research demonstrate the effectiveness of the method in correcting topological errors, as well as some robustness to motion. Comparisons with state-of-the-art methods further demonstrate the advantages of the proposed method in terms of both segmentation accuracy and topological correctness.

2017

[MedIA] Mingxia Liu, Jun Zhang, Pew-Thian Yap, Dinggang Shen. View-aligned Hypergraph Learning for Alzheimer’s Disease Diagnosis with Incomplete Multi-modality Data. Medical Image Analysis, 36(2): 123-134, 2017.

A view-aligned hypergraph learning method was created to explicitly model the coherence among views. Specifically, we first divide the original data into several views based on the availability of different modalities and then construct a hypergraph in each view space based on sparse representation. A view-aligned hypergraph classification (VAHC) model is then proposed, by using a view-aligned regularizer to capture coherence among views, followed by a multi-view label fusion method for making a final decision.

[IEEE TIP] Jun Zhang#, Mingxia Liu#, Dinggang Shen. Detecting Anatomical Landmarks from Limited Medical Imaging Data using Two-stage Task-oriented Deep Neural Networks. IEEE Transactions on Image Processing, 26(10): 4753-4764, 2017. [#Co-first author]

This work proposed a CNN-based regression model using millions of image patches as input, aiming to learn inherent associations between local image patches and target anatomical landmarks. To further model the correlations among image patches, a second CNN model was developed, which includes a) a fully convolutional network that shares the same architecture and network weights as the CNN used in the first stage, and b) several extra layers that jointly predict the coordinates of multiple anatomical landmarks. 

[IEEE TBME] Biao Jie, Mingxia Liu, Jun Liu, Daoqiang Zhang, Dinggang Shen. Temporally-constrained Group Sparse Learning for Longitudinal Data Analysis in Alzheimer’s Disease. IEEE Transactions on Biomedical Engineering, 64(1): 238-249, 2017.

A novel temporally-constrained group sparse learning method was developed for longitudinal analysis with data from multiple time points. Specifically, we learn a sparse linear regression model using imaging data from multiple time points, where a group regularization term groups together the weights of the same brain region across different time points. To reflect the smooth changes between data from adjacent time points, two smoothness regularization terms are further incorporated into the objective function.
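A plausible form of such an objective is sketched below, assuming one design matrix per time point and a weight matrix with one column per time point; the exact penalties and their weighting in the paper may differ.

```python
import numpy as np

def tgs_objective(Xs, ys, W, lam_group=0.1, lam_smooth=0.1):
    """Xs, ys: lists of (X_t, y_t) over time points; W: (num_features, num_timepoints).
    Squared loss + a group penalty tying each feature's weights across time points
    + a smoothness penalty on adjacent time points."""
    loss = sum(np.sum((X @ W[:, t] - y) ** 2)
               for t, (X, y) in enumerate(zip(Xs, ys)))
    group = np.sum(np.linalg.norm(W, axis=1))          # l2,1 norm over rows (features)
    smooth = np.sum((W[:, 1:] - W[:, :-1]) ** 2)       # adjacent time points change smoothly
    return loss + lam_group * group + lam_smooth * smooth
```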

[IEEE JBHI] Jun Zhang, Mingxia Liu, Le An, Yaozong Gao, Dinggang Shen. Alzheimer’s Disease Diagnosis using Landmark-based Features from Longitudinal Structural MR Images. IEEE Journal of Biomedical and Health Informatics. 21(6): 1607-1616, 2017.

An anatomical landmark-based feature extraction method was developed for AD diagnosis using longitudinal structural MR images, which does not require nonlinear registration or tissue segmentation in the application stage and is also robust to inconsistencies among longitudinal scans. Discriminative anatomical landmarks were automatically discovered from the whole brain using training images, followed by a fast landmark detection method for testing images, without the involvement of any nonlinear registration and tissue segmentation. Then, high-level statistical spatial features and contextual longitudinal features were extracted based on those detected landmarks, followed by a linear support vector machine.

[Neuroinformatics] Bo Cheng, Mingxia Liu, Dinggang Shen, Daoqiang Zhang. Multi-domain Transfer Learning for Early Diagnosis of Alzheimer’s Disease. Neuroinformatics, 15(2): 115-132, 2017.

This work considered the joint learning of tasks in multiple auxiliary domains and the target domain, and proposed a novel Multi-Domain Transfer Learning (MDTL) framework for early diagnosis of AD. Specifically, the proposed MDTL framework consists of two key components: 1) a multi-domain transfer feature selection (MDTFS) model that selects the most informative feature subset from multi-domain data, and 2) a multi-domain transfer classification (MDTC) model that identifies disease status for early AD detection.

[Medical Physics] Yulian Zhu, Li Wang, Mingxia Liu, Chunjun Qian, Ambereen Yousuf, Aytekin Oto, Dinggang Shen. MRI-based Prostate Cancer Detection with High-level Representation and Hierarchical Classification. Medical Physics, 44 (3): 1028-1039, 2017.

High-level feature representation was first learned via a deep network, where multiparametric MR images were used as input. Based on the learned high-level features, a hierarchical classification method was developed, where multiple random forest classifiers are iteratively constructed to refine the detection results of prostate cancer.

[SCI REP-UK] Le An, Ehsan Adeli, Mingxia Liu, Jun Zhang, Seong-Whan Lee, Dinggang Shen. A Hierarchical Feature and Sample Selection Framework and Its Application for Alzheimer’s Disease Diagnosis. Scientific Reports, 7: 45269, 2017.

This work formulated a hierarchical feature and sample selection framework to gradually select informative features and discard ambiguous samples in multiple steps for improved classifier learning. To positively guide the data manifold preservation process, we utilized both labeled and unlabeled data during training, making our method semi-supervised.

2016

[IEEE TPAMI] Mingxia Liu, Daoqiang Zhang, Songcan Chen, Hui Xue. Joint Binary Classifier Learning for ECOC-based Multi-class Classification.  IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(11): 2335-2341, 2016.

Error-correcting output coding (ECOC) is one of the most widely used strategies for dealing with multi-class problems, decomposing the original multi-class problem into a series of binary sub-problems. In this paper, we explored mining and utilizing the relationships among these binary sub-problems via a joint classifier learning method, which integrates the training of the binary classifiers and the learning of the relationships among them into a unified objective function. We also developed an efficient alternating optimization algorithm to solve the objective function.
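For context, the standard ECOC pipeline that the joint formulation builds on can be sketched as follows: each column of a codeword matrix defines one binary sub-problem, and prediction decodes to the nearest codeword. The sketch trains the binary classifiers independently (unlike the joint method in the paper), and class labels are assumed to be consecutive integers starting at 0.

```python
import numpy as np
from sklearn.svm import LinearSVC

def ecoc_train(X, y, codebook):
    """codebook: (num_classes, num_bits) matrix in {-1, +1}. One binary SVM per bit/column."""
    clfs = []
    for b in range(codebook.shape[1]):
        yb = codebook[y, b]                           # relabel each sample by its class codeword bit
        clfs.append(LinearSVC().fit(X, yb))
    return clfs

def ecoc_predict(X, clfs, codebook):
    """Decode by the nearest codeword under a Hamming-style distance on classifier outputs."""
    out = np.stack([c.decision_function(X) for c in clfs], axis=1)          # (n, num_bits)
    dists = ((1 - np.sign(out)[:, None, :] * codebook[None, :, :]) / 2).sum(axis=2)
    return dists.argmin(axis=1)                                              # predicted class index
```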

[IEEE TMI] Mingxia Liu, Daoqiang Zhang, Dinggang Shen. Relationship Induced Multi-template Learning for Diagnosis of Alzheimer’s Disease and Mild Cognitive Impairment. IEEE Transactions on Medical Imaging, 35(6): 1463-1474, 2016.

A novel relationship induced multi-template learning method was proposed for automatic brain disease diagnosis, by explicitly modeling structural information in the multi-template data. Specifically, we first nonlinearly register each brain’s MR image separately onto multiple pre-selected templates, and then extract multiple sets of features for this MR image. A novel feature selection algorithm was designed to model the relationships among templates and among individual subjects, followed by an ensemble classification strategy for diagnosis. 

[IEEE TCYB] Mingxia Liu, Daoqiang Zhang. Pairwise Constraint-guided Sparse Learning for Feature Selection. IEEE Transactions on Cybernetics, 46(1): 298-310, 2016.

A pairwise constraint-guided sparse learning (CGS) method was developed for feature selection, where must-link and cannot-link constraints are used as discriminative regularization terms that directly focus on the local discriminative structure of data. Furthermore, we develop two variants of CGS, including 1) a semi-supervised CGS that utilizes labeled data, pairwise constraints, and unlabeled data, and 2) an ensemble CGS that uses an ensemble of pairwise constraint sets. An efficient optimization algorithm was further developed to solve the proposed problem.
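One simple way to turn must-link/cannot-link pairs into a feature-ranking criterion is sketched below; it is a constraint-score-style illustration under assumed inputs (lists of index pairs), not the sparse-learning formulation used by CGS.

```python
import numpy as np

def constraint_scatter(X, must_link, cannot_link):
    """Per-feature squared distances within must-link pairs (should be small)
    and within cannot-link pairs (should be large)."""
    ml = sum((X[i] - X[j]) ** 2 for i, j in must_link)
    cl = sum((X[i] - X[j]) ** 2 for i, j in cannot_link)
    return ml, cl

def constraint_score(X, must_link, cannot_link, eps=1e-12):
    """Illustrative ranking: features that separate cannot-link pairs while
    keeping must-link pairs close receive higher scores."""
    ml, cl = constraint_scatter(X, must_link, cannot_link)
    return cl / (ml + eps)
```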

[IEEE TBME] Mingxia Liu, Ehsan Adeli-Mosabbeb, Daoqiang Zhang, Dinggang Shen. Inherent Structure Based Multi-view Learning with Multi-template Feature Representation for Alzheimer’s Disease Diagnosis.  IEEE Transactions on Biomedical Engineering, 63(7): 1473-1482, 2016.

An inherent structure-based multi-view learning method was proposed using multiple templates for AD/MCI classification. Multi-view feature representations are first extracted using multiple selected templates, and subjects within a specific class are then clustered into several subclasses (i.e., clusters) in each view space. These subclasses are encoded by considering both their original class information and their own distribution information, followed by a multi-task feature selection model and a multi-view classification model. 

[Neurocomputing] Mingxia Liu, Daoqiang Zhang. Feature Selection with Effective Distance. Neurocomputing, 215: 100-109, 2016.

To model the dynamic structure of data, we proposed a set of effective distance-based feature selection methods, where a probabilistically motivated effective distance is used to measure the similarity of samples. A sparse representation-based algorithm was used to compute the effective distance. Three filter-type unsupervised feature selection methods were developed, including an effective distance-based Laplacian Score and two effective distance-based Sparsity Scores. 
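For reference, the classical Laplacian Score that the effective-distance variant modifies can be written compactly; here the affinity matrix S is assumed to be given (in the paper it would be derived from the sparse-representation-based effective distance rather than a plain Euclidean neighborhood graph).

```python
import numpy as np

def laplacian_score(X, S):
    """X: (n_samples, n_features); S: (n, n) affinity matrix.
    Smaller score = feature better preserves the local structure encoded by S."""
    D = np.diag(S.sum(axis=1))
    L = D - S
    ones = np.ones(X.shape[0])
    scores = []
    for r in range(X.shape[1]):
        f = X[:, r]
        f = f - (f @ D @ ones) / (ones @ D @ ones) * ones   # remove the trivial constant component
        scores.append((f @ L @ f) / (f @ D @ f + 1e-12))
    return np.array(scores)
```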

[Bioinformatics] Wei Shao, Mingxia Liu, Daoqiang Zhang. Human Cell Structure-driven Model Construction for Predicting Protein Subcellular Location from Biological Images. Bioinformatics, 32(1): 114-121, 2016.

A cell structure-driven classifier construction method was created by employing the prior biological structural information in the learning model. Specifically, the structural relationship among the cellular components is reflected by a new codeword matrix under the error-correcting output coding (ECOC) framework. Then, multiple classifiers were constructed corresponding to the columns of the ECOC codeword matrix using a multi-kernel support vector machine (SVM) classification approach.

[BIB] Chen Zu, Biao Jie, Mingxia Liu, Songcan Chen, Dinggang Shen, Daoqiang Zhang. Label-aligned Multi-task Feature Learning for Multimodal Classification of Alzheimer’s Disease and Mild Cognitive Impairment. Brain Imaging and Behavior, 10(4): 1148-1159, 2016.

A multimodal learning method was developed for multimodal classification of AD/MCI, by fully exploring the relationships across both modalities and subjects. The proposed method includes two subsequent components, i.e., label-aligned multi-task feature selection and multimodal classification.

2015 & Earlier

[HBM] Mingxia Liu, Daoqiang Zhang, Dinggang Shen. View-centralized Multi-atlas Classification for Alzheimer’s Disease Diagnosis. Human Brain Mapping, 36(5): 1847-1865, 2015.

A view-centralized multi-atlas classification method was proposed, which can better exploit useful information in multiple feature representations from different atlases. All brain images are first registered onto multiple atlases individually to extract feature representations in each atlas space. Then, the proposed view-centralized multi-atlas feature selection method is used to select the most discriminative features from each atlas with extra guidance from other atlases, followed by an SVM classifier in each atlas space. 

[BIB] Bo Cheng, Mingxia Liu, Heung-Il Suk, Dinggang Shen, Daoqiang Zhang. Multimodal Manifold-regularized Transfer Learning for MCI Conversion Prediction. Brain Imaging and Behavior, 1-14, 2015.

A multimodal manifold-regularized transfer learning (M2TL) method was designed to jointly utilize samples from another domain as well as unlabeled samples to boost the performance of MCI conversion prediction in the source domain. Specifically, the proposed M2TL method includes two components: 1) a kernel-based maximum mean discrepancy criterion, which helps eliminate the potential negative effect induced by the distributional difference between the auxiliary domain and the target domain; and 2) a semi-supervised multimodal manifold-regularized least-squares classification method, where target-domain samples, auxiliary-domain samples, and unlabeled samples can be jointly used for model learning. 
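The kernel-based maximum mean discrepancy criterion has a standard (biased) empirical estimate, sketched below with an RBF kernel; the kernel choice and bandwidth are assumptions for illustration, not the paper's exact setting.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian/RBF kernel matrix between two sample sets."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(Xs, Xt, gamma=1.0):
    """Squared maximum mean discrepancy between auxiliary-domain and target-domain samples;
    small values indicate well-matched distributions."""
    return (rbf_kernel(Xs, Xs, gamma).mean()
            + rbf_kernel(Xt, Xt, gamma).mean()
            - 2 * rbf_kernel(Xs, Xt, gamma).mean())
```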

[IEEE TBME] Bo Cheng#, Mingxia Liu#, Daoqiang Zhang, BC Munsell, Dinggang Shen. Domain Transfer Learning for MCI Conversion Prediction. IEEE Transactions on Biomedical Engineering, 62(7): 1805-1817, 2015. [#Co-first author]

A domain transfer learning method was proposed for MCI conversion prediction, which can use data from both the target domain (i.e., MCI) and auxiliary domains (i.e., AD and NC). Three key components were included: 1) a domain transfer feature selection component that selects the most informative feature-subset from both target domain and auxiliary domains from different imaging modalities; 2) a domain transfer sample selection component that selects the most informative sample-subset from the same target and auxiliary domains from different data modalities; and 3) a domain transfer support vector machine classification component that fuses the selected features and samples to separate MCI-C and MCI-NC patients.

[IEEE TR] Mingxia Liu, Linsong Miao, Daoqiang Zhang. Two-stage Cost-sensitive Learning for Software Defect Prediction. IEEE Transactions on Reliability, 63(2): 676-686, 2014.

A two-stage cost-sensitive learning (TSCS) framework was developed for defect prediction, which utilizes cost information not only in the classification stage but also in the feature selection stage. We developed three novel cost-sensitive feature selection algorithms, namely Cost-Sensitive Variance Score, Cost-Sensitive Laplacian Score, and Cost-Sensitive Constraint Score, by incorporating cost information into traditional feature selection algorithms. Experiments suggested that the proposed cost-sensitive feature selection methods outperform traditional cost-blind methods, validating the efficacy of using cost information for feature selection.
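One plausible way to fold class-dependent misclassification cost into a variance-style feature score is sketched below; the weighting scheme is an illustrative assumption, and the paper's exact formulations of the three cost-sensitive scores may differ.

```python
import numpy as np

def cost_sensitive_variance_score(X, y, cost):
    """Variance-style feature score where each sample is weighted by the misclassification
    cost of its class (e.g., defective modules cost more to miss).
    Illustrative weighting only; larger score = more informative feature."""
    w = np.array([cost[c] for c in y], dtype=float)
    w = w / w.sum()
    mu = (w[:, None] * X).sum(axis=0)
    return (w[:, None] * (X - mu) ** 2).sum(axis=0)

# Example usage with a hypothetical cost assignment:
# scores = cost_sensitive_variance_score(X, y, cost={0: 1.0, 1: 5.0})
```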

[Neurocomputing] Mingxia Liu, Daoqiang Zhang, Songcan Chen. Attribute Relation Learning for Zero-shot Classification. Neurocomputing, 139: 34-46, 2014.

This work proposed a unified framework that learns the attribute-attribute relations and the attribute classifiers jointly to boost the performance of attribute predictors. Specifically, we unify attribute relation learning and attribute classifier design into a common objective function, through which we can not only predict attributes but also automatically discover the relations between attributes from data. Furthermore, based on the learned attribute relations and classifiers, we develop two types of learning schemes for zero-shot classification. 

[IJPRAI] Mingxia Liu, Daoqiang Zhang. Sparsity Score: A Novel Graph-preserving Feature Selection Method. International Journal of Pattern Recognition and Artificial Intelligence, 28 (4): 1450009, 2014.

A general graph-preserving feature selection framework was developed in this work, where the graphs to be preserved vary in their specific definitions. A number of existing filter-type feature selection algorithms can be unified within this framework. Based on the proposed framework, a new filter-type feature selection method called Sparsity Score (SS) was proposed, aiming to preserve the structure of a pre-defined l1 graph that is robust to data noise. Here, a modified sparse representation based on an l1-norm minimization problem is used to determine the graph adjacency structure and the corresponding affinity weight matrix simultaneously. Furthermore, a variant of SS called supervised SS (SuSS) was also proposed, where the l1 graph to be preserved is constructed using only data points from the same class.
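The l1 graph itself can be sketched as follows: each sample is sparsely reconstructed from all other samples via Lasso, and the absolute coefficients serve as edge weights. The regularization strength and the symmetrization step are assumptions for illustration, not the paper's exact construction.

```python
import numpy as np
from sklearn.linear_model import Lasso

def l1_graph(X, alpha=0.05):
    """Affinity matrix of an l1 graph: reconstruct each sample from all the others
    via Lasso; nonzero coefficients define the edges and their weights."""
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.delete(np.arange(n), i)
        coef = Lasso(alpha=alpha, fit_intercept=False,
                     max_iter=5000).fit(X[idx].T, X[i]).coef_
        W[i, idx] = np.abs(coef)
    return (W + W.T) / 2          # symmetrize for use as an affinity matrix
```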