Highlighted Research
Our research develops a scalable topological framework for analyzing large-scale, complex networks. It connects topology with statistical inference and machine learning, opening new avenues for topology-enhanced algorithms. The framework supports a wide range of innovative applications; a few notable examples are highlighted below.
Stable Vector Representation. The framework establishes a stable vector space equipped with the Wasserstein metric, which is prized for its theoretical stability guarantees in topological data analysis. Because the embedding preserves this metric, topological representations in our vector space inherit its robustness and make a wide range of vector-based learning models applicable to topological data analysis.
[MICCAI '23]
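To make the idea concrete, here is a minimal Python sketch of a Wasserstein-preserving embedding. It is an illustration rather than the published implementation: it assumes the topology of a weighted network is summarized by the birth values of its connected components under a graph filtration (the maximum-spanning-tree edge weights), so that sorting those values gives a vector whose Euclidean distance to another network's vector equals the 2-Wasserstein distance between the two birth sets (for networks with the same number of nodes).

```python
# Minimal sketch of a Wasserstein-preserving vector embedding.
# Assumption (illustrative, not the published code): a network's topology is
# summarized by its 0-dimensional birth values, taken here to be the edge
# weights of the maximum spanning tree under a graph filtration.

import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree


def birth_values(adjacency: np.ndarray) -> np.ndarray:
    """Edge weights of the maximum spanning tree of a weighted network.

    Under a graph filtration these act as the birth values of connected
    components (0-dimensional topological features).
    """
    # Negate so SciPy's minimum spanning tree yields the maximum spanning tree.
    mst = minimum_spanning_tree(-adjacency)
    return -mst.data


def topological_embedding(adjacency: np.ndarray) -> np.ndarray:
    """Embed a network as its sorted birth values.

    For two embeddings of equal length, the Euclidean distance between the
    sorted vectors equals the 2-Wasserstein distance between the birth sets.
    """
    return np.sort(birth_values(adjacency))


# Toy usage: two random symmetric weighted networks of the same size.
rng = np.random.default_rng(0)

def random_network(n):
    w = np.triu(rng.random((n, n)), 1)
    return w + w.T

x = topological_embedding(random_network(10))
y = topological_embedding(random_network(10))
print("2-Wasserstein distance between 0D births:", np.linalg.norm(x - y))
```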
Statistical Inference. The framework is highly versatile and can be combined with standard statistical and machine learning models to perform topological inference. It is particularly effective at differentiating subtle topological patterns across various brain networks.
[Annals of Applied Statistics]
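The sketch below illustrates one way such inference could look in practice: a two-sample permutation test on precomputed topological feature vectors. The feature vectors, the between/within-distance statistic, and the toy data are assumptions for illustration, not the specific test used in the paper.

```python
# Hedged sketch of a two-sample permutation test on topological embeddings.
# Assumption: each subject's brain network has already been reduced to a
# topological feature vector; the statistic is the ratio of mean between-group
# to mean within-group squared distances.

import numpy as np


def pairwise_sq_dists(x: np.ndarray) -> np.ndarray:
    """All pairwise squared Euclidean distances between rows of x."""
    diff = x[:, None, :] - x[None, :, :]
    return np.einsum("ijk,ijk->ij", diff, diff)


def group_statistic(features: np.ndarray, labels: np.ndarray) -> float:
    """Between-group over within-group mean squared distance."""
    d = pairwise_sq_dists(features)
    same = labels[:, None] == labels[None, :]
    np.fill_diagonal(same, False)          # drop self-pairs from the within term
    within = d[same].mean()
    between = d[labels[:, None] != labels[None, :]].mean()
    return between / within


def permutation_test(features, labels, n_perm=5000, seed=0):
    """P-value of the observed statistic under random label shuffles."""
    rng = np.random.default_rng(seed)
    observed = group_statistic(features, labels)
    null = np.array([group_statistic(features, rng.permutation(labels))
                     for _ in range(n_perm)])
    return observed, (np.sum(null >= observed) + 1) / (n_perm + 1)


# Toy usage: 20 subjects per group, 15-dimensional topological features,
# with a small mean shift in the second group.
rng = np.random.default_rng(1)
features = np.vstack([rng.normal(0.0, 1.0, size=(20, 15)),
                      rng.normal(0.3, 1.0, size=(20, 15))])
labels = np.array([0] * 20 + [1] * 20)
stat, p_value = permutation_test(features, labels, n_perm=2000)
print(f"statistic = {stat:.3f}, p = {p_value:.4f}")
```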
Clustering. A novel approach to topological clustering is developed for analyzing intricate networks based on their higher-order topological characteristics. The algorithm clusters human brain networks efficiently, converges rapidly, and yields effective biomarkers for the neural basis of consciousness, shedding light on the complex interplay between brain regions and conscious experience.
[ICLR '22]
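Here is a hedged sketch of how topological clustering can be carried out on the sorted-birth embedding from the first example: because that embedding turns the 2-Wasserstein distance into a Euclidean one, ordinary k-means on the embedded networks behaves like Wasserstein k-means, with cluster centers acting as Wasserstein barycenters of the birth sets. The helper names and toy data are assumptions for illustration, not the published algorithm.

```python
# Hedged sketch of topological k-means clustering of networks.
# Assumption: networks share the same node count and are embedded as their
# sorted maximum-spanning-tree edge weights (0D birth values).

import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from sklearn.cluster import KMeans


def sorted_births(adjacency: np.ndarray) -> np.ndarray:
    """Sorted maximum-spanning-tree edge weights (0D birth values)."""
    return np.sort(-minimum_spanning_tree(-adjacency).data)


def topological_kmeans(networks, n_clusters=2, seed=0):
    """Cluster same-sized weighted networks by their topological embeddings."""
    embeddings = np.vstack([sorted_births(a) for a in networks])
    model = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    labels = model.fit_predict(embeddings)
    # In the sorted representation, Euclidean centroids correspond to
    # Wasserstein barycenters of the birth sets.
    return labels, model.cluster_centers_


# Toy usage: two families of random networks with different weight scales.
rng = np.random.default_rng(2)

def random_network(n, scale):
    w = np.triu(rng.random((n, n)) * scale, 1)
    return w + w.T

networks = ([random_network(12, 1.0) for _ in range(10)]
            + [random_network(12, 2.0) for _ in range(10)])
labels, centers = topological_kmeans(networks, n_clusters=2)
print("cluster labels:", labels)
```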
Regression. Topology-guided multimodal fusion of functional and structural brain networks preserves the original topological characteristics of both modalities in a statistically principled manner. Our twin study reveals highly heritable functional connectivity in the fused networks that would have gone undetected using functional data alone.
[MICCAI '21]
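The sketch below shows one simplified way a topology-guided fusion objective can be set up and minimized: the fused network trades off closeness to the functional network (in Frobenius norm) against closeness of its birth values to those of the structural network (in 2-Wasserstein distance), descended through the sorted-order matching of maximum-spanning-tree edges. The objective, step sizes, and data are illustrative assumptions rather than the published estimator.

```python
# Hedged sketch of topology-guided fusion of a functional network F and a
# structural network S. Assumed objective (simplified):
#   (1 - lam) * ||W - F||_F^2 + lam * ||sort(births(W)) - sort(births(S))||^2,
# where births(.) are maximum-spanning-tree edge weights.

import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree


def mst_edges(adjacency):
    """Indices and weights of maximum-spanning-tree edges (0D births)."""
    mst = minimum_spanning_tree(-adjacency).tocoo()
    return mst.row, mst.col, -mst.data


def topo_gradient(w, target_births):
    """Subgradient of the squared sorted-birth distance w.r.t. edge weights."""
    rows, cols, births = mst_edges(w)
    order = np.argsort(births)                    # match births to targets by rank
    residual = births[order] - np.sort(target_births)
    grad = np.zeros_like(w)
    for k, idx in enumerate(order):
        i, j = rows[idx], cols[idx]
        grad[i, j] += 2 * residual[k]
        grad[j, i] += 2 * residual[k]             # keep the gradient symmetric
    return grad


def fuse(functional, structural, lam=0.5, lr=0.02, n_iter=200):
    """Gradient descent on the fused network, starting from the functional one."""
    _, _, target_births = mst_edges(structural)
    w = functional.copy()
    for _ in range(n_iter):
        grad = 2 * (1 - lam) * (w - functional) + lam * topo_gradient(w, target_births)
        w -= lr * grad
        np.fill_diagonal(w, 0.0)
    return w


# Toy usage with two random symmetric networks of the same size.
rng = np.random.default_rng(3)

def random_network(n):
    w = np.triu(rng.random((n, n)), 1)
    return w + w.T

F, S = random_network(15), random_network(15)
W = fuse(F, S, lam=0.7)
print("fused births:     ", np.round(np.sort(mst_edges(W)[2])[:5], 2))
print("structural births:", np.round(np.sort(mst_edges(S)[2])[:5], 2))
```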
Neural Networks. A novel training approach is developed that enables neural networks to learn topological structure from data. The acquired structure helps the network retain previously learned knowledge when it is retrained on new tasks.
[NeurIPS '22 Meta-Learning]
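A minimal PyTorch sketch of the general idea follows, with a deliberately crude stand-in for the topological summary: the sorted absolute weights of a layer are treated as the network's learned structure, and a penalty keeps this summary close to the one recorded after the previous task, so that retraining on a new task disturbs that structure less. All names, the summary, and the penalty weight are assumptions for illustration, not the published training procedure.

```python
# Hedged sketch of topology-aware regularization for continual learning.
# Assumption: the "topology" of a layer is approximated by its sorted
# absolute weights; a penalty anchors it to the summary saved after the
# previous task to reduce forgetting.

import torch
import torch.nn as nn
import torch.nn.functional as F


def topo_summary(weight: torch.Tensor) -> torch.Tensor:
    """Sorted absolute weights; sorting is differentiable almost everywhere."""
    return torch.sort(weight.abs().flatten()).values


class TopoRegularizedNet(nn.Module):
    def __init__(self, in_dim=20, hidden=64, out_dim=2):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, out_dim)
        self.reference = None          # topological summary from the previous task

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

    def snapshot(self):
        """Store the current topological summary after finishing a task."""
        self.reference = topo_summary(self.fc1.weight).detach()

    def topo_penalty(self):
        if self.reference is None:
            return torch.tensor(0.0)
        return F.mse_loss(topo_summary(self.fc1.weight), self.reference)


# Toy usage: train on task A, snapshot, then train on task B with the penalty.
torch.manual_seed(0)
net = TopoRegularizedNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def train_task(x, y, lam):
    for _ in range(100):
        opt.zero_grad()
        loss = F.cross_entropy(net(x), y) + lam * net.topo_penalty()
        loss.backward()
        opt.step()
    return loss.item()

x_a, y_a = torch.randn(128, 20), torch.randint(0, 2, (128,))
x_b, y_b = torch.randn(128, 20) + 1.0, torch.randint(0, 2, (128,))
print("task A loss:", round(train_task(x_a, y_a, lam=0.0), 3))
net.snapshot()
print("task B loss:", round(train_task(x_b, y_b, lam=10.0), 3))
```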
If you are interested in these new learning paradigms, feel free to send me an email to discuss possible collaboration. Duke students seeking research experience are welcome to reach out about research opportunities. To learn more about current open positions and how to apply, please visit the opportunities page. Email inquiries are welcome!