2025

Read like a Pathologist: Enhancing Mamba with Pyramid Router for Whole Slide Image Analysis

Qixiang ZHANG, Yi LI, Tianqi XIANG, Haonan WANG, Xiaomeng LI

Under review 2025

S&D Messenger: Exchanging Semantic and Domain Knowledge for Generic Semi-Supervised Medical Image Segmentation

Qixiang ZHANG*, Haonan WANG*, Yi LI, Xiaomeng LI (* equal contribution)

IEEE Transactions on Medical Imaging (TMI), JCR Q1, 2025

UniEval: Unified Holistic Evaluation for Unified Multimodal Understanding and Generation

Yi LI, Haonan Wang, Qixiang Zhang, Boyu Xiao, Chenchang Hu, Hualiang Wang, Xiaomeng Li

Submitted to Annual Conference on Neural Information Processing Systems (NeurIPS) 2025

Neurons: Emulating the Human Visual Cortex Improves Fidelity and Interpretability in fMRI-to-Video Reconstruction

Haonan WANG*, Qixiang ZHANG*, Lehan WANG, Xuanqi HUANG, Xiaomeng LI (* equal contribution)

International Conference on Computer Vision (ICCV), CCF-A, 2025

MOC: Meta-Optimized Classifier for Few-Shot Whole Slide Image Classification

Tianqi XIANG, Yi LI, Qixiang ZHANG, Haonan WANG, Xiaomeng LI

International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2025

Multi-Modal Explainable Medical AI Assistant for Trustworthy Human-AI Collaboration

Honglong Yang, Shanshan Song, Yi Qin, Lehan Wang, Haonan Wang, Xinpeng Ding, Qixiang Zhang, Bodong Du, Xiaomeng Li

Under review 2025

Generalist Medical AI systems have demonstrated expert-level performance in biomedical perception tasks, yet their clinical utility remains limited by inadequate multi-modal explainability and suboptimal prognostic capabilities. Here, we present XMedGPT, a clinician-centric, multi-modal AI assistant that integrates textual and visual interpretability to support transparent and trustworthy medical decision-making. XMedGPT not only produces accurate diagnostic and descriptive outputs, but also grounds referenced anatomical sites within medical images, bridging critical gaps in interpretability and enhancing clinician usability. The model achieves an Intersection over Union of 0.703 across 141 anatomical regions, and a Kendall’s tau-b of 0.479, demonstrating strong alignment between visual rationales and clinical outcomes. In survival and recurrence prediction, it surpasses prior leading models by 26.9%...

PRET: Achieving Pan-Cancer Recognition via a Few Examples Without Training

Yi Li, Ziyu Ning, Tianqi Xiang, Qixiang Zhang, Yi Min, Zhihao Lin, Feiyan Feng, Baozhen Zeng, Xuexia Qian, Lu Sun, Jiace Qin, Ling Xiang, Chao Fan, Tian Qin, Qian Wang, Xiu-Wu Bian, Qingling Zhang, Xiaomeng Li

Submitted to Nature Cancer, under review, 2025

...In this paper, we introduce a novel paradigm, Pan-cancer Recognition via Examples without Training (PRET). The proposed PRET learns from a few examples during the inference phase without model fine-tuning, offering a flexible, scalable, and effective solution to recognize cancers across diverse organs, hospitals, and tasks using only a single model. Through extensive evaluations across international hospitals and diverse benchmarks, our method outperforms existing approaches across 20 tasks, achieving performance over 97% on 15 benchmarks, with a maximum improvement of 36.76%...

Reinforced Correlation Between Vision and Language for Precise Medical AI Assistant

Haonan Wang, Jiaji Mao, Lehan Wang, Qixiang Zhang, Marawan Elbatel, Yi Qin, Huijun Hu, Baoxun Li, Wenhui Deng, Weifeng Qin, Hongrui Li, Jialin Liang, Jun Shen, Xiaomeng Li

Submitted to Nature Communications, under review, 2025

...we propose RCMed, a full-stack AI assistant that enhances multimodal alignment in both input and output, enabling precise anatomical delineation, accurate localization, and reliable diagnosis for clinicians through hierarchical vision-language grounding. Trained on a dataset of 20M image-mask-description triplets, RCMed achieves SOTA precision in contextualizing irregular lesions and subtle anatomical boundaries, excelling across 165 clinical tasks spanning 9 modalities...

2024

GlandSAM: Injecting Morphology Knowledge into Segment Anything Model for Label-free Gland Segmentation

Qixiang ZHANG, Yi LI, Cheng XUE, Haonan WANG, Xiaomeng LI

IEEE Transactions on Medical Imaging (TMI), JCR Q1, 2024

AllSpark: Reborn labeled features from unlabeled in transformer for semi-supervised semantic segmentation

Haonan WANG*, Qixiang ZHANG*, Yi LI, Xiaomeng LI (* equal contribution)

Conference on Computer Vision and Pattern Recognition (CVPR) CCF-A 2024

Few-Shot Lymph Node Metastasis Classification Meets High Performance on Whole Slide Images via the Informative Non-parametric Classifier

Yi LI, Qixiang ZHANG, Tianqi XIANG, Xiaomeng LI

International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2024

2023

Morphology-inspired unsupervised gland segmentation via selective semantic grouping

Qixiang ZHANG, Yi LI, Xiaomeng LI

International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2023

Dual Teacher Sample Consistency Framework for Semi-Supervised Medical Image Classification

Qixiang Zhang, Yuxiang Yang, Chen Zu, Jianjia Zhang, Xi Wu, Jiliu Zhou, Yan Wang

IEEE Transactions on Emerging Topics in Computational Intelligence (TETCI) 2023
