Qixiang ZHANG, Yi LI, Tianqi XIANG, Haonan WANG, Xiaomeng LI
Under Review 2025
WSI Analysis
Yi LI, Haonan Wang, Qixiang Zhang, Boyu Xiao, Chenchang Hu, Hualiang Wang, Xiaomeng Li
Submitted to Annual Conference on Neural Information Processing Systems (NeurIPS) 2025
[Preprint]
[Code]
Benchmark MLLM
Haonan WANG*, Qixiang ZHANG*, Lehan WANG, Xuanqi HUANG, Xiaomeng LI (* equal contribution)
International Conference on Computer Vision (ICCV) CCF-A 2025
[Preprint]
[Code]
AI for Neural Science
Honglong Yang, Shanshan Song, Yi Qin, Lehan Wang, Haonan Wang, Xinpeng Ding, Qixiang Zhang, Bodong Du, Xiaomeng Li
Under Review 2025
Generalist Medical AI systems have demonstrated expert-level performance in biomedical perception tasks, yet their clinical utility remains limited by inadequate multi-modal explainability and suboptimal prognostic capabilities. Here, we present XMedGPT, a clinician-centric, multi-modal AI assistant that integrates textual and visual interpretability to support transparent and trustworthy medical decision-making. XMedGPT not only produces accurate diagnostic and descriptive outputs, but also grounds referenced anatomical sites within medical images, bridging critical gaps in interpretability and enhancing clinician usability. The model achieves an Intersection over Union of 0.703 across 141 anatomical regions, and a Kendall’s tau-b of 0.479, demonstrating strong alignment between visual rationales and clinical outcomes. In survival and recurrence prediction, it surpasses prior leading models by 26.9%...
[Preprint] MLLM Benchmark
Yi Li, Ziyu Ning, Tianqi Xiang, Qixiang Zhang, Yi Min, Zhihao Lin, Feiyan Feng, Baozhen Zeng, Xuexia Qian, Lu Sun, Jiace Qin, Ling Xiang, Chao Fan, Tian Qin, Qian Wang, Xiu-Wu Bian, Qingling Zhang, Xiaomeng Li
Submitted to Nature Cancer, Under Review, 2025
...In this paper, we introduce a novel paradigm, Pan-cancer Recognition via Examples without Training (PRET). PRET learns from a few examples at inference time without model fine-tuning, offering a flexible, scalable, and effective solution for recognizing cancers across diverse organs, hospitals, and tasks with a single model. In extensive evaluations across international hospitals and diverse benchmarks, our method outperforms existing approaches on 20 tasks, achieving performance above 97% on 15 benchmarks, with a maximum improvement of 36.76%...
[Preview Model] WSI Analysis
Haonan Wang, Jiaji Mao, Lehan Wang, Qixiang Zhang, Marawan Elbatel, Yi Qin, Huijun Hu, Baoxun Li, Wenhui Deng, Weifeng Qin, Hongrui Li, Jialin Liang, Jun Shen, Xiaomeng Li
Submitted to Nature Communications, Under Review, 2025
...we propose RCMed, a full-stack AI assistant that enhances multimodal alignment in both input and output, enabling precise anatomical delineation, accurate localization, and reliable diagnosis for clinicians through hierarchical vision-language grounding. Trained on a dataset of 20M image-mask-description triplets, RCMed achieves state-of-the-art precision in contextualizing irregular lesions and subtle anatomical boundaries, excelling across 165 clinical tasks spanning 9 imaging modalities...
[Preprint] [Demo] Segmentation MLLM
Yi LI, Qixiang ZHANG, Tianqi XIANG, Xiaomeng LI
International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2024
Qixiang Zhang, Yuxiang Yang, Chen Zu, Jianjia Zhang, Xi Wu, Jiliu Zhou, Yan Wang
IEEE Transactions on Emerging Topics in Computational Intelligence (TETCI) 2023
[Paper] Segmentation