Qixiang Zhang
Ph.D. Candidate at HKUST

I am a Year-2 Ph.D. candidate at The Hong Kong University of Science and Technology (HKUST), supervised by Prof. Xiaomeng LI. My research lies at the intersection of artificial intelligence and medical image analysis, with the goal of advancing healthcare through machine intelligence. I am currently focused on applying AI solutions to neuroscience.

Previously, I received my B.Eng. degree in Software Engineering from Sichuan University, where I collaborated closely with Prof. Yan Wang as an undergraduate research intern.

Curriculum Vitae

Education
  • The Hong Kong University of Science and Technology
    Department of Electronic and Computer Engineering
    Ph.D. Candidate
    Sep. 2023 - present
  • Sichuan University
    School of Software Engineering
    B.Eng in Software Engineering
    Sep. 2019 - Jul. 2023
  • National University of Singapore
    School of Computing
    SoC Research Summer Workshop
    Jun. 2020 - Sep. 2020
Experience
  • Sichuan University
    Research Intern
    Supervisor: Prof. Yan WANG
    Sep. 2020 - Jul. 2022
Honors & Awards
  • Best TA Award (10,000 HKD, Top 1%) | HKUST
    2025
  • Postgraduate Studentship (18,000 HKD per month) | HKUST
    2023
  • RedBird PhD Scholarship (82,000 HKD, Top 1%) | HKUST
    2023
  • Outstanding Graduate (Top 3%) | SCU
    2023
  • First Prize (Top < 2%) of Research Summer Workshop | NUS
    2020
News
2025
Awarded the Best TA Award (10,000 HKD). Thank you, HKUST and Prof. LI!
Aug 22
One paper accepted by ICCV 2025 (CCF-A)
Jun 25
One co-author paper accepted by MICCAI 2025
Jun 18
One journal paper accepted by IEEE Transactions on Medical Imaging (JCR Q1)
May 28
Check out our new evaluation benchmark for unified MLLMs, submitted to NeurIPS 2025
May 15
New attempts in AI for neuroscience (brain decoding)
May 15
2024
One journal paper accepted by IEEE Transactions on Medical Imaging (JCR Q1)
Dec 22
One co-author paper accepted by MICCAI 2024
Dec 22
One paper accepted by CVPR 2024 (CCF-A)
Nov 28
2023
Awarded the PGS and RedBird Scholarship. Thank you, HKUST and Prof. LI!
Sep 19
Publications (view all)
Read like a Pathologist: Enhancing Mamba with Pyramid Router for Whole Slide Image Analysis

Qixiang ZHANG, Yi LI, Tianqi XIANG, Haonan WANG, Xiaomeng LI

Under review 2025

S&D Messenger: Exchanging Semantic and Domain Knowledge for Generic Semi-Supervised Medical Image Segmentation

Qixiang ZHANG*, Haonan WANG*, Yi LI, Xiaomeng LI (* equal contribution)

IEEE Transactions on Medical Imaging (TMI), JCR Q1, 2025

UniEval: Unified Holistic Evaluation for Unified Multimodal Understanding and Generation

Yi LI, Haonan Wang, Qixiang Zhang, Boyu Xiao, Chenchang Hu, Hualiang Wang, Xiaomeng Li

Submitted to Annual Conference on Neural Information Processing Systems (NeurIPS) 2025

Neurons: Emulating the Human Visual Cortex Improves Fidelity and Interpretability in fMRI-to-Video Reconstruction

Haonan WANG*, Qixiang ZHANG*, Lehan WANG, Xuanqi HUANG, Xiaomeng LI (* equal contribution)

International Conference on Computer Vision (ICCV), CCF-A, 2025

MOC: Meta-Optimized Classifier for Few-Shot Whole Slide Image Classification

Tianqi XIANG, Yi LI, Qixiang ZHANG, Haonan WANG, Xiaomeng LI

International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2025

Multi-Modal Explainable Medical AI Assistant for Trustworthy Human-AI Collaboration

Honglong Yang, Shanshan Song, Yi Qin, Lehan Wang, Haonan Wang, Xinpeng Ding, Qixiang Zhang, Bodong Du, Xiaomeng Li

Under Review 2025

Generalist Medical AI systems have demonstrated expert-level performance in biomedical perception tasks, yet their clinical utility remains limited by inadequate multi-modal explainability and suboptimal prognostic capabilities. Here, we present XMedGPT, a clinician-centric, multi-modal AI assistant that integrates textual and visual interpretability to support transparent and trustworthy medical decision-making. XMedGPT not only produces accurate diagnostic and descriptive outputs, but also grounds referenced anatomical sites within medical images, bridging critical gaps in interpretability and enhancing clinician usability. The model achieves an Intersection over Union of 0.703 across 141 anatomical regions, and a Kendall’s tau-b of 0.479, demonstrating strong alignment between visual rationales and clinical outcomes. In survival and recurrence prediction, it surpasses prior leading models by 26.9%...

PRET: Achieving Pan-Cancer Recognition via a Few Examples Without Training

Yi Li, Ziyu Ning, Tianqi Xiang, Qixiang Zhang, Yi Min, Zhihao Lin, Feiyan Feng, Baozhen Zeng, Xuexia Qian, Lu Sun, Jiace Qin, Ling Xiang, Chao Fan, Tian Qin, Qian Wang, Xiu-Wu Bian, Qingling Zhang, Xiaomeng Li

Submitted to Nature Cancer, under review, 2025

...In this paper, we introduce a novel paradigm, Pan-cancer Recognition via Examples without Training (PRET). The proposed PRET learns from a few examples during the inference phase without model fine-tuning, offering a flexible, scalable, and effective solution to recognize cancers across diverse organs, hospitals, and tasks using a single model only. Through extensive evaluations across international hospitals and diverse benchmarks, our method outperforms existing approaches across 20 tasks, achieving performances over 97% on 15 benchmarks at a maximum improvement of 36.76%...

Reinforced Correlation Between Vision and Language for Precise Medical AI Assistant

Haonan Wang, Jiaji Mao, Lehan Wang, Qixiang Zhang, Marawan Elbatel, Yi Qin, Huijun Hu, Baoxun Li, Wenhui Deng, Weifeng Qin, Hongrui Li, Jialin Liang, Jun Shen, Xiaomeng Li

Submitted to Nature Communications, under review, 2025

...we propose RCMed, a full-stack AI assistant that enhances multimodal alignment in both input and output, enabling precise anatomical delineation, accurate localization, and reliable diagnosis for clinicians through hierarchical vision-language grounding. Trained on a dataset of 20M image-mask-description triplets, RCMed achieves SOTA precision in contextualizing irregular lesions and subtle anatomical boundaries, excelling across 165 clinical tasks spanning 9 different modalities...

GlandSAM: Injecting Morphology Knowledge into Segment Anything Model for Label-free Gland Segmentation

Qixiang ZHANG, Yi LI, Cheng XUE, Haonan WANG, Xiaomeng LI

IEEE Transactions on Medical Imaging (TMI), JCR Q1, 2024

AllSpark: Reborn labeled features from unlabeled in transformer for semi-supervised semantic segmentation

Haonan WANG*, Qixiang ZHANG*, Yi LI, Xiaomeng LI (* equal contribution)

Conference on Computer Vision and Pattern Recognition (CVPR) CCF-A 2024

Few-Shot Lymph Node Metastasis Classification Meets High Performance on Whole Slide Images via the Informative Non-parametric Classifier

Yi LI, Qixiang ZHANG, Tianqi XIANG, Xiaomeng LI

International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2024

Morphology-inspired unsupervised gland segmentation via selective semantic grouping

Qixiang ZHANG, Yi LI, Xiaomeng LI

International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2023

Dual Teacher Sample Consistency Framework for Semi-Supervised Medical Image Classification

Qixiang Zhang, Yuxiang Yang, Chen Zu, Jianjia Zhang, Xi Wu, Jiliu Zhou, Yan Wang

IEEE Transactions on Emerging Topics in Computational Intelligence (TETCI) 2023

Professional Activities
Journal Reviews
  • IEEE Transactions on Medical Imaging (TMI)
  • Medical Image Analysis (MIA)
  • IEEE Transactions on Neural Networks and Learning Systems (TNNLS)
  • IEEE Journal of Biomedical and Health Informatics (JBHI)
  • IEEE Transactions on Circuits and Systems for Video Technology (TCSVT)
Conference Reviews
  • European Conference on Computer Vision (ECCV)
  • Medical Image Computing and Computer Assisted Intervention (MICCAI)
Life Beyond AI

I’ve always believed in the motto: “The most precious gift God has ever given us is the world.” Beyond research, I’m passionate about water sports, including swimming (I’m a Level-2 athlete), surfing, windsurfing, and kayaking. I’m also deeply committed to global exploration and volunteer work. My travels have taken me to cities across China, Germany, Singapore, the United States, and Canada. For the past two years, I’ve spent my summer vacations in Sanya, China, volunteering as a coastal lifeguard, a meaningful experience I intend to continue in the years ahead. May God bless everyone, and may God bless me!