Explainable Medical AI
APOLO is a medical AI system with dual-level explainability for privacy-preserving medical image analysis. The project fine-tunes DeepSeek-VL2 with LoRA to build a vision-language model tailored to medical applications.
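As a minimal sketch of how such LoRA fine-tuning might be wired up with Hugging Face PEFT; the checkpoint name, target module names, and hyperparameters below are illustrative assumptions, not the project's actual configuration.

```python
# Sketch: attaching LoRA adapters to a vision-language backbone with PEFT.
# The model id, target_modules, and hyperparameters are assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained(
    "deepseek-ai/deepseek-vl2",   # assumed checkpoint name
    trust_remote_code=True,
)

lora_cfg = LoraConfig(
    r=16,                                  # low-rank dimension (assumed)
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections (assumed names)
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # only the LoRA adapters are trainable
```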
Stage 1: A Vision-Language Model (VLM), fine-tuned with methods focused on descriptive quality, analyzes the input medical image and generates an exhaustive, structured, and strictly objective textual description of all discernible visual findings.
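Stage 1 can be thought of as a single function contract: image in, structured findings text out. The prompt wording and the `vlm_generate` helper in this sketch are hypothetical placeholders for the fine-tuned VLM's inference call.

```python
# Sketch of the Stage 1 contract: a fine-tuned VLM turns a medical image into
# an exhaustive, objective textual description of visual findings.
# `vlm_generate` is a hypothetical wrapper around the model's inference API.
from PIL import Image

FINDINGS_PROMPT = (
    "Describe all discernible visual findings in this medical image. "
    "Be exhaustive, structured, and strictly objective; do not diagnose."
)

def describe_image(image_path: str, vlm_generate) -> str:
    """Stage 1: produce a detailed findings report from an image."""
    image = Image.open(image_path).convert("RGB")
    return vlm_generate(images=[image], prompt=FINDINGS_PROMPT)
```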
Stage 2: A separate, computationally efficient Large Language Model (LLM) receives only the detailed textual description produced by Stage 1. Based exclusively on this text, the LLM performs diagnostic classification or assessment; because only text is passed forward, the image itself never reaches the second model.
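A corresponding sketch of the Stage 2 contract: the lightweight LLM sees only Stage 1's text, never the image. The `llm_complete` callable and the candidate label set are hypothetical placeholders.

```python
# Sketch of the Stage 2 contract: a text-only LLM classifies the case using
# nothing but the Stage 1 findings report, so the image is never shared.
# `llm_complete` and the candidate labels are hypothetical placeholders.
from typing import Sequence

def classify_findings(findings: str, labels: Sequence[str], llm_complete) -> str:
    """Stage 2: diagnostic assessment from the textual description alone."""
    prompt = (
        "Based only on the following findings report, choose the most likely "
        f"assessment from {list(labels)}.\n\nFindings:\n{findings}\n\nAnswer:"
    )
    return llm_complete(prompt).strip()

# Example wiring of the two stages (both model callables are placeholders):
# report = describe_image("chest_xray.png", vlm_generate)
# label = classify_findings(report, ["normal", "pneumonia", "other"], llm_complete)
```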
A Privacy-Preserving, Explainable Framework for Medical Image Analysis.
Detailed performance metrics across different medical imaging tasks.