
Self-Supervised Pretraining for Medical Imaging

A workshop paper exploring how MAE-style pretraining generalises across medical imaging modalities.

Problem

Medical imaging suffers from severe label scarcity, which limits supervised performance.

Methodology

We pretrain a Vision Transformer (ViT) with a masked-autoencoder (MAE) objective on an unlabeled multi-modality corpus (CT, MRI, X-ray) and study transfer to downstream tasks.
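The MAE recipe masks a large fraction of image patches and feeds only the visible subset to the encoder, with the masked patches serving as reconstruction targets. A minimal NumPy sketch of the patchify-and-mask step is below; the 8×8 patch size and 75% mask ratio are illustrative defaults, not values taken from the paper:

```python
import numpy as np

def patchify(img, patch):
    """Split a (H, W) image into non-overlapping patch x patch squares,
    returned as a (num_patches, patch*patch) array."""
    H, W = img.shape
    h, w = H // patch, W // patch
    patches = img[:h * patch, :w * patch].reshape(h, patch, w, patch)
    return patches.transpose(0, 2, 1, 3).reshape(h * w, patch * patch)

def random_mask(patches, mask_ratio=0.75, rng=None):
    """Keep a random subset of patches for the encoder; the masked
    indices identify the patches the decoder must reconstruct."""
    rng = rng or np.random.default_rng(0)
    n = patches.shape[0]
    n_keep = int(n * (1 - mask_ratio))
    perm = rng.permutation(n)
    keep_idx = np.sort(perm[:n_keep])
    mask_idx = np.sort(perm[n_keep:])
    return patches[keep_idx], keep_idx, mask_idx

# Toy 64x64 "scan": 64 patches of 64 pixels each; 16 stay visible.
img = np.arange(64 * 64, dtype=np.float32).reshape(64, 64)
patches = patchify(img, 8)
visible, keep_idx, mask_idx = random_mask(patches)
```

In a full pipeline the visible patches are linearly embedded and passed to the ViT encoder, while a lightweight decoder predicts the pixels at `mask_idx`; the high mask ratio is what makes the pretext task non-trivial.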

Results

Accepted as a workshop paper; the pretrained models improve on supervised baselines by 4.1 AUC points on average.