Download
- ⚖️ Model weights
- ⚙️ Model configuration
- 📂 Dataset splits
Abstract
MAESTRO is a tailored adaptation of the Masked Autoencoder (MAE) that effectively orchestrates the use of multimodal, multitemporal, and multispectral Earth Observation (EO) data. Evaluated on four EO datasets, MAESTRO sets a new state-of-the-art on tasks that strongly rely on multitemporal dynamics, while remaining competitive on tasks dominated by a single monotemporal modality.
MAESTRO's contributions are as follows:
- Extensive benchmarking of multimodal and multitemporal SSL: evaluation of the impact of various fusion strategies for self-supervised learning on multimodal and multitemporal EO data.
- Patch-group-wise normalization: a novel normalization scheme that normalizes reconstruction targets patch-wise within groups of highly correlated spectral bands (see the sketch after this list).
- MAESTRO: a novel adaptation of the MAE that combines optimized fusion strategies with patch-group-wise normalization.
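To make the normalization scheme concrete, here is a minimal NumPy sketch of patch-group-wise normalization. The band grouping, array layout, and epsilon are illustrative assumptions for this sketch only; the code repository linked below contains the actual implementation.

```python
import numpy as np

# Illustrative grouping of correlated Sentinel-2 bands (an assumption for this
# sketch, not necessarily the grouping used in the repository).
BAND_GROUPS = {
    "visible": [0, 1, 2],             # B02, B03, B04
    "red_edge_nir": [3, 4, 5, 6, 7],  # B05, B06, B07, B08, B8A
    "swir": [8, 9],                   # B11, B12
}

def patch_group_norm(targets: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Normalize reconstruction targets patch-wise within band groups.

    targets: array of shape (num_patches, num_bands, pixels_per_patch).
    For each patch and each band group, the mean and std are computed jointly
    over all pixels and all bands of that group, then used to standardize it.
    """
    out = np.empty_like(targets, dtype=np.float32)
    for bands in BAND_GROUPS.values():
        group = targets[:, bands, :]                   # (P, |group|, N)
        mean = group.mean(axis=(1, 2), keepdims=True)  # one mean per patch
        std = group.std(axis=(1, 2), keepdims=True)    # one std per patch
        out[:, bands, :] = (group - mean) / (std + eps)
    return out
```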
📃 Paper: https://arxiv.org/abs/2508.10894
💻 Code repository: https://github.com/IGNF/MAESTRO
Pre-training
This model is pre-trained on FLAIR-HUB.
FLAIR-HUB contains 241,100 tiles of size 102.4 × 102.4 m, covering a total area of 2,528 km² across France.
We retain six distinct modalities:
- Aerial imagery RGB + NIR (0.2 m resolution)
- DEM/DSM imagery (0.2 m resolution)
- SPOT 6–7 imagery
- Sentinel-1 time series in ascending orbit
- Sentinel-1 time series in descending orbit
- Sentinel-2 time series
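For reference, these modality names map to the filter_inputs keys used in the commands further down this card; the mapping is read directly off those commands and the fine-tuning guidelines.

```python
# Modality names from the list above, mapped to the filter_inputs keys that
# appear in the pre-training and fine-tuning commands below.
MODALITY_KEYS = {
    "Aerial RGB + NIR": "aerial",
    "DEM/DSM": "dem",
    "SPOT 6-7": "spot",
    "Sentinel-1 ascending": "s1_asc",
    "Sentinel-1 descending": "s1_des",
    "Sentinel-2": "s2",
}
```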
Below is the reconstruction loss during pre-training on the combined training and validation splits, using patch-group-wise normalization and a per-modality loss weighting proportional to token counts.
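Schematically, this modality weighting is a token-count-weighted average of the per-modality reconstruction losses. The snippet below restates the weighting only; it is not the training code.

```python
def combine_modality_losses(losses, token_counts):
    """Average per-modality reconstruction losses with weights proportional
    to each modality's token count (both arguments are dicts keyed by
    modality name)."""
    total_tokens = sum(token_counts[m] for m in losses)
    return sum(token_counts[m] * losses[m] for m in losses) / total_tokens
```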
Fine-tuning
For optimal fine-tuning results with this model:
- Ensure that patch sizes and channels match between pre-training and fine-tuning for each modality:
  - Modality "aerial":
    - Patch size: 16
    - Channels: NIR, RED, GREEN, BLUE
  - Modality "dem":
    - Patch size: 32
    - Channels: DEM, DSM
  - Modality "spot":
    - Patch size: 16
    - Channels: RED, GREEN, BLUE
  - Modality "s1_asc":
    - Patch size: 2
    - Channels: VV, VH
  - Modality "s1_des":
    - Patch size: 2
    - Channels: VV, VH
  - Modality "s2":
    - Patch size: 2
    - Channels: B02, B03, B04, B05, B06, B07, B08, B8A, B11, B12
- Modality "aerial":
- Use a positional-encoding grid whose resolution is fixed across datasets in ground-distance terms, i.e., proportional to the crop size in meters (see the check below):
  - grid_pos_enc ≈ 1.6 * crop_meters
Note that modality names must match between pre-training and fine-tuning.
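As a quick check, the grid_pos_enc values used in the commands of this card are consistent with the rule above, with 1.6 × 102.4 = 163.84 rounded to 160.

```python
# grid_pos_enc ≈ 1.6 * crop_meters, compared with the values used in the
# commands of this card (102.4 m -> 160, 60 m -> 96, 160 m -> 256).
for crop_meters, grid_pos_enc in [(102.4, 160), (60, 96), (160, 256)]:
    print(f"crop_meters={crop_meters}: 1.6 * crop_meters = {1.6 * crop_meters:.2f}"
          f" -> grid_pos_enc={grid_pos_enc}")
```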
Below are cross-dataset evaluation results obtained with these guidelines on TreeSatAI-TS and PASTIS-HD.
| Model | Pre-training dataset | TreeSatAI-TS (weighted F1) | PASTIS-HD (mIoU) |
|---|---|---|---|
| MAESTRO (ours) | FLAIR-HUB | 79.6 | 68.0 |
| DINO-v2 | LVD-142M | 76.7 | 64.4 |
| DINO-v2 sat. | Maxar Vivid2 | 76.3 | 64.0 |
| DOFA | DOFA MM | 76.0 | 62.9 |
| CROMA | SSL4EO | 70.5 | 65.0 |
| Prithvi-EO-2.0 | HLS | 75.6 | 66.2 |
| SatMAE | fMoW RGB+S | 76.9 | 66.6 |
🚀 Getting started
Prerequisites:
- Clone MAESTRO's code repository
- Fetch Dataset splits and move them to each dataset directory
- Fetch model weights and move them into /path/to/experiments/MAESTRO_FLAIR-HUB_base/checkpoints/
- Fetch model configuration and move it into /path/to/experiments/MAESTRO_FLAIR-HUB_base/.hydra/
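To confirm the fetched files sit in the expected locations (using the placeholder paths from this card), a quick check could look like this:

```python
from pathlib import Path

# Placeholder experiment directory used throughout this card.
exp_dir = Path("/path/to/experiments/MAESTRO_FLAIR-HUB_base")

# The fetched model weights and Hydra configuration should sit in these folders.
for sub in ("checkpoints", ".hydra"):
    path = exp_dir / sub
    print(f"{path}: {'found' if path.is_dir() else 'missing'}")
```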
The project is set up with Poetry.
# 1. Change directory
cd MAESTRO
# 2. Install dependencies with Poetry
poetry install
Pre-training on FLAIR-HUB is performed using:
# batch size 9 on 4 nodes with 4 GPUs per node
poetry run python main.py \
model.model=mae model.model_size=medium \
model.fusion_mode=group model.inter_depth=3 \
opt_pretrain.epochs=100 opt_probe.epochs=0 opt_finetune.epochs=0 \
opt_pretrain.batch_size=9 trainer.num_nodes=4 \
datasets.name_dataset=flair \
datasets.flair.filter_inputs=[aerial,dem,spot,s2,s1_asc,s1_des] \
datasets.flair.crop_meters=102.4 datasets.flair.grid_pos_enc=160 \
datasets.flair.aerial.image_size=512 datasets.flair.aerial.patch_size.mae=16 \
datasets.flair.dem.image_size=512 datasets.flair.dem.patch_size.mae=32 \
datasets.flair.spot.image_size=128 datasets.flair.spot.patch_size.mae=16 datasets.flair.spot.bands=3 \
datasets.flair.s2.image_size=10 datasets.flair.s2.patch_size.mae=2 \
datasets.flair.s1_asc.image_size=10 datasets.flair.s1_asc.patch_size.mae=2 \
datasets.flair.s1_des.image_size=10 datasets.flair.s1_des.patch_size.mae=2 \
datasets.root_dir=/path/to/dataset/dir datasets.flair.csv_dir=/path/to/dataset/dir/FLAIR-HUB datasets.flair.rel_dir=FLAIR-HUB \
run.exp_dir=/path/to/experiments/dir run.exp_name=MAESTRO_FLAIR-HUB_base
Fine-tuning on TreeSatAI-TS:
# batch size 24 on 1 node with 4 GPUs per node
# load pre-trained model "MAESTRO_FLAIR-HUB_base"
poetry run python main.py \
model.model=mae model.model_size=medium \
model.fusion_mode=group model.inter_depth=3 \
opt_pretrain.epochs=0 opt_probe.epochs=10 opt_finetune.epochs=50 \
opt_probe.batch_size=24 opt_finetune.batch_size=24 trainer.num_nodes=1 \
opt_finetune.monitor=treesat_mlc_thresh/weighted_f1_val \
datasets.name_dataset=treesatai_ts \
datasets.treesatai_ts.filter_inputs=[aerial,s2,s1_asc,s1_des] \
datasets.treesatai_ts.crop_meters=60 datasets.treesatai_ts.grid_pos_enc=96 \
datasets.treesatai_ts.aerial.image_size=240 datasets.treesatai_ts.aerial.patch_size.mae=16 \
datasets.treesatai_ts.s2.image_size=6 datasets.treesatai_ts.s2.patch_size.mae=2 \
datasets.treesatai_ts.s1_asc.image_size=6 datasets.treesatai_ts.s1_asc.patch_size.mae=2 \
datasets.treesatai_ts.s1_des.image_size=6 datasets.treesatai_ts.s1_des.patch_size.mae=2 \
datasets.root_dir=/path/to/dataset/dir datasets.treesatai_ts.rel_dir=TreeSatAI-TS \
run.exp_dir=/path/to/experiments/dir run.exp_name=MAESTRO_FLAIR-HUB-x-TSAI-TS_base \
run.load_name=MAESTRO_FLAIR-HUB_base
Fine-tuning on PASTIS-HD:
# batch size 12 on 1 node with 4 GPUs per node
# load pre-trained model "MAESTRO_FLAIR-HUB_base"
poetry run python main.py \
model.model=mae model.model_size=medium \
model.fusion_mode=group model.inter_depth=3 \
opt_pretrain.epochs=0 opt_probe.epochs=10 opt_finetune.epochs=50 \
opt_probe.batch_size=12 opt_finetune.batch_size=12 trainer.num_nodes=1 \
opt_finetune.monitor=pastis_seg/average_iou_val \
datasets.name_dataset=pastis_hd \
datasets.pastis_hd.filter_inputs=[spot,s2,s1_asc,s1_des] \
datasets.pastis_hd.crop_meters=160 datasets.pastis_hd.grid_pos_enc=256 datasets.pastis_hd.repeats=8 \
datasets.pastis_hd.spot.image_size=160 datasets.pastis_hd.spot.patch_size.mae=16 \
datasets.pastis_hd.s2.image_size=16 datasets.pastis_hd.s2.patch_size.mae=2 \
datasets.pastis_hd.s1_asc.image_size=16 datasets.pastis_hd.s1_asc.patch_size.mae=2 \
datasets.pastis_hd.s1_des.image_size=16 datasets.pastis_hd.s1_des.patch_size.mae=2 \
datasets.root_dir=/path/to/dataset/dir datasets.pastis_hd.rel_dir=PASTIS-HD \
run.exp_dir=/path/to/experiments/dir run.exp_name=MAESTRO_FLAIR-HUB-x-PASTIS-HD_base \
run.load_name=MAESTRO_FLAIR-HUB_base
Fine-tuning on FLAIR#2:
# batch size 6 on 2 nodes with 4 GPUs per node
# load pre-trained model "MAESTRO_FLAIR-HUB_base"
poetry run python main.py \
model.model=mae model.model_size=medium \
model.fusion_mode=group model.inter_depth=3 \
opt_pretrain.epochs=0 opt_probe.epochs=15 opt_finetune.epochs=100 \
opt_probe.batch_size=6 opt_finetune.batch_size=6 trainer.num_nodes=2 \
opt_finetune.monitor=cosia/average_iou_val \
datasets.name_dataset=flair \
datasets.flair.version=flair2 \
datasets.flair.filter_inputs=[aerial,dem,s2] \
datasets.flair.crop_meters=102.4 datasets.flair.grid_pos_enc=160 \
datasets.flair.aerial.image_size=512 datasets.flair.aerial.patch_size.mae=16 \
datasets.flair.dem.image_size=512 datasets.flair.dem.patch_size.mae=32 \
datasets.flair.s2.image_size=10 datasets.flair.s2.patch_size.mae=2 \
datasets.root_dir=/path/to/dataset/dir datasets.flair.csv_dir=/path/to/dataset/dir/FLAIR-HUB datasets.flair.rel_dir=FLAIR-HUB \
run.exp_dir=/path/to/experiments/dir run.exp_name=MAESTRO_FLAIR-HUB-x-FLAIR2_base \
run.load_name=MAESTRO_FLAIR-HUB_base
Fine-tuning on FLAIR-HUB:
# batch size 6 on 4 nodes with 4 GPUs per node
# load pre-trained model "MAESTRO_FLAIR-HUB_base"
poetry run python main.py \
model.model=mae model.model_size=medium \
model.fusion_mode=group model.inter_depth=3 \
opt_pretrain.epochs=0 opt_probe.epochs=15 opt_finetune.epochs=100 \
opt_probe.batch_size=6 opt_finetune.batch_size=6 trainer.num_nodes=4 \
opt_finetune.monitor=cosia/average_iou_val \
datasets.name_dataset=flair \
datasets.flair.filter_inputs=[aerial,dem,s2,s1_asc,s1_des] \
datasets.flair.crop_meters=102.4 datasets.flair.grid_pos_enc=160 \
datasets.flair.aerial.image_size=512 datasets.flair.aerial.patch_size.mae=16 \
datasets.flair.dem.image_size=512 datasets.flair.dem.patch_size.mae=32 \
datasets.flair.s2.image_size=10 datasets.flair.s2.patch_size.mae=2 \
datasets.flair.s1_asc.image_size=10 datasets.flair.s1_asc.patch_size.mae=2 \
datasets.flair.s1_des.image_size=10 datasets.flair.s1_des.patch_size.mae=2 \
datasets.root_dir=/path/to/dataset/dir datasets.flair.csv_dir=/path/to/dataset/dir/FLAIR-HUB datasets.flair.rel_dir=FLAIR-HUB \
run.exp_dir=/path/to/experiments/dir run.exp_name=MAESTRO_FLAIR-HUB-x-FLAIR-HUB_base \
run.load_name=MAESTRO_FLAIR-HUB_base
Reference
If you use this model, please cite:
@inproceedings{labatie2026maestro,
title={MAESTRO: Masked AutoEncoders for Multimodal, Multitemporal, and Multispectral Earth Observation Data},
author={Labatie, Antoine and Vaccaro, Michael and Lardiere, Nina and Garioud, Anatol and Gonthier, Nicolas},
booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
year={2026}
}
Acknowledgement
The experiments in the paper were conducted using HPC/AI resources from GENCI-IDRIS (allocations A0181013803, A0161013803, AD010114597R1, and AD011014690R1).