Papers
arxiv:2603.23211

The NCS-Model: A seismic foundation model trained on the Norwegian repository of public data

Published on Mar 24
Abstract

Seismic foundation models pretrained on large-scale geological data demonstrate superior performance for seismic interpretation tasks compared to natural-image pretrained models, with 2.5D tokenization offering optimal accuracy-efficiency balance.

AI-generated summary

We present the NCS-models, a family of seismic foundation models pretrained on a large share of the full-stack seismic cubes from the Norwegian Continental Shelf (NCS) available through the public DISKOS database. The model weights are open-sourced for the wider geoscience community. Foundation models trained with large-scale self-supervision are emerging as a promising basis for automatic seismic interpretation. However, most existing seismic models rely on limited or proprietary datasets, and it remains unclear how well natural-image foundation models transfer to seismic data. Our goals are to develop basin-scale seismic foundation models, provide practical recipes for scalable 3D training, and quantify the effects of basin-targeted pretraining and token dimensionality on downstream interpretation performance. Using masked autoencoders with Vision Transformer backbones, we pretrain models on a DISKOS-derived corpus of 3D time- and depth-migrated seismic volumes. The NCS-model variants use 2D, 2.5D multi-view, and 3D tokenization within a matched training setup. Transfer is evaluated on interpretation benchmarks using frozen backbones and a simple k-nearest neighbor classifier. Baselines include an ImageNet-pretrained MAE, a frontier vision foundation model, and a globally pretrained seismic model. Natural-image pretrained models do not reliably transfer, reflecting the large domain gap between natural images and seismic data. Seismic pretraining is necessary for robust transfer, and large-scale basin-targeted pretraining yields further gains over a smaller globally pretrained seismic baseline. The NCS-models achieve the best overall performance without fine-tuning, and 2.5D tokenization offers the strongest accuracy-efficiency tradeoff. The learned embeddings also support similarity search for interactive interpretation.
