arxiv:2506.14963

Understanding multi-fidelity training of machine-learned force-fields

Published on Jun 17, 2025
Abstract

AI-generated summary: Multi-fidelity training strategies for machine-learned force fields are investigated, showing that pre-training/fine-tuning requires method-specific adaptations while multi-headed models learn universal representations across different quantum-chemical methods.

Effectively leveraging data from multiple quantum-chemical methods is essential for building machine-learned force fields (MLFFs) that are applicable to a wide range of chemical systems. This study systematically investigates two multi-fidelity training strategies, pre-training/fine-tuning and multi-headed training, to elucidate the mechanisms underpinning their success. We identify key factors driving the efficacy of pre-training followed by fine-tuning, but find that internal representations learned during pre-training are inherently method-specific, requiring adaptation of the model backbone during fine-tuning. Multi-headed models offer an extensible alternative, enabling simultaneous training on multiple fidelities. We demonstrate that a multi-headed model learns method-agnostic representations that allow for accurate predictions across multiple label sources. While this approach introduces a slight accuracy compromise compared to sequential fine-tuning, it unlocks new cost-efficient data generation strategies and paves the way towards developing universal MLFFs.
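
The multi-headed strategy described in the abstract can be pictured as a shared backbone that learns a method-agnostic representation, with one output head per label source. The sketch below is a minimal, hypothetical PyTorch illustration of that structure, not the authors' code: it assumes precomputed per-structure feature vectors in place of a real atomistic featurizer, predicts energies only (a real MLFF would also derive forces as energy gradients), and all class, head, and fidelity names are illustrative.

import torch
import torch.nn as nn

class MultiHeadedMLFF(nn.Module):
    """Sketch of a multi-headed model: one shared backbone,
    one scalar energy head per quantum-chemical label source."""

    def __init__(self, n_features: int, hidden: int, fidelities: list[str]):
        super().__init__()
        # Shared backbone: trained on all fidelities at once, so it is
        # pushed towards a method-agnostic representation.
        self.backbone = nn.Sequential(
            nn.Linear(n_features, hidden),
            nn.SiLU(),
            nn.Linear(hidden, hidden),
            nn.SiLU(),
        )
        # One head per method; adding a fidelity only adds a head,
        # which is what makes the approach extensible.
        self.heads = nn.ModuleDict(
            {name: nn.Linear(hidden, 1) for name in fidelities}
        )

    def forward(self, features: torch.Tensor, fidelity: str) -> torch.Tensor:
        # Every batch passes through the shared backbone, then through
        # the head matching its label source.
        return self.heads[fidelity](self.backbone(features))

# Training sketch: mixed-fidelity batches update the same backbone.
# (Contrast with pre-training/fine-tuning, where the backbone itself
# must be adapted when switching to the new method's labels.)
model = MultiHeadedMLFF(n_features=64, hidden=128, fidelities=["dft", "ccsd_t"])
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for features, energies, fidelity in []:  # placeholder for a mixed-fidelity loader
    loss = nn.functional.mse_loss(model(features, fidelity), energies)
    opt.zero_grad()
    loss.backward()
    opt.step()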


Get this paper in your agent:

hf papers read 2506.14963
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash
