arxiv:2602.07120

Anchored Decoding: Provably Reducing Copyright Risk for Any Language Model

Published on Feb 6 · Submitted by Jacqueline He on Feb 10
Abstract

Anchored Decoding suppresses verbatim copying in language models while maintaining fluency and factual accuracy through constrained generation that balances risk and utility.

AI-generated summary

Modern language models (LMs) tend to memorize portions of their training data and emit verbatim spans. When the underlying sources are sensitive or copyright-protected, such reproduction raises issues of consent and compensation for creators and compliance risks for developers. We propose Anchored Decoding, a plug-and-play inference-time method for suppressing verbatim copying: it enables decoding from any risky LM trained on mixed-license data by keeping generation in bounded proximity to a permissively trained safe LM. Anchored Decoding adaptively allocates a user-chosen information budget over the generation trajectory and enforces per-step constraints that yield a sequence-level guarantee, enabling a tunable risk-utility trade-off. To make Anchored Decoding practically useful, we introduce a new permissively trained safe model (TinyComma 1.8B), as well as Anchored_{Byte} Decoding, a byte-level variant of our method that enables cross-vocabulary fusion via the ByteSampler framework (Hayase et al., 2025). We evaluate our methods across six model pairs on long-form evaluations of copyright risk and utility. Anchored and Anchored_{Byte} Decoding define a new Pareto frontier, preserving near-original fluency and factuality while eliminating up to 75% of the measurable copying gap (averaged over six copying metrics) between the risky baseline and a safe reference, at a modest inference overhead.
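To make the per-step constraint concrete, here is a minimal sketch of one way such a step could be realized; this is not the paper's published algorithm. The sketch mixes the risky and safe next-token distributions in log space and chooses the mixing weight by bisection so that the KL divergence from the mixed distribution to the safe one stays within a per-step share of a user-chosen sequence budget. The function names, the log-linear mixture, the uniform budget split, and the use of KL divergence are all illustrative assumptions.

```python
# Illustrative sketch only: the log-linear mixture, the KL-based budget, and all
# names below are assumptions, not the paper's exact method.
import numpy as np

def log_softmax(logits):
    z = logits - logits.max()
    return z - np.log(np.exp(z).sum())

def kl(log_p, log_q):
    # KL(p || q) for log-probability vectors.
    p = np.exp(log_p)
    return float((p * (log_p - log_q)).sum())

def anchored_step(risky_logits, safe_logits, step_budget, tol=1e-4):
    """Mix risky and safe next-token logits while spending at most
    `step_budget` nats of divergence away from the safe distribution."""
    log_safe = log_softmax(safe_logits)
    log_risky = log_softmax(risky_logits)

    def mixed(alpha):
        return log_softmax((1.0 - alpha) * log_safe + alpha * log_risky)

    # If fully trusting the risky model already fits the budget, use it.
    if kl(mixed(1.0), log_safe) <= step_budget:
        return mixed(1.0)

    # Otherwise bisect on the mixing weight; along this geometric path the
    # divergence from the safe distribution grows monotonically with alpha.
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if kl(mixed(mid), log_safe) <= step_budget:
            lo = mid
        else:
            hi = mid
    return mixed(lo)

# Toy usage: a 5-token vocabulary, a total budget of 0.5 nats over 10 steps.
rng = np.random.default_rng(0)
total_budget, num_steps = 0.5, 10
per_step = total_budget / num_steps  # uniform split; the paper's allocation is adaptive
log_p = anchored_step(rng.normal(size=5), rng.normal(size=5), per_step)
next_token = int(np.argmax(log_p))
```

Because each step spends at most its share of the budget, the per-step constraints add up to a bound over the whole generation, which is what gives the sequence-level guarantee described above.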

Community

The memorization and reproduction of copyrighted text by LLMs has potentially harmful repercussions for both data creators and AI developers. To this end, Anchored Decoding is a decoding technique for language models (LMs) that provably reduces the likelihood of generating copyrighted text. It requires two LMs: a safe model trained exclusively on permissively licensed data, and a higher-utility risky model trained on mixed-license data.

Anchored Decoding works for both token-level and byte-level decoding. To make this algorithm as practical as possible, we release (1) TinyComma 1.8B, a safe base LM that is tokenizer-compatible with the Llama 3 model family, and (2) byte-level support to facilitate mixed-tokenizer decoding; a rough pairing sketch follows below.
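For concreteness, here is a hedged sketch of how a safe/risky model pair might be combined at inference time with Hugging Face transformers, using a fixed log-linear mixing weight in place of Anchored Decoding's adaptive per-step budget allocation. The repository ids are hypothetical placeholders, not the actual released checkpoints, and the fixed weight is only a stand-in for the method's constrained step.

```python
# Illustrative only: repo ids are hypothetical placeholders, and the fixed mixing
# weight stands in for Anchored Decoding's adaptive per-step budget allocation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

SAFE_REPO = "your-org/tinycomma-1.8b"      # placeholder for the permissively trained safe LM
RISKY_REPO = "your-org/risky-llama-3-8b"   # placeholder for the mixed-license risky LM

tok = AutoTokenizer.from_pretrained(SAFE_REPO)  # both models assumed tokenizer-compatible
safe = AutoModelForCausalLM.from_pretrained(SAFE_REPO)
risky = AutoModelForCausalLM.from_pretrained(RISKY_REPO)

prompt = "Summarize the plot of a well-known novel:"
ids = tok(prompt, return_tensors="pt").input_ids

alpha = 0.5  # fixed trust in the risky model; the paper chooses this adaptively per step
for _ in range(50):
    with torch.no_grad():
        safe_logits = safe(ids).logits[:, -1, :]
        risky_logits = risky(ids).logits[:, -1, :]
    # Log-linear mixture keeps the next-token distribution close to the safe model.
    mixed = (1.0 - alpha) * torch.log_softmax(safe_logits, dim=-1) \
            + alpha * torch.log_softmax(risky_logits, dim=-1)
    next_id = mixed.argmax(dim=-1, keepdim=True)
    ids = torch.cat([ids, next_id], dim=-1)

print(tok.decode(ids[0], skip_special_tokens=True))
```

When the two models do not share a tokenizer, the byte-level variant (Anchored_{Byte} Decoding via the ByteSampler framework) is the intended path; that fusion step is not shown here.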

