Introduction

This is the Agentic-R model trained in our paper: Agentic-R: Learning to Retrieve for Agentic Search (📝arXiv). Please refer to our 🧩GitHub repository for detailed usage of Agentic-R.

Usage

Our Agentic-R query encoder is designed for agentic search scenarios.
For queries, the input format is query: <original_question> [SEP] <agent_query>. Passages use the standard passage: prefix, following E5.

Below is an example of how to compute embeddings using sentence_transformers:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("liuwenhan/Agentic-R_e5")

input_texts = [
    # Query encoder input:
    # original_question [SEP] agent_query
    "query: Who wrote The Old Man and the Sea? [SEP] Old Man and the Sea",

    # Passages
    "passage: The Old Man and the Sea is a short novel written by the American author Ernest Hemingway in 1951.",
    "passage: Ernest Hemingway was an American novelist, short-story writer, and journalist, born in 1899."
]

embeddings = model.encode(
    input_texts,
    normalize_embeddings=True
)
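
The following sketch scores the passages against the query using the embeddings computed above. Because the embeddings are L2-normalized, the dot product equals cosine similarity; the indexing assumes the query is the first entry of input_texts, as in the example.

# Query embedding first, passage embeddings after it
query_embedding = embeddings[0]
passage_embeddings = embeddings[1:]

# Dot product equals cosine similarity for normalized embeddings
scores = passage_embeddings @ query_embedding
print(scores)  # higher score = more relevant passage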

Notes:

- original_question refers to the user’s initial question.
- agent_query refers to the intermediate query generated during the agent’s reasoning process.
- Always include [SEP] to separate the two parts of the query (see the helper sketch below).
- We recommend setting normalize_embeddings=True for cosine similarity–based retrieval.
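
For illustration, here is a minimal helper for assembling the query-side input; the function name format_agentic_query is our own shorthand, not part of the released code:

def format_agentic_query(original_question: str, agent_query: str) -> str:
    # Hypothetical helper: builds the Agentic-R query-encoder input string
    return f"query: {original_question} [SEP] {agent_query}"

# Example:
text = format_agentic_query(
    "Who wrote The Old Man and the Sea?",  # user's initial question
    "Old Man and the Sea",                 # agent's intermediate query
)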
