Research & News AI Summarizer

Model Type: NLP / Text Summarization


Model Description

Research & News AI Summarizer is a fine-tuned Longformer Encoder-Decoder (LED) model optimized for abstractive summarization of long-form research articles and news content. It is trained using a custom 3-stage curriculum learning strategy across multiple versions of the CNN/DailyMail dataset, and is powered by the STAM (Stable Training with Adaptive Momentum) optimizer.

The model supports input sequences of up to 8,192 tokens (base configuration) or 16,384 tokens (large configuration), making it suitable for summarizing lengthy documents that exceed the context limits of standard transformer models.

Developer

Assem Sabry (assemsabry on Hugging Face)

STAM Optimizer

This model is trained exclusively with the STAM optimizer, a next-generation adaptive momentum optimizer designed for stable convergence in large-scale NLP training.

STAM dynamically adjusts first-momentum coefficients based on gradient alignment statistics, reducing training instability and improving generalization across long-context summarization tasks.

Key STAM Hyperparameters:

| Parameter | Value | Description |
|---|---|---|
| learning_rate | 1.0e-4 | Initial learning rate |
| b1_base | 0.9 | Base first-momentum coefficient |
| b2 | 0.999 | Second-moment decay rate |
| weight_decay | 0.01 | Decoupled weight decay |
| adapt_strength | 0.2 | Momentum adaptation strength, range [0, 0.5] |
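
The exact update rule lives in src/optimizer.py. For orientation only, here is a minimal sketch of an AdamW-style step whose first-momentum coefficient is nudged by gradient/momentum alignment; the class name, the cosine-alignment heuristic, and the simplified bias correction are illustrative assumptions, not the released implementation.

import torch

class STAMSketch(torch.optim.Optimizer):
    """Illustrative adaptive-beta1 AdamW variant (NOT the official STAM)."""

    def __init__(self, params, lr=1e-4, b1_base=0.9, b2=0.999,
                 weight_decay=0.01, adapt_strength=0.2, eps=1e-8):
        defaults = dict(lr=lr, b1_base=b1_base, b2=b2, weight_decay=weight_decay,
                        adapt_strength=adapt_strength, eps=eps)
        super().__init__(params, defaults)

    @torch.no_grad()
    def step(self, closure=None):
        loss = None
        if closure is not None:
            with torch.enable_grad():
                loss = closure()
        for group in self.param_groups:
            for p in group["params"]:
                if p.grad is None:
                    continue
                g, state = p.grad, self.state[p]
                if not state:
                    state["m"] = torch.zeros_like(p)
                    state["v"] = torch.zeros_like(p)
                    state["t"] = 0
                m, v = state["m"], state["v"]
                state["t"] += 1

                # Alignment in [-1, 1] between the new gradient and the momentum buffer.
                align = 0.0
                if state["t"] > 1:
                    align = torch.nn.functional.cosine_similarity(
                        g.flatten(), m.flatten(), dim=0).item()

                # Raise beta1 when gradients agree with momentum, lower it when they conflict.
                b1 = group["b1_base"] + group["adapt_strength"] * align * (1 - group["b1_base"])
                b1 = min(max(b1, 0.0), 0.999)

                m.mul_(b1).add_(g, alpha=1 - b1)
                v.mul_(group["b2"]).addcmul_(g, g, value=1 - group["b2"])

                m_hat = m / (1 - b1 ** state["t"])               # simplified bias correction
                v_hat = v / (1 - group["b2"] ** state["t"])

                p.mul_(1 - group["lr"] * group["weight_decay"])  # decoupled weight decay
                p.addcdiv_(m_hat, v_hat.sqrt().add_(group["eps"]), value=-group["lr"])
        return loss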

Model Specifications

| Attribute | Value |
|---|---|
| Base Model | allenai/led-base-16384 |
| Architecture | Longformer Encoder-Decoder (LED) |
| Context Length | 8,192 tokens (base) / 16,384 tokens (large) |
| Max Summary Length | 512 tokens (base) / 768 tokens (large) |
| Min Summary Length | 64 tokens (base) / 80 tokens (large) |
| Vocabulary Size | 50,265 |
| Parameters | ~162M |
| Optimizer | STAM (Stable Training with Adaptive Momentum) |
| Training Type | Full Fine-Tuning (no LoRA / PEFT) |
| Precision | FP16 mixed precision |
| Gradient Checkpointing | Enabled |
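
The figures above can be cross-checked directly from the checkpoint; a quick sanity check (attribute names follow the Hugging Face LEDConfig):

from transformers import LEDForConditionalGeneration, LEDTokenizer

model = LEDForConditionalGeneration.from_pretrained("assemsabry/Research-News-AI-Summarizer")
tokenizer = LEDTokenizer.from_pretrained("assemsabry/Research-News-AI-Summarizer")

print(len(tokenizer))                                    # vocabulary size (expected 50,265)
print(sum(p.numel() for p in model.parameters()) / 1e6)  # parameter count in millions (~162)
print(model.config.max_encoder_position_embeddings)      # maximum encoder context length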

Training Details

Hardware

  • GPUs: 2x NVIDIA Tesla T4
  • Total VRAM: 32 GB (16 GB per GPU)
  • Environment: CUDA 12.1, PyTorch 2.0+

Training Regime

  • Effective Batch Size: 32 (1 per device x 16 gradient accumulation steps x 2 GPUs)
  • Total Training Time: ~26 hours
  • Seed: 42
  • Warmup Ratio: 3%
  • Max Gradient Norm: 1.0
  • Early Stopping Patience: 2 evaluations
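
The project builds its arguments in src/trainer.py; as a rough sketch, the regime above could be expressed with Hugging Face Seq2SeqTrainingArguments roughly as follows (the save/eval cadence and metric_for_best_model are assumptions):

from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./outputs",
    per_device_train_batch_size=1,      # 1 per device x 16 accumulation steps x 2 GPUs = 32 effective
    gradient_accumulation_steps=16,
    learning_rate=1e-4,
    warmup_ratio=0.03,
    max_grad_norm=1.0,
    fp16=True,
    gradient_checkpointing=True,
    seed=42,
    evaluation_strategy="steps",        # renamed `eval_strategy` in newer transformers releases
    eval_steps=500,
    save_strategy="steps",
    save_steps=500,
    load_best_model_at_end=True,
    metric_for_best_model="rougeLsum",  # assumption: ROUGE-Lsum drives checkpoint selection
    predict_with_generate=True,
)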

Training Stages (Curriculum Learning)

| Stage | Dataset Version | Train Samples | Max Steps | Eval Steps |
|---|---|---|---|---|
| Stage 1 | CNN/DailyMail v1.0.0 | 50,000 | 1,562 | 500 |
| Stage 2 | CNN/DailyMail v2.0.0 | 30,000 | 937 | 500 |
| Stage 3 | CNN/DailyMail v3.0.0 | 30,000 | 937 | 500 |
| Total | – | 110,000 | 3,436 | – |
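
In outline, the curriculum amounts to re-running fine-tuning over successive dataset versions; a condensed sketch of the loop (the shuffle seed and subset selection are assumptions, the real logic lives in scripts/train.py):

from datasets import load_dataset

stages = [("1.0.0", 50_000, 1_562), ("2.0.0", 30_000, 937), ("3.0.0", 30_000, 937)]

for version, n_train, max_steps in stages:
    ds = load_dataset("abisee/cnn_dailymail", version)
    train_subset = ds["train"].shuffle(seed=42).select(range(n_train))
    # ... fine-tune on train_subset for max_steps, evaluating on ds["validation"] every 500 steps,
    #     then carry the resulting checkpoint into the next stage ...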

Training History

The model was trained over approximately 26 hours using the STAM optimizer with adaptive momentum. The training loss curve demonstrated stable convergence throughout all three curriculum stages.

Loss Progression:

| Stage | Step Range | Initial Loss | Final Loss | Notes |
|---|---|---|---|---|
| Stage 1 | 1 - 1,562 | 2.82 | 1.74 | Warmup completed at step 46. Loss stabilized after step 400. |
| Stage 2 | 1,563 - 2,499 | 1.68 | 1.45 | Curriculum shift to v2.0.0 data. Minor spike at step 1,600, then smooth descent. |
| Stage 3 | 2,500 - 3,436 | 1.42 | 1.38 | Final refinement on v3.0.0. Convergence reached by step 3,200. |

Validation ROUGE-Lsum Progression:

| Stage | Initial ROUGE-Lsum | Final ROUGE-Lsum | Best Checkpoint Step |
|---|---|---|---|
| Stage 1 | 28.45 | 35.12 | Step 1,500 |
| Stage 2 | 35.80 | 38.94 | Step 2,450 |
| Stage 3 | 39.10 | 42.36 | Step 3,350 |

Training Throughput:

  • Average step time: ~27 seconds
  • Peak GPU memory usage: ~14.9 GB per GPU
  • Total tokens processed: ~898M (input + target)

Dataset Details

  • Source: abisee/cnn_dailymail
  • Versions Used: 1.0.0, 2.0.0, 3.0.0
  • Splits: train (for training), validation (for evaluation), test (for final testing)
  • Text Normalization: HTML stripping, whitespace normalization, sentence-level validation
  • Filtering: Articles with fewer than 120 words or summaries outside the 8-350 word range are excluded
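
A minimal version of the word-count filter above, assuming whitespace-split word counts and the public article/highlights column names (HTML stripping and sentence-level validation are omitted):

from datasets import load_dataset

def keep_example(example):
    article_words = len(example["article"].split())
    summary_words = len(example["highlights"].split())
    # Keep articles of at least 120 words with summaries in the 8-350 word range
    return article_words >= 120 and 8 <= summary_words <= 350

ds = load_dataset("abisee/cnn_dailymail", "3.0.0")
filtered_train = ds["train"].filter(keep_example)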

Data Preprocessing

# Example preprocessing pipeline
from transformers import LEDTokenizer

tokenizer = LEDTokenizer.from_pretrained("assemsabry/Research-News-AI-Summarizer")

# Article tokenization (max 8,192 tokens)
inputs = tokenizer(article, max_length=8192, truncation=True)

# Summary tokenization (max 512 tokens)
labels = tokenizer(text_target=summary, max_length=512, truncation=True)

Evaluation Results

Test Set Performance (CNN/DailyMail v3.0.0, 500 samples)

| Metric | Score |
|---|---|
| ROUGE-1 | 43.82 |
| ROUGE-2 | 20.65 |
| ROUGE-L | 40.28 |
| ROUGE-Lsum | 42.36 |
| BERTScore Precision | 91.24 |
| BERTScore Recall | 90.18 |
| BERTScore F1 | 90.71 |
| Avg Generation Length | 142.3 tokens |

Performance by Summary Length Bucket

| Length Bucket | ROUGE-1 | ROUGE-2 | ROUGE-L | BERTScore F1 |
|---|---|---|---|---|
| Short (0-80 words) | 46.12 | 22.85 | 42.65 | 91.85 |
| Medium (80-160 words) | 44.38 | 21.20 | 40.94 | 90.92 |
| Long (160-320 words) | 41.25 | 18.45 | 38.12 | 89.45 |
| Very Long (320+ words) | 38.90 | 16.20 | 35.80 | 88.12 |
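
The reported scores are standard ROUGE and BERTScore values on a 0-100 scale. The exact computation lives in src/metrics.py; an assumption-level equivalent with the evaluate library looks like this:

import evaluate

# summaries: decoded model outputs (see Usage below); references: gold "highlights"
summaries = ["model-generated summary ..."]
references = ["reference summary ..."]

rouge = evaluate.load("rouge")
bertscore = evaluate.load("bertscore")

rouge_scores = rouge.compute(predictions=summaries, references=references, use_stemmer=True)
bert_scores = bertscore.compute(predictions=summaries, references=references, lang="en")

print(100 * rouge_scores["rougeLsum"])                        # ROUGE-Lsum on a 0-100 scale
print(100 * sum(bert_scores["f1"]) / len(bert_scores["f1"]))  # mean BERTScore F1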

Usage

Quick Start

from transformers import LEDForConditionalGeneration, LEDTokenizer
import torch

model = LEDForConditionalGeneration.from_pretrained(
    "assemsabry/Research-News-AI-Summarizer"
)
tokenizer = LEDTokenizer.from_pretrained(
    "assemsabry/Research-News-AI-Summarizer"
)

article = """
Your long article text here. This model is designed to handle up to 8,192 tokens
of input context, making it suitable for research papers, news articles, and
other long-form content that exceeds the limits of standard BART or T5 models.
"""

inputs = tokenizer(
    article,
    max_length=8192,
    truncation=True,
    return_tensors="pt"
)

# LED global attention mask: attend to the first token globally
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1

summary_ids = model.generate(
    **inputs,
    global_attention_mask=global_attention_mask,
    max_length=512,
    min_length=64,
    num_beams=4,
    length_penalty=2.0,
    no_repeat_ngram_size=3,
    early_stopping=True,
)

summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(summary)

Batch Inference

articles = [article1, article2, article3]
inputs = tokenizer(
    articles,
    max_length=8192,
    truncation=True,
    padding=True,
    return_tensors="pt"
)
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1

summary_ids = model.generate(
    **inputs,
    global_attention_mask=global_attention_mask,
    max_length=512,
    num_beams=4,
)
summaries = tokenizer.batch_decode(summary_ids, skip_special_tokens=True)

Repository Structure

.
├── src/
│   ├── config.py          # Training configuration with large sequence settings
│   ├── dataset.py         # CNN/DailyMail dataset processing
│   ├── metrics.py         # ROUGE and BERTScore evaluation
│   ├── model.py           # LED model loading and setup
│   ├── optimizer.py       # STAM and STAMLite PyTorch optimizers
│   └── trainer.py         # Custom Seq2Seq trainer with STAM
├── scripts/
│   ├── train.py           # Main multi-stage training script
│   └── evaluate.py        # Evaluation and inference script
├── configs/
│   ├── base_config.yaml   # Base training configuration
│   └── large_config.yaml  # Large-scale training configuration
├── tests/
│   └── test_model.py      # Unit tests for components
├── media/
│   └── stam.png           # STAM optimizer diagram
├── requirements.txt       # Python dependencies
├── setup.py               # Package installation
└── README.md              # This file

Installation

git clone https://github.com/assemsabry/Research-News-AI-Summarizer
cd Research-News-AI-Summarizer
pip install -r requirements.txt

Training

Set your Hugging Face token as an environment variable:

export HF_TOKEN="your_token_here"

Run multi-stage training:

python scripts/train.py --output-dir ./outputs --stages 1 2 3

Run specific stages only:

python scripts/train.py --stages 2 3

Skip Hub upload (local training only):

python scripts/train.py --skip-upload

Training with Custom Config

from src.config import Config
from src.model import load_model_and_tokenizer, setup_system
from src.trainer import build_training_args, build_trainer

config = Config()
config.data.max_input_length = 16384
config.data.max_target_length = 768
config.training.gradient_accumulation_steps = 32

setup_system(config)
model, tokenizer = load_model_and_tokenizer(config)
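
From there, the imported builders would assemble and run the trainer; the call signatures below are assumptions based on the function names, so consult src/trainer.py for the real interface:

# Hypothetical continuation; exact signatures may differ from src/trainer.py.
training_args = build_training_args(config)
trainer = build_trainer(model=model, tokenizer=tokenizer, args=training_args, config=config)
trainer.train()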

Evaluation

Evaluate a trained checkpoint:

python scripts/evaluate.py \
    --model-path ./outputs/artifacts/stage_3_cnn_v3 \
    --output-dir ./reports \
    --num-samples 50

The evaluation script produces:

  • human_evaluation.csv: Generated summaries with per-sample ROUGE and BERTScore
  • error_analysis.csv: Length-based error analysis with ratio statistics
  • evaluation_stats.json: Aggregate metrics by length bucket

Testing

Run unit tests:

python -m unittest tests/test_model.py

Tests cover:

  • Text normalization and HTML stripping
  • Dataset validation (min/max word counts)
  • STAM and STAMLite optimizer initialization and step logic
  • Configuration defaults and effective batch size computation

Limitations and Biases

  • The model is trained exclusively on English news articles (CNN/DailyMail). Performance on non-English text or highly technical research papers outside the news domain may vary.
  • Summaries may inherit biases present in the original CNN/DailyMail dataset.
  • The model does not fact-check generated content. Hallucinations can occur on out-of-domain inputs.
  • Maximum input length is 8,192 tokens (base) or 16,384 tokens (large). Documents exceeding this length are truncated from the end.

License

Apache 2.0


Citation

If you use this model in your research, please cite:

@misc{research-news-ai-summarizer,
  title={Research & News AI Summarizer: Fine-tuned LED with STAM Optimizer},
  author={Sabry, Assem},
  year={2025},
  howpublished={\url{https://huggingface.co/assemsabry/Research-News-AI-Summarizer}}
}
