nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-FP8 • Text Generation • 124B params • Updated 2 days ago • 1.08M downloads • 229 likes
nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-BF16 • Text Generation • 124B params • Updated 2 days ago • 463k downloads • 326 likes
Article: KV Caching Explained: Optimizing Transformer Inference Efficiency • Jan 30, 2025 • 292 likes
Article: Transformers v5: Simple model definitions powering the AI ecosystem • Dec 1, 2025 • 307 likes
nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16 • Text Generation • 32B params • Updated 29 days ago • 1.49M downloads • 710 likes
Collection: NVIDIA Nemotron v3 • Open, production-ready enterprise models • 15 items • Updated 6 days ago • 265 likes