Active filters: llama-2
meta-llama/Llama-2-7b-chat-hf • Text Generation • 420k downloads • 4.74k likes
meta-llama/Llama-2-13b-chat-hf • Text Generation • 167k downloads • 1.11k likes
meta-llama/Llama-2-13b-hf • Text Generation • 44.1k downloads • 624 likes
NousResearch/Nous-Hermes-Llama2-13b • Text Generation • 13B • 1.64k downloads • 322 likes
NousResearch/Nous-Hermes-llama-2-7b • Text Generation • 7B • 12.3k downloads • 71 likes
NousResearch/Nous-Hermes-Llama2-70b • Text Generation • 1.25k downloads • 84 likes
TheBloke/Nous-Hermes-Llama2-70B-GGUF • 69B • 880 downloads • 27 likes
TheBloke/Nous-Hermes-Llama2-70B-GGML • 7 downloads • 13 likes
codellama/CodeLlama-7b-hf • Text Generation • 7B • 108k downloads • 376 likes
TheBloke/CodeLlama-7B-Instruct-GGUF • Text Generation • 7B • 8.91k downloads • 147 likes
TheBloke/CodeLlama-34B-GGUF • Text Generation • 34B • 1.78k downloads • 56 likes
TheBloke/CodeLlama-34B-Instruct-GGUF • Text Generation • 34B • 3.11k downloads • 110 likes
(model name missing) • Text Generation • 7B • 8.08k downloads • 208 likes
TheBloke/CodeLlama-70B-Instruct-GPTQ • Text Generation • 69B • 55 downloads • 15 likes
DavidAU/Psyonic-Cetacean-Ultra-Quality-20b-GGUF-imatrix • Text Generation • 20B • 365 downloads • 15 likes
facebook/layerskip-llama2-7B • Text Generation • 7B • 97 downloads • 16 likes
DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters • 182 downloads
(model name missing) • Text Generation • 318 downloads • 4.48k likes
meta-llama/Llama-2-7b-chat • Text Generation • 49 downloads • 619 likes
(model name missing) • Text Generation • 33 downloads • 352 likes
meta-llama/Llama-2-13b-chat • Text Generation • 14 downloads • 296 likes
(model name missing) • Text Generation • 11 downloads • 538 likes
meta-llama/Llama-2-70b-hf • Text Generation • 18.5k downloads • 854 likes
(model name missing) • Text Generation • 7B • 1.94M downloads • 2.29k likes
meta-llama/Llama-2-70b-chat • Text Generation • 8 downloads • 399 likes
meta-llama/Llama-2-70b-chat-hf • Text Generation • 105k downloads • 2.21k likes
(model name missing) • Text Generation • 712 downloads • 219 likes
(model name missing) • Text Generation • 7B • 7.76k downloads • 81 likes
TheBloke/Llama-2-13B-GPTQ • Text Generation • 13B • 896 downloads • 120 likes
TheBloke/Llama-2-13B-GGML • Text Generation • 676 downloads • 174 likes
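The counts above use the hub's k/M shorthand. A minimal Python sketch for normalizing those strings and ranking models by downloads; `parse_count` and the sample dictionary are illustrative helpers, not part of any Hugging Face API:

```python
def parse_count(text: str) -> int:
    """Convert a shorthand count such as '420k' or '1.94M' to an integer."""
    suffixes = {"k": 1_000, "M": 1_000_000, "B": 1_000_000_000}
    text = text.strip()
    if text and text[-1] in suffixes:
        # round() avoids float truncation errors like int(1.94 * 1e6) == 1939999
        return round(float(text[:-1]) * suffixes[text[-1]])
    return int(text)

# A few entries from the listing, keyed by model ID.
downloads = {
    "meta-llama/Llama-2-7b-chat-hf": "420k",
    "meta-llama/Llama-2-70b-chat-hf": "105k",
    "codellama/CodeLlama-7b-hf": "108k",
}

# Rank model IDs by normalized download count, highest first.
ranked = sorted(downloads, key=lambda m: parse_count(downloads[m]), reverse=True)
```

With the sample data above, `ranked` puts meta-llama/Llama-2-7b-chat-hf first, then codellama/CodeLlama-7b-hf, then meta-llama/Llama-2-70b-chat-hf.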