Active filters: gptq
tencent/HY-MT1.5-1.8B-GPTQ-Int4 • Translation • 2B • 291 downloads • 9 likes
tencent/HY-MT1.5-7B-GPTQ-Int4 • Translation • 8B • 226 downloads • 5 likes
QuantTrio/GLM-4.7-GPTQ-Int4-Int8Mix • Text Generation • 390B • 126 downloads • 4 likes
TheBloke/CapybaraHermes-2.5-Mistral-7B-GPTQ • 7B • 192 downloads • 61 likes
tiiuae/Falcon-H1-7B-Instruct-GPTQ-Int8 • Text Generation • 8B • 35 downloads • 2 likes
fifrio/Llama-3.1-8B-Instruct-gptq-2bit-calibration-Swahili-128samples • 8B • 84 downloads • 2 likes
mayaeary/pygmalion-6b_dev-4bit-128g • Text Generation • 22 downloads • 121 likes
Ancestral/Dolly_Malion-6b-4bit-128g • Text Generation • 20 downloads • 1 like
TheBloke/WizardLM-33B-V1-0-Uncensored-SuperHOT-8K-GPTQ • Text Generation • 33B • 39 downloads • 93 likes
Qwen/Qwen1.5-MoE-A2.7B-Chat-GPTQ-Int4 • Text Generation • 14B • 1.19k downloads • 49 likes
Intel/Qwen2-0.5B-Instuct-int4-inc • Text Generation • 0.6B • 4 downloads • 1 like
Intel/Qwen2-1.5B-Instuct-int4-inc • Text Generation • 2B • 4 downloads • 3 likes
jart25/Qwen3-Next-80B-A3B-Instruct-Int4-GPTQ • 954 downloads • 3 likes
(name missing) • Text Generation • 16B • 81 downloads • 2 likes
TevunahAi/Nemotron-3-Nano-30B-A3B-GPTQ • Text Generation • 6B • 1.12k downloads • 2 likes
elinas/alpaca-13b-lora-int4 • Text Generation • 24 downloads • 41 likes
elinas/alpaca-30b-lora-int4 • Text Generation • 39 downloads • 68 likes
mayaeary/pygmalion-6b-4bit-128g • Text Generation • 34 downloads • 40 likes
mayaeary/PPO_Pygway-V8p4_Dev-6b-4bit-128g • Text Generation • 26 downloads • 2 likes
mayaeary/PPO_Pygway-6b-Mix-4bit-128g • Text Generation • 23 downloads • 2 likes
(name missing) • Text Generation • 24 downloads • 45 likes
(name missing) • Text Generation • 7B • 166 downloads • 31 likes
(name missing) • Text Generation • 2.58k downloads • 21 likes
(name missing) • Text Generation • 1.15k downloads • 41 likes
(name missing) • Text Generation • 13B • 46 downloads • 38 likes
TheBloke/galpaca-30B-GPTQ • Text Generation • 22 downloads • 48 likes
Ancestral/Dolly_Shygmalion-6b-4bit-128g • Text Generation • 15 downloads • 5 likes
Ancestral/PPO_Shygmalion-6b-4bit-128g • Text Generation • 13 downloads
TheBloke/vicuna-7B-v0-GPTQ • Text Generation • 7B • 17 downloads • 15 likes
4bit/pygmalion-6b-4bit-128g • Text Generation • 14 downloads • 3 likes
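All of the checkpoints above are GPTQ-quantized, so they can generally be loaded through the standard transformers API. The snippet below is a minimal sketch, assuming transformers with a GPTQ backend (gptqmodel or auto-gptq via optimum), accelerate for device placement, and a CUDA GPU; the repo id is taken from the list above and any GPTQ entry could be substituted.

```python
# Minimal sketch: loading a GPTQ checkpoint from the list above.
# Assumes: transformers, optimum, a GPTQ backend (gptqmodel or auto-gptq),
# and accelerate are installed, and a CUDA GPU is available.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/CapybaraHermes-2.5-Mistral-7B-GPTQ"  # any GPTQ repo from the list

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Run a short generation to confirm the quantized weights loaded correctly.
inputs = tokenizer("GPTQ quantization stores weights in low-bit groups, which", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```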