Active filter: glm
0xSero/GLM-4.7-REAP-50-W4A16 • Text Generation • 2B • 378 downloads • 20 likes
(unnamed model) • Text Generation • 218B • 42 downloads • 16 likes
(unnamed model) • Text Generation • 353B • 4.5k downloads • 14 likes
mlx-community/GLM-4.7-REAP-50-mxfp4 • Text Generation • 185B • 272 downloads • 9 likes
(unnamed model) • Text Generation • 185B • 39 downloads • 6 likes
cerebras/GLM-4.6-REAP-218B-A32B-FP8 • Text Generation • 218B • 309 downloads • 42 likes
0xSero/GLM-4.7-REAP-40-W4A16 • Text Generation • 2B • 442 downloads • 3 likes
(unnamed model) • 6B • 57.4k downloads • 1.16k likes
cerebras/GLM-4.5-Air-REAP-82B-A12B • Text Generation • 82B • 12.9k downloads • 104 likes
garrison/GLM-4.5-Air-REAP-82B-A12B-mlx-4Bit • Text Generation • 82B • 47 downloads • 2 likes
0xSero/GLM-4.6-REAP-218B-A32B-W4A16-AutoRound • Text Generation • 2B • 207 downloads • 5 likes
(unnamed model) • 2.3k downloads • 2.87k likes
(unnamed model) • Text Generation • 9B • 2.26k downloads • 262 likes
(unnamed model) • Text Generation • 9B • 9.27k downloads • 23 likes
zai-org/glm-edge-v-2b-gguf • Image-Text-to-Text • 2B • 550 downloads • 12 likes
zai-org/glm-edge-1.5b-chat-gguf • Text Generation • 2B • 540 downloads • 4 likes
mradermacher/glm-edge-4b-chat-GGUF • 4B • 329 downloads • 2 likes
cerebras/GLM-4.6-REAP-252B-A32B-FP8 • Text Generation • 252B • 92 downloads • 6 likes
cerebras/GLM-4.6-REAP-268B-A32B • Text Generation • 269B • 23 downloads • 12 likes
cerebras/GLM-4.5-Air-REAP-82B-A12B-FP8 • Text Generation • 82B • 194 downloads • 6 likes
bartowski/cerebras_MiniMax-M2-REAP-162B-A10B-GGUF • Text Generation • 162B • 2.04k downloads • 5 likes
cyankiwi/MiniMax-M2-REAP-162B-A10B-AWQ-4bit • Text Generation • 26B • 1.47k downloads • 5 likes
Wwayu/GLM-4.7-PRISM-mlx-2Bit • Text Generation • 353B • 1.38k downloads • 2 likes
yhavinga/GLM-4.7-REAP-40p-GGUF • 218B • 1.72k downloads • 1 like
cammy/glm-roberta-large-finetune
cammy/glm-roberta-large-finetune-p2
cammy/glm-roberta-large-tuning-4-0.01
cammy/glm-roberta-large-finetune-p2-2-2
cammy/glm-roberta-large-finetune-p3-3
cammy/glm-roberta-large-finetune-1
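For reference, a roughly equivalent listing can be pulled programmatically with the huggingface_hub Python client. This is a minimal sketch, assuming only the search term "glm"; the sort order, result limit, and any extra facets (such as the Inference Providers filter on the page) are illustrative assumptions, not the exact query behind this listing.

```python
# Minimal sketch: list Hugging Face Hub models matching "glm" with metadata
# similar to the fields shown above (model ID, pipeline tag, downloads, likes).
from huggingface_hub import HfApi

api = HfApi()

# search="glm", sort="downloads", and limit=25 are illustrative assumptions,
# not the exact filters applied by the search page above.
models = api.list_models(search="glm", sort="downloads", direction=-1, limit=25)

for m in models:
    print(f"{m.id} • {m.pipeline_tag} • {m.downloads} downloads • {m.likes} likes")
```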