Joe (Joe57005)

AI & ML interests
None yet

Recent Activity
- Updated the collection "Models to try" about 13 hours ago
- Updated the collection "For finetune" about 13 hours ago
- Updated the collection "Models to try" 2 days ago

Organizations
None yet
For MOE 1.5B

Models to try
-
bunnycore/Gemma2-2b-function-calling-lora
Updated • 1
-
NickyNicky/gemma-2b-it_oasst2_all_chatML_function_calling_Agent_v1
Text Generation • 3B • Updated • 4 • 1
-
hugging-quants/Llama-3.2-1B-Instruct-Q8_0-GGUF
Text Generation • 1B • Updated • 594k • 44
-
gorilla-llm/gorilla-openfunctions-v2
Text Generation • Updated • 201 • 245
For finetune
-
glaiveai/glaive-function-calling-v2
Viewer • Updated • 113k • 5.48k • 490
-
Chat Template Editor
💬 Running • 16 • View, edit, test and submit Chat Templates
-
GGUF Editor
🏢 Running • 90 • Edit GGUF model metadata from Hugging Face or local files
-
0xSero/glm47-reap-calibration-v2
Viewer • Updated • 1.36k • 64 • 2
Good for home automation
Large-context LLMs that work well with Home Assistant via a Llama.cpp server running on CPU with 16 GB of RAM.
LLM Tools
-
GGUF Editor
🏢 Running • 90 • Edit GGUF model metadata from Hugging Face or local files
-
mergekit-gui
🔀 Runtime error • Featured • 290 • Merge AI models using a YAML configuration file
-
GGUF My Repo
🦙 Running on A10G • 1.87k • Quantize a Hugging Face model to GGUF and create a repo
-
SignRoundV2: Closing the Performance Gap in Extremely Low-Bit Post-Training Quantization for LLMs
Paper • 2512.04746 • Published • 14