Could you upload a BF16 GGUF version for LM Studio?

#12 opened by 867Mry

Thank you for your remarkable work!
I downloaded the BF16 abliterated model months ago and it ran well, but when I tried to download it again now I couldn't find it. The Ollama version can't use a pre-set prompt, and the only GGUF I can find on HF is the Q8 quant.
Or could I just download the safetensors files and run some commands to make them recognizable by LM Studio?
Thanks again.

Save it again as a single large file:


from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

OLD_MODEL_ID = "huihui-ai/Huihui-Qwen3-VL-30B-A3B-Instruct-abliterated"
NEW_MODEL_ID = "huihui-ai/Huihui-Qwen3-VL-30B-A3B-Instruct-abliterated-New"

# Load the abliterated model from the Hub in bfloat16.
model = AutoModelForCausalLM.from_pretrained(
    OLD_MODEL_ID,
    device_map="auto",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16
)
tokenizer = AutoTokenizer.from_pretrained(OLD_MODEL_ID, trust_remote_code=True)

# Re-save to a local folder; the large max_shard_size keeps the weights
# in a single safetensors file instead of many shards.
model.save_pretrained(NEW_MODEL_ID, max_shard_size="80GB")
tokenizer.save_pretrained(NEW_MODEL_ID)
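
If what you want in the end is a BF16 GGUF for LM Studio, one option (a sketch, not something tested here) is to run llama.cpp's convert_hf_to_gguf.py on the folder written above. This assumes you have a local llama.cpp checkout recent enough to know the Qwen3-VL architecture; the paths and output filename below are just placeholders.

import subprocess

# Assumptions: llama.cpp is cloned locally with its Python requirements
# installed, and MODEL_DIR is the folder written by save_pretrained above.
LLAMA_CPP_DIR = "llama.cpp"
MODEL_DIR = "huihui-ai/Huihui-Qwen3-VL-30B-A3B-Instruct-abliterated-New"
OUT_FILE = "Huihui-Qwen3-VL-30B-A3B-Instruct-abliterated-BF16.gguf"  # example name

# Convert the merged safetensors folder into a single BF16 GGUF.
subprocess.run(
    [
        "python", f"{LLAMA_CPP_DIR}/convert_hf_to_gguf.py", MODEL_DIR,
        "--outtype", "bf16",
        "--outfile", OUT_FILE,
    ],
    check=True,
)

The resulting .gguf can then be imported into LM Studio. Keep in mind that a VL model's vision encoder normally needs a separate mmproj GGUF; whether the converter can produce one for this architecture depends on your llama.cpp version, and text-only chat typically works without it.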
