# gemma4-zero-compute

Optimized Gemma4 text-only checkpoint exported from this repository.

## Loading

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("haysonC/gemma4-zero-compute", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("haysonC/gemma4-zero-compute", trust_remote_code=True)
```

## Notes

- Base model: google/gemma-4-26B-A4B-it
- Architecture: OptimizedGemma4ForCausalLM
- Scope: text-only causal language model export
- Router checkpoint loaded: True
- Model size: 26B params (safetensors)
- Tensor type: BF16