GGUF quants of https://huggingface.co/Phr00t/Qwen-Image-Edit-Rapid-AIO

Only the v18 SFW Q3_K_S quant for now, made for my own use.

Based on qwen-image-edit-2511.

Model size: 20B params
Architecture: qwen_image
Format: GGUF
Quantization: 3-bit (Q3_K_S)
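A minimal download sketch using `huggingface_hub`; the GGUF filename below is a placeholder, so check this repo's file list for the actual name before running it.

```python
# Sketch: fetch the Q3_K_S GGUF from this repo with huggingface_hub.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="nicehero/Qwen-Image-Edit-Rapid-AIO-GGUF",
    # Placeholder filename -- replace with the actual .gguf file listed in the repo.
    filename="Qwen-Image-Edit-Rapid-AIO-v18-SFW-Q3_K_S.gguf",
)
print(path)  # local cache path of the downloaded GGUF
```

GGUF diffusion checkpoints like this are typically loaded with a GGUF-capable UNet/diffusion loader (for example the ComfyUI-GGUF custom node) rather than with plain `diffusers`.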
