faheem
faheemraza1
AI & ML interests
LLMs
Recent Activity
- New activity 3 days ago: LilaRest/gemma-4-31B-it-NVFP4-turbo · "Updated Chat Template"
- New activity 4 days ago: bg-digitalservices/Gemma-4-26B-A4B-it-NVFP4A16 · "Updated chat_template.jinja"
- New activity 4 days ago: bg-digitalservices/Gemma-4-26B-A4B-it-NVFP4A16 · "Run on 3090"

Organizations
None yet
Updated Chat Template · 1 · #3 opened 3 days ago by faheemraza1
Updated chat_template.jinja · 1 · #2 opened 4 days ago by faheemraza1
Run on 3090 · #1 opened 4 days ago by faheemraza1
on RTX 3090 · #1 opened 5 days ago by faheemraza1
Will it work on 3090 · 4 · #1 opened 5 days ago by faheemraza1
Smaller file size · #1 opened 7 days ago by faheemraza1
NVFP4 for Qwen3.5-27B · ❤️ 1 · 9 · #5 opened about 1 month ago by faheemraza1
Break the file down into smaller ones · #1 opened about 1 month ago by faheemraza1
NVFP4 for Qwen3.5-27B · #4 opened about 1 month ago by faheemraza1
torch 2.9 issue · 3 · #1 opened 3 months ago by faheemraza1
Serve with vLLM · 🔥 1 · 4 · #1 opened 8 months ago by faheemraza1
Serve with vLLM · 5 · #1 opened 7 months ago by faheemraza1
Serve With vLLM · 1 · #1 opened 7 months ago by faheemraza1
Simple Q&A Fine Tune Dataset · #40 opened 7 months ago by faheemraza1
Newer Model · 1 · #4 opened 7 months ago by faheemraza1
LoRA training · 1 · #3 opened 7 months ago by faheemraza1
Serve with vLLM · #1 opened 7 months ago by faheemraza1
Serve with vLLM · #1 opened 8 months ago by faheemraza1
Run with vLLM · #2 opened 8 months ago by faheemraza1
What minimal VRAM does it require? · 12 · #18 opened over 1 year ago by DrNicefellow