Ronen Zyroff
BigBlueWhale
AI & ML interests
None yet
Recent Activity
liked a model 3 days ago
Qwen/Qwen3.6-27B new activity about 2 months ago
microsoft/VibeVoice-ASR: 32GiB VRAM inference using Docker - my working setup new activity 2 months ago
Organizations
None yet
32GiB VRAM inference using Docker - my working setup
2
#21 opened 2 months ago by BigBlueWhale
Extremely slow on 5090
9
#1 opened 4 months ago by STTrife
Best non-thinking model qwen ever released
👍 2
#7 opened 5 months ago by BigBlueWhale
Disappointment in text performance
#1 opened 5 months ago by BigBlueWhale
Qwen3-32B (April 2025) is superior
#2 opened 5 months ago by BigBlueWhale
Recommended model parameters
#5 opened 6 months ago by BigBlueWhale
How about running by llama.cpp
2
#1 opened 7 months ago by rosspanda0
Best open source model ever, period.
🤝 2
2
#1 opened 7 months ago by BigBlueWhale
Fix prompt format in llama.cpp command
5
#2 opened over 2 years ago by nacs
Best open source model for coding (August 2023)
2
#1 opened over 2 years ago by BigBlueWhale
wizardcoder-python-34b sucks. Is this any better?
5
#1 opened over 2 years ago by BigBlueWhale
This model looks insanely good for coding (73.2 for HumanEval)!
👍🤯 2
18
#1 opened over 2 years ago by mirek190
Uncensored my ass ....
7
#2 opened almost 3 years ago by mirek190
Works perfectly in CPU mode with oobabooga
👍 2
5
#4 opened almost 3 years ago by BigBlueWhale
Why so few 8 bit capable models?
1
#13 opened almost 3 years ago by ibivibiv