Tom (TomLucidor)
AI & ML interests: None yet
Recent Activity
- New activity 4 days ago in tzervas/qwen2.5-coder-32b-bitnet-1.58b: "Some questions on BitNet PTQ"
- New activity 5 days ago in inclusionAI/Ring-2.5-1T: "Will there be a base model?"
- Some questions on BitNet PTQ · 2 · #1 opened about 1 month ago by TomLucidor
- Will there be a base model? · 2 · #4 opened 24 days ago by zianglih
- Has REAM been checked for its resilience to quantization? · 4 · #1 opened about 1 month ago by TomLucidor
- How is this different from the other quants? · 8 · #1 opened 13 days ago by TomLucidor
- Will Hybrid Attention RP models get some love? · ➕ 1 · #7 opened 11 days ago by TomLucidor
- ValueError: Model type lfm2_moe not supported. · 5 · #1 opened 5 months ago by kadirnar
- Runs amazing on M4 MacBook Air! · 1 · #1 opened 4 months ago by leo253
- Could more benchmarks be done on Instruction Following / Function Calling? · 4 · #2 opened 25 days ago by TomLucidor
- The comparison with the original MTP · 👍 1 · 1 · #2 opened 20 days ago by Michalea
- Using ZwZ and better VLM alongside DeepGen · 4 · #6 opened 23 days ago by TomLucidor
- Could you make an Eagle3 model for Nemotron-3-Nano? · #4 opened 21 days ago by TomLucidor
- Guys, please make a 30B A3B-like MoE model · 👍 1 · 1 · #2 opened about 1 month ago by Narutoouz
- Can this be scaled into Claude Code / OpenCode / Codex? · #9 opened 21 days ago by TomLucidor
- Will there be a "Ring Flash" or "Ring-Mini" for V2.5? · 🤝 😔 2 · #6 opened 22 days ago by TomLucidor
- Please add quality Q6/Q4/Q3 quants to this · 7 · #1 opened 25 days ago by TomLucidor
- Q: could this work with the REAM models? · 1 · #2 opened 22 days ago by TomLucidor
- Q: How is this different from the other Eagle3? · 1 · #1 opened 22 days ago by TomLucidor
- New issue: naive REAM not supporting MTP? · 1 · #5 opened 22 days ago by TomLucidor
- Comparison to Emu · 1 · #5 opened 23 days ago by TomLucidor
- Request for <4B linear attention quants · 4 · #1 opened 24 days ago by TomLucidor