megachad
AI & ML interests: None yet
Organizations: None yet
What's the point of these if you still have to use the full-sized text_encoder?
1 reply · #24 opened 3 months ago by megachad
This doesn't run with Hugging Face code; it should not be here.
😔 1 · 2 replies · #2 opened 11 months ago by megachad
Here is some code for using LoRAs with this
3 replies · #7 opened over 1 year ago by megachad
How to achieve 4-bit quantization?
10 replies · #6 opened over 1 year ago by HUG-NAN
Questions about LoRA
2 replies · #5 opened over 1 year ago by tungdop2
Quantization scripts
13 replies · #1 opened over 1 year ago by WaveCut
ComfyUI implementation?
🤗 2 · 5 replies · #1 opened over 1 year ago by MayensGuds
How to run inference
1 reply · #2 opened over 1 year ago by zdxpan
Inference (Streaming)
5 replies · #59 opened almost 2 years ago by hxrdxk
Img2img works with this
1 reply · #4 opened over 1 year ago by megachad
Is there any way to improve inference time?
3 replies · #68 opened over 1 year ago by winvin
Quite slow to load the fp8 model
👍 4 · 11 replies · #21 opened over 1 year ago by gpt3eth
This rephraser replaces words with 800 numbers. Extremely undesirable.
#9 opened over 3 years ago by megachad