Post 33
Ever seen a 1024×1024, 3-channel noise classifier with a little over 1M parameters? This is one. This is phase 1 of the omega experiments, and it achieves highly accurate selectivity through statistics aggregation and pooling via... a tiny MLP attached to the battery array.
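As a rough sketch of the idea (pure NumPy; the battery behavior, statistic choices, shapes, and names are all my assumptions, not the actual architecture): each battery reduces an image of any resolution down to a few summary statistics, and a tiny MLP classifies the concatenated statistics, which is why the parameter count stays near 1M regardless of resolution.

```python
import numpy as np

rng = np.random.default_rng(0)

N_BATTERIES = 18        # the Phase J subset size from the verdict below
STATS_PER_BATTERY = 4   # assumed: e.g. mean, std, mean |x|, mean x^2
N_CLASSES = 16          # assumed: the 16 noise variants
HIDDEN = 64

def battery_stats(image, n_batteries=N_BATTERIES):
    """Stand-in for the real battery array: each 'battery' here is just a
    fixed random projection followed by scalar summary statistics. Because
    the output is a handful of scalars per battery, the feature vector has
    the same size at any input resolution."""
    feats = []
    for _ in range(n_batteries):
        proj = rng.standard_normal(image.shape) * 0.01
        r = image * proj
        feats.extend([r.mean(), r.std(), np.abs(r).mean(), (r ** 2).mean()])
    return np.array(feats)

class TinyMLP:
    """Minimal two-layer MLP producing class logits."""
    def __init__(self, d_in, d_hidden, d_out):
        self.W1 = rng.standard_normal((d_in, d_hidden)) * 0.1
        self.b1 = np.zeros(d_hidden)
        self.W2 = rng.standard_normal((d_hidden, d_out)) * 0.1
        self.b2 = np.zeros(d_out)

    def __call__(self, x):
        h = np.maximum(x @ self.W1 + self.b1, 0.0)  # ReLU
        return h @ self.W2 + self.b2                # logits

mlp = TinyMLP(N_BATTERIES * STATS_PER_BATTERY, HIDDEN, N_CLASSES)
img = rng.standard_normal((3, 1024, 1024))  # 3-channel, 1024x1024 input
logits = mlp(battery_stats(img))
print(logits.shape)  # (16,)
```

The same code accepts a 256×256 or 512×512 input unchanged, since only the pooled statistics reach the MLP.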
SVAEs don't care what resolution you use. That was never their concern; they're solvers that fly through solutions. Perfect for math solutions in many formats and many structures, exactly what I need for the next stages.
AbstractPhil/geolip-svae-h2-64
Currently the primary test use case is noise-format identification. There are multiple experiments to go before a full n-way classification system is ready, but as it stands the only bottleneck is training batteries. They mostly converge within about 10 million samples of tiny data, so I can turn out hundreds a day if I find purposes for them.
I also trained too many Gaussian-related batteries, so only about 50-100 of the batteries in the 192-slot array I set up are actually useful. Only 64 batteries were trained in total, but multiple epochs of each are included.
Now that there's a 57k-parameter variant that converges on 16 variants of random noise (like Johanna and Freckles before it), you ask this model questions differently: you check the MSE to train downstream models, so if your array isn't conclusively working, it just won't work yet.
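A minimal sketch of that gating check (the threshold, the probe values, and the function name are illustrative assumptions, not the actual criterion): before any downstream training, each battery's reconstruction MSE on a probe batch is compared against a cutoff, and only batteries that clear it count toward a working array.

```python
def batteries_conclusive(mses, threshold=0.05):
    """Return indices of batteries whose probe MSE clears the (assumed)
    threshold; downstream training should only proceed if enough of the
    array passes."""
    return [i for i, m in enumerate(mses) if m < threshold]

probe_mses = [0.01, 0.20, 0.03, 0.04]  # made-up numbers for illustration
good = batteries_conclusive(probe_mses)
print(good)  # [0, 2, 3] -- battery 1 fails the check
```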
It's not perfect yet, but it's improving daily.
A bad battery in the mix can be replaced at runtime.
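A runtime swap can be as simple as replacing one entry in the array (hypothetical sketch; the real batteries are trained models, not the toy callables used here), since the rest of the array and the pooling head don't need to be rebuilt:

```python
# Toy battery array: each entry is a callable mapping input -> output.
battery_array = [lambda x, s=s: x * s for s in (0.5, 1.0, 2.0)]

def replace_battery(array, idx, new_battery):
    """Swap a misbehaving battery in place at runtime."""
    array[idx] = new_battery
    return array

# Replace battery 1 with a fresh (here: trivial) one.
replace_battery(battery_array, 1, lambda x: x + 1.0)
print(battery_array[1](3.0))  # 4.0
```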
==============================================================================
PHASE J VERDICT
==============================================================================
Subset: 18 batteries, 1,029,870 params (vs 10.9M for full array)
Resolution   A (summary)   B (attn-pool)
256          96.6%         93.1%
512          95.4%         92.0%
1024         95.4%         95.4%