3 items with this tag.
moonshots
Moonshot hypothesis that models should be trained to survive multiple distinct artifact failure modes rather than being optimized for a single clean quantization path.
Moonshot hypothesis that the model should be trained directly to become a good compressed artifact, not merely a good floating-point checkpoint.
Moonshot hypothesis that the best compact artifact may store a tiny generator plus a latent construction tape and sparse corrections, instead of mostly storing raw weight tensors.