7 items with this tag.
frontiers
Frontier synthesis on whether bounded evaluation-time compute can recover some of the capacity that a hard artifact-size cap keeps us from storing directly.
hypotheses
Hypothesis that a smaller recurrent model with bounded extra evaluation-time refinement can beat a larger static artifact under the same storage cap.
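A minimal sketch of the small side of that comparison, assuming a weight-tied PyTorch block (the class and argument names are mine, not from the note): parameters are stored once, so artifact size is fixed while the number of refinement passes becomes an evaluation-time knob.

```python
import torch
import torch.nn as nn

class RecurrentRefiner(nn.Module):
    def __init__(self, d_model: int = 256, n_heads: int = 4):
        super().__init__()
        # One shared layer: reusing it k times costs k passes of compute
        # but only one layer's worth of storage.
        self.shared = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True
        )

    def forward(self, x: torch.Tensor, passes: int = 4) -> torch.Tensor:
        for _ in range(passes):  # `passes` is the bounded refinement budget
            x = self.shared(x)
        return x

model = RecurrentRefiner()
x = torch.randn(2, 16, 256)                        # (batch, seq, d_model)
cheap, refined = model(x, passes=2), model(x, passes=8)
print(sum(p.numel() for p in model.parameters()))  # storage is pass-invariant
```

The larger static baseline would stack `passes` distinct layers instead, spending storage where this model spends compute.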
ideas
Idea that a compact shared-depth model should spend extra inference-time passes only on uncertain positions, turning compute into quality more efficiently than storing more static depth.
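A hedged illustration of the gating idea (`shared_block`, `readout`, and `tau` are hypothetical names, not from the note): measure per-position predictive entropy after each pass, and let further passes update only the positions still above a threshold.

```python
import torch
import torch.nn as nn

def entropy(logits: torch.Tensor) -> torch.Tensor:
    # Per-position entropy of the predictive distribution, shape (batch, seq).
    p = logits.softmax(dim=-1)
    return -(p * p.clamp_min(1e-9).log()).sum(dim=-1)

def gated_refine(x, shared_block, readout, max_passes: int = 4, tau: float = 1.0):
    for _ in range(max_passes):
        uncertain = entropy(readout(x)) > tau           # (batch, seq) bool
        if not bool(uncertain.any()):                   # all confident: halt early
            break
        y = shared_block(x)                             # one extra shared-depth pass
        x = torch.where(uncertain.unsqueeze(-1), y, x)  # refresh uncertain slots only
    return x

d, vocab = 64, 100
shared_block = nn.Sequential(nn.Linear(d, d), nn.GELU(), nn.Linear(d, d))
readout = nn.Linear(d, vocab)
out = gated_refine(torch.randn(2, 10, d), shared_block, readout)
```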
lanes
When extra evaluation-time compute may dominate storing more parameters.
notes
Synthesis note on the recurring compact-model idea that repeated computation can substitute for stored parameters.
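The substitution is easiest to see as storage arithmetic. A rough sketch with illustrative dimensions (not taken from the note): a weight-tied layer iterated L times matches the per-token compute of an L-layer stack while storing only one layer.

```python
# Back-of-envelope parameter counts; d_model, ff_mult, L are made up.
d_model, ff_mult, L = 768, 4, 12

def layer_params(d: int) -> int:
    attn = 4 * d * d             # Q, K, V, and output projections
    mlp = 2 * d * (ff_mult * d)  # up- and down-projection
    return attn + mlp

static_stack = L * layer_params(d_model)  # stores L distinct layers
weight_tied = layer_params(d_model)       # stores one layer, runs it L times

print(f"static:      {static_stack:,} params")
print(f"weight-tied: {weight_tied:,} params ({static_stack // weight_tied}x smaller)")
# Per-token matmul FLOPs at L passes are roughly equal; only storage differs.
```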
papers
Paper note on compute-optimal inference and why smaller models plus better evaluation-time search can beat larger models under fixed budgets.
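A toy version of the budget accounting behind that claim, with made-up numbers and the strong assumption of a perfect answer selector:

```python
# Fixed per-query FLOP budget; all costs and success rates are invented.
budget = 100.0
small_cost, big_cost = 5.0, 100.0
p_small, p_big = 0.30, 0.55          # assumed single-sample success rates

n = int(budget // small_cost)        # samples the small model can afford
best_of_n = 1 - (1 - p_small) ** n   # P(at least one hit), perfect selection
print(f"big model, 1 sample:      {p_big:.2f}")
print(f"small model, {n} samples: {best_of_n:.2f}")
# Here best-of-20 ~ 1 - 0.7**20 ~ 0.999; the small model dominates only
# because selection is assumed perfect. A noisy verifier flattens the curve.
```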
papers
Paper note on recurrent self-attentive depth, dynamic halting, and the idea that transformers can trade stored depth for repeated computation.
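A simplified, non-faithful rendering of the halting mechanism in that family (ACT-style accumulation; dimensions and module names are invented here): each position accrues halting mass per pass and stops contributing once the mass nears 1.

```python
import torch
import torch.nn as nn

class ACTRefiner(nn.Module):
    def __init__(self, d: int = 128, max_steps: int = 8, eps: float = 0.01):
        super().__init__()
        self.block = nn.Sequential(nn.Linear(d, d), nn.GELU(), nn.Linear(d, d))
        self.halt = nn.Linear(d, 1)  # per-position halting probability
        self.max_steps, self.eps = max_steps, eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        cum = x.new_zeros(x.shape[:-1])  # accumulated halting mass, (batch, seq)
        out = torch.zeros_like(x)        # halting-weighted mixture of states
        for _ in range(self.max_steps):
            x = self.block(x)            # one more pass of the shared block
            p = torch.sigmoid(self.halt(x)).squeeze(-1)
            p = torch.where(cum < 1 - self.eps, p, torch.zeros_like(p))
            p = torch.minimum(p, 1 - cum)  # never exceed total mass 1
            out = out + p.unsqueeze(-1) * x
            cum = cum + p
            if bool((cum >= 1 - self.eps).all()):
                break                    # every position has halted
        # Simplified vs. full ACT: leftover mass at max_steps is dropped
        # rather than assigned to the final state.
        return out

refiner = ACTRefiner()
y = refiner(torch.randn(2, 16, 128))
```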