(Grangier et al., 2024)

Sources: arXiv:2402.01093 · alphaXiv overview

Core contribution

Plan Early argues that if the final deployment setting is cheap inference, the whole pipeline should be designed around that fact from the start. The paper emphasizes asymmetric train-vs-inference design, task specialization, and importance-aware data or compute allocation instead of assuming that the best path is “train something general and compress later.”

Why this matters for Parameter Golf

This paper is one of the clearest strategic mirrors of the challenge itself. Parameter Golf is a budgeted design problem, not an unconstrained pretraining contest. That means “budget awareness from day one” is not just good engineering advice; it is a research stance.

What to import

  • Inference budget should be a first-class design variable.
  • Specialization can dominate generic scale when the deployment target is narrow enough.
  • Train-time and inference-time budgets can and should be asymmetric.
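The "budget as a first-class design variable" stance can be made concrete with a small sketch: instead of picking the best model and compressing afterward, filter candidates by the inference budget first and only then optimize quality. All names, numbers, and the proxy-quality scores below are illustrative assumptions, not figures from the paper.

```python
# Hypothetical sketch: treat the inference budget as a hard constraint
# up front, then optimize quality only within the feasible set.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    flops_per_token_m: float  # inference FLOPs per token (millions, made up)
    proxy_quality: float      # score on a cheap proxy eval (higher is better)

CANDIDATES = [
    Candidate("wide-generalist", 700, 0.71),
    Candidate("narrow-specialist", 180, 0.68),
    Candidate("tiny-specialist", 80, 0.61),
]

def plan_early(candidates, inference_budget_m):
    """Pick the highest-quality candidate that fits the inference budget,
    rather than training the biggest model and compressing it later."""
    feasible = [c for c in candidates if c.flops_per_token_m <= inference_budget_m]
    return max(feasible, key=lambda c: c.proxy_quality) if feasible else None

pick = plan_early(CANDIDATES, inference_budget_m=200)
print(pick.name)  # the generalist is excluded by the budget before quality is compared
```

The point of the sketch is the ordering of operations: the budget prunes the design space before any quality comparison happens, which is the planning posture the paper advocates.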

What not to over-import

The paper does not imply that we should overfit to local proxies or that every specialized design will generalize to the challenge target. The right import is about planning posture: start from the actual budgeted objective instead of hoping late compression will fully repair a mismatched design.

Connections

  • Pairs with Inference Scaling Laws on the principle that inference constraints can change the optimal model family.
  • Connects to tokenizer and vocabulary efficiency because tokenization and output-head choices are part of early planning, not just downstream cleanup.
  • Reinforces training economics by distinguishing quality-per-training-budget from quality-per-inference-budget.
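The last point, that quality-per-training-budget and quality-per-inference-budget are different objectives, can be shown with two made-up models: the numbers below are illustrative assumptions chosen so that each ratio picks a different winner, not measurements from the paper.

```python
# Hypothetical numbers: a generalist can win on quality per training FLOP
# while a specialist (which paid extra training cost to be cheap to serve)
# wins on quality per inference FLOP.
models = {
    # name: (quality, train_flops, inference_flops_per_token) -- made-up units
    "generalist-1B": (0.80, 1.0e21, 2.0e9),
    "specialist-200M": (0.74, 1.5e21, 4.0e8),
}

def per_train(name):
    quality, train_flops, _ = models[name]
    return quality / train_flops

def per_inference(name):
    quality, _, infer_flops = models[name]
    return quality / infer_flops

best_train = max(models, key=per_train)
best_infer = max(models, key=per_inference)
print(best_train, best_infer)
```

A deployment that amortizes training over many served tokens should care about the second ratio, which is exactly why the two budgets can and should be treated asymmetrically.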

Parameter Golf translation

Plan Early suggests asking upfront:

  • Should a compact specialized model be designed around the scoring metric from the start, rather than compressed toward it later?
  • Which components deserve training compute even if they do not increase stored bytes?
  • Where should we deliberately accept train-time expense to save on persistent-artifact size or inference cost?

Grangier, D., Katharopoulos, A., Ablin, P., & Hannun, A. (2024). Need a Small Specialized Language Model? Plan Early! arXiv preprint arXiv:2402.01093. https://arxiv.org/abs/2402.01093