Priority 1 — surface the frontier synthesis layer
Add backlinks to Research Frontiers from:
Why this is high-value:
- frontiers/* is the strongest existing bridge layer; adding this route surfaces several cross-lane research theses at once
- this is higher value than sprinkling many small paper-to-paper links
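Every priority below reduces to the same mechanical edit, so it is worth sketching once. A minimal Python sketch, assuming the garden is a directory of markdown files that use [[wiki-link]] syntax and keep a Related section at the end of each page; the garden/ path, the heading text, and the example file names are placeholders rather than real paths from this garden:

```python
from pathlib import Path

def add_backlink(source: Path, target_title: str) -> None:
    """Append a wiki-style link to target_title on a source page,
    skipping pages that already route to the target."""
    text = source.read_text(encoding="utf-8")
    link = f"[[{target_title}]]"
    if link in text:
        return  # the page already links to the target
    if "## Related" not in text:
        text = text.rstrip("\n") + "\n\n## Related\n"
    # Assumes the Related section, when present, is the last section,
    # so appending keeps the new link under it.
    text = text.rstrip("\n") + f"\n- {link}\n"
    source.write_text(text, encoding="utf-8")

# Illustrative call only; the real source pages are the ones listed
# under each priority.
for page in ["recursive-width-scaling.md", "research-atlas.md"]:
    add_backlink(Path("garden") / page, "Research Frontiers")
```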
Priority 2 — route readers into the most important frontier pages
Backlink to Compression Interfaces for Shared Depth from:
- Recursive and shared-parameter architectures
- Quantization, outliers, and compression-aware training
- Recursive width scaling
- Recurrent wide architecture
- RMSNorm stabilized scaling
- Research atlas
Why:
- this is the strongest bridge between the current architecture and compression-robustness lanes
Backlink to Tokenizer-Head Co-Design Under a Hard Cap from:
- Tokenizer and vocabulary efficiency
- Training economics and small-model bottlenecks
- The LM Head Is Part of the Compression Problem
- Output-Head Compression
- Research atlas
Why:
- this page names one of the garden’s most underconnected but high-upside research seams
Backlink to Refinement Loops as Decompression from:
- Evaluation-time compute and inference scaling
- Iterative Refinement over Stored Depth
- Recurrent wide architecture
- Compute-for-storage exchange
Why:
- this is the best current bridge between recurrent structure and test-time compute
Priority 3 — surface the challenge-history layer
Add backlinks to Challenge History from:
- Parameter Golf challenge
- History and Public Runs
- How to Read the Leaderboard and Public Records
- Public Research Directions
- Map of content
Why:
- this turns the current one-way bridge between challenge framing and challenge-history analysis into a two-way route
- it makes public-record interpretation discoverable from the main challenge route
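One-way bridges like this can be found mechanically rather than by rereading pages. A sketch under the same assumptions as above ([[wiki-link]] syntax, file stems matching page titles); it reports every pair where one page cites another that never links back:

```python
import re
from pathlib import Path

# Capture the target of [[Page]], [[Page|alias]], or [[Page#anchor]].
WIKILINK = re.compile(r"\[\[([^\]|#]+)")

def outbound_links(garden: Path) -> dict[str, set[str]]:
    """Map each page (by file stem) to the set of pages it links to."""
    links: dict[str, set[str]] = {}
    for page in garden.rglob("*.md"):
        text = page.read_text(encoding="utf-8")
        links[page.stem] = {m.strip() for m in WIKILINK.findall(text)}
    return links

def one_way_bridges(links: dict[str, set[str]]) -> list[tuple[str, str]]:
    """Pairs (a, b) where a links to b but b never links back to a."""
    return [
        (a, b)
        for a, targets in sorted(links.items())
        for b in sorted(targets)
        if b in links and a not in links[b]
    ]

for a, b in one_way_bridges(outbound_links(Path("garden"))):
    print(f"{a} -> {b} has no return link")
```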
Priority 4 — connect hypotheses to concrete evidence
Backlink to RWA Breadth Experiment from:
- Recurrent wide architecture
- Recursive width scaling
- Recursive and shared-parameter architectures
- Research atlas
Why:
- this creates a visible evidence trail for the recurrent-width lane
- right now the architecture lane reads as more theoretical than it needs to
Backlink to Unified Compression-Aware Architecture from:
- Hypothesis ledger
- Research atlas
- Compression Interfaces for Shared Depth
- Quantization, outliers, and compression-aware training
Why:
- even if provisional, it is an important synthesis node for combined architecture + compression thinking
Priority 5 — rescue the orphan idea layer
Backlink to Norm-Only Phase Specialization from:
- Phase-Conditioned Sharing
- Shared Depth Needs Cheap Specialization
- Compression Interfaces for Shared Depth
Backlink to Head-to-Depth Budget Swap from:
Backlink to Entropy-Weighted Vocabulary Rescue from:
Backlink to Global Codebook Recursive Backbone from:
Backlink to Token-Adaptive Recurrent Refinement from:
- Refinement Loops as Decompression
- Iterative Refinement over Stored Depth
- Evaluation-time compute and inference scaling
Why this entire tier matters:
- the idea layer already contains concrete next experiments
- the lack of backlinks makes the garden weak at converting synthesis into agenda
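This tier is also the easiest to audit automatically: an orphan is simply a page with zero inbound links. A self-contained sketch under the same wiki-link assumptions; the garden/ path is again a placeholder:

```python
import re
from pathlib import Path

WIKILINK = re.compile(r"\[\[([^\]|#]+)")

def orphans(garden: Path) -> list[str]:
    """Pages that no other page in the garden links to."""
    pages = {p.stem: p.read_text(encoding="utf-8") for p in garden.rglob("*.md")}
    inbound = {name: 0 for name in pages}
    for name, text in pages.items():
        for raw in WIKILINK.findall(text):
            target = raw.strip()
            if target in inbound and target != name:
                inbound[target] += 1
    return sorted(name for name, count in inbound.items() if count == 0)

for name in orphans(Path("garden")):
    print(f"orphan: {name}")
```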
Priority 6 — targeted paper rescue
Backlink to BitNet b1.58 from:
Why:
- BitNet is more important to the native-low-bit framing than its current visibility suggests
Backlink to Computational Bottlenecks of Training SLMs from:
- Public Research Directions
- The LM Head Is Part of the Compression Problem
- Tokenizer and vocabulary efficiency
Why:
- it is central to compute-aware interpretation of compact-model tradeoffs, not just a side paper for the training lane
Summary principle
The best backlinks are the ones that surface:
- hidden synthesis
- hidden agenda pages
- hidden evidence pages
- challenge-history context
They are not the ones that merely make the graph denser.
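Stated as a scoring rule instead of prose: reward candidates that surface pages in the hidden layers above, and discount targets that are already well connected. A rough sketch; frontiers/ is the one directory this plan actually names, while the other layer names and all weights are assumptions to be tuned against the real garden layout:

```python
from pathlib import Path

# frontiers/ is named in Priority 1; the other layers and every weight
# are placeholder assumptions, not this garden's real structure.
LAYER_WEIGHT = {
    "frontiers": 3.0,    # hidden synthesis
    "ideas": 2.0,        # hidden agenda pages
    "experiments": 2.0,  # hidden evidence pages
    "challenges": 2.0,   # challenge-history context
}

def backlink_value(target: Path, inbound_count: int) -> float:
    """Score a candidate backlink: surfacing a buried high-value page
    beats adding one more edge to an already dense hub."""
    layer = target.parts[0] if len(target.parts) > 1 else ""
    surfacing = LAYER_WEIGHT.get(layer, 1.0)
    # Each extra inbound link to an already visible page is worth less.
    return surfacing / (1 + inbound_count)

# A buried frontier page outranks a well-connected atlas page:
print(backlink_value(Path("frontiers/compression-interfaces.md"), 1))   # 1.5
print(backlink_value(Path("atlas/research-atlas.md"), 12))              # ~0.08
```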