This page is a routing map, not a fake graph visualization.
Use it to enter the garden at the right layer:
- the core research graph for the challenge, its history, experiments, lanes, hypotheses, frontiers, ideas, and papers
- the meta layer for harness, workflow, and editorial notes
- the reports layer for maintainer-facing audits and IA proposals

## Core research spine
- Challenge overview
- Challenge history
- Local experiment history
- Research lanes
- Hypothesis ledger
- Research frontiers
- Research ideas
- Paper index
- Research atlas

## Challenge framing
- Constraints and scoring
- Local benchmark vs official evaluation
- History and public runs
- How to read the leaderboard and public records

## Public record and local evidence

### Public record

### Local evidence
- Local research lineage
- Kept runs and turning points
- Dead ends and failed directions
- RWA breadth experiment

## Research lanes
- Recursive and shared-parameter architectures
- Quantization, outliers, and compression-aware training
- Tokenizer and vocabulary efficiency
- Training economics and small-model bottlenecks
- Evaluation-time compute and inference scaling

## Hypotheses
- RMSNorm stabilized scaling
- Sparse outlier preservation
- Recursive width scaling
- Recurrent wide architecture
- Phase-conditioned sharing
- Output-head compression
- Iterative refinement over stored depth
- Unified compression-aware architecture

## Frontier synthesis and original bets

### Frontier synthesis
- Byte allocation beats average bit-width
- Compression interfaces for shared depth
- Tokenizer-head co-design under a hard cap
- Entropy-friendly model structure
- Refinement loops as decompression

### Original ideas
- Norm-Only Phase Specialization
- Token-Adaptive Recurrent Refinement
- Entropy-Weighted Vocabulary Rescue
- Head-to-Depth Budget Swap
- Global Codebook Recursive Backbone

## Core paper notes
- An Extra RMSNorm is All You Need for Fine Tuning to 1.58 Bits
- pQuant
- ClusComp
- Relaxed Recursive Transformers
- MoEUT
- Extreme Compression via Additive Quantization
- QuEST
- Fine-grained Parameter Sharing
- ReTok
- LLM Vocabulary Compression for Low-Compute Environments
- Computational Bottlenecks of Training SLMs
- Inference Scaling Laws
- Tokenizer Evaluation Across Scales
- Need a Small Specialized LM? Plan Early!

## Meta and reports

### Meta layer
- Meta layer
- Local research protocol
- Background research throughput
- Background research engineering
- Pi extension architecture
- KB roadmap