"A good generator is boring: deterministic, streamable, restartable, and easy to audit."
Overview
JSONL generation turns heterogeneous raw sources into deterministic streams of validated training records. In an LLM training run, data is not an inert pile of text; it is the empirical distribution that defines the examples, losses, risks, and capabilities the model will see.
This section is written as LaTeX Markdown. Inline mathematics uses $...$, and display equations use $$...$$. The goal is to connect data engineering decisions to mathematical objects such as records, token sequences, filters, hashes, mixture weights, and empirical expectations.
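As an illustrative sketch of how such objects compose (the symbols below are assumptions for this overview, not the chapter's fixed notation), a generator can be viewed as a map $g$ from source objects to records, and dataset statistics as empirical expectations over its output:

$$
g : \mathcal{S} \to \mathcal{R}, \qquad \hat{\mathbb{E}}[f] = \frac{1}{N} \sum_{i=1}^{N} f\big(g(s_i)\big), \quad s_i \in \mathcal{S},
$$

where $\mathcal{S}$ is the set of raw source objects, $\mathcal{R}$ the set of validated records, and $f$ any per-record statistic (length, language score, duplicate indicator).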
The scope is deliberately narrow: this chapter owns the training-data pipeline. Tokenizer design, GPU training systems, benchmark methodology, alignment objectives, and production MLOps each have their own canonical chapters. Here we study the data objects that those later systems consume.
Prerequisites
Companion Notebooks
| Notebook | Description |
|---|---|
| theory.ipynb | Executable demonstrations for JSONL generation |
| exercises.ipynb | Graded practice for JSONL generation |
Learning Objectives
After completing this section, you will be able to:
- Define a JSONL generator as a deterministic map from source objects to records
- Implement memory-safe streaming over shards
- Preserve metadata and source trace fields during serialization
- Design quarantine paths for records that fail validation
- Measure throughput, parse failure rates, and duplicate IDs
- Explain atomic writes, resume logic, and deterministic ordering
- Separate extraction, transformation, validation, and writing stages
- Generate line-delimited JSON that can be parsed independently per line
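Several of the objectives above can be sketched together in a few lines. The following is a minimal sketch, not the chapter's reference implementation; the field names (`text`, `source`), the truncated 16-character content hash, and the in-memory duplicate set are illustrative assumptions. It shows a deterministic source-to-record map, per-line serialization, duplicate-ID detection, and a quarantine path for records that fail validation.

```python
import hashlib
import json

def make_record(source_obj):
    """Deterministically map a raw source object to a training record.
    The schema here is illustrative, not fixed by the chapter."""
    text = source_obj["text"].strip()
    # A content-derived ID keeps the map deterministic and makes
    # duplicates detectable downstream.
    record_id = hashlib.sha256(text.encode("utf-8")).hexdigest()[:16]
    return {"id": record_id, "text": text,
            "source": source_obj.get("source", "unknown")}

def generate_jsonl(source_objs):
    """Stream (status, line) pairs: accepted JSONL lines or quarantined rejects."""
    seen_ids = set()
    for obj in source_objs:
        try:
            record = make_record(obj)
        except (KeyError, AttributeError, TypeError):
            # Extraction failed: quarantine the raw object instead of crashing.
            yield ("quarantine", json.dumps(obj, sort_keys=True))
            continue
        if not record["text"] or record["id"] in seen_ids:
            yield ("quarantine", json.dumps(record, sort_keys=True))
            continue
        seen_ids.add(record["id"])
        # sort_keys and the absence of raw newlines mean each emitted
        # line is a complete JSON document, parseable independently.
        yield ("accept", json.dumps(record, sort_keys=True, ensure_ascii=False))

sources = [
    {"text": "hello world", "source": "web"},
    {"text": "hello world", "source": "web"},  # duplicate -> quarantine
    {"source": "web"},                          # missing text -> quarantine
]
for status, line in generate_jsonl(sources):
    print(status, line)
```

Because `generate_jsonl` is a generator over an iterable of sources, it never materializes the full dataset in memory; the same loop works over a shard read lazily from disk.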
Study Flow
- Read the pages in order and pause after each page to restate the main definition or theorem.
- Run `theory.ipynb` when you want to check the formulas numerically.
- Use `exercises.ipynb` after the reading path, not before it.
- Return to this overview page when you need the chapter-level navigation.