Trainer API with metrics and logging #
This module defines a small, higher-level training API on top of a step function. It stays lightweight (no global state), while making it easy to:
- return a structured report per step (loss + metrics)
- render reports into human-readable logs
- plug in a logger if you want to print during training
Metrics and step reports #
A named scalar metric for logging/monitoring.
Examples: "acc", "top5", "grad_norm", "lr".
- name : String
Metric name (used as the key in logs).
- value : a
Metric value (typically the same scalar type as the loss).
Instances For
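The metric record above can be sketched in Lean as follows. The structure name `Metric` is an assumption (the docs give only the fields); the `name` and `value` fields and their docstrings come from the field list above.

```lean
/-- A named scalar metric for logging/monitoring (sketch; the name
`Metric` is assumed, the fields follow the docs above). -/
structure Metric (a : Type) where
  /-- Metric name (used as the key in logs). -/
  name : String
  /-- Metric value (typically the same scalar type as the loss). -/
  value : a

-- Example: an accuracy metric over Float.
def acc : Metric Float := { name := "acc", value := 0.93 }
```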
Per-step training report.
This is a lightweight record: a single scalar loss (to drive optimization) plus optional metrics for logging/monitoring.
- loss : a
Scalar objective value for this step or evaluation pass.
Additional named scalar metrics for logging/monitoring.
Instances For
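A sketch of the per-step report, assuming the `Metric` record from the previous section. The field name `metrics` and its representation as a list with an empty default are assumptions; `loss` and its docstring come from the field list above.

```lean
/-- Per-step training report (sketch). -/
structure StepReport (a : Type) where
  /-- Scalar objective value for this step or evaluation pass. -/
  loss : a
  /-- Additional named scalar metrics for logging/monitoring
  (field name and `List` representation are assumptions). -/
  metrics : List (Metric a) := []
```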
Render a single step report (loss + metrics).
Instances For
Render reports for a full run, with step numbers starting at 0.
Instances For
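The two renderers can be sketched like this, assuming the `StepReport` record above. The function names, the `key=value` output format, and the `ToString` constraint are assumptions; the 0-based step numbering follows the docs.

```lean
/-- Render a single step report: the loss followed by each metric
as `name=value` (sketch). -/
def StepReport.render [ToString a] (r : StepReport a) : String :=
  let parts := s!"loss={r.loss}" :: r.metrics.map (fun m => s!"{m.name}={m.value}")
  String.intercalate " " parts

/-- Render reports for a full run, with step numbers starting at 0
(sketch). -/
def renderRun [ToString a] (rs : List (StepReport a)) : List String :=
  rs.enum.map (fun (i, r) => s!"step {i}: {r.render}")
```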
Generic training loop #
This is a light wrapper around a "step" function that returns a new state and an output. It remains useful for very small tests that do not need full metrics.
Run a monadic step function for a fixed number of steps, collecting the per-step outputs.
This is a generic utility (not Torch-specific): it threads a state value and accumulates an out value per step.
Instances For
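The loop described above can be sketched as a recursive monadic fold. The name `runSteps` and the exact signature are assumptions; the behavior (thread a state, collect one output per step) follows the docs.

```lean
/-- Run a monadic step function for a fixed number of steps, threading
the state and collecting the per-step outputs (sketch). -/
def runSteps {m : Type → Type} [Monad m] {state out : Type}
    (step : state → m (state × out)) :
    Nat → state → m (state × List out)
  | 0, s => pure (s, [])
  | n + 1, s => do
    let (s', o) ← step s
    let (sEnd, os) ← runSteps step n s'
    pure (sEnd, o :: os)

-- Example with the identity monad: increment a counter and emit it.
-- Returns (3, [0, 1, 2]).
#eval runSteps (m := Id) (fun s => pure (s + 1, s)) 3 0
```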
Trainer structure #
Trainer bundles the initial state, step function, and optional logger.
The logger runs after each step and can observe the updated state and report.
A small "trainer bundle": initial state, step function, and a per-step logger. Running the logger after the update, with access to the new state and the report, matches how training scripts typically print "after-update" metrics.
- init : state
Initial training state.
- step : state → m (state × StepReport a)
A single training step: update state and produce a report.
- logger : ℕ → state → StepReport a → m Unit
Optional logger hook called after each step.
Instances For
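The bundle can be sketched directly from the field list above. The `Trainer` field types come from the docs; the driver function `Trainer.run`, its name, and its use of an array accumulator are assumptions illustrating the "logger runs after each step" contract.

```lean
/-- Trainer bundle: initial state, step function, and per-step logger
(sketch; assumes the `StepReport` record above). -/
structure Trainer (m : Type → Type) (state a : Type) where
  init : state
  step : state → m (state × StepReport a)
  logger : Nat → state → StepReport a → m Unit

/-- Run a trainer for `n` steps, calling the logger after each update
with the step index, new state, and report (sketch; the name
`Trainer.run` is an assumption). -/
def Trainer.run {m : Type → Type} [Monad m] {state a : Type}
    (t : Trainer m state a) (n : Nat) :
    m (state × List (StepReport a)) := do
  let mut s := t.init
  let mut reports := #[]
  for i in [0:n] do
    let (s', r) ← t.step s
    t.logger i s' r       -- logger observes the *updated* state
    s := s'
    reports := reports.push r
  pure (s, reports.toList)
```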
Construct a Trainer with a no-op logger.
This is a convenient default for experiments that just want to collect reports.
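A minimal sketch of the no-op-logger constructor; the name `Trainer.silent` is an assumption, and it simply fills the `logger` field with a function that returns `pure ()`.

```lean
/-- Construct a `Trainer` with a no-op logger (sketch; the name
`Trainer.silent` is an assumption). -/
def Trainer.silent {m : Type → Type} [Monad m] {state a : Type}
    (init : state) (step : state → m (state × StepReport a)) :
    Trainer m state a :=
  { init, step, logger := fun _ _ _ => pure () }
```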