Scalar #
Runtime scalar conventions.
Runtime execution stays in `IEEE32Exec`/`Float`/`NeuralFloat`; each backend has a different proof status relative to the spec layer.
Note on trust boundaries:
- Lean's `Float` is an implementation type; this repo does not prove that `Float` execution matches an IEEE-754 spec. Claims connecting `Float` execution to spec-level `ℝ` therefore cross a trusted runtime interface.
- `IEEE32Exec` is an executable bit-level IEEE-754 binary32 model; connecting it to Lean/runtime hardware float32 is out of scope (treat that bridge as trusted).
- For proof-relevant numeric execution, use the rounding model backends (`NeuralFloat`/`NF`), where per-op error bounds can be stated and composed. `NeuralFloat`/`NF` are formal models implemented in Lean. Relating them to real hardware floating-point (or Lean's `Float`) is a separate backend-correlation assumption, not a theorem in this module.
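As a hedged illustration of what "per-op error bounds can be stated and composed" means, the sketch below shows the shape such a bound could take in the rounding-model backend. All names here (`NF`, `NF.add`, `NF.toReal`, `ε`) are hypothetical placeholders, not declarations from this module.

```lean
import Mathlib.Data.Real.Basic

-- Hypothetical sketch only: illustrative stand-ins for an NF-style
-- rounding-model backend; the real module's names and signatures may differ.
axiom NF : Type
axiom NF.add : NF → NF → NF
axiom NF.toReal : NF → ℝ
axiom ε : ℝ

-- Shape of a composable per-op bound: the executed sum stays within a
-- relative error of the spec-level real sum. Stated as an axiom here
-- purely to show the statement form, not as a claim proved in this repo.
axiom NF.add_error (x y : NF) :
    |(NF.add x y).toReal - (x.toReal + y.toReal)| ≤ ε * |x.toReal + y.toReal|
```

Because the bound is phrased against `toReal`, bounds for individual operations can be chained through a computation to give an end-to-end error estimate at the spec layer; no such statement is available for raw `Float` execution, which sits on the trusted side of the interface.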
@[reducible, inline]
Default runtime scalar for execution.
Instances For
@[reducible, inline]
Runtime tensors are Float-typed tensors.
Instances For
@[reducible, inline]
NeuralFloat runtime scalar (precision-aware).
Instances For
@[reducible, inline]
Runtime tensors backed by NeuralFloat.
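The four `@[reducible, inline]` stubs above describe a pair of scalar aliases (`Float`-based and `NeuralFloat`-based) with matching tensor aliases. A minimal sketch of that alias pattern, with hypothetical names (`Tensor`, `RScalar`, `RTensor`, `NFScalarAlias`, `NFTensorAlias`) standing in for the module's actual declarations:

```lean
-- Illustrative only: `Tensor` and all alias names below are assumptions
-- mirroring the doc-strings above, not this module's real identifiers.
axiom NeuralFloat : Type
axiom Tensor : Type → Type

-- Default runtime scalar for execution.
@[reducible, inline] def RScalar : Type := Float

-- Runtime tensors are Float-typed tensors.
@[reducible, inline] def RTensor : Type := Tensor RScalar

-- NeuralFloat runtime scalar (precision-aware).
@[reducible, inline] def NFScalarAlias : Type := NeuralFloat

-- Runtime tensors backed by NeuralFloat.
@[reducible, inline] def NFTensorAlias : Type := Tensor NFScalarAlias
```

Marking the aliases `@[reducible]` lets instance search and definitional unfolding see through them, so code written against the alias picks up the backend type's instances; `@[inline]` keeps the indirection free at runtime.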