NN.Proofs.RuntimeApprox.FP32.Layers

FP32 Layer Approximation #

This module specializes the backend-generic runtime-approximation framework (NN.Proofs.RuntimeApprox) to the concrete float32 rounding model TorchLean.Floats.FP32 (round-to-nearest-even with an IEEE-754 binary32-style exponent function).

The lemmas here are compositional: they let you relate a real-valued spec computation to its float32 execution under an explicit error budget, so that larger network theorems can be proved by chaining smaller ones.

Trust boundary: TorchLean.Floats.FP32 is a finite rounding model exposed to Lean. These statements are about real-valued spec computations and their rounded counterparts, under the intended side condition that execution stays finite (no NaN/Inf/overflow in an IEEE-754 hardware sense).
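For orientation (standard background, not a statement proved in this module): a binary32-style round-to-nearest function satisfies the usual forward error model

$$ |\mathrm{rnd}(x) - x| \;\le\; u\,|x| \;+\; \tfrac{\eta}{2}, \qquad u = 2^{-24},\quad \eta = 2^{-149}, $$

where the relative term $u\,|x|$ governs the normal range, the absolute term $\eta/2$ covers the subnormal range, and the bound presumes the finiteness side condition above (no overflow).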

@[reducible, inline]

Runtime scalar type: float32 rounding model (TorchLean.Floats.FP32).
@[reducible, inline]

Radix for the FP32 rounding model (binary).
@[reducible, inline]

Exponent function used by the FP32 rounding model.
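The definition itself is elided in the generated signature above. If it follows the standard Flocq-style FLT exponent function for IEEE-754 binary32 (an assumption, not confirmed by this page), a minimal sketch would be:

```lean
-- Assumed shape only: Flocq-style FLT exponent function for binary32.
-- Precision is 24 bits; -149 is the subnormal floor (3 - emax - prec
-- with emax = 128, prec = 24). The actual TorchLean definition may differ.
def fexpBinary32 (e : Int) : Int := max (e - 24) (-149)
```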
@[reducible, inline]
noncomputable abbrev NN.Proofs.RuntimeApprox.FP32.rnd :

Round-to-nearest-even function used by the FP32 rounding model.
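As a worked example of round-to-nearest-even at 24-bit precision (standard IEEE-754 behavior, not a statement from this module): floats in $[1, 2)$ are spaced $2^{-23}$ apart, so both inputs below are exact ties, resolved toward the neighbor whose last significand bit is even:

$$ \mathrm{rnd}\bigl(1 + 2^{-24}\bigr) = 1, \qquad \mathrm{rnd}\bigl(1 + 3\cdot 2^{-24}\bigr) = 1 + 2^{-22}. $$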
@[reducible, inline]
noncomputable abbrev NN.Proofs.RuntimeApprox.FP32.toSpec : R

Interpretation of runtime scalars as real spec scalars, specialized to FP32.
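The approximation judgment that `toSpec` makes available, and that the layer lemmas below chain, plausibly has the following shape. This is a hypothetical sketch: the names `F` and `Approx` and the exact statement are assumptions, not the TorchLean API.

```lean
import Mathlib.Data.Real.Basic

-- Hypothetical sketch: every name below is an assumption, not TorchLean API.
axiom F : Type          -- runtime scalar type, standing in for FP32
axiom toSpec : F → ℝ    -- interpretation into the real spec scalars

/-- `xr` approximates the spec value `x` within absolute budget `ε`. -/
def Approx (xr : F) (x ε : ℝ) : Prop := |toSpec xr - x| ≤ ε
```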

            Forward error bound for a linear layer y = Wx + b under the FP32 rounding semantics.

            Inputs:

            • hW, hb, and hx say the runtime weights, bias, and input approximate their real-spec counterparts.

            Output:

• an explicit error budget eps, packaged as an existential, such that the whole FP32 linear-layer result approximates the real-spec linear-layer result.
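The exact value of eps is not reproduced here. For orientation, the real-arithmetic core of such a budget follows from the triangle inequality together with the elementwise product bound $|a'b' - ab| \le |a|\,\varepsilon_b + |b|\,\varepsilon_a + \varepsilon_a\varepsilon_b$:

$$ \bigl|(W'x' + b')_i - (Wx + b)_i\bigr| \;\le\; \sum_{j=1}^{n} \bigl|W'_{ij}\,x'_j - W_{ij}\,x_j\bigr| + \bigl|b'_i - b_i\bigr| \;\le\; n\bigl(\|W\|_{\max}\,\varepsilon_x + \|x\|_{\infty}\,\varepsilon_W + \varepsilon_W\varepsilon_x\bigr) + \varepsilon_b, $$

with further rnd-dependent terms contributed by the roundings the FP32 execution performs.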

            This is the base layer theorem used by the MLP and CROWN/IBP FP32 wrappers.
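A statement sketch of the real-valued shape such a lemma specializes is below; the identifiers, argument layout, and the `sorry` placeholder are illustrative assumptions, not the TorchLean lemma.

```lean
import Mathlib

-- Hypothetical sketch, not the TorchLean statement: runtime data (primed)
-- approximates spec data entrywise, and an error budget `eps` is produced
-- for the whole layer. FP32 rounding error would add rnd-dependent terms.
theorem linear_forward_error {m n : ℕ}
    (W W' : Matrix (Fin m) (Fin n) ℝ) (b b' : Fin m → ℝ) (x x' : Fin n → ℝ)
    (εW εb εx : ℝ)
    (hW : ∀ i j, |W' i j - W i j| ≤ εW)   -- runtime weights ≈ spec weights
    (hb : ∀ i, |b' i - b i| ≤ εb)         -- runtime bias ≈ spec bias
    (hx : ∀ j, |x' j - x j| ≤ εx) :       -- runtime input ≈ spec input
    ∃ eps : ℝ, ∀ i,
      |(W'.mulVec x' + b') i - (W.mulVec x + b) i| ≤ eps := by
  sorry
```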