TorchLean API

NN.Floats.NeuralFloat.Formats

Flocq-style formats (FIX / FLX / FLT)

We are not defining the executable IEEE-754 layer here.

What we do here is the same separation used by Flocq: we only define the exponent-selection functions (fexps) of three format families:

• FIX_exp emin — every value lives at one fixed exponent (fixed-point),
• FLX_exp prec — unbounded exponent range with prec digits of precision,
• FLT_exp emin prec — exponent bounded below by emin, with gradual underflow.

Those fexps let us talk about “a fixed-point grid”, “an unbounded float grid”, or “a bounded, IEEE-like float grid with gradual underflow” without committing to a concrete bit encoding.

In particular:

• If you want NaN/Inf/signed-zero and an executable kernel, that is NN/Floats/IEEEExec/.
• For TorchLean-specific “which precision do we use in each phase?” configuration helpers, see NN/Floats/NeuralFloat/Metadata.lean.


FIX_exp emin is the simplest exponent-selection function: it always returns the same exponent.

This is the Flocq “FIX” family. It is useful when you want to reason about values living on a single, fixed grid β^emin * ℤ (think: fixed-point arithmetic / quantization).
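As a quick illustration (a minimal sketch; `FIXExpSketch` is an illustrative name, not an identifier exported by this file), the FIX exponent-selection function simply ignores the magnitude of its input:

```lean
-- Minimal sketch of the FIX fexp.  A fexp : ℤ → ℤ maps a value's
-- magnitude exponent to the exponent of its least significant digit;
-- FIX returns the same exponent no matter how large the value is.
def FIXExpSketch (emin : Int) : Int → Int :=
  fun _ => emin

#eval FIXExpSketch (-3) 100   -- -3: the grid stays β^(-3) * ℤ
```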


FIX_exp satisfies the standard Flocq-style Valid_exp axioms (here: NeuralValidExp).

Even though the proof is trivial, having the instance is what lets later theorems reuse the same generic lemmas for FIX/FLX/FLT.

FIX_format emin x says “x is exactly representable on the fixed grid”.

This is phrased via an existential NeuralFloat β so that it composes smoothly with the rest of the rounding model (neural_to_real, ULP bounds, etc.).
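For intuition, here is a hedged sketch of what the predicate unfolds to once that existential is opened up into a bare mantissa (assuming Mathlib's ℝ; `β : ℕ` stands in for the project's NeuralRadix):

```lean
import Mathlib

-- Illustrative sketch only: the actual FIX_format quantifies over a
-- `NeuralFloat β` with `neural_to_real f = x`; here that existential
-- is unfolded into the mantissa `m` directly.
def FIXFormatSketch (β : ℕ) (emin : ℤ) (x : ℝ) : Prop :=
  ∃ m : ℤ, x = (m : ℝ) * (β : ℝ) ^ emin   -- x lies on the grid β^emin * ℤ
```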


FLX_exp prec is the unbounded-exponent family.

This is Flocq’s “FLX” family: it models a floating-point format with no exponent bounds but with a mantissa precision parameter prec. It is a convenient intermediate model for proofs because it removes underflow/overflow corner cases while still tracking mantissa rounding.
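In Flocq, FLX_exp prec is the function e ↦ e - prec; a sketch under the assumption that this file follows the same definition (illustrative name again):

```lean
-- Sketch of the FLX fexp: a value whose magnitude exponent is `e`
-- keeps `prec` significand digits, so its least significant digit
-- sits at `e - prec`, with no lower bound on the result.
def FLXExpSketch (prec : Int) : Int → Int :=
  fun e => e - prec

#eval FLXExpSketch 24 1   -- -23: 24 digits below a leading digit at exponent 1
```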

instance TorchLean.Floats.flxValidExp (prec : ℤ) (h : 0 < prec) : NeuralValidExp (FLXExp prec)

FLX_exp satisfies NeuralValidExp.

The side-condition 0 < prec matches the standard assumption that “precision is positive”.

Exact representability predicate for FLX.

Informally: there exists a mantissa/exponent pair with mantissa bounded by the precision, and x = m * β^e.
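A hedged rendering of that description in Lean (assuming Mathlib; `β : ℕ` again stands in for NeuralRadix, and the bound is stated on the absolute value of the mantissa):

```lean
import Mathlib

-- Illustrative sketch of the FLX representability predicate:
-- bounded mantissa, unconstrained exponent.
def FLXFormatSketch (β : ℕ) (prec : ℤ) (x : ℝ) : Prop :=
  ∃ m e : ℤ, |m| < (β : ℤ) ^ prec.toNat ∧ x = (m : ℝ) * (β : ℝ) ^ e
```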

def TorchLean.Floats.FLTExp (emin prec : ℤ) : ℤ → ℤ

FLT_exp emin prec is the bounded-exponent family with gradual underflow.

This is Flocq’s “FLT” family: it is the closest match to the finite fragment of IEEE-754 floating-point (no NaNs/Inf), where the exponent is bounded below by emin and the effective precision is prec. Gradual underflow is captured by taking max (e - prec) emin.
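The clipping is easy to see on a small sketch (the parameters below are float32-like but purely illustrative):

```lean
-- Sketch of the FLT fexp: FLX clipped below at `emin`.
def FLTExpSketch (emin prec : Int) : Int → Int :=
  fun e => max (e - prec) emin

#eval FLTExpSketch (-149) 24 1      -- -23 : normal range, same as FLX
#eval FLTExpSketch (-149) 24 (-130) -- -149: clipped at emin, gradual underflow
```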

instance TorchLean.Floats.fltValidExp (emin prec : ℤ) (h : 0 < prec) : NeuralValidExp (FLTExp emin prec)

FLT_exp satisfies NeuralValidExp.

This is where most format-bridge lemmas live when connecting proofs to float32-style bounds (e.g. via NN/Floats/FP32).

def TorchLean.Floats.FLTFormat {β : NeuralRadix} (emin prec : ℤ) (x : ℝ) : Prop

Exact representability predicate for FLT.

This version includes:

• a mantissa size bound (precision),
• and the lower exponent bound emin ≤ exponent (no values smaller than the min normal/subnormal scale, depending on the choice of emin and rounding).
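Putting both conditions together, a hedged sketch of the predicate (assuming Mathlib; `β : ℕ` stands in for NeuralRadix, and the NeuralFloat existential is unfolded as before):

```lean
import Mathlib

-- Illustrative sketch of the FLT representability predicate:
-- FLX's mantissa bound plus the lower exponent bound `emin ≤ e`.
def FLTFormatSketch (β : ℕ) (emin prec : ℤ) (x : ℝ) : Prop :=
  ∃ m e : ℤ, |m| < (β : ℤ) ^ prec.toNat ∧ emin ≤ e ∧
    x = (m : ℝ) * (β : ℝ) ^ e
```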