TorchLean API

NN.Floats.NeuralFloat.Rounding

Rounding modes and a half-ULP error bound

We use this file as the “rounding” half of the Flocq-style rounding-on-ℝ model.

These definitions are used by NF (the rounded scalar type), by the FP32 model, and by the bridge layer that connects proofs to executable IEEE-754 behavior.


Round toward negative infinity (floor)

Round toward positive infinity (ceiling)

Round toward zero (truncation)

Round to nearest, ties to even

Valid rounding mode predicate.

• monotone (x y : ℝ) : x ≤ y → rnd x ≤ rnd y
• id (n : ℤ) : rnd n = n

Rounding modes with a half-unit error bound on the rounded integer.

This matches "round-to-nearest" style roundings (ties can be resolved arbitrarily): |rnd x - x| ≤ 1/2 for all x.
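To see why the half-unit bound singles out round-to-nearest modes, here is a hypothetical standalone check on ℚ (assuming Mathlib for `Int.floor` and the `⌊ ⌋` notation; `nearestHalfUp` is an illustrative helper, not this file's API):

```lean
import Mathlib

-- Floor can err by almost a full unit, while a round-to-nearest
-- mode keeps |rnd x - x| ≤ 1/2, as NeuralValidRndToNearest demands.
def nearestHalfUp (x : ℚ) : ℤ := ⌊x + 1/2⌋

#eval ⌊(19/10 : ℚ)⌋              -- 1  (error 9/10, violates the 1/2 bound)
#eval nearestHalfUp (19/10 : ℚ)  -- 2  (error 1/10, within the bound)
```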

neural_floor_round is a valid rounding mode (monotone and fixes integers).

neural_ceil_round is a valid rounding mode (monotone and fixes integers).

Basic bounds for nearest-even rounding.

In words: neural_nearest_even x always lands in {⌊x⌋, ⌊x⌋ + 1}. This is the key fact used in the monotonicity proof and in IEEE-style interval bounds.

Nearest-even rounds down when the fractional part is strictly less than 1/2.

In words: if x is closer to ⌊x⌋ than to ⌊x⌋ + 1, then it rounds to ⌊x⌋.

Nearest-even rounds up when the fractional part is strictly greater than 1/2.

In words: if x is closer to ⌊x⌋ + 1 than to ⌊x⌋, then it rounds to ⌊x⌋ + 1.

Nearest-even tie-breaking: when the fractional part is exactly 1/2 and the floor is even, round down to the even integer.

Nearest-even tie-breaking: when the fractional part is exactly 1/2 and the floor is odd, round up to the even integer.
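The case split above (strict down, strict up, even tie, odd tie) can be exercised with a self-contained sketch on ℚ (assuming Mathlib; `nearestEven` is an illustrative stand-in for neural_nearest_even, not the library's definition):

```lean
import Mathlib

-- Hypothetical round-half-to-even on ℚ, mirroring the four cases above.
def nearestEven (x : ℚ) : ℤ :=
  let f : ℤ := ⌊x⌋
  let frac : ℚ := x - (f : ℚ)
  if frac < 1/2 then f            -- strictly below the midpoint: down
  else if frac > 1/2 then f + 1   -- strictly above the midpoint: up
  else if f % 2 = 0 then f        -- tie, even floor: stay on the even integer
  else f + 1                      -- tie, odd floor: move to the even integer

#eval nearestEven (23/10)  -- 2  (frac 3/10 < 1/2)
#eval nearestEven (5/2)    -- 2  (tie, floor 2 is even)
#eval nearestEven (7/2)    -- 4  (tie, floor 3 is odd)
```

Both tie cases land on an even integer, which is what makes the mode statistically unbiased over symmetric inputs.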

neural_nearest_even is a valid rounding mode (monotone and fixes integers).

Nearest-even is a “round-to-nearest” mode in the integer sense: |rnd x - x| ≤ 1/2.

This is the key property required by NeuralValidRndToNearest.

neural_nearest_even satisfies the half-unit error bound |rnd x - x| ≤ 1/2.

noncomputable def TorchLean.Floats.neuralRound {β : NeuralRadix} {fexp : ℤ → ℤ} [NeuralValidExp fexp] (rnd : ℝ → ℤ) (x : ℝ) : ℝ

Core rounding operator (“compute in ℝ, then round back to the grid”).

We build a NeuralFloat record and then interpret it via neural_to_real. Only mantissa/exponent affect the value; the metadata fields are set to conventional defaults.
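The scale–round–unscale shape described above can be sketched as follows; `roundAtExp` and its argument names are illustrative assumptions (assuming Mathlib in scope), not this file's API:

```lean
-- Schematic only: round x onto the grid of integer multiples of β^e.
noncomputable def roundAtExp (β : ℤ) (rnd : ℝ → ℤ) (e : ℤ) (x : ℝ) : ℝ :=
  let m : ℤ := rnd (x * (β : ℝ) ^ (-e))  -- scaled mantissa, rounded to ℤ
  (m : ℝ) * (β : ℝ) ^ e                  -- reinterpret the mantissa on the grid
```

In the library, `e` is the canonical exponent chosen by fexp, which is why the error bound below is stated in units of neuralUlp.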

@[simp]
theorem TorchLean.Floats.neural_round_preserves_generic {β : NeuralRadix} {fexp : ℤ → ℤ} [NeuralValidExp fexp] (rnd : ℝ → ℤ) [NeuralValidRnd rnd] (x : ℝ) (hx : neuralGenericFormat β fexp x) :
    neuralRound rnd x = x

Rounding preserves exactly-representable numbers.

In words: if x lies on the grid described by (β, fexp) (neural_generic_format), then rounding it with any valid rnd is a no-op.

This is the Flocq-style “round_generic” lemma.
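Read as a rewrite rule, the `@[simp]` lemma lets grid points pass through neuralRound untouched. A hedged usage sketch, assuming the binders shown in the signature above:

```lean
-- Sketch: with x already in generic format, rounding is a no-op.
example {β : NeuralRadix} {fexp : ℤ → ℤ} [NeuralValidExp fexp]
    (rnd : ℝ → ℤ) [NeuralValidRnd rnd] (x : ℝ)
    (hx : neuralGenericFormat β fexp x) :
    neuralRound rnd x = x :=
  neural_round_preserves_generic rnd x hx
```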

Scaled mantissa times base power equals the original value.

In words: scaled_mantissa(x) * β^{cexp(x)} = x. This is the algebraic identity that justifies the scaling used before rounding.

theorem TorchLean.Floats.neural_error_bound_ulp {β : NeuralRadix} {fexp : ℤ → ℤ} [NeuralValidExp fexp] (rnd : ℝ → ℤ) [NeuralValidRndToNearest rnd] (x : ℝ) :
    |neuralRound rnd x - x| ≤ neuralUlp β fexp x / 2

Half-ULP error bound for neural_round under round-to-nearest.

This is the basic “one-step” bound used by most error propagation arguments: neural_round deviates from x by at most half an ulp at the chosen exponent scale.
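For instance, the one-step bound composes across an addition by the triangle inequality. A hedged sketch (not a lemma in this file), assuming the signatures stated above:

```lean
-- Sketch: two independently rounded terms accumulate at most half an
-- ulp of error each, via neural_error_bound_ulp and abs_add.
example {β : NeuralRadix} {fexp : ℤ → ℤ} [NeuralValidExp fexp]
    (rnd : ℝ → ℤ) [NeuralValidRndToNearest rnd] (a b : ℝ) :
    |neuralRound rnd a + neuralRound rnd b - (a + b)| ≤
      neuralUlp β fexp a / 2 + neuralUlp β fexp b / 2 := by
  have ha := neural_error_bound_ulp (β := β) (fexp := fexp) rnd a
  have hb := neural_error_bound_ulp (β := β) (fexp := fexp) rnd b
  calc |neuralRound rnd a + neuralRound rnd b - (a + b)|
      = |(neuralRound rnd a - a) + (neuralRound rnd b - b)| := by congr 1; ring
    _ ≤ |neuralRound rnd a - a| + |neuralRound rnd b - b| := abs_add _ _
    _ ≤ _ := add_le_add ha hb
```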