TorchLean API

NN.MLTheory.CROWN.BoundOps

Directed-rounding primitives for interval propagation #

TorchLean’s IBP/CROWN code represents bounds as endpoint pairs (lo/hi) inside Box/FlatBox. To keep those bounds sound under different numeric semantics, the primitive endpoint operations are abstracted behind a typeclass so that each backend can supply directed rounding.

Intuition:

This file defines a small typeclass BoundOps and some helper combinators. The default instance is conservative: it just uses the scalar’s ordinary operations (no directed rounding). Specific backends can override it (e.g. IEEE32Exec).
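Schematically, the class described here can be pictured as follows. This is a sketch only: the real declaration lives in the NN.MLTheory.CROWN namespace and may differ in attributes, universe annotations, and field order; the field names and types match the listing further down this page.

```lean
-- Sketch of the BoundOps class shape (illustrative re-declaration).
class BoundOps (α : Type) where
  addDown : α → α → α  -- addition rounded toward −∞
  addUp   : α → α → α  -- addition rounded toward +∞
  subDown : α → α → α
  subUp   : α → α → α
  mulDown : α → α → α
  mulUp   : α → α → α
```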

Integration points in the current codebase #

The intended usage is that IBP/CROWN propagation code computes lower endpoints with the addDown/subDown/mulDown primitives and upper endpoints with addUp/subUp/mulUp, instead of calling the scalar's raw arithmetic directly.

To extend coverage, the next candidates are box_inv, box_sqrt, and the various nonlinearity propagation rules in NN.MLTheory.CROWN.Runtime.Ops (these may need additional directed math primitives or an Arb-backed enclosure oracle).
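For intuition on why box_inv would need an extra primitive, here is a hypothetical sketch of a directed reciprocal on a positive interval. The divDown/divUp arguments are NOT fields of BoundOps; they are the kind of additional directed primitive such an extension would require.

```lean
-- Hypothetical sketch: directed reciprocal of an interval (lo, hi)
-- with 0 < lo ≤ hi. `divDown`/`divUp` are assumed, not library names.
def invEndpoints {α : Type} (divDown divUp : α → α → α) (one : α)
    (lo hi : α) : α × α :=
  -- x ↦ 1/x is decreasing on positives, so the endpoints swap:
  -- the new lower bound comes from the old upper bound (rounded down),
  -- and the new upper bound from the old lower bound (rounded up).
  (divDown one hi, divUp one lo)
```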

BoundOps α #

BoundOps is intentionally focused: it supplies directed-rounding versions of the arithmetic primitives that appear in IBP for affine/linear layers and basic arithmetic nodes.

If you want to swap in a quantized backend, the key is to provide an instance of BoundOps for your scalar type.
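As an illustration, here is a hypothetical instance for a toy fixed-point scalar with 8 fractional bits. The Fix type and its scaling are assumptions made up for this sketch; the point is only that each *Down field rounds toward −∞ and each *Up field toward +∞ on the quantized grid.

```lean
-- Hypothetical: BoundOps for a toy fixed-point type (value = raw / 2^8).
structure Fix where
  raw : Int
  deriving Repr

open NN.MLTheory.CROWN in
instance : BoundOps Fix where
  -- same-scale addition/subtraction is exact, so both directions coincide
  addDown a b := ⟨a.raw + b.raw⟩
  addUp   a b := ⟨a.raw + b.raw⟩
  subDown a b := ⟨a.raw - b.raw⟩
  subUp   a b := ⟨a.raw - b.raw⟩
  -- a product carries 16 fractional bits; rescale with floor vs ceiling
  mulDown a b := ⟨Int.fdiv (a.raw * b.raw) 256⟩
  mulUp   a b := ⟨-(Int.fdiv (-(a.raw * b.raw)) 256)⟩
```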

  • addDown : α → α → α
  • addUp : α → α → α
  • subDown : α → α → α
  • subUp : α → α → α
  • mulDown : α → α → α
  • mulUp : α → α → α
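To show how these fields are consumed in IBP, the endpoints of an interval product can be formed from the four directed cross products. The min2/max2 combiners are passed in explicitly here to keep the sketch self-contained; the library provides its own helpers of those names further down this page.

```lean
-- Sketch: endpoints of [alo, ahi] * [blo, bhi] via directed primitives.
open NN.MLTheory.CROWN in
def mulIntervalLo {α : Type} [BoundOps α] (min2 : α → α → α)
    (alo ahi blo bhi : α) : α :=
  -- lower endpoint: minimum over the four down-rounded products
  min2 (min2 (BoundOps.mulDown alo blo) (BoundOps.mulDown alo bhi))
       (min2 (BoundOps.mulDown ahi blo) (BoundOps.mulDown ahi bhi))

open NN.MLTheory.CROWN in
def mulIntervalHi {α : Type} [BoundOps α] (max2 : α → α → α)
    (alo ahi blo bhi : α) : α :=
  -- upper endpoint: maximum over the four up-rounded products
  max2 (max2 (BoundOps.mulUp alo blo) (BoundOps.mulUp alo bhi))
       (max2 (BoundOps.mulUp ahi blo) (BoundOps.mulUp ahi bhi))
```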
Instances

    Default implementation: no directed rounding (use the scalar’s own arithmetic).

    This keeps existing Float/ℝ/Interval instantiations working without extra adapters.

    @[reducible, instance 10]
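A sketch of what such a low-priority fallback instance looks like: every directed operation reuses ordinary scalar arithmetic, with no rounding control. The Add/Sub/Mul constraints here are stand-ins; the library's actual instance is stated over its own Context class.

```lean
-- Sketch of the default instance: no directed rounding anywhere.
open NN.MLTheory.CROWN in
instance (priority := 10) {α : Type} [Add α] [Sub α] [Mul α] :
    BoundOps α where
  addDown := (· + ·)
  addUp   := (· + ·)
  subDown := (· - ·)
  subUp   := (· - ·)
  mulDown := (· * ·)
  mulUp   := (· * ·)
```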

    Small helpers built only from Context.decidable_gt.

    @[inline]
    def NN.MLTheory.CROWN.BoundOps.min2 {α : Type} [Context α] (a b : α) : α

    Minimum of two scalars, using only Context.decidable_gt via decide (a > b).

    Instances For
      @[inline]
      def NN.MLTheory.CROWN.BoundOps.max2 {α : Type} [Context α] (a b : α) : α

      Maximum of two scalars, using only Context.decidable_gt via decide (a > b).

      Instances For
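A sketch of how such helpers can be written from a decidable strict order alone. The DecidableRel assumption stands in for the library's Context.decidable_gt, and the primed names are hypothetical, chosen to avoid clashing with the real min2/max2 above.

```lean
-- Sketch: min/max from a decidable `>` only. The `if` elaborates through
-- the Decidable instance, matching the decide (a > b) description above.
def max2' {α : Type} [LT α] [DecidableRel ((· > ·) : α → α → Prop)]
    (a b : α) : α :=
  if a > b then a else b

def min2' {α : Type} [LT α] [DecidableRel ((· > ·) : α → α → Prop)]
    (a b : α) : α :=
  if a > b then b else a
```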