TorchLean API

NN.MLTheory.CROWN.Graph.Engine

Flat vector pack: a tensor paired with its flattened dimension.

This is used for constant payloads and objective coefficient vectors in the flat LiRPA engine.

Instances For

    Parameters for a linear layer y = W*x + b in flattened form.

    m is the output dimension and n is the input dimension.

    Instances For

      Matrix parameters for bias-free matmul: y = W x.

      Instances For

        Conv2D parameters with cached spatial dimensions for graph propagation.

        • inC :

          Input channels.

        • outC :

          Output channels.

        • kH :

          Kernel height.

        • kW :

          Kernel width.

        • stride :

          Stride (shared for height/width).

        • padding :

          Zero padding (shared for height/width).

        • inH :

          Input height.

        • inW :

          Input width.

• hIn : self.inC ≠ 0

          Proof that inC is nonzero (required by the typed conv spec).

• hKH : self.kH ≠ 0

          Proof that kH is nonzero.

• hKW : self.kW ≠ 0

          Proof that kW is nonzero.

        • spec : Spec.Conv2DSpec self.inC self.outC self.kH self.kW self.stride self.padding α

          Typed conv specification payload.
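For intuition, the cached spatial fields determine the flattened output size of a conv node. A minimal sketch, assuming the usual floor-division convention for strided convolution (`convOutDim` is a hypothetical helper, not part of the API):

```lean
-- Hypothetical helper (not part of the API): spatial output size of a conv
-- along one axis, with square stride/padding, under the usual floor-division
-- convention. Note: Nat subtraction truncates at 0, so inDim + 2*padding
-- must be at least k for the formula to be meaningful.
def convOutDim (inDim k stride padding : Nat) : Nat :=
  (inDim + 2 * padding - k) / stride + 1

-- The flattened output dimension of the conv node would then be
-- outC * convOutDim inH kH stride padding * convOutDim inW kW stride padding.
#eval convOutDim 28 3 1 1  -- 28 (a "same" 3×3 conv on a 28-pixel axis)
```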

        Instances For

          Parameters keyed by node id (weights, biases, constants, and seeded input boxes).

This is deliberately compact: it is all the graph interpreter needs to run IBP/CROWN on a pure Graph without pulling in a heavyweight runtime.

          Instances For
            @[implicit_reducible]

            Default inhabitant for FlatBox (a 0-dimensional box at 0).

            Elementwise product of two FlatBoxes (interval product per component). Requires equal dims.

            Instances For

              Convert a dependent Box of shape .dim n .scalar into a FlatBox with dim := n.

              Instances For

                Convert a FlatBox to a dependent Box at shape .dim B.dim .scalar.

                Instances For
                  def NN.MLTheory.CROWN.Graph.box_add {α : Type} [Context α] [BoundOps α] (B1 B2 : FlatBox α) :

                  Basic FlatBox ops

                  Instances For
                    def NN.MLTheory.CROWN.Graph.box_sub {α : Type} [Context α] [BoundOps α] (B1 B2 : FlatBox α) :

                    Interval subtraction on FlatBox endpoints (sound enclosure).

                    Instances For

                      Apply ReLU to both endpoints of a FlatBox (monotone activation, so endpoints suffice).
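Because ReLU is monotone, clamping each endpoint gives a valid interval image. A sketch over plain Float arrays (hypothetical names, not the API's FlatBox):

```lean
-- Hypothetical sketch over plain arrays: ReLU applied to both endpoints.
-- Monotonicity of ReLU means the images of the endpoints bound the image
-- of every point in the interval.
def reluBox (lo hi : Array Float) : Array Float × Array Float :=
  (lo.map (fun x => max 0.0 x), hi.map (fun x => max 0.0 x))

#eval reluBox #[-1.0, 2.0] #[0.5, 3.0]  -- negative lower endpoints clamp to 0
```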

                      Instances For

                        Componentwise absolute value bounds. Soundly encloses abs over each interval component.

                        Instances For

                          Componentwise sqrt bounds. Uses the spec semantics sqrt(max(x,0)), which is monotone.

                          Instances For

                            Componentwise reciprocal bounds using operators.arithmetic.ibp_reciprocal.

                            Instances For

                              Componentwise max bounds: max(x,y) over interval boxes.

                              Instances For

                                Componentwise min bounds: min(x,y) over interval boxes.

                                Instances For

Componentwise square of an interval box: for each component [l, u] produce [min(l², u²), max(l², u²)], with 0 as the lower endpoint when the interval crosses 0.

                                  The body is exposed because the proof-facing theorem module unfolds this executable rule when proving dimension preservation and pointwise enclosure.
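The scalar rule can be sketched directly (hypothetical name, over Float rather than the abstract α):

```lean
-- Hypothetical sketch of the scalar rule: bounds on x² for x ∈ [l, u].
-- If the interval crosses 0, the true minimum of x² is 0, not min(l², u²).
def squareInterval (l u : Float) : Float × Float :=
  let lo := if l ≤ 0 ∧ 0 ≤ u then 0.0 else min (l * l) (u * u)
  (lo, max (l * l) (u * u))
```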

                                  Instances For
                                    def NN.MLTheory.CROWN.Graph.intervalMul {α : Type} [Context α] (aLo aHi bLo bHi : α) :
                                    α × α

                                    Interval multiplication for scalar endpoints: given [aLo,aHi] and [bLo,bHi], return bounds on the product.
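The usual implementation takes the min and max over the four endpoint products, which is exact for interval multiplication. A sketch over Float (hypothetical name):

```lean
-- Hypothetical sketch: bounds on a·b for a ∈ [aLo, aHi], b ∈ [bLo, bHi],
-- as the min/max of the four endpoint products.
def intervalMulSketch (aLo aHi bLo bHi : Float) : Float × Float :=
  let ps := [aLo * bLo, aLo * bHi, aHi * bLo, aHi * bHi]
  (ps.foldl min (aLo * bLo), ps.foldl max (aLo * bLo))
```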

                                    Instances For

                                      Softmax IBP (last axis) #

For a 1D vector x with interval bounds l ≤ x ≤ u, a standard componentwise enclosure for softmax is:

softmax(x)_i ≥ exp(l_i) / (exp(l_i) + Σ_{j ≠ i} exp(u_j))
softmax(x)_i ≤ exp(u_i) / (exp(u_i) + Σ_{j ≠ i} exp(l_j))

This uses monotonicity of exp and the fact that all terms in the denominator are nonnegative. The implementation below applies the 1D rule on the last tensor axis and recurses over leading batch dimensions.
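The 1D rule can be sketched over plain arrays (hypothetical names, not the API's FlatBox): each lower bound pits exp of the component's lower endpoint against the largest possible denominator, and the upper bound does the reverse.

```lean
-- Hypothetical sketch over plain arrays: componentwise softmax enclosure.
-- lower_i = exp(l_i) / (exp(l_i) + Σ_{j≠i} exp(u_j))
-- upper_i = exp(u_i) / (exp(u_i) + Σ_{j≠i} exp(l_j))
def softmaxIBPSketch (lo hi : Array Float) : Array Float × Array Float :=
  let eLo := lo.map Float.exp
  let eHi := hi.map Float.exp
  let sumLo := eLo.foldl (· + ·) 0.0
  let sumHi := eHi.foldl (· + ·) 0.0
  ((eLo.zip eHi).map (fun (l, h) => l / (l + (sumHi - h))),
   (eLo.zip eHi).map (fun (l, h) => h / (h + (sumLo - l))))
```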

                                      Interval bound propagation for softmax, applied on the last axis and lifted over leading dims.

                                      Instances For

                                        LayerNorm IBP (last axis) #

                                        Layer normalization (Ba et al.) computes, per vector, something like:

                                        y = (x - mean(x)) / sqrt(var(x) + eps).

                                        We implement a conservative enclosure by:

                                        1. Bounding mean using sums of endpoints.
                                        2. Bounding variance using a max-deviation upper bound.
                                        3. Bounding the per-component ratio by checking endpoint combinations against a positive denominator interval.

                                        This is intended as a simple checker-side transfer rule. It is conservative and is not an optimized relaxation.
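Steps 1 and 2 can be sketched over plain arrays (hypothetical names): the mean is bounded by the means of the endpoint vectors, and the variance is bounded above using each component's worst-case deviation from the mean interval.

```lean
-- Hypothetical sketch of steps 1–2 over plain arrays.
-- Step 1: mean bounds from endpoint sums (the mean is monotone in each input).
def meanBounds (lo hi : Array Float) : Float × Float :=
  let n := Float.ofNat lo.size
  (lo.foldl (· + ·) 0.0 / n, hi.foldl (· + ·) 0.0 / n)

-- Step 2: variance upper bound. Each x_i - mean lies in [l_i - mHi, u_i - mLo],
-- so its absolute value is at most max(|l_i - mHi|, |u_i - mLo|).
def varUpper (lo hi : Array Float) : Float :=
  let (mLo, mHi) := meanBounds lo hi
  let devs := (lo.zip hi).map
    (fun (l, u) => max (Float.abs (l - mHi)) (Float.abs (u - mLo)))
  (devs.foldl (fun s d => s + d * d) 0.0) / Float.ofNat lo.size
```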

                                        Interval bound propagation for layernorm, applied on the last axis and lifted over leading dims.

                                        Instances For

                                          For tensors known to have shape .dim n .scalar, extract the underlying function.

                                          Instances For

                                            Cast a 1D Box along an equality of dimensions.

                                            Instances For
                                              @[reducible, inline]

                                              Cast a dim-scalar tensor across an equality of dimensions.

                                              We keep this as an abbrev so it unfolds aggressively in simp-based soundness proofs.

                                              Instances For
def NN.MLTheory.CROWN.Graph.ibp_linear {α : Type} [Context α] [BoundOps α] (id : ℕ) (ps : ParamStore α) (Xin : FlatBox α) :

                                                IBP propagation for a .linear node using ParamStore.linearWB.
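The standard interval rule for an affine map uses the midpoint/radius form: with c = (l + u)/2 and r = (u - l)/2, we get Wx + b ∈ [Wc + b - |W|r, Wc + b + |W|r], where |W| is the entrywise absolute value. A sketch over plain arrays (hypothetical names):

```lean
-- Hypothetical sketch: IBP through y = W x + b in midpoint/radius form.
def ibpLinearSketch (W : Array (Array Float)) (b lo hi : Array Float) :
    Array Float × Array Float :=
  let mid := (lo.zip hi).map (fun (l, u) => (l + u) / 2)
  let rad := (lo.zip hi).map (fun (l, u) => (u - l) / 2)
  let dot (row v : Array Float) : Float :=
    (row.zip v).foldl (fun s p => s + p.1 * p.2) 0.0
  let center := (W.zip b).map (fun (row, bi) => dot row mid + bi)
  let radius := W.map (fun row => dot (row.map Float.abs) rad)
  ((center.zip radius).map (fun (c, r) => c - r),
   (center.zip radius).map (fun (c, r) => c + r))
```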

                                                Instances For
def NN.MLTheory.CROWN.Graph.ibp_matmul {α : Type} [Context α] [BoundOps α] (id : ℕ) (ps : ParamStore α) (Xin : FlatBox α) :

                                                  IBP propagation for a .matmul node (bias-free) using ParamStore.matmulW.

                                                  Instances For
def NN.MLTheory.CROWN.Graph.propagateIBPNode {α : Type} [Context α] [BoundOps α] (nodes : Array Node) (ps : ParamStore α) (boxes : Array (Option (FlatBox α))) (id : ℕ) :

                                                    IBP propagation for one node using ParamStore.

                                                    This executable function expects parents to have already been processed. The proof layer makes that precondition explicit via TopoSorted; callers that execute graphs directly should use graphs whose parents appear before their children.
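That precondition is exactly what a left fold over node ids in increasing order provides, as long as parents precede children. A generic sketch (hypothetical, not the API's runner):

```lean
-- Hypothetical sketch: apply a per-node step to ids 0, 1, …, n-1 in order.
-- If parents precede children, each step sees its parents' results already
-- written into the accumulated state.
def runPassSketch {σ : Type} (n : Nat) (step : σ → Nat → σ) (init : σ) : σ :=
  (List.range n).foldl step init
```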

                                                    Instances For

                                                      Run an IBP pass over the whole graph. Caller seeds inputs via ParamStore.inputBoxes.

                                                      Instances For
                                                        def NN.MLTheory.CROWN.Graph.runDeriv1D {α : Type} [Context α] [BoundOps α] (g : Graph) (ps : ParamStore α) (ibp : Array (Option (FlatBox α))) :

                                                        Derivative IBP pass for 1D input: computes for each node an interval on dy/dx. Requires value IBP boxes for activations.

                                                        Instances For
                                                          def NN.MLTheory.CROWN.Graph.runDerivDirectional {α : Type} [Context α] [BoundOps α] (g : Graph) (ps : ParamStore α) (ibp : Array (Option (FlatBox α))) (seed : FlatBox α) :

                                                          Directional first-derivative pass: like runDeriv1D but seeds the derivative at the input node with a user-provided direction vector (as a FlatBox with lo=hi). This allows extracting partial derivatives for multi-dimensional inputs by choosing e_x, e_y, etc.
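For example, partial derivatives with respect to the k-th input coordinate come from seeding with the standard basis vector e_k, encoded as a degenerate box with lo = hi (hypothetical helper over plain arrays):

```lean
-- Hypothetical sketch: a degenerate box (lo = hi) encoding the direction e_k
-- in an n-dimensional input, suitable as a directional-derivative seed.
def basisSeed (n k : Nat) : Array Float × Array Float :=
  let v := (Array.range n).map (fun i => if i = k then 1.0 else 0.0)
  (v, v)
```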

                                                          Instances For
                                                            def NN.MLTheory.CROWN.Graph.runDeriv2D {α : Type} [Context α] [BoundOps α] (g : Graph) (ps : ParamStore α) (ibp d1 : Array (Option (FlatBox α))) :

                                                            Second-derivative IBP pass for 1D input: computes per node an interval on d²y/dx². Requires value IBP boxes and first-derivative boxes. Covers input, linear/matmul, tanh, add/sub.

                                                            Instances For

                                                              Context for affine (CROWN/DeepPoly) propagation.

                                                              Affine bounds are computed with respect to a single designated input node, whose flattened dimension is inputDim.

• inputId : ℕ

                                                                Node id treated as the input variable for affine bounds.

• inputDim : ℕ

                                                                Flattened input dimension.

                                                              Instances For
def NN.MLTheory.CROWN.Graph.propagateAffineNode {α : Type} [Context α] (nodes : Array Node) (ps : ParamStore α) (ibp : Array (Option (FlatBox α))) (affs : Array (Option (FlatAffine α))) (ctx : AffineCtx) (id : ℕ) :

                                                                Propagate a single node’s affine form (CROWN/DeepPoly style) given parent affine forms.

                                                                This updates the affs array at index id when the node kind admits an affine transfer rule. For non-affine nodes (or missing parents/params), the array is left unchanged so downstream code can fall back to IBP boxes.

                                                                Instances For
                                                                  def NN.MLTheory.CROWN.Graph.runAffine {α : Type} [Context α] (g : Graph) (ps : ParamStore α) (ctx : AffineCtx) (ibp : Array (Option (FlatBox α))) :

                                                                  Run an affine pass; requires prior IBP to supply pre-activation bounds for ReLU.

                                                                  Instances For

                                                                    Basic CROWN-style affine bounds pass (lower + upper) #

                                                                    This pass computes a pair of affine forms per node:

                                                                    loAff(x) ≤ node(x) ≤ hiAff(x)

                                                                    where both affines are with respect to a designated flattened input node ctx.inputId.

This pass is a practical CROWN/DeepPoly implementation.

                                                                    The pass is structured so a soundness theorem can cite per-op transfer rules for the supported relaxations. Unsupported axes or shape mismatches deliberately fall back to constant affine bounds derived from the IBP box.
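The constant-affine fallback is the simplest sound bound: an IBP box [lo, hi] becomes a pair of affine forms with all-zero coefficients and the box endpoints as offsets. A sketch (hypothetical types, not the API's FlatAffineBounds):

```lean
-- Hypothetical sketch: the constant-affine fallback. Zero coefficients mean
-- the bound ignores the input entirely and degenerates to the IBP interval.
structure AffineSketch where
  coeffs : Array Float  -- flattened outDim × inputDim matrix, here all zeros
  offset : Array Float

def constAffineOfBox (inputDim : Nat) (lo hi : Array Float) :
    AffineSketch × AffineSketch :=
  let zeros := Array.mkArray (lo.size * inputDim) 0.0
  ({ coeffs := zeros, offset := lo }, { coeffs := zeros, offset := hi })
```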

def NN.MLTheory.CROWN.Graph.propagateCROWNNode {α : Type} [Context α] (nodes : Array Node) (ps : ParamStore α) (ibp : Array (Option (FlatBox α))) (bounds : Array (Option (FlatAffineBounds α))) (ctx : AffineCtx) (id : ℕ) :

                                                                    Propagate a single node’s affine bounds (lower/upper) given parent bounds.

                                                                    This is the CROWN/DeepPoly-style transfer step used by runCROWN. For node kinds without a dedicated rule, we fall back to the IBP enclosure (turned into a constant affine bound).

                                                                    Instances For

                                                                      Run the basic CROWN affine-bounds pass; requires prior IBP for per-node intervals.

                                                                      Instances For

                                                                        Backward/dual CROWN (objective-dependent) propagation #

                                                                        runCROWN above is a forward DeepPoly-style pass: it computes nodewise affine bounds that do not depend on an objective.

                                                                        Basic CROWN additionally supports backward propagation for a linear objective cᵀ · output, selecting per-neuron relaxations based on the sign of the downstream coefficients.

                                                                        The implementation below is pragmatic and covers the IR ops we support in runCROWN.
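For a ReLU neuron with unstable pre-activation bounds l < 0 < u, the sign-based selection works as follows for a lower bound on c · relu(x): a nonnegative coefficient takes the lower relaxation line αx (with α ∈ [0, 1]), while a negative coefficient takes the upper line through (l, 0) and (u, u). A scalar sketch (hypothetical names):

```lean
-- Hypothetical scalar sketch: backward ReLU relaxation choice for a *lower*
-- bound on c · relu(x), given unstable bounds l < 0 < u and a slope α ∈ [0, 1].
-- Returns the (coefficient, offset) contribution with respect to x.
def backwardReluLower (c l u α : Float) : Float × Float :=
  if 0 ≤ c then
    (c * α, 0.0)  -- c ≥ 0: lower line α·x, offset 0
  else
    let s := u / (u - l)          -- upper line slope
    let t := -(u * l) / (u - l)   -- upper line intercept: passes through (l,0),(u,u)
    (c * s, c * t)
```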

def NN.MLTheory.CROWN.Graph.runCROWNBackwardObjective {α : Type} [Context α] (g : Graph) (ps : ParamStore α) (ctx : AffineCtx) (ibp : Array (Option (FlatBox α))) (outputId : ℕ) (obj : FlatVec α) :

                                                                        Objective-dependent backward CROWN bound for a scalar objective.

                                                                        Given a linear objective objᵀ * output, this runs a backward pass that propagates the objective coefficients through the graph, selects the relaxation attached to each node, and returns a pair of affine bounds on the objective with respect to ctx.inputId.

                                                                        The returned FlatAffineBounds always has outDim = 1 (a scalar objective).

                                                                        Instances For
def NN.MLTheory.CROWN.Graph.runCROWNBackwardObjectiveLowerWithReluAlpha {α : Type} [Context α] (g : Graph) (ps : ParamStore α) (ctx : AffineCtx) (ibp : Array (Option (FlatBox α))) (outputId : ℕ) (obj : FlatVec α) (reluAlpha : Array (Option (FlatVec α))) :

                                                                          Backward CROWN objective lower bound with externally-provided ReLU alpha slopes.

                                                                          This is an integration hook for alpha-CROWN style workflows where ReLU slopes are chosen/optimized outside TorchLean and then imported as a per-node vector in reluAlpha.

                                                                          Instances For