TorchLean API

NN.Proofs.RuntimeApprox.NF.Ops

NF Primitive Ops #

Forward (runtime→spec) approximation lemmas for the rounded runtime NF.

This is the proof-relevant numeric backend: NF wraps ℝ and applies neural_round after each primitive operation. We prove forward bounds for a growing set of primitive tensor ops (arithmetic plus common elementwise nonlinearities), and provide FwdNode constructors that can be composed using Proofs.RuntimeApprox.FwdGraph.eval_approx.

This is the foundation of the NF backend: once a primitive operation has an approxT_*_spec theorem here, it can be packaged as a graph node and composed into larger forward/backward/runtime approximation results.


PyTorch correspondence / citations #

Tensor Approximation Plumbing #

linf_norm is always nonnegative.

Componentwise bound for linf_norm on a dimensioned tensor.

The norm of any component t[i] is bounded by the norm of the whole tensor.

theorem Proofs.RuntimeApprox.approxT_scalar_iff {α : Type} {toSpec : α → Spec.SpecScalar} {x : Spec.SpecScalar} {xR : α} {eps : Spec.SpecScalar} :
approxT toSpec (Spec.Tensor.scalar x) (Spec.Tensor.scalar xR) eps ↔ |toSpec xR - x| ≤ eps

Scalar characterization of approxT on scalar tensors.

This rewrites approxT (Tensor.scalar x) (Tensor.scalar xR) eps into the usual absolute-error inequality |toSpec xR - x| ≤ eps.
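As a usage sketch (a hypothetical `example`; the lemma and `approxT` are the ones above):

```lean
-- Hypothetical usage sketch: a scalar-tensor approximation goal is
-- discharged by reducing it to the underlying absolute-error bound.
example {α : Type} {toSpec : α → Spec.SpecScalar}
    {x : Spec.SpecScalar} {xR : α} {eps : Spec.SpecScalar}
    (h : |toSpec xR - x| ≤ eps) :
    approxT toSpec (Spec.Tensor.scalar x) (Spec.Tensor.scalar xR) eps :=
  Proofs.RuntimeApprox.approxT_scalar_iff.mpr h
```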

theorem Proofs.RuntimeApprox.approxT_dim_get {α : Type} {toSpec : α → Spec.SpecScalar} {n : ℕ} {s : Spec.Shape} {xS : Spec.SpecTensor (Spec.Shape.dim n s)} {xR : Spec.Tensor α (Spec.Shape.dim n s)} {eps : Spec.SpecScalar} (h : approxT toSpec xS xR eps) (i : Fin n) :
approxT toSpec (match xS with | Spec.Tensor.dim f => f i) (match xR, h with | Spec.Tensor.dim f, h => f i) eps

Projection lemma for approxT on dimensioned tensors.

If xS approximates xR within eps, then each component xS[i] approximates xR[i] within eps.

theorem Proofs.RuntimeApprox.approxT_map_spec_of_scalar_bound {α : Type} {toSpec : α → Spec.SpecScalar} {s : Spec.Shape} (fS : Spec.SpecScalar → Spec.SpecScalar) (fR : α → α) (bnd : Spec.SpecScalar → Spec.SpecScalar → Spec.SpecScalar) {xS : Spec.SpecTensor s} {xR : Spec.Tensor α s} {eps : Spec.SpecScalar} :
approxT toSpec xS xR eps → (∀ {x : Spec.SpecScalar} {xR : α}, |toSpec xR - x| ≤ eps → |toSpec (fR xR) - fS x| ≤ bnd (toSpec xR) eps) → approxT toSpec (Spec.Tensor.mapSpec fS xS) (Spec.Tensor.mapSpec fR xR) (linfNorm (Spec.Tensor.mapSpec (fun (a : Spec.SpecScalar) => bnd a eps) (tensorToSpec toSpec xR)))

Lift a scalar approximation bound to an elementwise map_spec.

Given a scalar bound of the form |toSpec (fR xR) - fS x| ≤ bnd (toSpec xR) eps and an input approximation approxT xS xR eps, this produces an approximation bound for map_spec fS xS vs map_spec fR xR, with an output epsilon computed by taking the linf_norm of the pointwise bound.
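For instance (a hypothetical instantiation, with the pointwise budget taken to be the constant function `bnd a eps = eps`, as for a 1-Lipschitz op):

```lean
-- Hypothetical sketch: lifting a constant scalar budget elementwise.
example {α : Type} {toSpec : α → Spec.SpecScalar} {s : Spec.Shape}
    (fS : Spec.SpecScalar → Spec.SpecScalar) (fR : α → α)
    {xS : Spec.SpecTensor s} {xR : Spec.Tensor α s} {eps : Spec.SpecScalar}
    (hx : approxT toSpec xS xR eps)
    (hf : ∀ {x : Spec.SpecScalar} {xR : α},
        |toSpec xR - x| ≤ eps → |toSpec (fR xR) - fS x| ≤ eps) :
    approxT toSpec (Spec.Tensor.mapSpec fS xS) (Spec.Tensor.mapSpec fR xR)
      (linfNorm (Spec.Tensor.mapSpec (fun _ => eps) (tensorToSpec toSpec xR))) :=
  Proofs.RuntimeApprox.approxT_map_spec_of_scalar_bound fS fR (fun _ e => e) hx hf
```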

theorem Proofs.RuntimeApprox.approxT_map2_spec_of_scalar_bound {α : Type} {toSpec : α → Spec.SpecScalar} {s : Spec.Shape} (fS : Spec.SpecScalar → Spec.SpecScalar → Spec.SpecScalar) (fR : α → α → α) (bnd : Spec.SpecScalar → Spec.SpecScalar → Spec.SpecScalar → Spec.SpecScalar → Spec.SpecScalar) {xS yS : Spec.SpecTensor s} {xR yR : Spec.Tensor α s} {epsx epsy : Spec.SpecScalar} :
approxT toSpec xS xR epsx → approxT toSpec yS yR epsy → (∀ {x y : Spec.SpecScalar} {xR yR : α}, |toSpec xR - x| ≤ epsx → |toSpec yR - y| ≤ epsy → |toSpec (fR xR yR) - fS x y| ≤ bnd (toSpec xR) (toSpec yR) epsx epsy) → approxT toSpec (Spec.Tensor.map2Spec fS xS yS) (Spec.Tensor.map2Spec fR xR yR) (linfNorm (Spec.Tensor.map2Spec (fun (a b : Spec.SpecScalar) => bnd a b epsx epsy) (tensorToSpec toSpec xR) (tensorToSpec toSpec yR)))

Lift a scalar approximation bound to an elementwise map2_spec.

This is the binary analogue of approxT_map_spec_of_scalar_bound, used for elementwise arithmetic (add, sub, mul_elem, etc.).

@[reducible, inline]
noncomputable abbrev Proofs.RuntimeApprox.NFBackend.toSpec {β : TorchLean.Floats.NeuralRadix} {fexp : ℤ → ℤ} {rnd : ℝ → ℤ} (x : TorchLean.Floats.NF β fexp rnd) : ℝ

Interpret a runtime NF scalar as a spec scalar (ℝ) by forgetting rounding metadata.

Instances For

    NF → ℝ bridge lemmas #

    Most approximation statements in this file are phrased over the spec scalar ℝ, but the runtime backend is NF β fexp rnd. The following lemmas are small bridge facts that let us rewrite runtime expressions into plain ℝ-valued expressions.

    Keeping these as named lemmas (instead of repeating huge simp [...] lists) makes the later forward-approx proofs much easier to read.

    theorem Proofs.RuntimeApprox.NFBackend.approx_sqrt_clamp_nf_of_lb {β : TorchLean.Floats.NeuralRadix} {fexp : ℤ → ℤ} [TorchLean.Floats.NeuralValidExp fexp] {rnd : ℝ → ℤ} [TorchLean.Floats.NeuralValidRndToNearest rnd] {x : ℝ} {xR : TorchLean.Floats.NF β fexp rnd} {eps η : ℝ} (hη : 0 < η) (hdom : η ≤ max x 0) (hx : |toSpec xR - x| ≤ eps) :
    |toSpec (MathFunctions.sqrt (xR ⊔ 0)) - √(max x 0)| ≤ eps / √η + TorchLean.Floats.neuralUlp β fexp (√(max (toSpec xR) 0)) / 2

    Forward approximation bound for sqrt (max · 0) under a positive lower bound.

    This is a clamped sqrt bound: we work with sqrt (max x 0) to avoid the sqrt domain issue, but still require a strict lower bound η > 0 on max x 0 to control conditioning via |√a - √b| ≤ |a-b| / √η.

    theorem Proofs.RuntimeApprox.NFBackend.approx_add_nf {β : TorchLean.Floats.NeuralRadix} {fexp : ℤ → ℤ} [TorchLean.Floats.NeuralValidExp fexp] {rnd : ℝ → ℤ} [TorchLean.Floats.NeuralValidRndToNearest rnd] {x y : ℝ} {xR yR : TorchLean.Floats.NF β fexp rnd} {epsx epsy : ℝ} (hx : |toSpec xR - x| ≤ epsx) (hy : |toSpec yR - y| ≤ epsy) :
    |toSpec (xR + yR) - (x + y)| ≤ epsx + epsy + TorchLean.Floats.neuralUlp β fexp (toSpec xR + toSpec yR) / 2

    Forward approximation bound for addition in NF.

    In words: if xR approximates x within epsx and yR approximates y within epsy, then xR + yR approximates x + y within epsx + epsy + ulp(toSpec xR + toSpec yR)/2.
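The shape of the proof is a triangle-inequality split, assuming (as the ulp term suggests) that runtime addition is exact real addition followed by round-to-nearest:

```latex
\begin{aligned}
|\mathrm{toSpec}(x_R + y_R) - (x + y)|
 &\le |\mathrm{toSpec}(x_R + y_R) - (\mathrm{toSpec}\,x_R + \mathrm{toSpec}\,y_R)|
      + |\mathrm{toSpec}\,x_R - x| + |\mathrm{toSpec}\,y_R - y| \\
 &\le \tfrac{1}{2}\,\mathrm{ulp}(\mathrm{toSpec}\,x_R + \mathrm{toSpec}\,y_R)
      + \varepsilon_x + \varepsilon_y .
\end{aligned}
```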

    theorem Proofs.RuntimeApprox.NFBackend.approx_sub_nf {β : TorchLean.Floats.NeuralRadix} {fexp : ℤ → ℤ} [TorchLean.Floats.NeuralValidExp fexp] {rnd : ℝ → ℤ} [TorchLean.Floats.NeuralValidRndToNearest rnd] {x y : ℝ} {xR yR : TorchLean.Floats.NF β fexp rnd} {epsx epsy : ℝ} (hx : |toSpec xR - x| ≤ epsx) (hy : |toSpec yR - y| ≤ epsy) :
    |toSpec (xR - yR) - (x - y)| ≤ epsx + epsy + TorchLean.Floats.neuralUlp β fexp (toSpec xR - toSpec yR) / 2

    Forward approximation bound for subtraction in NF.

    This is proved by reducing subtraction to addition with a negation and applying approx_add_nf.

    Forward approximation bound for negation in NF (rounding error on -toSpec xR).

    Forward approximation bound for absolute value in NF (abs is pure + a final rounding).

    Forward approximation bound for exp in NF.

    Uses the mean value theorem for Real.exp to bound the propagation of input error, then adds one rounding-ULP term for the final NF rounding.
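Concretely, writing u = toSpec xR with |u - x| ≤ eps, the mean value theorem gives a ξ between u and x with:

```latex
|e^{u} - e^{x}| = e^{\xi}\,|u - x| \le e^{\max(u,\,x)}\,|u - x| \le e^{u + \mathit{eps}}\cdot \mathit{eps},
```

after which the final NF rounding contributes the usual half-ulp term.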

    Forward approximation bound for tanh in NF (coarse but unconditional).

    Because tanh is bounded in [-1, 1], we always have |tanh(toSpec xR) - tanh(x)| ≤ 2, and then we add one rounding-ULP term for the final NF rounding step.

    noncomputable def Proofs.RuntimeApprox.NFBackend.safeLog (ε x : ℝ) : ℝ

    Clamped log on spec scalars: log (max x ε).

    This is used to obtain unconditional forward bounds for log by avoiding the singularity at 0.

    Instances For
      noncomputable def Proofs.RuntimeApprox.NFBackend.safeLogR {β : TorchLean.Floats.NeuralRadix} {fexp : ℤ → ℤ} [TorchLean.Floats.NeuralValidExp fexp] {rnd : ℝ → ℤ} (ε : ℝ) (xR : TorchLean.Floats.NF β fexp rnd) : TorchLean.Floats.NF β fexp rnd

      Clamped log on runtime NF scalars (implemented as NF.ofReal (safeLog (toSpec xR))).

      This definition keeps the semantic spec function explicit (so proofs can reason about it) while still producing an executable runtime scalar.

      Instances For
        theorem Proofs.RuntimeApprox.NFBackend.approx_safeLog_nf {β : TorchLean.Floats.NeuralRadix} {fexp : ℤ → ℤ} [TorchLean.Floats.NeuralValidExp fexp] {rnd : ℝ → ℤ} [TorchLean.Floats.NeuralValidRndToNearest rnd] {x : ℝ} {xR : TorchLean.Floats.NF β fexp rnd} {eps ε : ℝ} (hε : 0 < ε) (hx : |toSpec xR - x| ≤ eps) :
        |toSpec (safeLogR ε xR) - safeLog ε x| ≤ 1 / ε * eps + TorchLean.Floats.neuralUlp β fexp (safeLog ε (toSpec xR)) / 2

        Forward approximation bound for safeLog in NF.

        On the clamped domain u,v ≥ ε > 0, log is (1/ε)-Lipschitz. We use that to propagate the input error and then add one rounding-ULP term for the final NF rounding.
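The Lipschitz step combines the derivative bound 1/t ≤ 1/ε on [ε, ∞) with the fact that max(·, ε) is 1-Lipschitz:

```latex
|\log(\max(u,\varepsilon)) - \log(\max(x,\varepsilon))|
  \le \frac{1}{\varepsilon}\,|\max(u,\varepsilon) - \max(x,\varepsilon)|
  \le \frac{1}{\varepsilon}\,|u - x|
  \le \frac{1}{\varepsilon}\cdot \mathit{eps}.
```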

        theorem Proofs.RuntimeApprox.NFBackend.approx_mul_nf {β : TorchLean.Floats.NeuralRadix} {fexp : ℤ → ℤ} [TorchLean.Floats.NeuralValidExp fexp] {rnd : ℝ → ℤ} [TorchLean.Floats.NeuralValidRndToNearest rnd] {x y : ℝ} {xR yR : TorchLean.Floats.NF β fexp rnd} {epsx epsy : ℝ} (hx : |toSpec xR - x| ≤ epsx) (hy : |toSpec yR - y| ≤ epsy) :
        |toSpec (xR * yR) - x * y| ≤ (|toSpec xR| + epsx) * epsy + (|toSpec yR| + epsy) * epsx + TorchLean.Floats.neuralUlp β fexp (toSpec xR * toSpec yR) / 2

        Forward approximation bound for multiplication in NF.

        This has the standard "first-order" shape: terms proportional to |toSpec xR| * epsy and |toSpec yR| * epsx, plus an ulp term for the final rounding. (For classical background, see Higham, Accuracy and Stability of Numerical Algorithms.)
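The first-order shape comes from the standard splitting (writing u = toSpec xR, v = toSpec yR):

```latex
|u v - x y| = |u (v - y) + y (u - x)|
  \le |u|\,\varepsilon_y + |y|\,\varepsilon_x
  \le (|u| + \varepsilon_x)\,\varepsilon_y + (|v| + \varepsilon_y)\,\varepsilon_x ,
```

using |y| ≤ |v| + εy; the final rounding then adds ulp(uv)/2.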

        theorem Proofs.RuntimeApprox.NFBackend.approx_div_nf_of_one_le {β : TorchLean.Floats.NeuralRadix} {fexp : ℤ → ℤ} [TorchLean.Floats.NeuralValidExp fexp] {rnd : ℝ → ℤ} [TorchLean.Floats.NeuralValidRndToNearest rnd] {x y : ℝ} {xR yR : TorchLean.Floats.NF β fexp rnd} {epsx : ℝ} (hy : 1 ≤ y) (hx : |toSpec xR - x| ≤ epsx) :
        |toSpec (xR / yR) - x / y| ≤ TorchLean.Floats.neuralUlp β fexp (toSpec xR / toSpec yR) / 2 + |toSpec xR| * |1 / toSpec yR| + |toSpec xR| + epsx

        Forward approximation bound for division under a coarse denominator lower bound (y ≥ 1).

        Division is ill-conditioned near 0, so we need a domain condition. This lemma is tailored for the simple case y ≥ 1 to keep constants small.

        theorem Proofs.RuntimeApprox.NFBackend.approx_div_nf_of_lb {β : TorchLean.Floats.NeuralRadix} {fexp : ℤ → ℤ} [TorchLean.Floats.NeuralValidExp fexp] {rnd : ℝ → ℤ} [TorchLean.Floats.NeuralValidRndToNearest rnd] {x y : ℝ} {xR yR : TorchLean.Floats.NF β fexp rnd} {epsx η : ℝ} (hη : 0 < η) (hy : η ≤ y) (hx : |toSpec xR - x| ≤ epsx) :
        |toSpec (xR / yR) - x / y| ≤ TorchLean.Floats.neuralUlp β fexp (toSpec xR / toSpec yR) / 2 + |toSpec xR| * |1 / toSpec yR| + (|toSpec xR| + epsx) * (1 / η)

        Forward approximation bound for division under a general positive lower bound (η ≤ y with η > 0).

        This is the more general variant of approx_div_nf_of_one_le, making the conditioning explicit via the factor (1/η).

        Forward approximation bound for scaling (elementwise multiply by a runtime constant c).

        noncomputable def Proofs.RuntimeApprox.NFBackend.sumStep {β : TorchLean.Floats.NeuralRadix} {fexp : ℤ → ℤ} [TorchLean.Floats.NeuralValidExp fexp] {rnd : ℝ → ℤ} (epsElem : ℝ) :
        TorchLean.Floats.NF β fexp rnd × ℝ → TorchLean.Floats.NF β fexp rnd → TorchLean.Floats.NF β fexp rnd × ℝ

        One fold step for sum_spec that tracks an explicit forward error budget.

        State is (accR, epsAcc) where accR is the runtime accumulator and epsAcc bounds the absolute error |toSpec accR - accS| for the corresponding spec accumulator accS. Each step adds:

        • the incoming per-element budget epsElem;
        • one rounding-ULP term for the addition.
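A plausible shape for the step function (a sketch consistent with the description above, not necessarily the literal definition; it assumes the `Proofs.RuntimeApprox.NFBackend` namespace is open and the Flocq-style `fexp`/`rnd` types used elsewhere in this file):

```lean
-- Sketch: fold state is (runtime accumulator, error budget). Each step
-- adds the element, charges epsElem for the element's own error, and
-- half an ulp for the rounded addition.
noncomputable def sumStepSketch {β : TorchLean.Floats.NeuralRadix}
    {fexp : ℤ → ℤ} [TorchLean.Floats.NeuralValidExp fexp] {rnd : ℝ → ℤ}
    (epsElem : ℝ) (st : TorchLean.Floats.NF β fexp rnd × ℝ)
    (x : TorchLean.Floats.NF β fexp rnd) :
    TorchLean.Floats.NF β fexp rnd × ℝ :=
  (st.1 + x,
   st.2 + epsElem
        + TorchLean.Floats.neuralUlp β fexp (toSpec st.1 + toSpec x) / 2)
```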
        Instances For
          noncomputable def Proofs.RuntimeApprox.NFBackend.sumFoldState {β : TorchLean.Floats.NeuralRadix} {fexp : ℤ → ℤ} [TorchLean.Floats.NeuralValidExp fexp] {rnd : ℝ → ℤ} {s : Spec.Shape} (epsElem : ℝ) (st : TorchLean.Floats.NF β fexp rnd × ℝ) (tR : Spec.Tensor (TorchLean.Floats.NF β fexp rnd) s) : TorchLean.Floats.NF β fexp rnd × ℝ

          Fold sum_step over a tensor via tensor_foldl_spec.

          This is the shared helper behind sum_bound and approxT_sum_spec: it simultaneously computes the runtime sum (in .1) and the accumulated error bound (in .2).

          Instances For
            noncomputable def Proofs.RuntimeApprox.NFBackend.sumBound {β : TorchLean.Floats.NeuralRadix} {fexp : ℤ → ℤ} [TorchLean.Floats.NeuralValidExp fexp] {rnd : ℝ → ℤ} {s : Spec.Shape} (epsElem : ℝ) (tR : Spec.Tensor (TorchLean.Floats.NF β fexp rnd) s) : ℝ

            Forward absolute-error bound for sum_spec.

            sum_bound epsElem tR is the .2 component of sum_fold_state started at 0, assuming each element is approximated within epsElem. This corresponds to naive sequential summation with a rounding term added at each step (cf. standard floating-point summation analyses).
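Unfolding the fold, the accumulated budget satisfies the recurrence:

```latex
\varepsilon_0 = 0, \qquad
\varepsilon_{k+1} = \varepsilon_k + \varepsilon_{\mathrm{elem}}
  + \tfrac{1}{2}\,\mathrm{ulp}\!\left(\mathrm{toSpec}\,\mathrm{acc}_k + \mathrm{toSpec}\,t_k\right),
```

so for n elements it is n · εelem plus one half-ulp term per addition, the familiar sequential-summation error shape.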

            Instances For

              Forward approximation bound for sum_spec over an arbitrary tensor shape.

              If xR approximates xS elementwise within eps, then the scalar sums sum_spec xR and sum_spec xS differ by at most sum_bound eps xR.

              noncomputable def Proofs.RuntimeApprox.NFBackend.addBoundTensor {β : TorchLean.Floats.NeuralRadix} {fexp : ℤ → ℤ} [TorchLean.Floats.NeuralValidExp fexp] {rnd : ℝ → ℤ} {s : Spec.Shape} (epsx epsy : ℝ) (xR yR : Spec.Tensor (TorchLean.Floats.NF β fexp rnd) s) : Spec.SpecTensor s

              Per-entry bound tensor for addition.

              add_bound_tensor epsx epsy xR yR computes an elementwise error budget for xR + yR. Its linf_norm is used as the output epsilon in approxT_add_spec.

              Instances For
                noncomputable def Proofs.RuntimeApprox.NFBackend.subBoundTensor {β : TorchLean.Floats.NeuralRadix} {fexp : ℤ → ℤ} [TorchLean.Floats.NeuralValidExp fexp] {rnd : ℝ → ℤ} {s : Spec.Shape} (epsx epsy : ℝ) (xR yR : Spec.Tensor (TorchLean.Floats.NF β fexp rnd) s) : Spec.SpecTensor s

                Per-entry bound tensor for subtraction.

                Analogous to add_bound_tensor, but for xR - yR (and the corresponding spec subtraction).

                Instances For
                  noncomputable def Proofs.RuntimeApprox.NFBackend.mulBoundTensor {β : TorchLean.Floats.NeuralRadix} {fexp : ℤ → ℤ} [TorchLean.Floats.NeuralValidExp fexp] {rnd : ℝ → ℤ} {s : Spec.Shape} (epsx epsy : ℝ) (xR yR : Spec.Tensor (TorchLean.Floats.NF β fexp rnd) s) : Spec.SpecTensor s

                  Per-entry bound tensor for multiplication.

                  This is the elementwise lifting of the scalar bound approx_mul_nf, tracking first-order error propagation plus one rounding term.

                  Instances For

                    Per-entry bound tensor for scaling by a runtime constant.

                    scale_bound_tensor eps c xR bounds the error of xR * c assuming the input is approximated within eps and treating c as exact (relative to its own toSpec value).

                    Instances For

                      Per-entry bound tensor for negation.

                      Instances For

                        Per-entry bound tensor for absolute value.

                        Instances For

                          Per-entry bound tensor for exponentiation (exp).

                          This matches approx_exp_nf: a mean-value-theorem bound on the real exp plus one rounding term.

                          Instances For

                            Per-entry bound tensor for hyperbolic tangent (tanh).

                            Currently uses the coarse unconditional bound from approx_tanh_nf (boundedness of tanh).

                            Instances For

                              Per-entry bound tensor for safeLog.

                              safeLog_bound_tensor ε eps xR is the elementwise bound used by approxT_safeLog_spec, combining a (1/ε) Lipschitz propagation term with one rounding-ULP term.

                              Instances For

                                approxT bound for clamped log (safeLog) lifted to arbitrary tensor shapes.

                                This is the tensor-level wrapper around approx_safeLog_nf, using approxT_map_spec_of_scalar_bound.

                                NFBackend.safeLog is a clamped log surrogate log (max x ε) with an unconditional forward bound.

                                For smooth activations that use log (notably softplus and safe_log), we route the outer log through safeLog at a known lower bound.

                                This avoids needing a separate log approximation lemma while remaining extensionally equal to the unclamped log on the intended arguments.

                                Half-ULP rounding budget at the scalar value 1, used in the softplus helper bounds.

                                Instances For

                                  Scalar forward-error envelope for exp.

                                  This packages the exact value, the perturbed value at a + eps, and the final rounding budget into one reusable bound for the later softplus/safe-log proofs.

                                  Instances For

                                    Rounded representation of the scalar constant 1 at the NF backend.

                                    Instances For

                                      Rounded representation of exp a at the NF backend.

                                      Instances For

                                        Rounded surrogate for the inner 1 + exp a term appearing in softplus.

                                        Instances For

                                          Forward-error envelope for the rounded 1 + exp a subexpression used by softplus.

                                          Instances For

                                            Unconditional scalar forward-error bound for softplus, via the safeLog factorization.

                                            Instances For

                                              softplus implemented by safeLog 1 (1 + exp x) at the NF backend.

                                              Instances For

                                                Forward approximation bound for softplus in NF.

                                                We treat softplus(x) = log(1 + exp x) as safeLog 1 (1 + exp x) (since 1 + exp x ≥ 1) and then compose the scalar bounds for exp, +, and safeLog.
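The factorization used here:

```latex
\mathrm{softplus}(x) = \log(1 + e^{x}) = \log(\max(1 + e^{x},\, 1)) = \mathrm{safeLog}\ 1\ (1 + e^{x}),
```

valid because 1 + eˣ ≥ 1, so the clamp is inactive; the scalar bounds for exp, +, and safeLog (with ε = 1) then compose.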

                                                Per-entry bound tensor for softplusR.

                                                This is the elementwise lifting of softplus_bound_scalar, used with linf_norm in approxT_softplus_spec.

                                                Instances For

                                                  approxT bound for softplus lifted to arbitrary tensor shapes.

                                                  This is the tensor-level wrapper around approx_softplus_nf, built via approxT_map_spec_of_scalar_bound.

                                                  Runtime implementation of safe_log as a single rounded primitive.

                                                  Instances For

                                                    Forward approximation bound for the smooth safe_log activation in NF.

                                                    safe_log is defined as log(softplus(x) + ε), which is globally well-defined for ε > 0. The proof combines:

                                                    • one rounding step for safe_logR (defined as NF.ofReal (safe_log_spec ...));
                                                    • a (1/ε) Lipschitz bound for the spec function (via mean value theorem + derivative bound).

                                                    Per-entry bound tensor for safe_log.

                                                    This is the elementwise lifting of approx_safe_log_nf's bound.

                                                    Instances For

                                                      approxT bound for safe_log lifted to arbitrary tensor shapes.

                                                      This is the tensor-level wrapper around approx_safe_log_nf, built via approxT_map_spec_of_scalar_bound.

                                                      noncomputable def Proofs.RuntimeApprox.NFBackend.safeDiv (ε x y : ℝ) : ℝ

                                                      Spec-side safe division with a clamped denominator.

                                                      Instances For
                                                        noncomputable def Proofs.RuntimeApprox.NFBackend.safeDivR {β : TorchLean.Floats.NeuralRadix} {fexp : ℤ → ℤ} [TorchLean.Floats.NeuralValidExp fexp] {rnd : ℝ → ℤ} (ε : ℝ) (xR yR : TorchLean.Floats.NF β fexp rnd) : TorchLean.Floats.NF β fexp rnd

                                                        Runtime implementation of safeDiv as a single rounded primitive.

                                                        Instances For
                                                          theorem Proofs.RuntimeApprox.NFBackend.approx_safeDiv_nf {β : TorchLean.Floats.NeuralRadix} {fexp : ℤ → ℤ} [TorchLean.Floats.NeuralValidExp fexp] {rnd : ℝ → ℤ} [TorchLean.Floats.NeuralValidRndToNearest rnd] {x y : ℝ} {xR yR : TorchLean.Floats.NF β fexp rnd} {epsx epsy ε : ℝ} (hε : 0 < ε) (hx : |toSpec xR - x| ≤ epsx) (hy : |toSpec yR - y| ≤ epsy) :
                                                          |toSpec (safeDivR ε xR yR) - safeDiv ε x y| ≤ 1 / ε * epsx + (|toSpec xR| + epsx) * (epsy / (ε * ε)) + TorchLean.Floats.neuralUlp β fexp (safeDiv ε (toSpec xR) (toSpec yR)) / 2

                                                          Forward approximation bound for safeDiv in NF.

                                                          safeDiv ε x y = x / max y ε clamps the denominator away from 0. For ε > 0, this yields an unconditional bound with explicit (1/ε) and (1/ε^2) sensitivity terms plus one rounding-ULP term.
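Writing M(t) = max(t, ε), the two sensitivity terms come from the splitting (with u = toSpec xR, v = toSpec yR):

```latex
\left|\frac{u}{M(v)} - \frac{x}{M(y)}\right|
  \le \frac{|u - x|}{M(v)} + |x|\left|\frac{1}{M(v)} - \frac{1}{M(y)}\right|
  \le \frac{\varepsilon_x}{\varepsilon} + (|u| + \varepsilon_x)\,\frac{\varepsilon_y}{\varepsilon^2},
```

using M(t) ≥ ε, the 1-Lipschitz bound |M(v) - M(y)| ≤ |v - y| ≤ εy, and |x| ≤ |u| + εx; the final rounding adds the half-ulp term.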

                                                          noncomputable def Proofs.RuntimeApprox.NFBackend.safeDivBoundTensor {β : TorchLean.Floats.NeuralRadix} {fexp : ℤ → ℤ} [TorchLean.Floats.NeuralValidExp fexp] {rnd : ℝ → ℤ} {s : Spec.Shape} (ε epsx epsy : ℝ) (xR yR : Spec.Tensor (TorchLean.Floats.NF β fexp rnd) s) : Spec.SpecTensor s

                                                          Per-entry bound tensor for safeDiv.

                                                          This is the elementwise lifting of approx_safeDiv_nf's bound (with a max-clamped denominator).

                                                          Instances For
                                                            theorem Proofs.RuntimeApprox.NFBackend.approxT_safeDiv_spec {β : TorchLean.Floats.NeuralRadix} {fexp : ℤ → ℤ} [TorchLean.Floats.NeuralValidExp fexp] {rnd : ℝ → ℤ} [TorchLean.Floats.NeuralValidRndToNearest rnd] {s : Spec.Shape} (ε : ℝ) (hε : 0 < ε) {xS yS : Spec.SpecTensor s} {xR yR : Spec.Tensor (TorchLean.Floats.NF β fexp rnd) s} {epsx epsy : ℝ} :
                                                            approxT toSpec xS xR epsx → approxT toSpec yS yR epsy → approxT toSpec (Spec.Tensor.map2Spec (safeDiv ε) xS yS) (Spec.Tensor.map2Spec (safeDivR ε) xR yR) (linfNorm (safeDivBoundTensor ε epsx epsy xR yR))

                                                            approxT bound for safeDiv lifted to arbitrary tensor shapes.

                                                            This is the tensor-level wrapper around approx_safeDiv_nf, built via approxT_map2_spec_of_scalar_bound.

                                                            Scalar forward bound for sigmoid in NF.

                                                            sigmoid(x) = 1 / (1 + exp(-x)) is implemented using the existing bounds for exp, +, and division (with the denominator lower-bounded by 1).

                                                            Instances For

                                                              Per-entry bound tensor for sigmoid.

                                                              Instances For

                                                                approxT bound for sigmoid lifted to arbitrary tensor shapes.

                                                                This is the tensor-level wrapper around sigmoid_bound_scalar (scalar case) and the usual componentwise linf_norm lifting (dimension case).

                                                                Scalar forward bound for the scalar logistic-compatible NF softmax node.

                                                                Here the node computes the scalar logistic-like function exp(x) / (exp(x) + 1), implemented using exp, +, and division (denominator is ≥ 1). We keep the public node name stable for the NF graph, but the mathematical function is Activation.Math.logisticSpec, not axis softmax.

                                                                Instances For

                                                                  Per-entry bound tensor for the scalar logistic NF node.

                                                                  Instances For

                                                                    approxT bound for the scalar logistic NF node lifted to arbitrary tensor shapes.

                                                                    This is the tensor-level wrapper around the scalar exp/+/div bound, using the usual linf_norm lifting for dimensioned tensors.

                                                                    approxT bound for elementwise addition (add_spec) over arbitrary tensor shapes.

                                                                    The output epsilon is computed as linf_norm (add_bound_tensor epsx epsy xR yR), which combines the input epsilons and one rounding-ULP term per element.

                                                                    approxT bound for elementwise subtraction (sub_spec) over arbitrary tensor shapes.

                                                                    This is obtained by lifting the scalar subtraction bound approx_sub_nf via approxT_map2_spec_of_scalar_bound.

                                                                    approxT bound for elementwise multiplication (mul_spec) over arbitrary tensor shapes.

                                                                    The scalar core is approx_mul_nf, lifted componentwise; the resulting bound is packaged as mul_bound_tensor and reduced with linf_norm.

                                                                    approxT bound for scaling by a runtime constant (scale_spec) over arbitrary tensor shapes.

                                                                    This is the tensor-level wrapper around the scalar scaling lemma approx_scale_nf.

                                                                    approxT bound for elementwise negation (neg_spec) over arbitrary tensor shapes.

                                                                    approxT bound for elementwise absolute value (abs_spec) over arbitrary tensor shapes.

                                                                    approxT bound for elementwise exponentiation (exp_spec) over arbitrary tensor shapes.

                                                                    This lifts the scalar mean-value-theorem bound approx_exp_nf.

                                                                    approxT bound for elementwise hyperbolic tangent (tanh) over arbitrary tensor shapes.

                                                                    Currently uses the coarse unconditional scalar bound approx_tanh_nf (boundedness of tanh).

                                                                    Rounded ReLU scalar op for NF: apply max · 0 then round.

                                                                    Instances For

                                                                      Per-entry bound tensor for ReLU (max · 0).

                                                                      ReLU is 1-Lipschitz (|max x 0 - max y 0| ≤ |x - y|), so the only new error is the final rounding step in reluR.

                                                                      Instances For

                                                                        approxT bound for elementwise ReLU (max · 0) over arbitrary tensor shapes.

                                                                        Combines the 1-Lipschitz property of max with one rounding step for reluR.

                                                                        FwdNode for elementwise addition.

                                                                        This packages approxT_add_spec so addition can be used inside larger verified FwdGraphs.
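At the scalar level, the composition a graph evaluation performs can be done by hand; the following hypothetical `example` chains approx_add_nf and approx_sub_nf (assuming the `Proofs.RuntimeApprox.NFBackend` namespace is open and the Flocq-style `fexp`/`rnd` types):

```lean
-- Hypothetical sketch: composing two scalar forward bounds by hand,
-- mirroring what FwdGraph.eval_approx does node by node.
example {β : TorchLean.Floats.NeuralRadix}
    {fexp : ℤ → ℤ} [TorchLean.Floats.NeuralValidExp fexp]
    {rnd : ℝ → ℤ} [TorchLean.Floats.NeuralValidRndToNearest rnd]
    {x y : ℝ} {xR yR : TorchLean.Floats.NF β fexp rnd} {epsx epsy : ℝ}
    (hx : |toSpec xR - x| ≤ epsx) (hy : |toSpec yR - y| ≤ epsy) :
    |toSpec ((xR + yR) - xR) - ((x + y) - x)| ≤
      (epsx + epsy + TorchLean.Floats.neuralUlp β fexp (toSpec xR + toSpec yR) / 2)
      + epsx
      + TorchLean.Floats.neuralUlp β fexp (toSpec (xR + yR) - toSpec xR) / 2 :=
  approx_sub_nf (approx_add_nf hx hy) hx
```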

                                                                        Instances For

                                                                          FwdNode for elementwise subtraction (wraps approxT_sub_spec).

                                                                          Instances For

                                                                            FwdNode for elementwise multiplication (wraps approxT_mul_spec).

                                                                            Instances For

                                                                              FwdNode for clamped division safeDiv.

                                                                              Requires a proof hε : 0 < ε and uses approxT_safeDiv_spec to obtain an unconditional bound.

                                                                              Instances For

                                                                                FwdNode for scaling by a runtime constant c.

                                                                                Wraps approxT_scale_spec.

                                                                                Instances For

                                                                                  FwdNode for elementwise negation (wraps approxT_neg_spec).

                                                                                  Instances For

                                                                                    FwdNode for elementwise absolute value (wraps approxT_abs_spec).

                                                                                    Instances For

                                                                                      FwdNode for elementwise exponentiation (wraps approxT_exp_spec).

                                                                                      Instances For

                                                                                        FwdNode for elementwise softplus (wraps approxT_softplus_spec).

                                                                                        Instances For

                                                                                          FwdNode for clamped log safeLog.

                                                                                          Requires a proof hε : 0 < ε and wraps approxT_safeLog_spec.

                                                                                          Instances For

                                                                                            FwdNode for the smooth safe_log activation.

                                                                                            Requires hε : 0 < ε and wraps approxT_safe_log_spec.

                                                                                            Instances For

                                                                                              FwdNode for elementwise tanh (wraps approxT_tanh_spec).

                                                                                              Instances For

                                                                                                FwdNode for elementwise sigmoid (wraps approxT_sigmoid_spec).

                                                                                                Instances For

                                                                                                  FwdNode for elementwise ReLU (max · 0, wraps approxT_relu_spec).

                                                                                                  Instances For

                                                                                                    FwdNode for the scalar logistic-compatible softmax node (wraps approxT_softmax_spec).

                                                                                                    Instances For

                                                                                                      FwdNode for sum reduction (sum_spec).

                                                                                                      This reduces a tensor to a scalar and uses approxT_sum_spec with the accumulated sum_bound.

                                                                                                      Instances For