TorchLean API

NN.Proofs.RuntimeApprox.NF.Linalg

NF Linear Algebra #

Forward (runtime→spec) approximation lemmas for non-elementwise linear algebra ops over NF.

This extends NN.Proofs.RuntimeApprox.NF.Ops with bounds for the core sum-of-products patterns that appear in linear layers and matrix multiplication.

The central trick is to separate proof-friendly scalar fold bounds for dot products from tensor-level wrappers that turn those fold bounds into approxT theorems and graph nodes.
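
The split can be pictured with a small self-contained sketch. Here `Float` is a hypothetical stand-in for `NF β fexp rnd`, and the per-step increment is the standard product-error bound `|aS*bS - aR*bR| ≤ epsa*|bR| + epsb*|aR| + epsa*epsb`; the names are illustrative, not the file's actual definitions.

```lean
-- One fold step: accumulate the product and grow the error budget.
def dotStepSketch (epsa epsb : Float) (acc : Float × Float)
    (ab : Float × Float) : Float × Float :=
  (acc.1 + ab.1 * ab.2,
   acc.2 + (epsa * Float.abs ab.2 + epsb * Float.abs ab.1 + epsa * epsb))

-- Folding the step yields the runtime dot product together with a
-- closed-form forward error budget, mirroring the dotStep/dotBound split.
def dotWithBoundSketch (epsa epsb : Float) (ab : List (Float × Float)) :
    Float × Float :=
  ab.foldl (dotStepSketch epsa epsb) (0, 0)
```

The tensor-level wrappers then only have to apply such a scalar bound per output entry, keeping the fold reasoning in one place.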

PyTorch correspondence / citations #

This is the proof analogue of linear algebra building blocks used throughout PyTorch models: matrix-vector/matrix-matrix multiplication (torch.matmul) and linear layers (torch.nn.functional.linear).

https://pytorch.org/docs/stable/generated/torch.matmul.html
https://pytorch.org/docs/stable/generated/torch.nn.functional.linear.html


Extract the i-th entry of a runtime vector tensor as an NF scalar.


    Extract matrix entry (i,j) from a runtime matrix tensor as an NF scalar.


      expand_to_col_spec preserves approximation.

      This is a purely shape-level view operation (turn a vector into a singleton-column matrix), so it does not introduce new numeric error beyond the input approximation.

      matrix_transpose_spec preserves approximation.

      Transpose is a pure reindexing/view operation on tensor entries, so approxT is preserved with the same error budget.
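
Since transpose merely swaps indices, any entrywise bound that holds for the input holds verbatim for the output. A minimal sketch (with `Float` as a hypothetical stand-in for the NF entry type):

```lean
-- Transpose is pure reindexing: entry (j, i) of the result is entry (i, j)
-- of the input, so no new numeric error is introduced.
def transposeSketch {m n : Nat} (A : Fin m → Fin n → Float) :
    Fin n → Fin m → Float :=
  fun j i => A i j
```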

      noncomputable def Proofs.RuntimeApprox.NFBackend.dotStep {β : TorchLean.Floats.NeuralRadix} {fexp : ℤ → ℤ} [TorchLean.Floats.NeuralValidExp fexp] {rnd : ℝ → ℤ} {n : ℕ} (epsa epsb : ℝ) (aR bR : Fin n → TorchLean.Floats.NF β fexp rnd) :
      TorchLean.Floats.NF β fexp rnd × ℝ → Fin n → TorchLean.Floats.NF β fexp rnd × ℝ

      One fold step for building a dot-product and tracking a forward error bound.

      This is used to bound the error of foldl (fun acc k => acc + aR k * bR k) compared to the corresponding spec (real) dot-product.
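
Concretely, the runtime value being bounded is just this fold (a sketch with `Float` standing in for the NF type, and `List.finRange` assumed from Mathlib/Batteries as the text itself references):

```lean
-- The runtime dot product: fold the multiply-accumulate over all indices.
def runtimeDotSketch (n : Nat) (aR bR : Fin n → Float) : Float :=
  (List.finRange n).foldl (fun acc k => acc + aR k * bR k) 0
```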

        noncomputable def Proofs.RuntimeApprox.NFBackend.dotBound {β : TorchLean.Floats.NeuralRadix} {fexp : ℤ → ℤ} [TorchLean.Floats.NeuralValidExp fexp] {rnd : ℝ → ℤ} {n : ℕ} (epsa epsb : ℝ) (aR bR : Fin n → TorchLean.Floats.NF β fexp rnd) :
        ℝ

        Closed-form bound for a runtime dot-product over List.finRange n.

        dotBound epsa epsb aR bR is the accumulated eps component produced by folding dotStep starting from 0.

          noncomputable def Proofs.RuntimeApprox.NFBackend.dotBoundExport {β : TorchLean.Floats.NeuralRadix} {fexp : ℤ → ℤ} [TorchLean.Floats.NeuralValidExp fexp] {rnd : ℝ → ℤ} {n : ℕ} (epsa epsb : ℝ) (aR bR : Fin n → TorchLean.Floats.NF β fexp rnd) :
          ℝ

          Public-facing alias of dotBound.

          This keeps exported tensor-bound constructors from depending directly on the local helper name that Lean treats as non-exportable in this file.


            Per-output bound tensor for mat_vec_mul_spec.

            Entry i is a dot-product bound for row i of A dotted with v, using dotBound.
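
The construction can be sketched as one scalar bound per output row, accumulated over the row/vector dot product exactly as the scalar bound does (names hypothetical, `Float` standing in for the NF type and real-valued budgets):

```lean
-- Per-output-row error budget for a matrix-vector product: each entry i
-- folds the per-index product-error increment over row i of AR and vR.
def matVecBoundSketch {m n : Nat} (epsA epsV : Float)
    (AR : Fin m → Fin n → Float) (vR : Fin n → Float) : Fin m → Float :=
  fun i =>
    Fin.foldl n
      (fun eps k =>
        eps + (epsA * Float.abs (vR k) + epsV * Float.abs (AR i k) + epsA * epsV))
      0
```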


              Forward approximation bound for matrix-vector multiplication.

              In words: if A and v are each approximated by runtime AR/vR within epsA/epsV, then mat_vec_mul_spec AS vS is approximated by mat_vec_mul_spec AR vR, with error bounded by linf_norm (mat_vec_mul_bound_tensor epsA epsV AR vR).
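
Collapsing the per-output bound tensor into a single scalar budget, in the spirit of linf_norm, can be sketched as taking the maximum absolute entry (names here are hypothetical):

```lean
-- Reduce a per-entry bound vector to one scalar budget: the largest
-- absolute entry dominates every per-output error bound.
def linfNormSketch {m : Nat} (b : Fin m → Float) : Float :=
  Fin.foldl m (fun acc i => max acc (Float.abs (b i))) 0
```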

              Per-entry bound tensor for mat_mul_spec.

              Entry (i,j) is a dot-product bound for row i of A dotted with column j of B, using dotBound.


                Forward approximation bound for matrix-matrix multiplication.

                In words: if A and B are approximated by runtime matrices AR/BR within epsA/epsB, then mat_mul_spec AS BS is approximated by mat_mul_spec AR BR, with error bounded by linf_norm (mat_mul_bound_tensor epsA epsB AR BR).

                FwdNode for matrix transpose.

                This lifts approxT_matrix_transpose_spec into the FwdGraph interface so transposes can be used inside larger verified graphs.


                  FwdNode for matrix-vector multiplication.

                  The bound is computed by mat_vec_mul_bound_tensor and then reduced to a scalar budget via linf_norm.


                    FwdNode for matrix-matrix multiplication.

                    The bound is computed by mat_mul_bound_tensor and then reduced to a scalar budget via linf_norm.
