TorchLean API

NN.Proofs.RuntimeApprox.NF.ReductionOps

NF Reduction Operators #

NF (rounded) backend: approximation lemmas for reduction operators used by LayerNorm/attention.

These lemmas are specialized to 2D tensors and avoid typeclass inference by threading explicit Shape.reducibleAlong proofs derived from m > 0 and n > 0.

PyTorch correspondence / citations #

This file targets reduction patterns used by normalization/attention (sums, means, and maxes along an axis), analogous to operations like torch.sum, torch.mean, and torch.max:
https://pytorch.org/docs/stable/generated/torch.sum.html
https://pytorch.org/docs/stable/generated/torch.mean.html
https://pytorch.org/docs/stable/generated/torch.max.html

Current scope: 2D axis reductions. That keeps the proofs explicit and avoids hiding shape preconditions behind automation; broader-rank reductions can reuse the same pattern later.
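As a reference point for the intended semantics, the three axis-1 reductions can be sketched on plain functions Fin m → Fin n → ℝ. This is an illustrative Mathlib-style sketch, not the library's Spec.Tensor definitions, and it has not been checked against this development:

```lean
-- Illustrative sketch only: a 2D tensor modeled as `Fin m → Fin n → ℝ`,
-- with each axis-1 reduction producing one value per row.
import Mathlib

variable {m n : ℕ}

/-- Row-wise sum, the pattern behind `torch.sum(x, dim=1)`. -/
def rowSum (x : Fin m → Fin n → ℝ) : Fin m → ℝ :=
  fun i => Finset.univ.sum (x i)

/-- Row-wise mean, the pattern behind `torch.mean(x, dim=1)`. -/
def rowMean (x : Fin m → Fin n → ℝ) : Fin m → ℝ :=
  fun i => Finset.univ.sum (x i) / (n : ℝ)

/-- Row-wise max, the pattern behind `torch.max(x, dim=1)`; a nonempty axis
(`0 < n`) is required, mirroring the `hn : 0 < n` hypotheses below. -/
def rowMax (hn : 0 < n) (x : Fin m → Fin n → ℝ) : Fin m → ℝ :=
  fun i => Finset.univ.sup' (Finset.univ_nonempty_iff.mpr ⟨⟨0, hn⟩⟩) (x i)
```

Note how the max case is the only one that forces a positivity hypothesis at the definition site; the sum and mean lemmas below still take hn : 0 < n, but there it feeds the Shape.reducibleAlong side condition rather than well-definedness.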

theorem Proofs.RuntimeApprox.NFBackend.approxT_reduce_sum_axis1_2d
    {β : TorchLean.Floats.NeuralRadix}
    {fexp : ℤ → ℤ} [TorchLean.Floats.NeuralValidExp fexp]
    {rnd : ℝ → ℤ} [TorchLean.Floats.NeuralValidRndToNearest rnd]
    {m n : ℕ} (hm : 0 < m) (hn : 0 < n)
    {xS : Spec.SpecTensor (Spec.Shape.dim m (Spec.Shape.dim n Spec.Shape.scalar))}
    {xR : Spec.Tensor (TorchLean.Floats.NF β fexp rnd) (Spec.Shape.dim m (Spec.Shape.dim n Spec.Shape.scalar))}
    {eps : ℝ} (hx : approxT toSpec xS xR eps) :
    let s := Spec.Shape.dim m (Spec.Shape.dim n Spec.Shape.scalar);
    have hm' := ; have hn' := ; have hAxis := ; have hRed := ;
    approxT toSpec (Spec.Tensor.reduceSum 1 xS hRed) (Spec.Tensor.reduceSum 1 xR hRed)
      (have boundVec :=
          match xR, hx with
          | Spec.Tensor.dim xRf, hx =>
            Spec.Tensor.dim fun (i : Fin m) => Spec.Tensor.scalar (sumBound eps (xRf i));
        linfNorm boundVec)
theorem Proofs.RuntimeApprox.NFBackend.approxT_reduce_mean_axis1_2d
    {β : TorchLean.Floats.NeuralRadix}
    {fexp : ℤ → ℤ} [TorchLean.Floats.NeuralValidExp fexp]
    {rnd : ℝ → ℤ} [TorchLean.Floats.NeuralValidRndToNearest rnd]
    {m n : ℕ} (hm : 0 < m) (hn : 0 < n)
    {xS : Spec.SpecTensor (Spec.Shape.dim m (Spec.Shape.dim n Spec.Shape.scalar))}
    {xR : Spec.Tensor (TorchLean.Floats.NF β fexp rnd) (Spec.Shape.dim m (Spec.Shape.dim n Spec.Shape.scalar))}
    {eps : ℝ} (hx : approxT toSpec xS xR eps) :
    let s := Spec.Shape.dim m (Spec.Shape.dim n Spec.Shape.scalar);
    have hm' := ; have hn' := ; have hAxis := ; have hRed := ;
    approxT toSpec (Spec.Tensor.reduceMean 1 xS hRed) (Spec.Tensor.reduceMean 1 xR hRed)
      (have boundVec :=
          match xR, hx with
          | Spec.Tensor.dim xRf, hx =>
            Spec.Tensor.dim fun (i : Fin m) =>
              have sumR := (xRf i).sumSpec;
              have epsSum := sumBound eps (xRf i);
              Spec.Tensor.scalar
                (TorchLean.Floats.neuralUlp β fexp (toSpec sumR / toSpec (TorchLean.Floats.NF.ofReal n)) / 2
                  + |toSpec sumR| * |1 / toSpec (TorchLean.Floats.NF.ofReal n)|
                  + |toSpec sumR| + epsSum);
        linfNorm boundVec)
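One way to read the per-row scalar bound in approxT_reduce_mean_axis1_2d (an informal interpretation of the stated term, not part of the development): write $s$ for toSpec of the runtime row sum, $\tilde n$ for toSpec (NF.ofReal n), and $\varepsilon_\Sigma$ for the row's sumBound eps. The bound is then

```latex
\frac{\operatorname{ulp}(s/\tilde n)}{2}
  \;+\; |s| \cdot \left|\frac{1}{\tilde n}\right|
  \;+\; |s|
  \;+\; \varepsilon_\Sigma
```

i.e. a half-ulp for rounding the quotient to the nearest NF value, two coarse terms proportional to $|s|$ absorbing the error of dividing by the rounded constant $\tilde n$, and the accumulated summation error $\varepsilon_\Sigma$ carried over from the sum lemma.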

Extract column j from a runtime m×n tensor, represented row-wise as Fin m → Vec n.


Extract column j from a spec m×n tensor, represented row-wise as Fin m → Vec n.

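The two column-extraction helpers described above share one shape. Sketched on plain functions (illustrative only; the library's colR and its spec counterpart operate on its own Vec type rather than plain functions):

```lean
-- Illustrative sketch: column j of a row-wise m×n representation is the
-- function sending each row index i to entry (i, j).
def col {α : Type} {m n : ℕ} (x : Fin m → Fin n → α) (j : Fin n) : Fin m → α :=
  fun i => x i j
```

Reindexing through such a helper is what lets the axis-0 lemma below reuse the per-vector sumBound from the axis-1 case, applied to columns instead of rows.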
theorem Proofs.RuntimeApprox.NFBackend.approxT_reduce_sum_axis0_2d
    {β : TorchLean.Floats.NeuralRadix}
    {fexp : ℤ → ℤ} [TorchLean.Floats.NeuralValidExp fexp]
    {rnd : ℝ → ℤ} [TorchLean.Floats.NeuralValidRndToNearest rnd]
    {m n : ℕ} (hm : 0 < m) (_hn : 0 < n)
    {xS : Spec.SpecTensor (Spec.Shape.dim m (Spec.Shape.dim n Spec.Shape.scalar))}
    {xR : Spec.Tensor (TorchLean.Floats.NF β fexp rnd) (Spec.Shape.dim m (Spec.Shape.dim n Spec.Shape.scalar))}
    {eps : ℝ} (hx : approxT toSpec xS xR eps) :
    let s := Spec.Shape.dim m (Spec.Shape.dim n Spec.Shape.scalar);
    have hAxis := ; have hRed := ;
    approxT toSpec (Spec.Tensor.reduceSum 0 xS hRed) (Spec.Tensor.reduceSum 0 xR hRed)
      (have boundVec :=
          match xR, hx with
          | Spec.Tensor.dim xRf, hx =>
            Spec.Tensor.dim fun (j : Fin n) => Spec.Tensor.scalar (sumBound eps (colR xRf j));
        linfNorm boundVec)