NF Reduction Operators #
NF (rounded) backend: approximation lemmas for reduction operators used by LayerNorm/attention.
These lemmas are specialized to 2D tensors and avoid typeclass inference by using explicit
Shape.reducibleAlong proofs derived from m > 0 / n > 0.
PyTorch correspondence / citations #
This file targets reduction patterns used by normalization/attention (sums, means, maxes along an
axis), analogous to operations like torch.sum, torch.mean, and torch.max.
https://pytorch.org/docs/stable/generated/torch.sum.html
https://pytorch.org/docs/stable/generated/torch.mean.html
https://pytorch.org/docs/stable/generated/torch.max.html
Current scope: 2D axis reductions. That keeps the proofs explicit and avoids hiding shape preconditions behind automation; broader-rank reductions can reuse the same pattern later.
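The distinction driving the explicit m > 0 / n > 0 hypotheses can be sketched as follows. This is an illustrative model only: rows are taken as plain functions Fin n → Float, which may differ from the file's actual Vec representation, and rowSum/rowMax are hypothetical names, not this file's definitions.

```lean
/-- Sum of a row (reduction along the last axis), analogous to
`torch.sum(t, dim=1)`. No nonemptiness hypothesis is needed:
the empty sum is `0`. -/
def rowSum {n : Nat} (v : Fin n → Float) : Float :=
  Fin.foldl n (fun acc j => acc + v j) 0

/-- Max of a row, analogous to `torch.max(t, dim=1)`. Unlike `sum`,
`max` has no neutral initial value, so a nonempty axis is required;
the `0 < n` proof is taken explicitly, mirroring how this file passes
explicit `Shape.reducibleAlong` proofs instead of relying on
typeclass inference. -/
def rowMax {n : Nat} (h : 0 < n) (v : Fin n → Float) : Float :=
  Fin.foldl n (fun acc j => max acc (v j)) (v ⟨0, h⟩)
```

Taking the initial accumulator to be the first element (rather than some sentinel like negative infinity) is what forces the 0 < n precondition to appear in the signature instead of being hidden behind automation.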
Extract column j from a runtime m×n tensor, represented row-wise as Fin m → Vec n.
Extract column j from a spec m×n tensor, represented row-wise as Fin m → Vec n.
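The column-extraction pattern behind both definitions can be sketched as below; the tensor is modeled here as Fin m → Fin n → Float for illustration (the file's Vec n may be a richer structure), and col is a hypothetical name.

```lean
/-- Extract column `j` from an `m×n` tensor stored row-wise:
with rows as functions, this is just flipping the index order. -/
def col {m n : Nat} (t : Fin m → Fin n → Float) (j : Fin n) :
    Fin m → Float :=
  fun i => t i j
```

Composing a column extraction with a 1D row reduction then yields reductions along axis 0, which is why the runtime and spec tensors share the same row-wise Fin m → Vec n representation.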