NF Shape Operators #
NF (rounded) backend: approximation lemmas for shape-only tensor operators.
These operators perform no arithmetic on scalar values — they only permute or replicate entries — so they preserve existing approxT error bounds unchanged.
That distinction matters: shape-only ops introduce no additional rounding error, and their proofs are transport/indexing arguments rather than numerical analysis.
PyTorch correspondence / citations #
These are the proof analogues of “view-like”/index-rearrangement ops in PyTorch, which do not change floating-point values, only their arrangement:

- https://pytorch.org/docs/stable/generated/torch.reshape.html
- https://pytorch.org/docs/stable/generated/torch.Tensor.view.html
- https://pytorch.org/docs/stable/generated/torch.permute.html
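The common shape of these proofs can be sketched in a minimal self-contained model (not the library's definitions: `Approx` here is a stand-in for `approxT`, and tensors are modeled as index functions). A shape-only operator is reindexing along some map, so a pointwise bound transfers without change:

```lean
import Mathlib

/-- Pointwise approximation of one index function by another
    (an illustrative stand-in for the library's `approxT`). -/
def Approx {ι : Type} (eps : ℝ) (xS xR : ι → ℝ) : Prop :=
  ∀ i, |xS i - xR i| ≤ eps

/-- A shape-only operator reads each output entry from some input entry,
    i.e. it is reindexing along a map `f`; the bound transfers as-is. -/
theorem Approx.reindex {ι ι' : Type} {eps : ℝ} {xS xR : ι → ℝ}
    (h : Approx eps xS xR) (f : ι' → ι) :
    Approx eps (xS ∘ f) (xR ∘ f) :=
  fun i => h (f i)
```

Both lemmas below are instances of this pattern, specialized to the library's `Spec.Tensor` representation.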
theorem
Proofs.RuntimeApprox.NFBackend.approxT_replicate
{β : TorchLean.Floats.NeuralRadix}
{fexp : ℤ → ℤ}
{rnd : ℝ → ℤ}
{s : Spec.Shape}
{xS : Spec.SpecTensor Spec.Shape.scalar}
{xR : Spec.Tensor (TorchLean.Floats.NF β fexp rnd) Spec.Shape.scalar}
{eps : ℝ}
(hx : approxT toSpec xS xR eps)
:
approxT toSpec (Spec.replicate xS) (Spec.replicate xR) eps
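In the same toy model (index functions standing in for `Spec.Tensor`, a pointwise bound standing in for `approxT`; none of the names below are the library's), `replicate` is the constant index function, so a scalar bound lifts to every entry:

```lean
import Mathlib

/-- Toy model of `Spec.replicate`: a scalar viewed as a constant tensor. -/
def replicate (x : ℝ) {ι : Type} : ι → ℝ := fun _ => x

/-- A bound on the scalar is a bound on every replicated entry. -/
theorem approx_replicate {eps : ℝ} {xS xR : ℝ} (h : |xS - xR| ≤ eps)
    {ι : Type} : ∀ i : ι, |replicate xS i - replicate xR i| ≤ eps :=
  fun _ => h
```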
theorem
Proofs.RuntimeApprox.NFBackend.approxT_broadcastTo
{β : TorchLean.Floats.NeuralRadix}
{fexp : ℤ → ℤ}
[TorchLean.Floats.NeuralValidExp fexp]
{rnd : ℝ → ℤ}
{s₁ s₂ : Spec.Shape}
(cb : s₁.CanBroadcastTo s₂)
{xS : Spec.SpecTensor s₁}
{xR : Spec.Tensor (TorchLean.Floats.NF β fexp rnd) s₁}
{eps : ℝ}
(hx : approxT toSpec xS xR eps)
:
approxT toSpec (Spec.Tensor.broadcastTo cb xS) (Spec.Tensor.broadcastTo cb xR) eps
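Broadcasting fits the same reindexing picture. A hypothetical special case, modeling the broadcast of shape `(m, 1)` to `(m, n)` with plain index functions rather than the library's `CanBroadcastTo` witness: output entry `(i, j)` reads input entry `i`, so the elementwise bound is untouched.

```lean
import Mathlib

/-- Toy model of broadcasting `(m, 1)` to `(m, n)` (not the library's
    `broadcastTo`): every column repeats the input, so an elementwise
    approximation bound transfers without change. -/
theorem approx_broadcast {m n : ℕ} {eps : ℝ} {xS xR : Fin m → ℝ}
    (h : ∀ i, |xS i - xR i| ≤ eps) :
    ∀ p : Fin m × Fin n, |xS p.1 - xR p.1| ≤ eps :=
  fun p => h p.1
```

The library lemma generalizes this to an arbitrary broadcast witness `cb : s₁.CanBroadcastTo s₂`, which determines the reindexing map.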