Elementwise tensor operations (Spec.Tensor.*_spec) #
This file defines shape-preserving, elementwise operations on `Tensor α s`.
Naming convention:

- `foo_spec` means "pure spec definition" (no runtime side effects).
- Most functions are defined by recursion on the tensor structure via `map_spec`/`map2_spec`.
Domain / smoothness notes #
Some ops are inherently domain-sensitive or non-smooth:
- `sqrt_spec` uses `sqrt (max x 0)` to stay total on ordered rings.
- `log_spec` is total as a function call, but its analytic properties require positivity assumptions.
- `relu`/`clamp`/comparisons are non-smooth; analytic backprop theorems treat these via pointwise assumptions, or by switching to smooth surrogates in verification workflows.
The spec layer is where these semantics are defined; the proof layer decides which assumptions/variants to use for theorems.
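As an illustration of the totality trick for square roots, here is a minimal sketch (the names `sqrtSpec` and the use of Mathlib's `Real.sqrt` are assumptions for illustration; the file's actual definition may differ):

```lean
-- Hedged sketch: `sqrt_spec` stays total by clamping negatives to 0
-- before taking the square root. `mapSpec` is the elementwise map
-- combinator defined in this file.
noncomputable def sqrtSpec {s : List Nat} (t : Tensor ℝ s) : Tensor ℝ s :=
  mapSpec (fun x => Real.sqrt (max x 0)) t
```

With this definition the spec is defined on every input; theorems about differentiability then add a hypothesis such as `0 < x` pointwise rather than restricting the definition itself.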
Map a scalar function over a tensor (shape preserved).

This is the core "structural recursion" combinator for spec tensors. Most elementwise ops are direct instances of `mapSpec f`.

PyTorch analogy: `f` applied pointwise (like a `torch.<op>` broadcast over all entries), but here the shape is fixed and enforced by the type.
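A minimal sketch of what such a combinator can look like, assuming a hypothetical shape-indexed inductive for `Tensor` (the library's real definition may differ):

```lean
-- Hypothetical model: a spec tensor as a nested tree of entries,
-- indexed by its shape.
inductive Tensor (α : Type) : List Nat → Type where
  | scalar : α → Tensor α []
  | vec : {n : Nat} → {s : List Nat} → (Fin n → Tensor α s) → Tensor α (n :: s)

/-- Shape-preserving map, by structural recursion on the tensor tree. -/
def mapSpec {α β : Type} {s : List Nat} (f : α → β) : Tensor α s → Tensor β s
  | .scalar a => .scalar (f a)
  | .vec ts => .vec fun i => mapSpec f (ts i)
```

Because the shape index `s` appears in both the argument and result types, well-typedness alone guarantees that `mapSpec` preserves shape.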
Map a binary function over two tensors of the same shape.

This is the "zipWith" combinator for the spec tensor tree. It is intentionally shape-preserving: if the shapes differ, the term is not well-typed.

PyTorch analogy: elementwise binary ops when tensors already have the same shape (no broadcasting). Broadcasting is handled separately in `NN/Spec/Core/TensorReductionShape.lean`.
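Under the same hypothetical shape-indexed `Tensor` model with `scalar`/`vec` constructors, the zipWith combinator can be sketched as:

```lean
-- Hedged sketch: because both arguments share the shape index `s`,
-- the constructor patterns on the two sides are forced to align.
def map2Spec {α β γ : Type} {s : List Nat} (f : α → β → γ) :
    Tensor α s → Tensor β s → Tensor γ s
  | .scalar a, .scalar b => .scalar (f a b)
  | .vec ta, .vec tb => .vec fun i => map2Spec f (ta i) (tb i)
```

Note that no "shape mismatch" case is needed: a `scalar`/`vec` pairing cannot type-check, so the match is exhaustive.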
Element-wise `≤` test, implemented via `¬(· > ·)` so we only depend on `DecidableRel (· > ·)`.
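A sketch of this pattern, with the name `leSpec` assumed for illustration and using the hypothetical `map2Spec` combinator over a shape-indexed `Tensor`:

```lean
-- Hedged sketch: pointwise `≤` computed as `¬(x > y)`, so the only
-- typeclass obligation is decidability of `>` on the carrier.
def leSpec {α : Type} [LT α] [DecidableRel (fun a b : α => a > b)]
    {s : List Nat} (x y : Tensor α s) : Tensor Bool s :=
  map2Spec (fun a b => decide (¬ (a > b))) x y
```

Relying on `DecidableRel (· > ·)` alone keeps the instance requirements minimal: no separate decidability instance for `≤` is needed.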
Update a tensor at a (runtime) index path.
The index path is interpreted outermost-first. Out-of-bounds indices leave the tensor unchanged. This is an executable convenience helper; most proof-facing code prefers total, shape-indexed access.
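One way such an update can be sketched under the hypothetical shape-indexed `Tensor` model (the name `updateAt` and the exact recursion are assumptions, not the file's code):

```lean
-- Hedged sketch: walk the index path outermost-first. An out-of-bounds
-- index `i` matches no component `j`, so the tensor is left unchanged.
def updateAt {α : Type} (f : α → α) :
    {s : List Nat} → List Nat → Tensor α s → Tensor α s
  | _, [], .scalar a => .scalar (f a)
  | _, i :: rest, .vec ts =>
      .vec fun j => if j.val = i then updateAt f rest (ts j) else ts j
  | _, _, t => t  -- path length mismatch: no-op
```

The final catch-all makes the helper total even when the path is too long or too short for the tensor's rank, matching the "leave the tensor unchanged" semantics.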
Slice a vector over `[start, start+len)` (with a proof that the slice stays in-bounds).
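A sketch of the in-bounds slicing pattern, modelling a spec vector as a function `Fin n → α` (the name `sliceSpec` and this representation are assumptions for illustration):

```lean
-- Hedged sketch: the hypothesis `h` lets every sliced index be proved
-- in-bounds, so no runtime bounds check or default value is needed.
def sliceSpec {α : Type} {n : Nat} (start len : Nat)
    (h : start + len ≤ n) (v : Fin n → α) : Fin len → α :=
  fun i =>
    v ⟨start + i.val, Nat.lt_of_lt_of_le (Nat.add_lt_add_left i.isLt start) h⟩
```

Carrying the proof as an argument keeps the function total and pushes the bounds obligation to the caller, which is the usual style for proof-facing code.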