# Activation Operators
Semantic-preservation lemmas for unary activation operators in the IR -> compiled runtime bridge.
Each lemma mirrors the corresponding branch in Correctness/SemanticEquivalence.lean, but is
named and factored out so we can keep the main semantic equivalence proof focused on graph
traversal rather than repeating parent-list and typed-index boilerplate.
Build note: these proofs can be slower to check than the simple operators would suggest. The activation itself is simple;
the proof cost comes from checking the singleton-parent contract, recovering a typed index from the
IR parent id, and showing that the dynamically evaluated DVal is the same value as the compiled
node output. The shared unary-operator skeleton handles this boilerplate, so each activation
lemma only supplies its tensor function.
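For orientation, here is a minimal, self-contained sketch of that factoring. All names below
(`DVal`, `UnaryNode`, `evalUnary`, `evalCompiled`, `unary_preserved`) are hypothetical stand-ins,
not the project's actual definitions; the real lemmas work over the full IR, typed indices, and
compiled runtime. The sketch only illustrates the shape: a generic unary lemma does the case
analysis once, and each activation instantiates it with its tensor function.

```lean
-- Hypothetical sketch: every name here is a stand-in for illustration only.

/-- Toy stand-in for the dynamic value type. -/
inductive DVal where
  | tensor (xs : List Float)

/-- Toy unary node: one parent id plus a pointwise tensor function. -/
structure UnaryNode where
  parent : Nat
  f      : Float → Float

/-- "Dynamic" evaluation: look up the parent and map `f` over it. -/
def evalUnary (env : Nat → Option DVal) (n : UnaryNode) : Option DVal :=
  match env n.parent with
  | some (.tensor xs) => some (.tensor (xs.map n.f))
  | none              => none

/-- "Compiled" evaluation, written differently on purpose. -/
def evalCompiled (env : Nat → Option DVal) (n : UnaryNode) : Option DVal :=
  (env n.parent).bind fun v =>
    match v with
    | .tensor xs => some (.tensor (xs.map n.f))

/-- Shared skeleton: one case analysis covers every unary activation. -/
theorem unary_preserved (env : Nat → Option DVal) (n : UnaryNode) :
    evalUnary env n = evalCompiled env n := by
  unfold evalUnary evalCompiled
  cases env n.parent with
  | none => rfl
  | some v => cases v with
    | tensor xs => rfl

/-- Each activation lemma then just supplies its tensor function. -/
theorem relu_preserved (env : Nat → Option DVal) (p : Nat) :
    evalUnary env ⟨p, fun x => max x 0⟩ = evalCompiled env ⟨p, fun x => max x 0⟩ :=
  unary_preserved env _
```

In the real development the per-operator lemmas below play the role of `relu_preserved`:
each one discharges the singleton-parent and typed-index obligations through the shared
skeleton and contributes only its tensor function.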
Semantic-preservation lemma for .relu lowering.
Semantic-preservation lemma for .tanh lowering.
Semantic-preservation lemma for .sigmoid lowering.
Semantic-preservation lemma for .exp lowering.
Semantic-preservation lemma for .log lowering.
Semantic-preservation lemma for .sin lowering.
Semantic-preservation lemma for .cos lowering.
Semantic-preservation lemma for .softmax axis lowering.
Implementation note: TorchLean's compiled softmax operator supports only the last axis.
This is reflected by an explicit guard in buildFrom:
axis + 1 = Shape.rank outShape (equivalently, axis = rank - 1).
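For concreteness, here is a hedged sketch of such a guard. `Shape`, `Shape.rank`, and
`checkSoftmaxAxis` below are stand-ins, not the project's actual definitions; the real
buildFrom has its own shape type and error handling. The sketch only shows the check
axis + 1 = Shape.rank outShape.

```lean
-- Hypothetical sketch: `Shape`, `Shape.rank`, and `checkSoftmaxAxis` are stand-ins.

/-- Toy shape: a list of dimension sizes. -/
abbrev Shape := List Nat

/-- Rank = number of dimensions. -/
def Shape.rank (s : Shape) : Nat := List.length s

/-- Mirrors the `.softmax axis` guard: lowering succeeds only when `axis`
    is the last axis of the output shape. -/
def checkSoftmaxAxis (axis : Nat) (outShape : Shape) : Except String Unit :=
  if axis + 1 = Shape.rank outShape then
    .ok ()
  else
    .error s!"softmax: only the last axis is supported (got axis {axis})"

-- Axis 2 is the last axis of a rank-3 shape, so the guard passes:
example : checkSoftmaxAxis 2 [4, 5, 6] = .ok () := rfl

-- Any other axis falls through to the `.error` branch.
```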