# Activation module wrappers
The activation specs in `NN/Spec/Layers/Activation.lean` define the pure tensor functions
(`relu_spec`, `sigmoid_spec`, ...). We wrap them as `NNModuleSpec`s so we can compose them via
`SpecChain` and attach simple export/pretty-print metadata.

Note: activations are shape-preserving, so the `export_func.dimensions` field uses `(0, 0)` as
placeholder metadata meaning "not applicable".

In PyTorch you'd reach for `nn.ReLU()`, `torch.sigmoid`, `torch.tanh`, and `torch.softmax(dim=-1)`;
these wrappers exist for the same reason, just as pure, typed `NNModuleSpec`s.
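As a rough sketch of the wrapping pattern (the field names `forward` and `export_func.name`, and the structure-literal shape below, are illustrative assumptions, not the library's actual `NNModuleSpec` definition; only `relu_spec` and the `(0, 0)` convention come from this module), a ReLU wrapper might look like:

```lean
-- Hypothetical sketch: the `NNModuleSpec` fields shown here are assumptions
-- for illustration; only `relu_spec` and the `(0, 0)` dimensions convention
-- come from the actual module.
def reluModule : NNModuleSpec where
  forward := relu_spec        -- the pure tensor function from Activation.lean
  export_func := {
    name := "ReLU",
    dimensions := (0, 0)      -- shape-preserving, so "not applicable"
  }
```

Such wrappers would then compose like any other `NNModuleSpec` in a `SpecChain` (the exact composition API is not shown here).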
ReLU as a shape-preserving `NNModuleSpec`.
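For intuition, the scalar core of ReLU is `max 0 x`; a minimal standalone Lean sketch (the name `reluScalar` is hypothetical, and the library's `relu_spec` applies this pointwise to whole tensors):

```lean
-- Pointwise ReLU on Float; `relu_spec` lifts this to whole tensors.
def reluScalar (x : Float) : Float := max 0 x
```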
Sigmoid as a shape-preserving `NNModuleSpec`.
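Likewise, sigmoid's scalar core is 1 / (1 + e^(-x)); again, `sigmoidScalar` is a hypothetical name used only for illustration:

```lean
-- Pointwise sigmoid on Float; `sigmoid_spec` lifts this to tensors.
def sigmoidScalar (x : Float) : Float := 1 / (1 + Float.exp (-x))
```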
Tanh as a shape-preserving `NNModuleSpec`.
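And tanh's scalar core, written out via exponentials (`tanhScalar` is a hypothetical name; Lean's core library also ships `Float.tanh` directly):

```lean
-- Pointwise tanh on Float; equivalent to Float.tanh, spelled out here.
def tanhScalar (x : Float) : Float :=
  (Float.exp x - Float.exp (-x)) / (Float.exp x + Float.exp (-x))
```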
Softmax along the last axis (matches `Activation.softmax_spec`).
In PyTorch terms: `torch.softmax(x, dim=-1)`.
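Softmax normalizes via softmax(x)_i = exp(x_i) / Σ_j exp(x_j). A hedged sketch of that computation on a plain `List Float` (the helper name `softmaxList` is hypothetical, and whether `softmax_spec` performs the max-subtraction stability trick is an assumption):

```lean
-- Illustrative softmax on a flat list; `softmax_spec` applies the same
-- normalization along a tensor's last axis.
def softmaxList (xs : List Float) : List Float :=
  -- subtracting the max is a standard numerical-stability trick
  let m := xs.foldl max (-(1.0 / 0.0))
  let exps := xs.map fun x => Float.exp (x - m)
  let s := exps.foldl (· + ·) 0.0
  exps.map (· / s)
```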