Autoencoder (spec model) #
This file defines a small fully-connected autoencoder:
- encoder: h = act(W_enc x + b_enc)
- decoder: x̂ = W_dec h + b_dec
PyTorch mental model: nn.Sequential(nn.Linear(inputDim, hiddenDim), act, nn.Linear(hiddenDim, inputDim))
applied to a single vector (no batch dimension).
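For concreteness, a NumPy sketch of this single-vector mental model (the names `W_enc`, `b_enc`, `forward`, and the ReLU choice are illustrative, not the Lean identifiers):

```python
import numpy as np

rng = np.random.default_rng(0)
input_dim, hidden_dim = 4, 2

# Illustrative parameters; in the Lean spec these are Tensor fields.
W_enc = rng.standard_normal((hidden_dim, input_dim))
b_enc = rng.standard_normal(hidden_dim)
W_dec = rng.standard_normal((input_dim, hidden_dim))
b_dec = rng.standard_normal(input_dim)

def forward(x):
    h = np.maximum(W_enc @ x + b_enc, 0.0)  # encoder: act = ReLU here
    return W_dec @ h + b_dec                # decoder: affine, no activation

x = rng.standard_normal(input_dim)
x_hat = forward(x)
assert x_hat.shape == (input_dim,)
```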
This is spec-level/reference code. It is written for auditability and differentiation, and it is
intended to be instantiated over multiple scalar backends (Float, intervals, proof-level reals,
...).
Note on activation_type:
We keep a small string switch for demos and exporters. Most TorchLean code prefers choosing the activation by composition (at the module level), but having the switch here makes the "one-file model" convenient for examples.
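A sketch of such a string switch (the exact set of names the Lean file supports is not shown here; `"relu"`, `"tanh"`, and `"sigmoid"` are illustrative, as is the identity fallback):

```python
import numpy as np

def activation(name, z):
    # Hypothetical dispatch mirroring activation_type.
    if name == "relu":
        return np.maximum(z, 0.0)
    if name == "tanh":
        return np.tanh(z)
    if name == "sigmoid":
        return 1.0 / (1.0 + np.exp(-z))
    return z  # unknown name: identity fallback (an assumption)

z = np.array([-1.0, 0.5])
assert np.allclose(activation("relu", z), [0.0, 0.5])
```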
Parameters #
We store the encoder and decoder weights explicitly.
Shapes:
- encoder_weights : (hiddenDim × inputDim)
- decoder_weights : (inputDim × hiddenDim)
- encoder_bias : (hiddenDim)
- decoder_bias : (inputDim)
Parameters for a 1-hidden-layer fully-connected autoencoder.
- encoder_weights : Tensor α (Shape.dim hiddenDim (Shape.dim inputDim Shape.scalar))
  Encoder weights with shape (hiddenDim × inputDim).
- encoder_bias : Tensor α (Shape.dim hiddenDim Shape.scalar)
  Encoder bias with shape (hiddenDim).
- decoder_weights : Tensor α (Shape.dim inputDim (Shape.dim hiddenDim Shape.scalar))
  Decoder weights with shape (inputDim × hiddenDim).
- decoder_bias : Tensor α (Shape.dim inputDim Shape.scalar)
  Decoder bias with shape (inputDim).
- activation_type : String
  Activation choice used by autoencoder_activation_spec (defaults to "relu").
Forward #
Apply the chosen activation (a small convenience wrapper around Activation.*_spec).
Pointwise derivative of the chosen activation (used for the manual backward below).
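The ReLU case of this pointwise derivative can be sketched as follows (the convention act'(0) = 0 at the kink is a common choice and an assumption here, not taken from the Lean file):

```python
import numpy as np

def relu_deriv(z):
    # Pointwise derivative of ReLU; subgradient 0 at z = 0.
    return (z > 0).astype(z.dtype)

z = np.array([-2.0, 0.0, 3.0])
assert np.allclose(relu_deriv(z), [0.0, 0.0, 1.0])
```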
Encode a vector into a hidden representation:
h = act(W_enc x + b_enc).
PyTorch analogy: act(linear(x)) for a single nn.Linear.
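A NumPy sketch of the encode step with ReLU as the activation (illustrative names, not the Lean API):

```python
import numpy as np

def encode(W_enc, b_enc, x):
    # h = act(W_enc x + b_enc), with act = ReLU.
    return np.maximum(W_enc @ x + b_enc, 0.0)

W_enc = np.array([[1.0, -1.0], [0.5, 0.5]])
b_enc = np.zeros(2)
h = encode(W_enc, b_enc, np.array([2.0, 1.0]))
assert np.allclose(h, [1.0, 1.5])
```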
Decode a hidden representation back to input space:
x̂ = W_dec h + b_dec.
PyTorch analogy: a second nn.Linear(hiddenDim, inputDim) without an activation.
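The decode step is a plain affine map; a NumPy sketch (illustrative names):

```python
import numpy as np

def decode(W_dec, b_dec, h):
    # x_hat = W_dec h + b_dec — no activation on the output.
    return W_dec @ h + b_dec

W_dec = np.array([[2.0], [0.0], [-1.0]])  # shape (inputDim=3, hiddenDim=1)
b_dec = np.array([0.0, 1.0, 0.0])
x_hat = decode(W_dec, b_dec, np.array([3.0]))
assert np.allclose(x_hat, [6.0, 1.0, -3.0])
```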
Full autoencoder forward pass: decode(encode(x)).
Batched forward pass (maps the single-example forward over the outer batch axis).
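"Mapping over the outer batch axis" can be sketched literally as a Python loop over the leading axis (a reference-level picture, not how one would write performant code):

```python
import numpy as np

def forward(x, W_enc, b_enc, W_dec, b_dec):
    h = np.maximum(W_enc @ x + b_enc, 0.0)  # encoder with ReLU (illustrative)
    return W_dec @ h + b_dec

def forward_batch(xs, *params):
    # Map the single-example forward over the leading batch axis.
    return np.stack([forward(x, *params) for x in xs])

rng = np.random.default_rng(1)
W_enc, b_enc = rng.standard_normal((2, 3)), np.zeros(2)
W_dec, b_dec = rng.standard_normal((3, 2)), np.zeros(3)
xs = rng.standard_normal((5, 3))  # batch of 5 input vectors
out = forward_batch(xs, W_enc, b_enc, W_dec, b_dec)
assert out.shape == (5, 3)
```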
Backward (manual VJP) #
This file includes a small, explicit backward pass for the autoencoder.
PyTorch analogy: this is what autograd computes, but spelled out as pure functions. The key linear-algebra identities used are:
- If y = W x + b, then dW = dY ⊗ x, db = dY, and dX = Wᵀ dY.
- If h = act(z), then dZ = dH ⊙ act'(z).
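The first identity can be checked numerically. For a scalar loss L = dY · y with y = W x + b, the gradient of L w.r.t. x should equal Wᵀ dY; a NumPy finite-difference check:

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.standard_normal((3, 4))
b = rng.standard_normal(3)
x = rng.standard_normal(4)
dY = rng.standard_normal(3)

analytic = W.T @ dY  # the claimed dX

# Central finite differences of L(x) = dY · (W x + b), one basis vector at a time.
eps = 1e-6
numeric = np.array([
    (dY @ (W @ (x + eps * e) + b) - dY @ (W @ (x - eps * e) + b)) / (2 * eps)
    for e in np.eye(4)
])
assert np.allclose(analytic, numeric, atol=1e-4)
```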
Gradient w.r.t. encoder weights: dW_enc = dZ ⊗ x.
Gradient w.r.t. encoder bias: db_enc = dZ.
Gradient w.r.t. decoder weights: dW_dec = dOut ⊗ h.
Gradient w.r.t. decoder bias: db_dec = dOut.
Gradient w.r.t. input: dX = W_encᵀ dZ.
Complete backward pass: returns (dW_enc, db_enc, dW_dec, db_dec, dX).
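A NumPy sketch of the complete manual backward (ReLU activation assumed, names illustrative), with the returned dX checked against a finite-difference gradient of the scalar loss L = dOut · x̂:

```python
import numpy as np

def backward(x, W_enc, b_enc, W_dec, b_dec, dOut):
    z = W_enc @ x + b_enc
    h = np.maximum(z, 0.0)
    dH = W_dec.T @ dOut            # pull dOut back through the decoder
    dZ = dH * (z > 0)              # through the activation: dZ = dH ⊙ act'(z)
    return (np.outer(dZ, x),       # dW_enc = dZ ⊗ x
            dZ,                    # db_enc = dZ
            np.outer(dOut, h),     # dW_dec = dOut ⊗ h
            dOut,                  # db_dec = dOut
            W_enc.T @ dZ)          # dX     = W_encᵀ dZ

rng = np.random.default_rng(3)
W_enc, b_enc = rng.standard_normal((2, 3)), rng.standard_normal(2)
W_dec, b_dec = rng.standard_normal((3, 2)), rng.standard_normal(3)
x, dOut = rng.standard_normal(3), rng.standard_normal(3)

def loss(x):
    h = np.maximum(W_enc @ x + b_enc, 0.0)
    return dOut @ (W_dec @ h + b_dec)

*_, dX = backward(x, W_enc, b_enc, W_dec, b_dec, dOut)
eps = 1e-6
numeric = np.array([(loss(x + eps * e) - loss(x - eps * e)) / (2 * eps)
                    for e in np.eye(3)])
assert np.allclose(dX, numeric, atol=1e-4)
```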
Mean-squared reconstruction error (single example).
PyTorch analogy: F.mse_loss(x_hat, x, reduction="mean").
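A one-function sketch of this loss, matching mean reduction over components:

```python
import numpy as np

def mse(x_hat, x):
    # Mean of squared componentwise differences — like
    # F.mse_loss(x_hat, x, reduction="mean") for a single example.
    d = x_hat - x
    return float(np.mean(d * d))

assert mse(np.array([1.0, 2.0]), np.array([0.0, 2.0])) == 0.5
```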