TorchLean API

NN.Spec.Models.Autoencoder

Autoencoder (spec model) #

This file defines a small fully-connected autoencoder:

PyTorch mental model: nn.Sequential(nn.Linear(inputDim, hiddenDim), act, nn.Linear(hiddenDim, inputDim)) applied to a single vector (no batch dimension).

This is spec-level/reference code. It is written for auditability and differentiation, and it is intended to be instantiated over multiple scalar backends (Float, intervals, proof-level reals, ...).
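As a sanity check on the spec-level math, the forward pass can be sketched in plain Python (ReLU is assumed for `act` here; this mirrors the formulas only, not the TorchLean API):

```python
# Plain-Python sketch of the spec forward pass (assumption: act = ReLU).
# h = act(W_enc x + b_enc); x_hat = W_dec h + b_dec.
def matvec(W, x):
    # W is a list of rows; returns W @ x.
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def forward(W_enc, b_enc, W_dec, b_dec, x):
    z = [a + b for a, b in zip(matvec(W_enc, x), b_enc)]
    h = [max(0.0, t) for t in z]                             # encoder: act(W_enc x + b_enc)
    return [a + b for a, b in zip(matvec(W_dec, h), b_dec)]  # decoder: W_dec h + b_dec
```

Note there is no batch dimension: `x` is a single vector, exactly as in the spec.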

Note on activation_type:

We keep a small string switch for demos and exporters. Most TorchLean code prefers choosing the activation by composition (at the module level), but having the switch here makes the "one-file model" convenient for examples.
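A hypothetical Python rendering of that string switch (the activation names here are assumed for illustration; TorchLean's actual set lives in `Activation.*_spec`):

```python
import math

# Hypothetical string switch over pointwise activations (names assumed).
def activation(kind, v):
    if kind == "relu":
        return [max(0.0, t) for t in v]
    if kind == "tanh":
        return [math.tanh(t) for t in v]
    raise ValueError(f"unknown activation_type: {kind}")
```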

Parameters #

We store the encoder and decoder weights explicitly.

Shapes (readable off the derivative signatures below):

- encoder weights: `hiddenDim × inputDim`, encoder bias: `hiddenDim`
- decoder weights: `inputDim × hiddenDim`, decoder bias: `inputDim`

structure Spec.AutoencoderSpec (α : Type) (inputDim hiddenDim : ℕ) :
Type

Parameters for a 1-hidden-layer fully-connected autoencoder.

Forward #

def Spec.autoencoderActivationSpec {α : Type} [Context α] {n : ℕ} (activation_type : String) (t : Tensor α (Shape.dim n Shape.scalar)) :
Tensor α (Shape.dim n Shape.scalar)

Apply the chosen activation (a small convenience wrapper around Activation.*_spec).

def Spec.autoencoderActivationDerivSpec {α : Type} [Context α] {n : ℕ} (activation_type : String) (t : Tensor α (Shape.dim n Shape.scalar)) :
Tensor α (Shape.dim n Shape.scalar)

Pointwise derivative of the chosen activation (used for the manual backward below).
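The matching pointwise derivatives, in the same hypothetical plain-Python style (the ReLU subgradient at 0 is taken to be 0, a convention this sketch assumes):

```python
import math

# Hypothetical pointwise derivatives for the string-switched activations.
def activation_deriv(kind, v):
    if kind == "relu":
        return [1.0 if t > 0.0 else 0.0 for t in v]  # relu'(t), with relu'(0) := 0
    if kind == "tanh":
        return [1.0 - math.tanh(t) ** 2 for t in v]  # tanh'(t) = 1 - tanh(t)^2
    raise ValueError(f"unknown activation_type: {kind}")
```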

def Spec.autoencoderEncodeSpec {α : Type} [Context α] {inputDim hiddenDim : ℕ} (m : AutoencoderSpec α inputDim hiddenDim) (input : Tensor α (Shape.dim inputDim Shape.scalar)) :
Tensor α (Shape.dim hiddenDim Shape.scalar)

Encode a vector into a hidden representation:

h = act(W_enc x + b_enc).

PyTorch analogy: act(linear(x)) for a single nn.Linear.

def Spec.autoencoderDecodeSpec {α : Type} [Context α] {inputDim hiddenDim : ℕ} (m : AutoencoderSpec α inputDim hiddenDim) (hidden : Tensor α (Shape.dim hiddenDim Shape.scalar)) :
Tensor α (Shape.dim inputDim Shape.scalar)

Decode a hidden representation back to input space:

x̂ = W_dec h + b_dec.

PyTorch analogy: a second nn.Linear(hiddenDim, inputDim) without an activation.

def Spec.autoencoderForwardSpec {α : Type} [Context α] {inputDim hiddenDim : ℕ} (m : AutoencoderSpec α inputDim hiddenDim) (input : Tensor α (Shape.dim inputDim Shape.scalar)) :
Tensor α (Shape.dim inputDim Shape.scalar)

Full autoencoder forward pass: decode(encode(x)).

def Spec.autoencoderBatchedForwardSpec {α : Type} [Context α] {batch inputDim hiddenDim : ℕ} (m : AutoencoderSpec α inputDim hiddenDim) (input : Tensor α (Shape.dim batch (Shape.dim inputDim Shape.scalar))) :
Tensor α (Shape.dim batch (Shape.dim inputDim Shape.scalar))

Batched forward pass (maps the single-example forward over the outer batch axis).
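The batched variant is just a map over the outer axis; a minimal Python sketch (`double` is a stand-in for the single-example forward):

```python
# Batched forward = map the single-example forward over the batch axis.
def batched_forward(forward_one, batch):
    return [forward_one(x) for x in batch]

# Stand-in single-example forward, for illustration only.
def double(x):
    return [2.0 * t for t in x]
```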

Backward (manual VJP) #

This file includes a small, explicit backward pass for the autoencoder.

PyTorch analogy: this is what autograd computes, but spelled out as pure functions. The key linear-algebra identities used are: for a linear layer y = W x + b with upstream gradient g, dW = g ⊗ x, db = g, and dx = Wᵀ g; the activation contributes a pointwise factor, dZ = act'(z) ⊙ dH.

def Spec.autoencoderEncoderWeightsDerivSpec {α : Type} [Context α] {inputDim hiddenDim : ℕ} (m : AutoencoderSpec α inputDim hiddenDim) (input grad_output : Tensor α (Shape.dim inputDim Shape.scalar)) :
Tensor α (Shape.dim hiddenDim (Shape.dim inputDim Shape.scalar))

Gradient w.r.t. encoder weights: dW_enc = dZ ⊗ x.

def Spec.autoencoderEncoderBiasDerivSpec {α : Type} [Context α] {inputDim hiddenDim : ℕ} (m : AutoencoderSpec α inputDim hiddenDim) (input grad_output : Tensor α (Shape.dim inputDim Shape.scalar)) :
Tensor α (Shape.dim hiddenDim Shape.scalar)

Gradient w.r.t. encoder bias: db_enc = dZ.

def Spec.autoencoderDecoderWeightsDerivSpec {α : Type} [Context α] {inputDim hiddenDim : ℕ} (m : AutoencoderSpec α inputDim hiddenDim) (input grad_output : Tensor α (Shape.dim inputDim Shape.scalar)) :
Tensor α (Shape.dim inputDim (Shape.dim hiddenDim Shape.scalar))

Gradient w.r.t. decoder weights: dW_dec = dOut ⊗ h.

def Spec.autoencoderDecoderBiasDerivSpec {α : Type} {inputDim hiddenDim : ℕ} (_m : AutoencoderSpec α inputDim hiddenDim) (grad_output : Tensor α (Shape.dim inputDim Shape.scalar)) :
Tensor α (Shape.dim inputDim Shape.scalar)

Gradient w.r.t. decoder bias: db_dec = dOut.

def Spec.autoencoderInputDerivSpec {α : Type} [Context α] {inputDim hiddenDim : ℕ} (m : AutoencoderSpec α inputDim hiddenDim) (input grad_output : Tensor α (Shape.dim inputDim Shape.scalar)) :
Tensor α (Shape.dim inputDim Shape.scalar)

Gradient w.r.t. input: dX = W_encᵀ dZ.

def Spec.autoencoderBackwardSpec {α : Type} [Context α] {inputDim hiddenDim : ℕ} (m : AutoencoderSpec α inputDim hiddenDim) (input grad_output : Tensor α (Shape.dim inputDim Shape.scalar)) :
Tensor α (Shape.dim hiddenDim (Shape.dim inputDim Shape.scalar)) × Tensor α (Shape.dim hiddenDim Shape.scalar) × Tensor α (Shape.dim inputDim (Shape.dim hiddenDim Shape.scalar)) × Tensor α (Shape.dim inputDim Shape.scalar) × Tensor α (Shape.dim inputDim Shape.scalar)

Complete backward pass: returns (dW_enc, db_enc, dW_dec, db_dec, dX).
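The five gradients can be cross-checked against a plain-Python transcription of the same VJP math (ReLU assumed for the activation; the variable names mirror the docstrings, not the TorchLean API):

```python
# Plain-Python sketch of the manual VJP for one example (assumption: act = ReLU).
# Forward: z = W_enc x + b_enc, h = relu(z), out = W_dec h + b_dec.
def backward(W_enc, b_enc, W_dec, b_dec, x, g_out):
    def matvec(W, v):
        return [sum(w * t for w, t in zip(row, v)) for row in W]

    z = [a + b for a, b in zip(matvec(W_enc, x), b_enc)]
    h = [max(0.0, t) for t in z]
    # Decoder layer: dW_dec = g_out ⊗ h, db_dec = g_out.
    dW_dec = [[g * t for t in h] for g in g_out]
    db_dec = list(g_out)
    # Pull the gradient back through the decoder: dH = W_dec^T g_out.
    dH = [sum(W_dec[i][j] * g_out[i] for i in range(len(g_out)))
          for j in range(len(h))]
    # Through the activation (pointwise): dZ = relu'(z) * dH.
    dZ = [dh if t > 0.0 else 0.0 for dh, t in zip(dH, z)]
    # Encoder layer: dW_enc = dZ ⊗ x, db_enc = dZ, dX = W_enc^T dZ.
    dW_enc = [[g * t for t in x] for g in dZ]
    db_enc = list(dZ)
    dX = [sum(W_enc[i][j] * dZ[i] for i in range(len(dZ)))
          for j in range(len(x))]
    return dW_enc, db_enc, dW_dec, db_dec, dX
```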

def Spec.autoencoderReconstructionErrorSpec {α : Type} [Context α] {inputDim hiddenDim : ℕ} (m : AutoencoderSpec α inputDim hiddenDim) (input : Tensor α (Shape.dim inputDim Shape.scalar)) (h : inputDim ≠ 0) :
α

Mean-squared reconstruction error (single example).

PyTorch analogy: F.mse_loss(x_hat, x, reduction="mean").
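The same mean reduction in plain Python (spec math only; the division by the input dimension is why the signature carries a nonzeroness hypothesis on inputDim):

```python
# Mean-squared reconstruction error with reduction="mean".
# Requires a nonempty vector, mirroring the hypothesis in the Lean signature.
def mse(x_hat, x):
    return sum((a - b) ** 2 for a, b in zip(x_hat, x)) / len(x)
```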

def Spec.autoencoderCompressionRatioSpec {inputDim hiddenDim : ℕ} :
Float

A compact helper used by examples: compression ratio as a Float.

Note: if hiddenDim = 0, this produces ∞/NaN depending on the Float backend. The rest of the spec never needs this number; it is purely for display.
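For display code outside Lean, a plain-Python equivalent looks like this (Python raises on division by zero rather than producing ∞/NaN, so this sketch returns infinity explicitly for hiddenDim = 0):

```python
# Display-only helper: inputDim / hiddenDim as a float.
# Guard the hiddenDim = 0 case, which a Float backend would render as inf/NaN.
def compression_ratio(input_dim, hidden_dim):
    if hidden_dim == 0:
        return float("inf")
    return input_dim / hidden_dim
```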
