GraphSpec MLP Example #
This file contains the smallest GraphSpec architecture example:
Linear(in,hid) → ReLU → Linear(hid,out).
The goal is not to duplicate TorchLean's executable MLP helper. That constructor lives under
NN.GraphSpec.Models.TorchLean.Mlp and is re-exported through NN.Entrypoint.TorchLeanModels.
Here the point is narrower and proof-oriented:
- show the sequential GraphDSL in its simplest useful form;
- make the parameter ABI visible in the type;
- provide a stable target for GraphSpec equivalence and deterministic-init proofs.
Because this is a pure sequential chain, it is authored with Graph and >>>. The companion
mlpDAGModelZeroInit lowers the same chain to the general DAG model representation so DAG-only
tooling can consume it.
2-layer MLP: Linear(in,hid) → ReLU → Linear(hid,out).
Notice how the parameter interface is explicit in the type:
- the first `Linear(in,hid)` contributes `W₁ : Mat hid in` and `b₁ : Vec hid`,
- the second `Linear(hid,out)` contributes `W₂ : Mat out hid` and `b₂ : Vec out`,
- and `ReLU` contributes no parameters.

So the overall parameter list is exactly:
`[Mat hid in, Vec hid, Mat out hid, Vec out]`.
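The definition itself can be sketched as the obvious three-stage chain. This is a sketch, not the library's actual source: the constructor names `Graph.linear` and `Graph.relu` are assumptions about the GraphDSL surface; the type, including the explicit parameter list, is exactly the one described above.

```lean
-- Sketch of the 2-layer MLP; `Graph.linear` / `Graph.relu` are
-- assumed constructor names, the type is the stated parameter ABI.
def mlp (inDim hidDim outDim : Nat) :
    Graph
      [ Shape.Mat hidDim inDim, Shape.Vec hidDim
      , Shape.Mat outDim hidDim, Shape.Vec outDim ]
      (Shape.Vec inDim) (Shape.Vec outDim) :=
  Graph.linear inDim hidDim          -- contributes W₁ and b₁
    >>> Graph.relu (Shape.Vec hidDim)  -- no parameters
    >>> Graph.linear hidDim outDim     -- contributes W₂ and b₂
```

Note how the concatenation order of the parameter list follows the left-to-right order of the `>>>` chain, which is what makes the ABI stable for the equivalence proofs mentioned above.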
The same 2-layer MLP, but exposed as a DAG Model via the structural lowering
LowerToDAG.Graph.toDAGModelZeroInit.
This is mainly for GraphSpec example ergonomics: downstream tooling that expects DAG terms can
consume this even though it was authored using the sequential >>> syntax.
Initialization: all-zero parameters (see LowerToDAG.Graph.toDAGModelZeroInit).
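Since all of the named pieces appear in the text above, the DAG form is plausibly just a one-line application of the structural lowering; the exact argument spelling here is an assumption:

```lean
-- Sketch: lower the sequential chain to the general DAG model
-- representation with all-zero parameter initialization.
def mlpDAGModelZeroInit (inDim hidDim outDim : Nat) :=
  LowerToDAG.Graph.toDAGModelZeroInit (Models.mlp inDim hidDim outDim)
```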
Example Usage #
You can build a simple classifier head by appending a softmax:
```lean
def g (inDim hidDim outDim : Nat) :
    Graph
      [ Shape.Mat hidDim inDim, Shape.Vec hidDim
      , Shape.Mat outDim hidDim, Shape.Vec outDim ]
      (Shape.Vec inDim) (Shape.Vec outDim) :=
  Models.mlp inDim hidDim outDim >>> Graph.softmax (Shape.Vec outDim)
```
Then:
- `Interp.spec (g …)` is a pure function `Params → Tensor → Tensor`;
- `Compile.torchProgram (g …)` is an executable TorchLean `Program` with arguments `params ++ [input]`.
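Concretely, both consumers can be named as ordinary definitions. This is a sketch: the text only guarantees that `Interp.spec (g …)` is a pure function and that `Compile.torchProgram (g …)` is an executable TorchLean program; the shape of the resulting types is not spelled out here, so we leave them inferred.

```lean
-- Sketch: specialize the classifier head to its two artifacts.
def headSpec (inDim hidDim outDim : Nat) :=
  Interp.spec (g inDim hidDim outDim)
  -- pure reference semantics: Params → Tensor → Tensor

def headProgram (inDim hidDim outDim : Nat) :=
  Compile.torchProgram (g inDim hidDim outDim)
  -- executable TorchLean Program; invoked with params ++ [input]
```

Keeping both around is what enables the equivalence proofs this file targets: `headSpec` is the specification side, `headProgram` the executable side, and they are built from the same `Graph` term.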