Simple MLP training example (regression)
This is a focused end-to-end example of training a small MLP in TorchLean.
It mirrors the simplest PyTorch workflow:
- build a small synthetic dataset (in-memory),
- define an MLP (Linear -> ReLU -> Linear),
- train with Adam,
- report loss before/after, plus a few probe predictions.
Run:
  lake exe torchlean quickstart_mlp
  lake exe torchlean quickstart_mlp --steps 200 --dtype float --backend eager
Optional flags (tutorial-specific):
- --seed S (model init + any shuffling)
- --steps N
Regression task: model + MSE loss.
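To make the loss concrete: mean-squared error averages the squared prediction error over the dataset. A minimal sketch on plain Float lists (the library computes this over tensors, and the names below are illustrative, not the library API):

```lean
-- Hypothetical standalone MSE sketch; assumes non-empty, equal-length lists.
def mseLoss (preds targets : List Float) : Float :=
  let sq := List.zipWith (fun p t => (p - t) * (p - t)) preds targets
  sq.foldl (· + ·) 0.0 / Float.ofNat preds.length
```

Adam then minimizes this quantity over the MLP's parameters; an empty `preds` list would divide by zero, which the real training loop never hits.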
def NN.Examples.Quickstart.SimpleMLPTrain.target {α : Type} [API.Semantics.Scalar α] [API.Runtime.Scalar α] (x1 x2 : α) : α
Small piecewise-linear regression target:
y = 0.8 * relu(x1 + x2) - 0.4 * relu(x2 - x1) + 0.2.
This is easy for a compact ReLU MLP to fit, which keeps the quickstart dependable.
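The same target can be written down as a standalone sketch specialized to Float (the library version above is polymorphic over the runtime scalar α; `relu` here is an illustrative local helper, not the library's):

```lean
-- Hypothetical Float-only version of the quickstart target.
def relu (x : Float) : Float := max x 0.0

def targetF (x1 x2 : Float) : Float :=
  0.8 * relu (x1 + x2) - 0.4 * relu (x2 - x1) + 0.2

-- e.g. targetF 1.0 1.0 ≈ 0.8 * 2.0 - 0.4 * 0.0 + 0.2 = 1.8
```

Both ReLU hinges are at lines through the origin (x1 + x2 = 0 and x2 - x1 = 0), so a two-layer ReLU MLP can represent the target almost exactly.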
def NN.Examples.Quickstart.SimpleMLPTrain.buildDataset {α : Type} [API.Semantics.Scalar α] [API.Runtime.Scalar α] :
Build the training dataset at the runtime-selected scalar type α.
We write the sample coordinates as Float literals first because they are convenient to read,
and Data.supervisedDim0F lifts them into the chosen runtime scalar backend.
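The shape of the raw data before lifting can be sketched as labeled Float pairs; this is an illustrative guess at the structure, since the exact signature of `Data.supervisedDim0F` is not shown here:

```lean
-- Hypothetical raw samples: inputs (x1, x2) paired with the target value.
-- Data.supervisedDim0F (per the docstring) would lift these Float literals
-- into the runtime-selected scalar α.
def targetF (x1 x2 : Float) : Float :=
  0.8 * max (x1 + x2) 0.0 - 0.4 * max (x2 - x1) 0.0 + 0.2

def rawSamples : List ((Float × Float) × Float) :=
  [(-1.0, 0.5), (0.0, 0.0), (0.5, -0.5), (1.0, 1.0)].map
    (fun p => (p, targetF p.1 p.2))
```

Writing the literals as Float and lifting once keeps the tutorial readable while still letting the `--dtype` flag pick the backend scalar.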