ViT-Style Real-Data Example
A runnable `torchlean vit` example. It trains a compact ViT-style image classifier on a
prepared CIFAR-10 minibatch: patch embedding by convolution, token reshape, a transformer
block, and a linear classification head.
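The patch-embedding step above has a simple shape story worth spelling out. The sketch below is plain Lean arithmetic, not the torchlean API; the 4×4 patch size is a hypothetical choice for illustration (the patch size actually used by `torchlean vit` is not stated here).

```lean
-- Shape arithmetic for patch embedding: a stride-p convolution over a
-- 32×32 CIFAR-10 image produces (32 / p) × (32 / p) spatial positions,
-- which the token reshape flattens into a sequence of tokens.
def numTokens (imgSize patchSize : Nat) : Nat :=
  (imgSize / patchSize) * (imgSize / patchSize)

-- With a hypothetical 4×4 patch: 8 × 8 = 64 tokens per image.
#eval numTokens 32 4  -- 64
```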
The reusable model wiring lives in NN.API.Models.Vit (nn.models.vit1). This file is the
runnable wrapper: CIFAR loader construction plus the multi-epoch training loop.
python3 scripts/datasets/download_example_data.py --cifar10
lake build -R -K cuda=true && lake exe torchlean vit --cuda --n-total 200 --steps 1
Tip: the defaults are tuned for a quick sanity run. For a longer run, raise --steps and
--n-total, and enable the CUDA fused kernels:
lake build -R -K cuda=true
lake exe torchlean vit --cuda --fast-kernels --n-total 2000 --steps 50
def NN.Examples.Models.Vision.Vit.loadCifarLoader
    {α : Type}
    [API.Semantics.Scalar α]
    [API.Runtime.Scalar α]
    (xPath yPath : System.FilePath)
    (nRows seed : ℕ) :