Float32 modes tutorial #
This tutorial runs the same compact MLP under two executable scalar backends:
- Float: Lean's runtime Float (binary64 on ordinary platforms, trusted/external semantics);
- TorchLean.Floats.IEEE32Exec: TorchLean's executable bit-level IEEE-754 binary32 model.
We run a single forward pass and a single reverse-mode VJP (seeded with 1.0) and then report
max_abs_diff between the Float result and the IEEE32Exec result (converted back to Float).
PyTorch analogue:
import torch
import torch.nn as nn
model64 = nn.Sequential(nn.Linear(2, 3), nn.ReLU(), nn.Linear(3, 1)).to(torch.float64)
model32 = nn.Sequential(nn.Linear(2, 3), nn.ReLU(), nn.Linear(3, 1)).to(torch.float32)
model32.load_state_dict(model64.state_dict())  # copy the same weights; copy_ casts to float32
x = torch.tensor([[0.5, -1.5]])                # example input (values arbitrary)
y64 = model64(x.to(torch.float64))
y32 = model32(x)
y64.backward(torch.ones_like(y64))             # reverse-mode VJP seeded with 1.0
y32.backward(torch.ones_like(y32))
# Compare outputs (and likewise the .grad fields) after casting to a common dtype:
print((y64 - y32.to(torch.float64)).abs().max())
The difference in TorchLean is that the scalar backend is part of the typeclass context: the model
and the autograd call are generic, and only the scalar type α changes.
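A rough PyTorch analogue of that polymorphism is a runner parameterized by dtype; the helper run below is illustrative, not part of the tutorial:
import torch
import torch.nn as nn
def run(dtype: torch.dtype) -> torch.Tensor:
    # The "generic model": only the scalar type changes between calls.
    torch.manual_seed(0)  # same initial weights for both dtypes
    model = nn.Sequential(nn.Linear(2, 3), nn.ReLU(), nn.Linear(3, 1)).to(dtype)
    y = model(torch.tensor([[0.5, -1.5]], dtype=dtype))
    y.backward(torch.ones_like(y))  # VJP seeded with 1.0
    return y
diff = (run(torch.float64) - run(torch.float32).to(torch.float64)).abs().max()  # mirrors max_abs_diff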
Run:
lake exe torchlean float32_modes
For editor inspection, put the cursor on the #float32_* commands below. Those widgets are for
visualization only; the actual tutorial code uses ordinary def and IO definitions.
Float32 widget probes #
0.1 is the canonical "not exactly representable" decimal: it has no finite binary expansion. The
widget shows the binary32 value that IEEE32Exec receives after rounding from the host literal.
PyTorch analogue:
torch.tensor(0.1, dtype=torch.float32)
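Printing the stored value at full precision makes the rounding visible (plain PyTorch, outside the widget):
import torch
x = torch.tensor(0.1, dtype=torch.float32)
print(f"{x.item():.17g}")  # 0.10000000149011612, not 0.1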
A simple finite binary32 value for the bit-layout widget.
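The same sign/exponent/mantissa split can be inspected outside the widget; this plain-Python sketch uses the standard binary32 layout, with 1.5 standing in for whatever value the definition actually holds:
import struct
bits = struct.unpack('<I', struct.pack('<f', 1.5))[0]  # 1.5 -> 0x3FC00000
sign     = bits >> 31           # 0
exponent = (bits >> 23) & 0xFF  # 127 (biased), i.e. 2^0
mantissa = bits & 0x7FFFFF      # 0x400000, the fraction bits of 1.1_2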
Canonical quiet NaN, useful for showing classification and comparison behavior.
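The behavior the widget demonstrates is standard IEEE-754 and can be reproduced in plain Python; 0x7FC00000 is the canonical binary32 quiet-NaN bit pattern:
import math, struct
qnan = struct.unpack('<f', struct.pack('<I', 0x7FC00000))[0]
print(math.isnan(qnan))        # True: classified as NaN
print(qnan == qnan)            # False: NaN is unordered, even against itself
print(qnan < 0.0, qnan > 0.0)  # False False: all ordered comparisons fail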