PINN PyTorch Checkpoint Loading #
PINN weight import (JSON → typed tensors).
This is the “loading” half of the PINN PyTorch bridge:
- read a PyTorch-style state dictionary encoded as JSON,
- parse each weight/bias matrix into a shape-checked `Tensor Float …` (shape
  inferred/checked),
- infer a TorchLean `SequentialPINNArch` from the layer shapes plus optional
  activation metadata.
The generic JSON helpers (`loadWeights?`, `parseTensor`, `inferMatrixDims`, …) live in
`NN/Runtime/PyTorch/Import/Core.lean`.
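As a sketch of what the dimension-inference helper has to do, here is a standalone version built on Lean's built-in `Lean.Json` API. The name `inferDims` and the error messages are illustrative; the real helpers live in `Core.lean`:

```lean
import Lean.Data.Json
open Lean

/-- Infer `(outDim, inDim)` from a JSON matrix (an array of equal-length
rows), failing on empty or ragged input. Sketch only — stand-in for the
real `inferMatrixDims`. -/
def inferDims (j : Json) : Except String (Nat × Nat) := do
  let rows ← j.getArr?
  let some first := rows[0]? | throw "empty matrix"
  let cols := (← first.getArr?).size
  for row in rows do
    unless (← row.getArr?).size == cols do
      throw "ragged matrix: rows have unequal lengths"
  pure (rows.size, cols)
```

The resulting pair is what gets checked against the declared `Spec.Tensor` shape (or used to construct it) when a layer is parsed.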
One fully-connected layer of a PINN (weights + bias) as trained in PyTorch.
- inDim : ℕ
Input dimension.
- outDim : ℕ
Output dimension.
- weights : Spec.Tensor Float (Spec.Shape.dim self.outDim (Spec.Shape.dim self.inDim Spec.Shape.scalar))
Weight matrix, stored as outDim × inDim (PyTorch's nn.Linear layout).
- bias : Spec.Tensor Float (Spec.Shape.dim self.outDim Spec.Shape.scalar)
Bias vector of length outDim.
A parsed PINN state dict together with the inferred TorchLean sequential PINN architecture.
Inferred sequential fully-connected PINN architecture.
The stack of fully-connected layers, in application order.
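To make the shapes concrete, here is a simplified sketch of how such a layer stack is applied. `Layer` and `forward` are illustrative stand-ins (plain arrays instead of `Spec.Tensor`); the activation is applied between layers but not after the last, which is the usual convention for PINN MLPs:

```lean
/-- Simplified stand-in for one fully-connected layer. -/
structure Layer where
  weights : Array (Array Float)  -- outDim rows of inDim entries
  bias    : Array Float          -- outDim entries

/-- Compute y = W·x + b, iterating rows and bias entries in lockstep. -/
def Layer.apply (l : Layer) (x : Array Float) : Array Float := Id.run do
  let mut y := #[]
  for row in l.weights, b in l.bias do
    y := y.push ((row.zip x).foldl (fun acc wv => acc + wv.1 * wv.2) b)
  pure y

/-- Apply the stack: activation between layers, none after the last. -/
def forward (layers : List Layer) (act : Float → Float)
    (x : Array Float) : Array Float :=
  match layers with
  | []      => x
  | [last]  => last.apply x
  | l :: ls => forward ls act ((l.apply x).map act)
```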
Activation metadata #
Python training scripts often record which nonlinearity they used. We treat that as optional
metadata under `meta.activation`. If it is missing (or unknown), we default to `tanh`.
Internal: parse optional activation metadata from JSON (meta.activation).
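A minimal sketch of that lookup using `Lean.Json`'s standard accessors. The `Activation` type and the `relu`/`sin` alternatives are illustrative; only the `tanh` default is taken from the behavior described above:

```lean
import Lean.Data.Json
open Lean

/-- Illustrative activation type; the real one lives in TorchLean. -/
inductive Activation
  | tanh | relu | sin
  deriving Repr, BEq

/-- Read `meta.activation`; any missing or unrecognized value falls
back to `tanh`. -/
def parseActivation (j : Json) : Activation :=
  match j.getObjVal? "meta" >>= (·.getObjVal? "activation") >>= (·.getStr?) with
  | .ok "relu" => .relu
  | .ok "sin"  => .sin
  | _          => .tanh  -- missing or unknown ⇒ tanh
```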
Load a PINN state dict with arbitrary hidden widths.
Expected keys:
- `layers.<i>.weight` and `layers.<i>.bias` for each layer index `i`
- optional `meta.activation`
Unlike fixed-shape demos, we infer (outDim, inDim) for each layer from the JSON matrix shape.
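The key-walking loop can be sketched as follows, assuming the dotted state-dict names appear as flat keys of a single JSON object (as PyTorch state dicts name their entries); the helper name is illustrative:

```lean
import Lean.Data.Json
open Lean

/-- Collect `(weight, bias)` JSON pairs for layers 0, 1, … stopping at
the first index whose weight key is absent. Sketch only. -/
def collectLayers (j : Json) : Except String (Array (Json × Json)) := do
  let mut acc := #[]
  let mut i := 0
  repeat
    match j.getObjVal? s!"layers.{i}.weight" with
    | .error _ => break  -- no more layers
    | .ok w =>
      let b ← j.getObjVal? s!"layers.{i}.bias"
      acc := acc.push (w, b)
      i := i + 1
  pure acc
```

Each collected matrix is then measured (outDim × inDim) from its JSON shape, which is what lets this loader accept arbitrary hidden widths rather than one fixed architecture.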