# GPU Training

Accelerate neural network training using `gpu` blocks.
## GPU-Accelerated Training Loop
```
module NN;

Init method() {
    model is Classifier();
    optimizer is NN.Adam(model, 0.001);  // Adam with learning rate 0.001

    // Everything inside the gpu block runs on the GPU
    gpu {
        for (epoch is 0; epoch < 100; epoch++) {
            predictions is model.Forward(trainData);
            loss is NN.CrossEntropy(predictions, labels);
            loss.Backward();       // compute gradients
            optimizer.Step();      // update parameters
        }
    }
}
```
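The shape of the loop above (forward pass, loss, backward pass, parameter update) can be sketched in plain Python. This is an illustrative analogue only, not the `NN` module's API: `Classifier` and `NN.Adam` are stood in for by a one-parameter linear model with a hand-derived MSE gradient and plain gradient descent.

```python
# Plain-Python analogue of the gpu-block training loop above.
train_data = [1.0, 2.0, 3.0, 4.0]
labels = [2.0, 4.0, 6.0, 8.0]  # target relationship: y = 2x

w = 0.0    # single model parameter
lr = 0.01  # learning rate

def forward(w, xs):
    return [w * x for x in xs]

def mse(preds, ys):
    return sum((p - y) ** 2 for p, y in zip(preds, ys)) / len(ys)

for epoch in range(100):
    preds = forward(w, train_data)        # model.Forward(trainData)
    loss = mse(preds, labels)             # loss computation
    # "Backward": d(loss)/dw for MSE of a linear model
    grad = sum(2 * (p - y) * x
               for p, y, x in zip(preds, labels, train_data)) / len(labels)
    w -= lr * grad                        # optimizer.Step()

# w converges toward 2.0
```

The real `gpu` block additionally handles device placement and kernel dispatch; the numerical structure of the loop is the same.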
## Model Save / Load
```
// Save trained model
NN.Save(model, "model.bin");

// Load saved model
loaded is NN.Load("model.bin");
```
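The save/load round trip can be sketched in Python with `pickle`. This is an analogue of the pattern, not the actual `model.bin` format, which is defined by the `NN` module.

```python
import os
import pickle
import tempfile

# Stand-in for a trained model's parameters (hypothetical contents).
model = {"weights": [0.5, -1.2, 3.3], "bias": 0.1}

path = os.path.join(tempfile.mkdtemp(), "model.bin")

# Analogue of NN.Save(model, "model.bin")
with open(path, "wb") as f:
    pickle.dump(model, f)

# Analogue of loaded is NN.Load("model.bin")
with open(path, "rb") as f:
    loaded = pickle.load(f)

assert loaded == model  # round trip preserves parameters
```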
## Mixed Precision

On GPUs with hardware float16 support, mixed precision can speed up training:
```
gpu {
    // Float16 operations where possible
    predictions is model.Forward(data as Tensor<half>);
}
```
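Float16 halves memory and bandwidth at the cost of precision, which is why mixed precision keeps some operations in float32. The effect can be demonstrated in Python with the `struct` module's `'e'` format, which packs an IEEE 754 half-precision value:

```python
import struct

def to_half(x):
    # Round-trip a Python float through IEEE 754 half precision.
    return struct.unpack("e", struct.pack("e", x))[0]

half_tenth = to_half(0.1)   # close to, but not exactly, 0.1
exact_half = to_half(0.5)   # 0.5 is exactly representable in float16
```

A float16 value carries only a 10-bit mantissa (roughly 3 decimal digits), so accumulations such as loss sums are typically kept in float32 even when forward-pass arithmetic runs in half precision.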
## Best Practices
- Keep data on the GPU to avoid unnecessary host-device transfers
- Use batch sizes that are powers of 2 (32, 64, 128, 256)
- Monitor loss with periodic CPU readback rather than reading back every step
- Save checkpoints regularly so long runs can be resumed
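The power-of-2 guideline above can be enforced with a small helper that rounds a requested batch size down to the nearest power of two. The function name is hypothetical, for illustration only:

```python
def pow2_batch(n):
    # Largest power of 2 that does not exceed n (n >= 1).
    p = 1
    while p * 2 <= n:
        p *= 2
    return p

# pow2_batch(100) -> 64; pow2_batch(256) -> 256
```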
## Next Steps

- Toolchain: CLI reference