Define the general fastai optimizer and variants
For the actual fastai documentation, you should go to the Optimizer documentation. These are minimal docs, intended only to bring in the source code and related tests to ensure that minimal functionality is met.
OptimWrapper
Examples
Below are some examples of using OptimWrapper with PyTorch optimizers:
@delegates(optim.Adam)
def Adam(params, **kwargs):
    "Convenience function to make an Adam optimizer compatible with `Learner`"
    return OptimWrapper(optim.Adam(params, **kwargs))

@delegates(optim.SGD)
def SGD(params, **kwargs):
    "Convenience function to make an SGD optimizer compatible with `Learner`"
    return OptimWrapper(optim.SGD(params, **kwargs))
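Either wrapper can then be passed to `Learner` through its `opt_func` argument. The snippet below is only a minimal sketch, assuming `dls` is a `DataLoaders` object and `model` is an `nn.Module` you already have (both are hypothetical placeholders):

import torch.nn.functional as F
from fastai.basics import Learner

# `dls` and `model` are stand-ins for your own DataLoaders and nn.Module.
learn = Learner(dls, model, loss_func=F.mse_loss, opt_func=Adam)
learn.fit(1, lr=1e-3)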
Differential Learning Rates and Groups with PyTorch Optimizers
Out of the box, OptimWrapper is not able to use param groups and differential learning rates the way fastai's native optimizers can. Below are the necessary helper functions, along with a short tutorial.
def _mock_train(m, x, y, opt):
    "Run a few mini-batches of training on `m` with `opt`, for testing that the optimizer updates the model"
    m.train()
    for i in range(0, 100, 25):
        z = m(x[i:i+25])
        loss = F.mse_loss(z, y[i:i+25])
        loss.backward()
        opt.step()
        opt.zero_grad()
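As a hedged aside on the param-group point above: a plain PyTorch optimizer already accepts torch-style parameter groups, i.e. a list of dicts each carrying its own `lr`, and the wrapped result can be exercised with `_mock_train`. This shows the torch-native way of expressing per-group learning rates, distinct from fastai's own param-group handling; the model, tensors, and learning rates here are made up for illustration:

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch import optim

# Toy data and model, purely for illustration.
x = torch.randn(100, 4)
y = torch.randn(100, 1)
m = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))

# Torch-style differential learning rates: one group per sub-module.
groups = [
    {'params': m[0].parameters(), 'lr': 1e-3},  # earlier layer, smaller lr
    {'params': m[2].parameters(), 'lr': 1e-2},  # later layer, larger lr
]
opt = OptimWrapper(optim.SGD(groups, lr=1e-2))
_mock_train(m, x, y, opt)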