Module for preprocessing torch classes to prepare them for various distributed environments

This module is essentially a barebones version of Accelerate: it only affects the outermost layer of the modules, covering just what is needed for these tests.

So, for example, dispatched dataloaders are not part of this, nor is anything that affects the underlying dataset.

Preprocessors

prepare_model[source]

prepare_model(model:Module, **kwargs)

Prepares a model for distributed training; kwargs are passed on to DistributedDataParallel (DDP)

         Type      Default   Details
model    Module              A PyTorch model to wrap
kwargs                       Keyword arguments passed on to DDP
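
As a rough usage sketch (assuming prepare_model is imported from this module; TinyModel is an illustrative stand-in for any PyTorch model), keyword arguments are forwarded to DDP:

    # Illustrative only: wrap a plain nn.Module for distributed training.
    import torch.nn as nn

    class TinyModel(nn.Module):
        def __init__(self):
            super().__init__()
            self.fc = nn.Linear(8, 2)

        def forward(self, x):
            return self.fc(x)

    # Any keyword arguments (e.g. find_unused_parameters) are forwarded to DDP.
    model = prepare_model(TinyModel(), find_unused_parameters=True)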

prepare_optimizer[source]

prepare_optimizer(opt:Optimizer)
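
prepare_optimizer takes a standard torch.optim optimizer; a minimal sketch, assuming it returns an object that is used like a normal optimizer (for example the OptimizerInterface described below):

    # Illustrative only; `model` is the prepared model from the sketch above.
    import torch

    opt = torch.optim.SGD(model.parameters(), lr=1e-3)
    opt = prepare_optimizer(opt)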

prepare_scheduler[source]

prepare_scheduler(sched:_LRScheduler)
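
prepare_scheduler takes a torch.optim learning-rate scheduler; a minimal sketch, assuming it returns an object that is used like a normal scheduler (for example the SchedulerInterface described below):

    # Illustrative only; `opt` is the prepared optimizer from the sketch above.
    from torch.optim.lr_scheduler import StepLR

    sched = StepLR(opt, step_size=10)
    sched = prepare_scheduler(sched)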

prepare_modules[source]

prepare_modules(*modules)

Prepares a set of modules; only PyTorch models, optimizers, and schedulers are supported
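
Putting the pieces together, a hedged sketch of preparing everything in one call (the assumption here is that the objects come back in the same order they were passed in):

    # Illustrative only; model, opt, and sched are the objects from the sketches above.
    model, opt, sched = prepare_modules(model, opt, sched)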

Interfaces

The interface classes that prepare_modules may wrap objects in

class OptimizerInterface[source]

OptimizerInterface(optimizer) :: Optimizer

Basic optimizer wrapper that performs the right step call for TPU

OptimizerInterface.state_dict[source]

OptimizerInterface.state_dict()

Passthrough to the wrapped optimizer's state_dict

OptimizerInterface.step[source]

OptimizerInterface.step(closure=None)

Passthrough unless on TPU, in which case the right stepper is called
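
On TPU, the "right stepper" generally refers to torch_xla's xm.optimizer_step, which performs the step together with the cross-replica gradient reduction. A minimal sketch of that pattern (not the library's actual code; tpu_aware_step is a hypothetical name):

    # Sketch of the TPU-aware step pattern, not the actual implementation.
    def tpu_aware_step(optimizer, closure=None):
        try:
            import torch_xla.core.xla_model as xm  # only available in TPU environments
        except ImportError:
            xm = None
        if xm is not None:
            # xm.optimizer_step also reduces gradients across TPU cores
            xm.optimizer_step(optimizer)
        else:
            optimizer.step(closure)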

OptimizerInterface.zero_grad[source]

OptimizerInterface.zero_grad()

Passthrough to zero_grad

class SchedulerInterface[source]

SchedulerInterface(scheduler, num_processes)

Wrapper to step the scheduler the right number of times

SchedulerInterface.step[source]

SchedulerInterface.step(*args, **kwargs)

Passthrough to scheduler.step, but steps the right number of times for the distributed setup
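
With each of num_processes workers taking its own optimizer step, the wrapper advances the underlying scheduler once per process so the learning-rate schedule matches single-process training. A minimal sketch of that behavior (an assumption, not the actual source; step_scheduler is a hypothetical name):

    # Sketch only: advance the wrapped scheduler once per process.
    def step_scheduler(scheduler, num_processes, *args, **kwargs):
        for _ in range(num_processes):
            scheduler.step(*args, **kwargs)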