The foundational event system for `Performer` based on fastai `Callback`s

DeviceType[source]

Enum = [CPU, CUDA]

Enum of all supported device placements

get_default_device[source]

get_default_device()

Returns `DeviceType.CPU` if a GPU is not available, else `DeviceType.CUDA`
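As a rough sketch of the check this performs (the enum is re-declared here only to keep the example self-contained, assuming its values map to the standard PyTorch device strings):

from enum import Enum
import torch

class DeviceType(Enum):
    CPU = "cpu"
    CUDA = "cuda"

def get_default_device():
    # Prefer CUDA whenever PyTorch can see a GPU
    return DeviceType.CUDA if torch.cuda.is_available() else DeviceType.CPU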

ManagerType[source]

Enum = [NO_GRAD, INFERENCE, NONE]

Enum of the context manager options available when running inference

NO_GRAD[source]

Run with torch.no_grad

INFERENCE[source]

Run with torch.inference_mode

NONE[source]

Keep all gradients and apply no context managers
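To make these options concrete, here is an illustrative mapping from each member to the context manager it names (`get_context` is a hypothetical helper, not part of the library's API):

from contextlib import nullcontext
import torch

def get_context(manager):
    # Hypothetical helper: pick the context manager named by a ManagerType
    if manager is ManagerType.NO_GRAD:
        return torch.no_grad()          # disables gradient tracking
    if manager is ManagerType.INFERENCE:
        return torch.inference_mode()   # stricter, faster no-grad context
    return nullcontext()                # NONE: keep gradients, no wrapping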

class InferenceConfiguration[source]

InferenceConfiguration() :: ABC

The foundational class for customizing behaviors during inference.

Three methods must be implemented:

  • after_drawn_batch
  • gather_predictions
  • decoding_values

If an implementation should keep the default behavior, delegate to the parent class:

  • `event_name(self, *args): return super().event_name(*args)`

where `event_name` is any of the three events listed above.

A context can be set with a `ManagerType` to choose which context manager is run at inference time
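For example, a subclass might select `torch.inference_mode` like this (the `context` attribute name is an assumption for illustration; check the library source for the exact spelling):

class FastInferenceConfiguration(InferenceConfiguration):
    def __init__(self):
        # `context` is a hypothetical attribute name for the ManagerType
        self.context = ManagerType.INFERENCE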

InferenceConfiguration.gather_predictions[source]

InferenceConfiguration.gather_predictions(model, batch)

Performs inference with `model` on `batch`. Any inference-specific context manager, such as `no_grad` or `inference_mode`, is applied by `Performer`.

The default implementation is `model(*batch)`.

# With only one level of inheritance, it is easy to track where each behavior comes from
class ImageClassifierConfiguration(InferenceConfiguration):
    def __init__(self, vocab):
        self.vocab = vocab
    def after_drawn_batch(self, batch):
        # Keep the default behavior by delegating to the parent class
        return super().after_drawn_batch(batch)
    def gather_predictions(self, model, batch):
        return model(*batch)
    def decoding_values(self, values):
        # Convert raw logits to probabilities before decoding class names
        probs = values.softmax(dim=-1)
        preds = probs.argmax(dim=-1)
        decoded_preds = [self.vocab[p] for p in preds]
        return {"classes": decoded_preds, "probabilities": probs.max(dim=-1).values}
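A quick way to exercise this configuration without a `Performer` (a toy sketch using a stand-in linear model; the wiring through `Performer` itself is omitted):

import torch
from torch import nn

vocab = ["cat", "dog", "fish"]
config = ImageClassifierConfiguration(vocab)

model = nn.Linear(4, len(vocab))  # stand-in for a real image classifier
batch = (torch.randn(2, 4),)      # a drawn batch as a tuple of model inputs

with torch.inference_mode():      # what ManagerType.INFERENCE would apply
    values = config.gather_predictions(model, batch)

outputs = config.decoding_values(values)
print(outputs["classes"])         # e.g. ['dog', 'cat']
print(outputs["probabilities"])   # confidence of each predicted class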