gluonts.model.n_beats package

class gluonts.model.n_beats.NBEATSEstimator(freq: str, prediction_length: int, context_length: Optional[int] = None, trainer: gluonts.trainer._base.Trainer = gluonts.trainer._base.Trainer(batch_size=32, clip_gradient=10.0, ctx=None, epochs=100, hybridize=True, init="xavier", learning_rate=0.001, learning_rate_decay_factor=0.5, minimum_learning_rate=5e-05, num_batches_per_epoch=50, patience=10, weight_decay=1e-08), num_stacks: int = 30, widths: Optional[List[int]] = None, num_blocks: Optional[List[int]] = None, num_block_layers: Optional[List[int]] = None, expansion_coefficient_lengths: Optional[List[int]] = None, sharing: Optional[List[bool]] = None, stack_types: Optional[List[str]] = None, loss_function: Optional[str] = 'MAPE', **kwargs)[source]

Bases: gluonts.model.estimator.GluonEstimator

An Estimator based on a single (!) NBEATS Network (approximately) as described in the paper: https://arxiv.org/abs/1905.10437. The actual NBEATS model is an ensemble of NBEATS Networks, and is implemented by the “NBEATSEnsembleEstimator”.

Noteworthy differences in this implementation compared to the paper:
  • The parameter L_H is not implemented; we sample training sequences using GluonTS's default method, the “InstanceSplitter”.

Parameters
  • freq – Time granularity of the data.

  • prediction_length – Length of the prediction. Also known as ‘horizon’.

  • context_length – Number of time units that condition the predictions, also known as the ‘lookback period’. Default is 2 * prediction_length.

  • trainer – Trainer object to be used (default: Trainer())

  • num_stacks – The number of stacks the network should contain. Default and recommended value for generic mode: 30. Recommended value for interpretable mode: 2.

  • num_blocks – The number of blocks per stack. A list of ints of length 1 or ‘num_stacks’. Default and recommended value for generic mode: [1]. Recommended value for interpretable mode: [3].

  • num_block_layers – Number of fully connected layers with ReLU activation per block. A list of ints of length 1 or ‘num_stacks’. Default and recommended value for generic mode: [4]. Recommended value for interpretable mode: [4].

  • widths – Widths of the fully connected layers with ReLU activation in the blocks. A list of ints of length 1 or ‘num_stacks’. Default and recommended value for generic mode: [512]. Recommended value for interpretable mode: [256, 2048].

  • sharing – Whether the weights are shared among the blocks of each stack. A list of bools of length 1 or ‘num_stacks’. Default and recommended value for generic mode: [False]. Recommended value for interpretable mode: [True].

  • expansion_coefficient_lengths – If the type is “G” (generic), this is the length of the expansion coefficient. If the type is “T” (trend), it corresponds to the degree of the polynomial. If the type is “S” (seasonal), it is not used. A list of ints of length 1 or ‘num_stacks’. Default value for generic mode: [32]. Recommended value for interpretable mode: [3].

  • stack_types – One of the following values: “G” (generic), “S” (seasonal) or “T” (trend). A list of strings of length 1 or ‘num_stacks’. Default and recommended value for generic mode: [“G”]. Recommended value for interpretable mode: [“T”, “S”].

  • loss_function – The loss function (also known as metric) to use for training the network. Unlike other models in GluonTS, this network does not use a distribution. One of the following: “sMAPE”, “MASE” or “MAPE”. The default value is “MAPE”.

  • kwargs – Arguments passed to ‘GluonEstimator’.
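
A minimal construction sketch (not part of the original documentation), assuming daily data and a one-week horizon; the generic-mode call relies on the defaults listed above, while the interpretable-mode values follow the recommendations in the parameter list. The frequency, horizon and epoch count are illustrative choices.

    from gluonts.model.n_beats import NBEATSEstimator
    from gluonts.trainer import Trainer

    # Generic mode: the defaults (30 stacks of one generic block each) already
    # apply, so only freq and prediction_length are strictly required.
    generic_estimator = NBEATSEstimator(
        freq="D",
        prediction_length=7,
        trainer=Trainer(epochs=10),  # shortened run, for illustration only
    )

    # Interpretable mode: a trend stack followed by a seasonal stack, using the
    # recommended values from the parameter list above.
    interpretable_estimator = NBEATSEstimator(
        freq="D",
        prediction_length=7,
        num_stacks=2,
        num_blocks=[3],
        num_block_layers=[4],
        widths=[256, 2048],
        sharing=[True],
        expansion_coefficient_lengths=[3],
        stack_types=["T", "S"],
        loss_function="MAPE",
    )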

create_predictor(transformation: gluonts.transform._base.Transformation, trained_network: mxnet.gluon.block.HybridBlock) → gluonts.model.predictor.Predictor[source]

Create and return a predictor object.

Returns

A predictor wrapping a HybridBlock used for inference.

Return type

Predictor

create_training_network() → mxnet.gluon.block.HybridBlock[source]

Create and return the network used for training (i.e., computing the loss).

Returns

The network that computes the loss given input data.

Return type

HybridBlock

create_transformation() → gluonts.transform._base.Transformation[source]

Create and return the transformation needed for training and inference.

Returns

The transformation that will be applied entry-wise to datasets, at training and inference time.

Return type

Transformation
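
The three create_* methods are hooks invoked by the GluonEstimator training machinery. The following is a conceptual sketch of that flow, under the assumption that training roughly wires the hooks together as shown; the real implementation also handles data loading, the training loop and parameter copying.

    # Conceptual sketch (simplified, assumed flow) of how the three hooks are
    # combined during training; not the actual GluonEstimator implementation.
    def train_flow_sketch(estimator, training_data):
        transformation = estimator.create_transformation()      # entry-wise preprocessing
        training_network = estimator.create_training_network()  # HybridBlock computing the loss
        # ... fit training_network on the transformed training_data using estimator.trainer ...
        return estimator.create_predictor(transformation, training_network)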

freq = None
prediction_length = None
class gluonts.model.n_beats.NBEATSEnsembleEstimator(freq: str, prediction_length: int, meta_context_length: Optional[List[int]] = None, meta_loss_function: Optional[List[str]] = None, meta_bagging_size: int = 10, trainer: gluonts.trainer._base.Trainer = gluonts.trainer._base.Trainer(batch_size=32, clip_gradient=10.0, ctx=None, epochs=100, hybridize=True, init="xavier", learning_rate=0.001, learning_rate_decay_factor=0.5, minimum_learning_rate=5e-05, num_batches_per_epoch=50, patience=10, weight_decay=1e-08), num_stacks: int = 30, widths: Optional[List[int]] = None, num_blocks: Optional[List[int]] = None, num_block_layers: Optional[List[int]] = None, expansion_coefficient_lengths: Optional[List[int]] = None, sharing: Optional[List[bool]] = None, stack_types: Optional[List[str]] = None, **kwargs)[source]

Bases: gluonts.model.estimator.Estimator

An ensemble N-BEATS Estimator (approximately) as described in the paper: https://arxiv.org/abs/1905.10437.

The three meta parameters ‘meta_context_length’, ‘meta_loss_function’ and ‘meta_bagging_size’ together define how the sub-models are assembled. The total number of models used for the ensemble is:

|meta_context_length| x |meta_loss_function| x meta_bagging_size
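
With the default meta parameters documented below (five context-length multipliers, three loss functions, bagging size 10), this product works out to 150 member models. A small sketch of the count, using an arbitrary prediction_length of 7:

    # Member-model count under the default meta parameters documented below.
    prediction_length = 7  # arbitrary example horizon
    meta_context_length = [m * prediction_length for m in range(2, 7)]  # 5 values
    meta_loss_function = ["sMAPE", "MASE", "MAPE"]                      # 3 values
    meta_bagging_size = 10

    num_models = len(meta_context_length) * len(meta_loss_function) * meta_bagging_size
    print(num_models)  # 5 * 3 * 10 = 150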

Noteworthy differences in this implementation compared to the paper:
  • The parameter L_H is not implemented; we sample training sequences using GluonTS's default method, the “InstanceSplitter”.

Parameters
  • freq – Time granularity of the data.

  • prediction_length – Length of the prediction. Also known as ‘horizon’.

  • meta_context_length – The different ‘context_length’ values (also known as the ‘lookback period’) to use for training the models. The ‘context_length’ is the number of time units that condition the predictions. Default and recommended value: [multiplier * prediction_length for multiplier in range(2, 7)].

  • meta_loss_function – The different ‘loss_function’ values (also known as metrics) to use for training the models. Unlike other models in GluonTS, this network does not use a distribution. Default and recommended value: [“sMAPE”, “MASE”, “MAPE”].

  • meta_bagging_size – The number of models that share the parameter combination of ‘context_length’ and ‘loss_function’. Each of these models gets a different random initialization. Default and recommended value: 10.

  • trainer – Trainer object to be used (default: Trainer())

  • num_stacks – The number of stacks the network should contain. Default and recommended value for generic mode: 30. Recommended value for interpretable mode: 2.

  • num_blocks – The number of blocks per stack. A list of ints of length 1 or ‘num_stacks’. Default and recommended value for generic mode: [1]. Recommended value for interpretable mode: [3].

  • num_block_layers – Number of fully connected layers with ReLU activation per block. A list of ints of length 1 or ‘num_stacks’. Default and recommended value for generic mode: [4]. Recommended value for interpretable mode: [4].

  • widths – Widths of the fully connected layers with ReLU activation in the blocks. A list of ints of length 1 or ‘num_stacks’. Default and recommended value for generic mode: [512]. Recommended value for interpretable mode: [256, 2048].

  • sharing – Whether the weights are shared among the blocks of each stack. A list of bools of length 1 or ‘num_stacks’. Default and recommended value for generic mode: [False]. Recommended value for interpretable mode: [True].

  • expansion_coefficient_lengths – If the type is “G” (generic), this is the length of the expansion coefficient. If the type is “T” (trend), it corresponds to the degree of the polynomial. If the type is “S” (seasonal), it is not used. A list of ints of length 1 or ‘num_stacks’. Default value for generic mode: [32]. Recommended value for interpretable mode: [3].

  • stack_types – One of the following values: “G” (generic), “S” (seasonal) or “T” (trend). A list of strings of length 1 or ‘num_stacks’. Default and recommended value for generic mode: [“G”]. Recommended value for interpretable mode: [“T”, “S”].

  • **kwargs – Arguments passed down to the individual estimators.
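
A hedged construction sketch with a deliberately reduced ensemble (2 context lengths x 1 loss function x bagging size 3 = 6 member models instead of the default 150); the hourly frequency, horizon and epoch count are illustrative choices.

    from gluonts.model.n_beats import NBEATSEnsembleEstimator
    from gluonts.trainer import Trainer

    # Reduced ensemble for illustration: 2 context lengths x 1 loss function
    # x bagging size 3 = 6 member models.
    ensemble_estimator = NBEATSEnsembleEstimator(
        freq="H",
        prediction_length=24,
        meta_context_length=[2 * 24, 3 * 24],
        meta_loss_function=["MAPE"],
        meta_bagging_size=3,
        trainer=Trainer(epochs=10),
    )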

freq = None
classmethod from_hyperparameters(**hyperparameters) → gluonts.model.n_beats._ensemble.NBEATSEnsembleEstimator[source]
prediction_length = None
train(training_data: Iterable[Dict[str, Any]], validation_data: Optional[Iterable[Dict[str, Any]]] = None) → gluonts.model.n_beats._ensemble.NBEATSEnsemblePredictor[source]

Train the estimator on the given data.

Parameters
  • training_data – Dataset to train the model on.

  • validation_data – Dataset to validate the model on during training.

Returns

The predictor containing the trained model.

Return type

Predictor
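
Continuing the construction sketch above, a usage sketch of train(); the get_dataset helper and the “m4_hourly” dataset from gluonts.dataset.repository.datasets serve only as an illustrative data source.

    from gluonts.dataset.repository.datasets import get_dataset

    dataset = get_dataset("m4_hourly")  # illustrative data source

    # train() fits every member model and returns an NBEATSEnsemblePredictor.
    predictor = ensemble_estimator.train(dataset.train)

    # With the default aggregation_method='median', each yielded Forecast
    # aggregates the member models' predictions with the median.
    forecasts = list(predictor.predict(dataset.test))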

class gluonts.model.n_beats.NBEATSEnsemblePredictor(prediction_length: int, freq: str, predictors: List[gluonts.model.predictor.RepresentableBlockPredictor], aggregation_method: Optional[str] = 'median')[source]

Bases: gluonts.model.predictor.Predictor

An ensemble predictor for N-BEATS. Calling ‘.predict’ will result in:

|predictors| x |dataset|

predictions if aggregation_method is ‘none’, otherwise in:

|dataset|

predictions.

Parameters
  • prediction_length – Prediction horizon.

  • freq – Frequency of the predicted data.

  • predictors – The list of ‘RepresentableBlockPredictor’ that the ensemble consists of.

  • aggregation_method – The method by which to aggregate the individual predictions of the models. Either ‘median’, ‘mean’ or ‘none’, in which case no aggregation happens. Default is ‘median’.

classmethod deserialize(path: pathlib.Path, ctx: Optional[mxnet.context.Context] = None) → gluonts.model.n_beats._ensemble.NBEATSEnsemblePredictor[source]

Load a serialized predictor from the given path.

Parameters
  • path – Path to the files of the serialized predictor.

  • ctx – Optional MXNet context to be used with the predictor. If nothing is passed, the GPU will be used if available, and the CPU otherwise.
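
A round-trip sketch, assuming a trained predictor as in the train() example above; the directory name is illustrative.

    from pathlib import Path

    import mxnet as mx

    from gluonts.model.n_beats import NBEATSEnsemblePredictor

    model_dir = Path("nbeats_ensemble_model")  # illustrative directory
    model_dir.mkdir(parents=True, exist_ok=True)

    # Persist the trained ensemble ...
    predictor.serialize(model_dir)

    # ... and load it back, optionally pinning the member predictors to a context.
    restored = NBEATSEnsemblePredictor.deserialize(model_dir, ctx=mx.cpu())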

hybridize(batch: Dict[str, Any]) → None[source]
predict(dataset: Iterable[Dict[str, Any]], num_samples: Optional[int] = 1, **kwargs) → Iterator[gluonts.model.forecast.Forecast][source]

Compute forecasts for the time series in the provided dataset.

Parameters

  • dataset – The dataset containing the time series to predict.

Returns

Iterator over the forecasts, in the same order as the dataset iterable was provided.

Return type

Iterator[Forecast]

serialize(path: pathlib.Path) → None[source]
set_aggregation_method(aggregation_method: str)[source]
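
A short sketch of switching off aggregation to inspect the raw member forecasts, continuing the examples above; per the aggregation_method description, ‘none’ yields one forecast per (member predictor, series) pair.

    # With 'none', predict() yields |predictors| x |dataset| forecasts
    # instead of |dataset| aggregated ones.
    predictor.set_aggregation_method("none")
    per_member_forecasts = list(predictor.predict(dataset.test))

    # Restore the default behaviour (median aggregation across members).
    predictor.set_aggregation_method("median")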