
gluonts.model.deep_factor package

class gluonts.model.deep_factor.DeepFactorEstimator(freq: str, prediction_length: int, num_hidden_global: int = 50, num_layers_global: int = 1, num_factors: int = 10, num_hidden_local: int = 5, num_layers_local: int = 1, cell_type: str = 'lstm', trainer: gluonts.trainer._base.Trainer = gluonts.trainer._base.Trainer(batch_size=32, clip_gradient=10.0, ctx=None, epochs=100, hybridize=True, init="xavier", learning_rate=0.001, learning_rate_decay_factor=0.5, minimum_learning_rate=5e-05, num_batches_per_epoch=50, patience=10, weight_decay=1e-08), context_length: Optional[int] = None, num_eval_samples: int = 100, cardinality: List[int] = [1], embedding_dimension: int = 10, distr_output: gluonts.distribution.distribution_output.DistributionOutput = gluonts.distribution.student_t.StudentTOutput())[source]

Bases: gluonts.model.estimator.GluonEstimator

DeepFactorEstimator is an implementation of the 2019 ICML paper “Deep Factors for Forecasting” (https://arxiv.org/abs/1905.12417). It uses a global RNN model to learn patterns across multiple related time series and an arbitrary local model to model each time series individually. In the current implementation, the local model is an RNN (DF-RNN).

Parameters:
  • freq – Time series frequency.
  • prediction_length – Prediction length.
  • num_hidden_global – Number of units per hidden layer for the global RNN model (default: 50).
  • num_layers_global – Number of hidden layers for the global RNN model (default: 1).
  • num_factors – Number of global factors (default: 10).
  • num_hidden_local – Number of units per hidden layer for the local RNN model (default: 5).
  • num_layers_local – Number of hidden layers for the local RNN model (default: 1).
  • cell_type – Type of recurrent cells to use (available: ‘lstm’ or ‘gru’; default: ‘lstm’).
  • trainer – Trainer object to be used (default: Trainer()).
  • context_length – Training length (default: None, in which case context_length = prediction_length).
  • num_eval_samples – Number of evaluation samples per time series to increase parallelism during inference. This is a model optimization that does not affect the accuracy (default: 100).
  • cardinality – List consisting of the number of time series (default: [1]).
  • embedding_dimension – Dimension of the embeddings for categorical features (the same dimension is used for all embeddings, default: 10).
  • distr_output – Distribution to use to evaluate observations and sample predictions (default: StudentTOutput()).
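The estimator follows the usual GluonEstimator workflow: construct it, call train() on a dataset, and use the returned Predictor for inference. Below is a minimal, illustrative sketch, assuming a GluonTS release matching this API with the MXNet backend installed; the synthetic dataset and hyperparameter values are arbitrary.

```python
import numpy as np

from gluonts.dataset.common import ListDataset
from gluonts.model.deep_factor import DeepFactorEstimator
from gluonts.trainer import Trainer

# Ten related hourly series of length 200 (synthetic, for illustration only).
training_data = ListDataset(
    [
        {"start": "2019-01-01 00:00:00", "target": np.random.rand(200)}
        for _ in range(10)
    ],
    freq="H",
)

estimator = DeepFactorEstimator(
    freq="H",
    prediction_length=24,
    num_factors=10,
    trainer=Trainer(epochs=5, num_batches_per_epoch=10),
)

# train() fits the global and local networks and returns a Predictor.
predictor = estimator.train(training_data)

# Each forecast carries num_eval_samples sample paths for the next 24 hours.
forecasts = list(predictor.predict(training_data))
print(forecasts[0].mean)
```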
create_predictor(transformation: gluonts.transform.Transformation, trained_network: gluonts.model.deep_factor._network.DeepFactorTrainingNetwork) → gluonts.model.predictor.Predictor[source]

Create and return a predictor object.

Returns: A predictor wrapping a HybridBlock used for inference.
Return type: Predictor
create_training_network() → gluonts.model.deep_factor._network.DeepFactorTrainingNetwork[source]

Create and return the network used for training (i.e., computing the loss).

Returns: The network that computes the loss given input data.
Return type: HybridBlock
create_transformation() → gluonts.transform.Transformation[source]

Create and return the transformation needed for training and inference.

Returns: The transformation that will be applied entry-wise to datasets, at training and inference time.
Return type: Transformation
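The three methods above are hooks that GluonEstimator.train calls in sequence: create_transformation builds the data pipeline, create_training_network builds the loss-computing network that the Trainer fits, and create_predictor wraps the fitted network for inference. You rarely call them yourself, but the first two can be inspected directly on an estimator instance. A hedged sketch, assuming the same GluonTS/MXNet setup as the example above (variable names are illustrative):

```python
from gluonts.model.deep_factor import DeepFactorEstimator

estimator = DeepFactorEstimator(freq="H", prediction_length=24)

# Data pipeline applied entry-wise to datasets at training and inference time.
transformation = estimator.create_transformation()

# Network whose forward pass returns the training loss (an MXNet HybridBlock).
training_network = estimator.create_training_network()

print(type(transformation).__name__)
print(type(training_network).__name__)

# At the end of train(), GluonTS passes the transformation and the *fitted*
# training network to create_predictor() to obtain the inference Predictor;
# calling create_predictor() on an untrained network is not meaningful.
```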