gluonts.model.deepvar package

class gluonts.model.deepvar.DeepVAREstimator(freq: str, prediction_length: int, target_dim: int, trainer: gluonts.trainer._base.Trainer = gluonts.trainer._base.Trainer(batch_size=32, clip_gradient=10.0, ctx=None, epochs=100, hybridize=True, init="xavier", learning_rate=0.001, learning_rate_decay_factor=0.5, minimum_learning_rate=5e-05, num_batches_per_epoch=50, patience=10, weight_decay=1e-08), context_length: Optional[int] = None, num_layers: int = 2, num_cells: int = 40, cell_type: str = 'lstm', num_parallel_samples: int = 100, dropout_rate: float = 0.1, cardinality: List[int] = [1], embedding_dimension: int = 5, distr_output: Optional[gluonts.distribution.distribution_output.DistributionOutput] = None, rank: Optional[int] = 5, scaling: bool = True, pick_incomplete: bool = False, lags_seq: Optional[List[int]] = None, time_features: Optional[List[gluonts.time_feature._base.TimeFeature]] = None, conditioning_length: int = 200, use_marginal_transformation=False, **kwargs)[source]

Bases: gluonts.model.estimator.GluonEstimator

Constructs a DeepVAR estimator, which is a multivariate variant of DeepAR.

These models have been described as VEC-LSTM in this paper: https://arxiv.org/abs/1910.03002

Note that this implementation will change over time as we continue to work on this method. To replicate the results of the paper, please refer to our (frozen) implementation here: https://github.com/mbohlkeschneider/gluon-ts/tree/mv_release

Parameters
  • freq – Frequency of the data to train on and predict

  • prediction_length – Length of the prediction horizon

  • target_dim – Dimensionality of the input dataset

  • trainer – Trainer object to be used (default: Trainer())

  • context_length – Number of steps to unroll the RNN for before computing predictions (default: None, in which case context_length = prediction_length)

  • num_layers – Number of RNN layers (default: 2)

  • num_cells – Number of RNN cells for each layer (default: 40)

  • cell_type – Type of recurrent cells to use (available: ‘lstm’ or ‘gru’; default: ‘lstm’)

  • num_parallel_samples – Number of evaluation samples per time series to increase parallelism during inference. This is a model optimization that does not affect the accuracy (default: 100)

  • dropout_rate – Dropout regularization parameter (default: 0.1)

  • cardinality – Number of values of each categorical feature (default: [1])

  • embedding_dimension – Dimension of the embeddings for categorical features (default: 5)

  • distr_output – Distribution to use to evaluate observations and sample predictions (default: LowrankMultivariateGaussianOutput with dim=target_dim and rank=5). Note that the target dimension of the DistributionOutput must match the target_dim passed to this constructor. Also note that the rank argument of this constructor is ignored if the DistributionOutput is constructed outside of this class.

  • rank – Rank for the LowrankMultivariateGaussianOutput. (default: 5)

  • scaling – Whether to automatically scale the target values (default: True)

  • pick_incomplete – Whether training examples can be sampled with only part of the past_length time units (default: False)

  • lags_seq – Indices of the lagged target values to use as inputs of the RNN (default: None, in which case these are automatically determined based on freq)

  • time_features – Time features to use as inputs of the RNN (default: None, in which case these are automatically determined based on freq)

  • conditioning_length – Maximum length used to condition the marginal transformation (default: 200)

  • use_marginal_transformation – Whether to apply the marginal (empirical CDF, Gaussian PPF) transformation (default: False)
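The following is a minimal usage sketch (not part of the original docstring): it constructs a DeepVAREstimator for a 4-dimensional hourly target and trains it. The name train_ds is a placeholder for a GluonTS multivariate Dataset (e.g. built with MultivariateGrouper) whose entries carry a (target_dim, T)-shaped target; in newer GluonTS releases the Trainer import lives under gluonts.mx.trainer instead of gluonts.trainer.

    from gluonts.model.deepvar import DeepVAREstimator
    from gluonts.trainer import Trainer

    estimator = DeepVAREstimator(
        freq="H",              # hourly data
        prediction_length=24,  # forecast one day ahead
        target_dim=4,          # dimensionality of the multivariate target
        context_length=48,     # unroll the RNN over two days of history
        rank=2,                # rank of the low-rank Gaussian output
        trainer=Trainer(epochs=20, learning_rate=1e-3),
    )

    # train() returns a Predictor wrapping the trained network.
    predictor = estimator.train(train_ds)  # train_ds: placeholder multivariate Dataset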

create_predictor(transformation: gluonts.transform._base.Transformation, trained_network: mxnet.gluon.block.HybridBlock) → gluonts.model.predictor.Predictor[source]

Create and return a predictor object.

Returns

A predictor wrapping a HybridBlock used for inference.

Return type

Predictor
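As a hedged illustration of using the returned Predictor (assuming a sample-based predictor that accepts a num_samples keyword, and a placeholder test dataset test_ds):

    # Draw sample paths for each test entry; for a multivariate model the
    # sample array is typically shaped (num_samples, prediction_length, target_dim).
    forecasts = list(predictor.predict(test_ds, num_samples=100))
    print(forecasts[0].samples.shape)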

create_training_network() → gluonts.model.deepvar._network.DeepVARTrainingNetwork[source]

Create and return the network used for training (i.e., computing the loss).

Returns

The network that computes the loss given input data.

Return type

HybridBlock

create_transformation() → gluonts.transform._base.Transformation[source]

Create and return the transformation needed for training and inference.

Returns

The transformation that will be applied entry-wise to datasets, at training and inference time.

Return type

Transformation
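The three create_* methods above are the hooks that GluonEstimator uses during train(): the transformation prepares the data, the trainer fits the training network, and create_predictor wraps the trained network for inference. A small sketch that inspects the first two pieces directly, reusing the estimator built in the earlier example:

    # Both calls are documented above and take no arguments.
    transformation = estimator.create_transformation()
    training_net = estimator.create_training_network()
    print(type(transformation).__name__)  # a gluonts Transformation (typically a chain)
    print(type(training_net).__name__)    # DeepVARTrainingNetwork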

freq = None
prediction_length = None