
gluonts.model.seq2seq package

class gluonts.model.seq2seq.MQCNNEstimator(prediction_length: int, freq: str, context_length: Optional[int] = None, mlp_final_dim: int = 20, mlp_hidden_dimension_seq: List[int] = [], quantiles: List[float] = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9], trainer: gluonts.trainer._base.Trainer = gluonts.trainer._base.Trainer(batch_size=32, clip_gradient=10.0, ctx=None, epochs=100, hybridize=True, init="xavier", learning_rate=0.001, learning_rate_decay_factor=0.5, minimum_learning_rate=5e-05, num_batches_per_epoch=50, patience=10, weight_decay=1e-08))[source]

Bases: gluonts.model.seq2seq._mq_dnn_estimator.MQDNNEstimator

An MQDNNEstimator with a Convolutional Neural Network (CNN) as an encoder. Implements the MQ-CNN Forecaster, proposed in [WTN+17].
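The `quantiles` parameter controls which quantiles the network learns to predict; training minimizes the quantile (pinball) loss for each. A minimal sketch of that loss in plain Python (illustrative only, not GluonTS's internal implementation):

```python
def quantile_loss(y_true: float, y_pred: float, q: float) -> float:
    """Pinball loss for a single quantile level q.

    Under-prediction is penalized with weight q and over-prediction
    with weight (1 - q), so minimizing this loss in expectation
    yields the q-th conditional quantile.
    """
    diff = y_true - y_pred
    return max(q * diff, (q - 1) * diff)

# Under-predicting (y_true=10, y_pred=8) is penalized more at high quantiles:
losses = [quantile_loss(10.0, 8.0, q) for q in (0.1, 0.5, 0.9)]
# → [0.2, 1.0, 1.8]
```

Summing this loss over all requested quantile levels gives the multi-quantile objective that MQ-CNN (and MQ-RNN below) optimize jointly.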

class gluonts.model.seq2seq.MQRNNEstimator(prediction_length: int, freq: str, context_length: Optional[int] = None, mlp_final_dim: int = 20, mlp_hidden_dimension_seq: List[int] = [], trainer: gluonts.trainer._base.Trainer = gluonts.trainer._base.Trainer(batch_size=32, clip_gradient=10.0, ctx=None, epochs=100, hybridize=True, init="xavier", learning_rate=0.001, learning_rate_decay_factor=0.5, minimum_learning_rate=5e-05, num_batches_per_epoch=50, patience=10, weight_decay=1e-08), quantiles: List[float] = [0.1, 0.5, 0.9])[source]

Bases: gluonts.model.seq2seq._mq_dnn_estimator.MQDNNEstimator

An MQDNNEstimator with a Recurrent Neural Network (RNN) as an encoder. Implements the MQ-RNN Forecaster, proposed in [WTN+17].

class gluonts.model.seq2seq.RNN2QRForecaster(freq: str, prediction_length: int, cardinality: List[int], embedding_dimension: int, encoder_rnn_layer: int, encoder_rnn_num_hidden: int, decoder_mlp_layer: List[int], decoder_mlp_static_dim: int, encoder_rnn_model: str = 'lstm', encoder_rnn_bidirectional: bool = True, scaler: gluonts.block.scaler.Scaler = <class 'gluonts.block.scaler.NOPScaler'>, context_length: Optional[int] = None, quantiles: List[float] = [0.1, 0.5, 0.9], trainer: gluonts.trainer._base.Trainer = gluonts.trainer._base.Trainer(batch_size=32, clip_gradient=10.0, ctx=None, epochs=100, hybridize=True, init="xavier", learning_rate=0.001, learning_rate_decay_factor=0.5, minimum_learning_rate=5e-05, num_batches_per_epoch=50, patience=10, weight_decay=1e-08), num_parallel_samples: int = 100)[source]

Bases: gluonts.model.seq2seq._seq2seq_estimator.Seq2SeqEstimator

class gluonts.model.seq2seq.Seq2SeqEstimator(freq: str, prediction_length: int, cardinality: List[int], embedding_dimension: int, encoder: gluonts.block.encoder.Seq2SeqEncoder, decoder_mlp_layer: List[int], decoder_mlp_static_dim: int, scaler: gluonts.block.scaler.Scaler = gluonts.block.scaler.NOPScaler(), context_length: Optional[int] = None, quantiles: List[float] = [0.1, 0.5, 0.9], trainer: gluonts.trainer._base.Trainer = gluonts.trainer._base.Trainer(batch_size=32, clip_gradient=10.0, ctx=None, epochs=100, hybridize=True, init="xavier", learning_rate=0.001, learning_rate_decay_factor=0.5, minimum_learning_rate=5e-05, num_batches_per_epoch=50, patience=10, weight_decay=1e-08), num_parallel_samples: int = 100)[source]

Bases: gluonts.model.estimator.GluonEstimator

Quantile-Regression Sequence-to-Sequence Estimator
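The estimator wires a `Seq2SeqEncoder` to an MLP decoder that emits one output per quantile per future step. A toy sketch of that overall shape in plain Python (hypothetical function, not the GluonTS classes: the "encoder" here is just the context mean and the "decoder" a fixed per-quantile offset, whereas the real estimator learns both mappings):

```python
from typing import List

def toy_seq2seq_forecast(
    context: List[float],        # past target values (length = context_length)
    prediction_length: int,
    quantiles: List[float],
) -> List[List[float]]:
    """Illustrative stand-in for the encoder/decoder pipeline."""
    # "Encoder": compress the context window into a fixed-size state.
    state = sum(context) / len(context)
    # "Decoder": for each future step, one value per requested quantile.
    horizon = []
    for _ in range(prediction_length):
        horizon.append([state + (q - 0.5) for q in quantiles])
    return horizon

forecast = toy_seq2seq_forecast([1.0, 2.0, 3.0], prediction_length=2,
                                quantiles=[0.1, 0.5, 0.9])
# Each forecast step carries one value per quantile, ordered like `quantiles`.
```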

create_predictor(transformation: gluonts.transform.Transformation, trained_network: gluonts.model.seq2seq._seq2seq_network.Seq2SeqTrainingNetwork) → gluonts.model.predictor.Predictor[source]

Create and return a predictor object.

Returns: A predictor wrapping a HybridBlock used for inference.
Return type: Predictor
create_training_network() → mxnet.gluon.block.HybridBlock[source]

Create and return the network used for training (i.e., computing the loss).

Returns: The network that computes the loss given input data.
Return type: HybridBlock
create_transformation() → gluonts.transform.Transformation[source]

Create and return the transformation needed for training and inference.

Returns: The transformation that will be applied entry-wise to datasets, at training and inference time.
Return type: Transformation
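Taken together, these three methods define the estimator's lifecycle: training applies the transformation to the dataset, fits the training network, and wraps the result via create_predictor. A schematic mock in plain Python of how the pieces compose (hypothetical `MockEstimator`; the real GluonEstimator trains an MXNet HybridBlock with gluonts.trainer.Trainer instead):

```python
class MockEstimator:
    """Schematic of the create_transformation / create_training_network /
    create_predictor lifecycle; values and 'training' are stand-ins."""

    def create_transformation(self):
        # Real code returns a gluonts.transform.Transformation chain.
        return lambda entry: {**entry,
                              "scaled": [x / 10.0 for x in entry["target"]]}

    def create_training_network(self):
        # Real code returns an mxnet.gluon HybridBlock that computes the loss.
        return {"weights": 0.0}

    def create_predictor(self, transformation, trained_network):
        # Real code wraps the trained HybridBlock in a Predictor object.
        def predict(entry):
            entry = transformation(entry)  # same transformation at inference
            return [trained_network["weights"] + s for s in entry["scaled"]]
        return predict

    def train(self, dataset):
        transformation = self.create_transformation()
        network = self.create_training_network()
        network["weights"] = 1.0  # stand-in for the Trainer's fit loop
        return self.create_predictor(transformation, network)

predictor = MockEstimator().train([{"target": [10.0, 20.0]}])
```

Note that the same transformation object is used both when preparing training batches and inside the predictor, which is why `create_predictor` receives it as an argument.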