gluonts.model.simple_feedforward package
class gluonts.model.simple_feedforward.SimpleFeedForwardEstimator(freq: str, prediction_length: int, trainer: gluonts.trainer._base.Trainer = gluonts.trainer._base.Trainer(batch_size=32, clip_gradient=10.0, ctx=None, epochs=100, hybridize=True, init="xavier", learning_rate=0.001, learning_rate_decay_factor=0.5, minimum_learning_rate=5e-05, num_batches_per_epoch=50, patience=10, weight_decay=1e-08), num_hidden_dimensions: Optional[List[int]] = None, context_length: Optional[int] = None, distr_output: gluonts.distribution.distribution_output.DistributionOutput = gluonts.distribution.student_t.StudentTOutput(), batch_normalization: bool = False, mean_scaling: bool = True, num_parallel_samples: int = 100)

Bases: gluonts.model.estimator.GluonEstimator
SimpleFeedForwardEstimator shows how to build a simple MLP model predicting the next target time-steps given the previous ones.
Since we want to define a Gluon model trainable by SGD, we inherit from the parent class GluonEstimator, which handles most of the logic of fitting a neural network.
We thus only have to define:
How the data is transformed before being fed to our model:

    def create_transformation(self) -> Transformation

How the training happens:

    def create_training_network(self) -> HybridBlock

How the predictions can be made for a batch given a trained network:

    def create_predictor(
        self,
        transformation: Transformation,
        trained_net: HybridBlock,
    ) -> Predictor
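The division of responsibilities between the three methods can be sketched in plain Python. Note that Transformation, TrainingNetwork, and Predictor below are hypothetical stand-in classes for illustration only, not the real GluonTS/MXNet types:

```python
# Sketch of the estimator contract described above. The classes here are
# placeholders: the real methods return gluonts.transform.Transformation,
# an mxnet.gluon.HybridBlock, and a gluonts.model.predictor.Predictor.

class Transformation:
    """Placeholder for gluonts.transform.Transformation."""

class TrainingNetwork:
    """Placeholder for a HybridBlock whose forward pass computes the loss."""

class Predictor:
    """Placeholder for gluonts.model.predictor.Predictor."""
    def __init__(self, transformation, network):
        self.transformation = transformation
        self.network = network

class SketchEstimator:
    def create_transformation(self) -> Transformation:
        # How the data is transformed before being fed to the model.
        return Transformation()

    def create_training_network(self) -> TrainingNetwork:
        # The network used during training to compute the loss.
        return TrainingNetwork()

    def create_predictor(self, transformation: Transformation,
                         trained_net: TrainingNetwork) -> Predictor:
        # Wrap the trained network (plus the transformation) for inference.
        return Predictor(transformation, trained_net)
```

The base class drives the actual fitting loop; a concrete estimator only supplies these three pieces.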
Parameters: - freq – Time granularity of the data
- prediction_length – Length of the prediction horizon
- trainer – Trainer object to be used (default: Trainer())
- num_hidden_dimensions – Number of hidden nodes in each layer (default: [40, 40])
- context_length – Number of time units that condition the predictions (default: None, in which case context_length = prediction_length)
- distr_output – Distribution to fit (default: StudentTOutput())
- batch_normalization – Whether to use batch normalization (default: False)
- mean_scaling – Scale the network input by the data mean and the network output by its inverse (default: True)
- num_parallel_samples – Number of evaluation samples per time series to increase parallelism during inference. This is a model optimization that does not affect the accuracy (default: 100)
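The mean_scaling behavior (scale the network input by the data mean, scale the output by its inverse) can be illustrated with a small NumPy sketch; forecast_with_mean_scaling and the identity "network" passed to it are hypothetical, for illustration only:

```python
import numpy as np

def forecast_with_mean_scaling(context: np.ndarray, network) -> np.ndarray:
    # Scale the network input by the mean absolute value of the
    # context window, so series of very different magnitudes look
    # similar to the network...
    scale = np.mean(np.abs(context))
    scale = scale if scale > 0 else 1.0
    scaled_output = network(context / scale)
    # ...and scale the network output back by the inverse.
    return scaled_output * scale

# With an identity "network", the round trip leaves the values unchanged.
context = np.array([10.0, 20.0, 30.0])
out = forecast_with_mean_scaling(context, lambda x: x)
```

This kind of scaling makes training more robust across time series with widely varying magnitudes, without changing what the model ultimately predicts.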
create_predictor(transformation: gluonts.transform.Transformation, trained_network: mxnet.gluon.block.HybridBlock) → gluonts.model.predictor.Predictor

Create and return a predictor object.

Returns: A predictor wrapping a HybridBlock used for inference.
Return type: Predictor
create_training_network() → mxnet.gluon.block.HybridBlock

Create and return the network used for training (i.e., computing the loss).

Returns: The network that computes the loss given input data.
Return type: HybridBlock
create_transformation() → gluonts.transform.Transformation

Create and return the transformation needed for training and inference.

Returns: The transformation that will be applied entry-wise to datasets, at training and inference time.
Return type: Transformation
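At inference time the predictor draws sample paths from the fitted output distribution, and num_parallel_samples controls how many are drawn per time series. The effect on output shape can be sketched with NumPy; sample_forecast is a hypothetical helper, and a plain normal distribution stands in for the fitted Student-t:

```python
import numpy as np

def sample_forecast(mean: float, scale: float, prediction_length: int,
                    num_parallel_samples: int = 100) -> np.ndarray:
    # Draw num_parallel_samples sample paths of length prediction_length
    # in a single batched call. A normal distribution stands in here for
    # the fitted Student-t; batching the draws only speeds up inference
    # and does not change the forecast distribution itself.
    rng = np.random.default_rng(0)
    return rng.normal(mean, scale,
                      size=(num_parallel_samples, prediction_length))

paths = sample_forecast(mean=0.0, scale=1.0, prediction_length=24)
# paths.shape == (100, 24): one row per sample path
```

Downstream metrics (e.g. quantiles of the forecast) are then computed across the first axis of this array.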