Table Of Contents


Quick Start Tutorial

The GluonTS toolkit contains components and tools for building time series models using MXNet. The models currently included are forecasting models, but the components also support other time series use cases, such as classification or anomaly detection.

The toolkit is not intended as a forecasting solution for businesses or end users; rather, it targets scientists and engineers who want to tweak algorithms or build and experiment with their own models.

GluonTS contains:

  • Components for building new models (likelihoods, feature processing pipelines, calendar features etc.)
  • Data loading and processing
  • A number of pre-built models
  • Plotting and evaluation facilities
  • Artificial and real datasets (only external datasets with blessed license)
In [1]:
# Third-party imports
%matplotlib inline
import mxnet as mx
from mxnet import gluon
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import json

Datasets

GluonTS datasets

GluonTS comes with a number of publicly available datasets.

In [2]:
from gluonts.dataset.repository.datasets import get_dataset, dataset_recipes
from gluonts.dataset.util import to_pandas
In [3]:
print(f"Available datasets: {list(dataset_recipes.keys())}")
Available datasets: ['constant', 'exchange_rate', 'solar-energy', 'electricity', 'traffic', 'm4_hourly', 'm4_daily', 'm4_weekly', 'm4_monthly', 'm4_quarterly', 'm4_yearly']

To download one of the built-in datasets, simply call get_dataset with one of the above names. GluonTS can re-use the saved dataset so that it does not need to be downloaded again: simply set regenerate=False.

In [4]:
dataset = get_dataset("m4_hourly", regenerate=True)
INFO:root:downloading and processing m4_hourly
saving time-series into /var/lib/jenkins/.mxnet/gluon-ts/datasets/m4_hourly/train/data.json
saving time-series into /var/lib/jenkins/.mxnet/gluon-ts/datasets/m4_hourly/test/data.json

In general, the datasets provided by GluonTS are objects that consist of three main members:

  • dataset.train is an iterable collection of data entries used for training. Each entry corresponds to one time series
  • dataset.test is an iterable collection of data entries used for inference. The test dataset is an extended version of the train dataset that contains a window at the end of each time series that was not seen during training. This window has length equal to the recommended prediction length.
  • dataset.metadata contains metadata of the dataset such as the frequency of the time series, a recommended prediction horizon, associated features, etc.
In [5]:
entry = next(iter(dataset.train))
train_series = to_pandas(entry)
train_series.plot()
plt.grid(which="both")
plt.legend(["train series"], loc="upper left")
plt.show()
../../_images/examples_basic_forecasting_tutorial_tutorial_8_0.png
In [6]:
entry = next(iter(dataset.test))
test_series = to_pandas(entry)
test_series.plot()
plt.axvline(train_series.index[-1], color='r') # end of train dataset
plt.grid(which="both")
plt.legend(["test series", "end of train series"], loc="upper left")
plt.show()
../../_images/examples_basic_forecasting_tutorial_tutorial_9_0.png
In [7]:
print(f"Length of forecasting window in test dataset: {len(test_series) - len(train_series)}")
print(f"Recommended prediction horizon: {dataset.metadata.prediction_length}")
print(f"Frequency of the time series: {dataset.metadata.freq}")
Length of forecasting window in test dataset: 48
Recommended prediction horizon: 48
Frequency of the time series: H
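The relationship between the train and test splits can be sketched with plain NumPy (illustrative shapes only, not the actual GluonTS data structures):

```python
import numpy as np

# Illustrative only: a test series extends its train counterpart by the
# recommended prediction length; the extra window is unseen in training.
prediction_length = 48
train_target = np.arange(700)
test_target = np.arange(700 + prediction_length)

assert np.array_equal(test_target[:len(train_target)], train_target)
assert len(test_target) - len(train_target) == prediction_length
```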

Custom datasets

At this point, it is important to emphasize that GluonTS does not require this specific format for a custom dataset that a user may have. The only requirements for a custom dataset are that it is iterable and that each entry has a “target” and a “start” field. To make this clearer, assume the common case where a dataset is in the form of a numpy.array and the start index of each time series is a pandas.Timestamp (possibly different for each time series):

In [8]:
N = 10  # number of time series
T = 100  # number of timesteps
prediction_length = 24
freq = "1H"
custom_dataset = np.random.normal(size=(N, T))
start = pd.Timestamp("01-01-2019", freq=freq)  # can be different for each time series

Now, you can split your dataset and bring it into a GluonTS-appropriate format with just two lines of code:

In [9]:
from gluonts.dataset.common import ListDataset
In [10]:
# train dataset: cut the last window of length "prediction_length", add "target" and "start" fields
train_ds = ListDataset([{'target': x, 'start': start}
                        for x in custom_dataset[:, :-prediction_length]],
                       freq=freq)
# test dataset: use the whole dataset, add "target" and "start" fields
test_ds = ListDataset([{'target': x, 'start': start}
                       for x in custom_dataset],
                      freq=freq)
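As a quick sanity check (using only NumPy, with the same shapes as above): the training targets drop the final prediction_length points, while the test targets keep the full length:

```python
import numpy as np

N, T, prediction_length = 10, 100, 24
custom_dataset = np.random.normal(size=(N, T))

# the slices fed into train_ds above drop the last prediction_length points
train_targets = custom_dataset[:, :-prediction_length]
assert train_targets.shape == (N, T - prediction_length)  # (10, 76)
assert custom_dataset.shape == (N, T)                     # (10, 100), used for test_ds
```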

Training an existing model (Estimator)

GluonTS comes with a number of pre-built models. All the user needs to do is configure some hyperparameters. The existing models focus on (but are not limited to) probabilistic forecasting. Probabilistic forecasts are predictions in the form of a probability distribution, rather than simply a single point estimate.

We will begin with GluonTS’s pre-built feedforward neural network estimator, a simple but powerful forecasting model. We will use this model to demonstrate the process of training a model, producing forecasts, and evaluating the results.

GluonTS’s built-in feedforward neural network (SimpleFeedForwardEstimator) accepts an input window of length context_length and predicts the distribution of the subsequent prediction_length values. In GluonTS parlance, the feedforward neural network model is an example of an Estimator. In GluonTS, Estimator objects represent a forecasting model as well as details such as its coefficients, weights, etc.

In general, each estimator (pre-built or custom) is configured by a number of hyperparameters that can be either common (but not binding) among all estimators (e.g., the prediction_length) or specific for the particular estimator (e.g., number of layers for a neural network or the stride in a CNN).

Finally, each estimator is configured by a Trainer, which defines how the model will be trained i.e., the number of epochs, the learning rate, etc.

In [11]:
from gluonts.model.simple_feedforward import SimpleFeedForwardEstimator
from gluonts.trainer import Trainer
INFO:root:Using CPU
In [12]:
estimator = SimpleFeedForwardEstimator(
    num_hidden_dimensions=[10],
    prediction_length=dataset.metadata.prediction_length,
    context_length=100,
    freq=dataset.metadata.freq,
    trainer=Trainer(ctx="cpu",
                    epochs=5,
                    learning_rate=1e-3,
                    num_batches_per_epoch=100
                   )
)

After specifying our estimator with all the necessary hyperparameters, we can train it using our training dataset dataset.train by invoking the train method of the estimator. The training algorithm returns a fitted model (or a Predictor in GluonTS parlance) that can be used to construct forecasts.

In [13]:
predictor = estimator.train(dataset.train)
INFO:root:Start model training
INFO:root:Number of parameters in SimpleFeedForwardTrainingNetwork: 483
INFO:root:Epoch[0] Learning rate is 0.001
100%|██████████| 100/100 [00:00<00:00, 186.13it/s, avg_epoch_loss=5.53]
INFO:root:Epoch[0] Elapsed time 0.543 seconds
INFO:root:Epoch[0] Evaluation metric 'epoch_loss'=5.532786
INFO:root:Epoch[1] Learning rate is 0.001
100%|██████████| 100/100 [00:00<00:00, 207.67it/s, avg_epoch_loss=4.75]
INFO:root:Epoch[1] Elapsed time 0.483 seconds
INFO:root:Epoch[1] Evaluation metric 'epoch_loss'=4.745639
INFO:root:Epoch[2] Learning rate is 0.001
100%|██████████| 100/100 [00:00<00:00, 210.95it/s, avg_epoch_loss=4.65]
INFO:root:Epoch[2] Elapsed time 0.475 seconds
INFO:root:Epoch[2] Evaluation metric 'epoch_loss'=4.646042
INFO:root:Epoch[3] Learning rate is 0.001
100%|██████████| 100/100 [00:00<00:00, 208.60it/s, avg_epoch_loss=4.72]
INFO:root:Epoch[3] Elapsed time 0.480 seconds
INFO:root:Epoch[3] Evaluation metric 'epoch_loss'=4.721551
INFO:root:Epoch[4] Learning rate is 0.001
100%|██████████| 100/100 [00:00<00:00, 210.59it/s, avg_epoch_loss=4.62]
INFO:root:Epoch[4] Elapsed time 0.476 seconds
INFO:root:Epoch[4] Evaluation metric 'epoch_loss'=4.615928
INFO:root:Loading parameters from best epoch (4)
INFO:root:Final loss: 4.6159275221824645 (occurred at epoch 4)
INFO:root:End model training

With a predictor in hand, we can now predict the last window of each time series in dataset.test and evaluate our model’s performance.

GluonTS comes with the make_evaluation_predictions function that automates the process of prediction and model evaluation. Roughly, this function performs the following steps:

  • It removes the final window of length prediction_length of each time series in dataset.test that we want to predict
  • The predictor uses the remaining data to predict (in the form of sample paths) the “future” window that was just removed
  • It outputs the forecast sample paths and dataset.test (as Python generator objects)
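The splitting step can be sketched with NumPy (a simplification of what the function does internally, with a dummy predictor standing in for the trained model):

```python
import numpy as np

prediction_length = 48
full_series = np.random.normal(size=748)

# 1. remove the final window that we want to predict
context = full_series[:-prediction_length]
ground_truth = full_series[-prediction_length:]

# 2. a (dummy) predictor emits sample paths for the removed window;
#    here every path just repeats the last observed value
num_samples = 100
sample_paths = np.full((num_samples, prediction_length), context[-1])

assert sample_paths.shape == (num_samples, prediction_length)
assert ground_truth.shape == (prediction_length,)
```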
In [14]:
from gluonts.evaluation.backtest import make_evaluation_predictions
In [15]:
forecast_it, ts_it = make_evaluation_predictions(
    dataset=dataset.test,  # test dataset
    predictor=predictor,  # predictor
    num_eval_samples=100,  # number of sample paths we want for evaluation
)

First, we can convert these generators to lists to ease the subsequent computations.

In [16]:
forecasts = list(forecast_it)
tss = list(ts_it)

We can examine the first element of these lists (that corresponds to the first time series of the dataset). Let’s start with the list containing the time series, i.e., tss. We expect the first entry of tss to contain the (target of the) first time series of dataset.test.

In [17]:
# first entry of the time series list
ts_entry = tss[0]
In [18]:
# first 5 values of the time series (convert from pandas to numpy)
np.array(ts_entry[:5]).reshape(-1,)
Out[18]:
array([605., 586., 586., 559., 511.], dtype=float32)
In [19]:
# first entry of dataset.test
dataset_test_entry = next(iter(dataset.test))
In [20]:
# first 5 values
dataset_test_entry['target'][:5]
Out[20]:
array([605., 586., 586., 559., 511.], dtype=float32)

The entries in the forecast list are a bit more complex. They are objects that contain all the sample paths in the form of numpy.ndarray with dimension (num_samples, prediction_length), the start date of the forecast, the frequency of the time series, etc. We can access all this information by simply invoking the corresponding attribute of the forecast object.

In [21]:
# first entry of the forecast list
forecast_entry = forecasts[0]
In [22]:
print(f"Number of sample paths: {forecast_entry.num_samples}")
print(f"Dimension of samples: {forecast_entry.samples.shape}")
print(f"Start date of the forecast window: {forecast_entry.start_date}")
print(f"Frequency of the time series: {forecast_entry.freq}")
Number of sample paths: 100
Dimension of samples: (100, 48)
Start date of the forecast window: 1750-01-30 04:00:00
Frequency of the time series: H

We can also do calculations to summarize the sample paths, such as computing the mean or a quantile for each of the 48 time steps in the forecast window.
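These summaries are just per-time-step statistics over the (num_samples, prediction_length) array of sample paths; a NumPy sketch with random samples standing in for a real forecast:

```python
import numpy as np

# stand-in for forecast_entry.samples, shape (num_samples, prediction_length)
samples = np.random.normal(loc=500.0, scale=50.0, size=(100, 48))

mean_per_step = samples.mean(axis=0)                 # analogous to forecast.mean
median_per_step = np.quantile(samples, 0.5, axis=0)  # analogous to forecast.quantile(0.5)

assert mean_per_step.shape == (48,)
assert median_per_step.shape == (48,)
```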

In [23]:
print(f"Mean of the future window:\n {forecast_entry.mean}")
print(f"0.5-quantile (median) of the future window:\n {forecast_entry.quantile(0.5)}")
Mean of the future window:
 [622.68884 583.9171  527.2449  487.87244 482.5809  533.849   499.74637
 477.8866  540.8988  623.393   583.6582  677.62244 793.9289  806.3187
 895.6728  867.10693 927.6244  761.6409  846.26556 856.137   787.1546
 801.3244  694.85175 726.8326  636.5713  619.97955 567.1971  505.13947
 535.40393 540.4406  564.2066  529.27466 536.23303 543.73865 637.2441
 710.3739  740.89984 799.3273  881.1241  869.25037 898.8381  885.6911
 885.2084  877.96313 863.9045  844.8786  814.9486  732.8773 ]
0.5-quantile (median) of the future window:
 [628.8034  584.85925 534.13617 488.1163  511.19254 540.0306  504.64417
 484.41162 553.60913 545.6763  583.33923 671.84283 806.9286  811.4107
 899.3908  871.40894 911.4076  764.7379  841.9638  851.0691  782.337
 801.94165 715.4908  724.1985  633.62164 615.09216 565.2521  508.94467
 529.0382  527.3974  520.03217 518.5677  527.23364 540.1315  634.3493
 693.623   736.6415  814.3438  873.8712  884.4707  902.13416 899.36676
 878.67206 871.4051  850.6954  836.82104 811.3886  720.04004]

Forecast objects have a plot method that can summarize the forecast paths as the mean, prediction intervals, etc. The prediction intervals are shaded in different colors as a “fan chart”.

In [24]:
def plot_prob_forecasts(ts_entry, forecast_entry):
    plot_length = 150
    prediction_intervals = (50.0, 90.0)
    legend = ["observations", "median prediction"] + [f"{k}% prediction interval" for k in prediction_intervals][::-1]

    fig, ax = plt.subplots(1, 1, figsize=(10, 7))
    ts_entry[-plot_length:].plot(ax=ax)  # plot the time series
    forecast_entry.plot(prediction_intervals=prediction_intervals, color='g')
    plt.grid(which="both")
    plt.legend(legend, loc="upper left")
    plt.show()
In [25]:
plot_prob_forecasts(ts_entry, forecast_entry)
../../_images/examples_basic_forecasting_tutorial_tutorial_38_0.png

We can also evaluate the quality of our forecasts numerically. In GluonTS, the Evaluator class can compute aggregate performance metrics, as well as metrics per time series (which can be useful for analyzing performance across heterogeneous time series).

In [26]:
from gluonts.evaluation import Evaluator
In [27]:
evaluator = Evaluator(quantiles=[0.1, 0.5, 0.9])
agg_metrics, item_metrics = evaluator(iter(tss), iter(forecasts), num_series=len(dataset.test))
Running evaluation: 100%|██████████| 414/414 [00:01<00:00, 243.95it/s]

Aggregate metrics aggregate both across time-steps and across time series.

In [28]:
print(json.dumps(agg_metrics, indent=4))
{
    "MSE": 10328885.51417528,
    "abs_error": 10532066.565883636,
    "abs_target_sum": 145558863.59960938,
    "abs_target_mean": 7324.822041043146,
    "seasonal_error": 336.9046924038305,
    "MASE": 3.301203234153786,
    "sMAPE": 0.18250160070360624,
    "MSIS": 35.66897287242533,
    "QuantileLoss[0.1]": 4453364.4553545,
    "Coverage[0.1]": 0.11518719806763285,
    "QuantileLoss[0.5]": 10532066.609985352,
    "Coverage[0.5]": 0.5916364734299517,
    "QuantileLoss[0.9]": 7168539.242866801,
    "Coverage[0.9]": 0.8847624798711754,
    "RMSE": 3213.85835315984,
    "NRMSE": 0.43876265322920344,
    "ND": 0.07235606479351424,
    "wQuantileLoss[0.1]": 0.03059493833095885,
    "wQuantileLoss[0.5]": 0.07235606509649622,
    "wQuantileLoss[0.9]": 0.04924838697961667,
    "mean_wQuantileLoss": 0.050733130135690585,
    "MAE_Coverage": 0.040687063875469734
}
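Several of the aggregate metrics above are related by standard definitions (assuming RMSE = sqrt(MSE), ND = abs_error / abs_target_sum, and NRMSE = RMSE / abs_target_mean), which we can check against the printed values:

```python
import math

# values copied from the aggregate metrics printed above
MSE = 10328885.51417528
RMSE = 3213.85835315984
abs_error = 10532066.565883636
abs_target_sum = 145558863.59960938
abs_target_mean = 7324.822041043146

assert math.isclose(math.sqrt(MSE), RMSE, rel_tol=1e-6)
assert math.isclose(abs_error / abs_target_sum, 0.07235606479351424, rel_tol=1e-6)  # ND
assert math.isclose(RMSE / abs_target_mean, 0.43876265322920344, rel_tol=1e-6)      # NRMSE
```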

Individual metrics are aggregated only across time-steps.

In [29]:
item_metrics.head()
Out[29]:
item_id MSE abs_error abs_target_sum abs_target_mean seasonal_error MASE sMAPE MSIS QuantileLoss[0.1] Coverage[0.1] QuantileLoss[0.5] Coverage[0.5] QuantileLoss[0.9] Coverage[0.9]
0 NaN 3584.960286 2181.766602 31644.0 659.250000 42.371302 1.072742 0.066562 5.908717 778.976172 0.000000 2181.766449 0.812500 1514.537964 0.979167
1 NaN 186942.375000 18793.296875 124149.0 2586.437500 165.107988 2.371339 0.147569 13.624180 4103.478052 0.291667 18793.298096 1.000000 8654.388232 1.000000
2 NaN 26422.901042 5985.281250 65030.0 1354.791667 78.889053 1.580617 0.086480 10.301703 3208.542566 0.000000 5985.281311 0.208333 2006.366406 0.854167
3 NaN 225630.125000 18132.835938 235783.0 4912.145833 258.982249 1.458661 0.075176 8.504473 9012.540967 0.041667 18132.835693 0.500000 8078.736523 0.979167
4 NaN 100667.187500 11514.721680 131088.0 2731.000000 200.494083 1.196494 0.080516 6.507255 4021.681519 0.020833 11514.721558 0.854167 7682.577148 1.000000
In [30]:
item_metrics.plot(x='MSIS', y='MASE', kind='scatter')
plt.grid(which="both")
plt.show()
../../_images/examples_basic_forecasting_tutorial_tutorial_46_0.png

Create your own forecast model

To create your own forecast model you need to:

  • Define the training and prediction network
  • Define a new estimator that specifies any data processing and uses the networks

The training and prediction networks can be arbitrarily complex but they should follow some basic rules:

  • Both should have a hybrid_forward method that defines what should happen when the network is called
  • The training network’s hybrid_forward should return a loss based on the prediction and the true values
  • The prediction network’s hybrid_forward should return the predictions

For example, we can create a simple training network that defines a neural network which takes as input the past values of the time series and outputs a future predicted window of length prediction_length. It uses the L1 loss in the hybrid_forward method to evaluate the error between the predictions and the true values of the time series. The corresponding prediction network should be identical to the training network in terms of architecture (we achieve this by inheriting the training network class), and its hybrid_forward method outputs the predictions directly.

Note that this simple model produces only point forecasts by construction, i.e., we train it to output the future values of the time series directly and not a probabilistic view of the future (to achieve that, we would train a network to learn a probability distribution and then sample from it to create sample paths).

In [31]:
class MyTrainNetwork(gluon.HybridBlock):
    def __init__(self, prediction_length, **kwargs):
        super().__init__(**kwargs)
        self.prediction_length = prediction_length

        with self.name_scope():
            # Set up a 3 layer neural network that directly predicts the target values
            self.nn = mx.gluon.nn.HybridSequential()
            self.nn.add(mx.gluon.nn.Dense(units=40, activation='relu'))
            self.nn.add(mx.gluon.nn.Dense(units=40, activation='relu'))
            self.nn.add(mx.gluon.nn.Dense(units=self.prediction_length, activation='softrelu'))

    def hybrid_forward(self, F, past_target, future_target):
        prediction = self.nn(past_target)
        # calculate L1 loss with the future_target to learn the median
        return (prediction - future_target).abs().mean(axis=-1)


class MyPredNetwork(MyTrainNetwork):
    # The prediction network only receives past_target and returns predictions
    def hybrid_forward(self, F, past_target):
        prediction = self.nn(past_target)
        return prediction.expand_dims(axis=1)

Now, we need to construct the estimator which should also follow some rules:

  • It should include a create_transformation method that defines all the possible feature transformations and how the data is split during training
  • It should include a create_training_network method that returns the training network configured with any necessary hyperparameters
  • It should include a create_predictor method that creates the prediction network, and returns a Predictor object

A Predictor object defines the predict method. Roughly, this method takes the test dataset, passes it through the prediction network, and yields the predictions. You can think of the Predictor object as a wrapper of the prediction network that defines its predict method.

Earlier, we used the make_evaluation_predictions to evaluate our predictor. Internally, the make_evaluation_predictions function invokes the predict method of the predictor to get the forecasts.
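The wrapper idea can be illustrated with a toy, framework-free stand-in (ToyPredictor and naive_net are hypothetical names for illustration, not GluonTS classes):

```python
import numpy as np

class ToyPredictor:
    """Toy stand-in for a Predictor: wraps a prediction function and
    exposes predict over an iterable dataset of {"target": ...} entries."""

    def __init__(self, prediction_net, context_length, prediction_length):
        self.prediction_net = prediction_net
        self.context_length = context_length
        self.prediction_length = prediction_length

    def predict(self, dataset):
        for entry in dataset:
            # feed the most recent context_length values to the network
            context = np.asarray(entry["target"], dtype=float)[-self.context_length:]
            yield self.prediction_net(context)

# a "network" that naively repeats the last observed value
def naive_net(context):
    return np.full(24, context[-1])

predictor_sketch = ToyPredictor(naive_net, context_length=48, prediction_length=24)
forecast = next(predictor_sketch.predict([{"target": list(range(100))}]))
assert forecast.shape == (24,) and forecast[0] == 99.0
```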

In [32]:
from gluonts.model.estimator import GluonEstimator
from gluonts.model.predictor import Predictor, RepresentableBlockPredictor
from gluonts.core.component import validated
from gluonts.support.util import copy_parameters
from gluonts.transform import ExpectedNumInstanceSampler, Transformation, InstanceSplitter, FieldName
from mxnet.gluon import HybridBlock
In [33]:
class MyEstimator(GluonEstimator):
    @validated()
    def __init__(
        self,
        freq: str,
        context_length: int,
        prediction_length: int,
        trainer: Trainer = Trainer()
    ) -> None:
        super().__init__(trainer=trainer)
        self.context_length = context_length
        self.prediction_length = prediction_length
        self.freq = freq


    def create_transformation(self):
        # Feature transformation that the model uses for input.
        # Here we use a transformation that randomly selects training samples from all time series.
        return InstanceSplitter(
                    target_field=FieldName.TARGET,
                    is_pad_field=FieldName.IS_PAD,
                    start_field=FieldName.START,
                    forecast_start_field=FieldName.FORECAST_START,
                    train_sampler=ExpectedNumInstanceSampler(num_instances=1),
                    past_length=self.context_length,
                    future_length=self.prediction_length,
                )

    def create_training_network(self) -> MyTrainNetwork:
        return MyTrainNetwork(
            prediction_length=self.prediction_length
        )

    def create_predictor(
        self, transformation: Transformation, trained_network: HybridBlock
    ) -> Predictor:
        prediction_network = MyPredNetwork(
            prediction_length=self.prediction_length
        )

        copy_parameters(trained_network, prediction_network)

        return RepresentableBlockPredictor(
            input_transform=transformation,
            prediction_net=prediction_network,
            batch_size=self.trainer.batch_size,
            freq=self.freq,
            prediction_length=self.prediction_length,
            ctx=self.trainer.ctx,
        )
INFO:root:Using CPU

Now, we can repeat the same pipeline as with the pre-built model: train the predictor, create the forecasts, and evaluate the results.

In [34]:
estimator = MyEstimator(
    prediction_length=dataset.metadata.prediction_length,
    context_length=100,
    freq=dataset.metadata.freq,
    trainer=Trainer(ctx="cpu",
                    epochs=5,
                    learning_rate=1e-3,
                    num_batches_per_epoch=100
                   )
)
In [35]:
predictor = estimator.train(dataset.train)
INFO:root:Start model training
INFO:root:Number of parameters in MyTrainNetwork: 128
INFO:root:Epoch[0] Learning rate is 0.001
100%|██████████| 100/100 [00:00<00:00, 200.18it/s, avg_epoch_loss=2.44e+3]
INFO:root:Epoch[0] Elapsed time 0.501 seconds
INFO:root:Epoch[0] Evaluation metric 'epoch_loss'=2439.283120
INFO:root:Epoch[1] Learning rate is 0.001
100%|██████████| 100/100 [00:00<00:00, 237.44it/s, avg_epoch_loss=1.06e+3]
INFO:root:Epoch[1] Elapsed time 0.422 seconds
INFO:root:Epoch[1] Evaluation metric 'epoch_loss'=1062.187615
INFO:root:Epoch[2] Learning rate is 0.001
100%|██████████| 100/100 [00:00<00:00, 238.44it/s, avg_epoch_loss=780]
INFO:root:Epoch[2] Elapsed time 0.420 seconds
INFO:root:Epoch[2] Evaluation metric 'epoch_loss'=780.103816
INFO:root:Epoch[3] Learning rate is 0.001
100%|██████████| 100/100 [00:00<00:00, 238.09it/s, avg_epoch_loss=826]
INFO:root:Epoch[3] Elapsed time 0.421 seconds
INFO:root:Epoch[3] Evaluation metric 'epoch_loss'=826.354195
INFO:root:Epoch[4] Learning rate is 0.001
100%|██████████| 100/100 [00:00<00:00, 235.84it/s, avg_epoch_loss=549]
INFO:root:Epoch[4] Elapsed time 0.425 seconds
INFO:root:Epoch[4] Evaluation metric 'epoch_loss'=548.848188
INFO:root:Loading parameters from best epoch (4)
INFO:root:Final loss: 548.8481881523132 (occurred at epoch 4)
INFO:root:End model training
In [36]:
forecast_it, ts_it = make_evaluation_predictions(
    dataset=dataset.test,
    predictor=predictor,
    num_eval_samples=100
)
In [37]:
forecasts = list(forecast_it)
tss = list(ts_it)
In [38]:
plot_prob_forecasts(tss[0], forecasts[0])
../../_images/examples_basic_forecasting_tutorial_tutorial_57_0.png

Observe that we cannot actually see any prediction intervals in the predictions. This is expected, since the model we defined does not do probabilistic forecasting but gives point estimates. By requesting 100 sample paths (set in make_evaluation_predictions) from such a network, we get the same output 100 times.
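This is easy to see numerically: when all sample paths are identical, every quantile collapses to the point estimate, so the prediction intervals have zero width:

```python
import numpy as np

point_forecast = np.random.normal(size=48)
samples = np.tile(point_forecast, (100, 1))  # 100 identical "sample paths"

low = np.quantile(samples, 0.1, axis=0)
high = np.quantile(samples, 0.9, axis=0)
assert np.allclose(low, high)  # zero-width 80% prediction interval
```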

In [39]:
evaluator = Evaluator(quantiles=[0.1, 0.5, 0.9])
agg_metrics, item_metrics = evaluator(iter(tss), iter(forecasts), num_series=len(dataset.test))
Running evaluation: 100%|██████████| 414/414 [00:01<00:00, 252.48it/s]
In [40]:
print(json.dumps(agg_metrics, indent=4))
{
    "MSE": 53926974.52019439,
    "abs_error": 13383084.788986206,
    "abs_target_sum": 145558863.59960938,
    "abs_target_mean": 7324.822041043146,
    "seasonal_error": 336.9046924038305,
    "MASE": 5.872580200325534,
    "sMAPE": 0.2680300354373203,
    "MSIS": 234.90320864134986,
    "QuantileLoss[0.1]": 15490842.47921667,
    "Coverage[0.1]": 0.5597826086956522,
    "QuantileLoss[0.5]": 13383084.577418804,
    "Coverage[0.5]": 0.5597826086956522,
    "QuantileLoss[0.9]": 11275326.675620938,
    "Coverage[0.9]": 0.5597826086956522,
    "RMSE": 7343.498792823104,
    "NRMSE": 1.0025497891519148,
    "ND": 0.09194276774377154,
    "wQuantileLoss[0.1]": 0.10642321667080012,
    "wQuantileLoss[0.5]": 0.09194276629028807,
    "wQuantileLoss[0.9]": 0.07746231590977601,
    "mean_wQuantileLoss": 0.09194276629028808,
    "MAE_Coverage": 0.28659420289855075
}
In [41]:
item_metrics.head(10)
Out[41]:
item_id MSE abs_error abs_target_sum abs_target_mean seasonal_error MASE sMAPE MSIS QuantileLoss[0.1] Coverage[0.1] QuantileLoss[0.5] Coverage[0.5] QuantileLoss[0.9] Coverage[0.9]
0 NaN 1.673222e+04 2944.828613 31644.0 659.250000 42.371302 1.447928 0.107994 57.917124 3437.830286 0.750000 2944.828674 0.750000 2451.827063 0.750000
1 NaN 3.882986e+05 22173.589844 124149.0 2586.437500 165.107988 2.797865 0.191544 111.914586 35083.662378 0.979167 22173.590210 0.979167 9263.518042 0.979167
2 NaN 1.040566e+05 10056.593750 65030.0 1354.791667 78.889053 2.655785 0.170522 106.231403 3476.489014 0.270833 10056.593506 0.270833 16636.697998 0.270833
3 NaN 7.893035e+05 20473.880859 235783.0 4912.145833 258.982249 1.646982 0.105067 65.879299 15497.951514 0.437500 20473.882568 0.437500 25449.813623 0.437500
4 NaN 2.259717e+05 14963.994141 131088.0 2731.000000 200.494083 1.554908 0.134546 62.196323 16916.028735 0.583333 14963.993286 0.583333 13011.957837 0.583333
5 NaN 1.275673e+06 24593.515625 303379.0 6320.395833 212.875740 2.406873 0.099767 96.274933 26593.481445 0.708333 24593.516602 0.708333 22593.551758 0.708333
6 NaN 4.000425e+07 133556.828125 1985325.0 41360.937500 1947.687870 1.428583 0.088288 57.143322 71681.392187 0.333333 133556.820312 0.333333 195432.248438 0.333333
7 NaN 3.039847e+07 102756.398438 1540706.0 32098.041667 1624.044379 1.318165 0.086699 52.726597 75763.448438 0.458333 102756.398438 0.458333 129749.348438 0.458333
8 NaN 4.015743e+07 169988.687500 1640860.0 34184.583333 1850.988166 1.913265 0.119807 76.530597 243974.101172 0.916667 169988.677734 0.916667 96003.254297 0.916667
9 NaN 5.302571e+03 1616.162598 21408.0 446.000000 10.526627 3.198561 0.095613 127.942413 1165.969141 0.541667 1616.162598 0.541667 2066.356055 0.541667
In [42]:
item_metrics.plot(x='MSIS', y='MASE', kind='scatter')
plt.grid(which="both")
plt.show()
../../_images/examples_basic_forecasting_tutorial_tutorial_62_0.png