Table Of Contents

gluonts.model.transformer.trans_decoder module

class gluonts.model.transformer.trans_decoder.TransformerDecoder(decoder_length: int, config: Dict, **kwargs)[source]

Bases: mxnet.gluon.block.HybridBlock

cache_reset()[source]
hybrid_forward(F, data: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol], enc_out: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol], mask: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol, None] = None, is_train: bool = True) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]

A transformer decoder block consists of a self-attention, an encoder-decoder attention, and a feed-forward layer, with pre/post-process blocks in between.
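The structure described above can be sketched in plain NumPy, independent of the MXNet `HybridBlock` machinery. This is an illustrative simplification, not the GluonTS implementation: it omits multi-head splitting, layer normalization, and the configurable pre/post-process blocks, and the weights, dimensions, and helper names (`attention`, `decoder_block`) are assumptions made for the example. The `mask` argument plays the role of the `mask` parameter of `hybrid_forward`: it forbids each decoder position from attending to future positions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # model dimension (illustrative choice)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(q, k, v, mask=None):
    # Scaled dot-product attention; masked positions are pushed to -inf
    # before the softmax so they receive (near-)zero weight.
    scores = q @ k.swapaxes(-1, -2) / np.sqrt(d)
    if mask is not None:
        scores = np.where(mask, -1e9, scores)
    return softmax(scores) @ v

# Feed-forward weights (random, for illustration only).
W1 = rng.normal(size=(d, 4 * d)) * 0.1
W2 = rng.normal(size=(4 * d, d)) * 0.1

def decoder_block(x, enc_out, mask):
    h = x + attention(x, x, x, mask)         # masked self-attention + residual
    h = h + attention(h, enc_out, enc_out)   # encoder-decoder attention + residual
    return h + np.maximum(h @ W1, 0.0) @ W2  # position-wise feed-forward + residual

T_dec, T_enc = 5, 7
x = rng.normal(size=(1, T_dec, d))        # decoder inputs ("data")
enc = rng.normal(size=(1, T_enc, d))      # encoder outputs ("enc_out")
# Upper-triangular causal mask: position t may not attend to positions > t.
causal = np.triu(np.ones((T_dec, T_dec), dtype=bool), k=1)
out = decoder_block(x, enc, causal)
print(out.shape)
```

The output keeps the decoder's shape `(batch, decoder_length, model_dim)`; only the encoder-decoder attention step consumes the (possibly different-length) encoder sequence.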