easy_vision.python.core.decoders
easy_vision.python.core.decoders.decoder
easy_vision.python.core.decoders.fc_decoders
easy_vision.python.core.decoders.rnn_decoders
class easy_vision.python.core.decoders.rnn_decoders.RNNDecoderWithAttention(config, vocab_size, time_major=True, is_training=True, scope='AttentionDecoder')

Bases: easy_vision.python.core.decoders.decoder.Decoder

Typical RNN decoder with an attention mechanism.

__init__(config, vocab_size, time_major=True, is_training=True, scope='AttentionDecoder')

Parameters:
- config – a protos.decoder_pb2.RNNDecoderWithAttention configuration message
- vocab_size – the number of characters in the output vocabulary
- time_major – if True, the input feature must have shape [time, batch, channel]
- is_training – whether the decoder is built for training (as opposed to eval/predict)
- scope – variable scope name
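
A minimal construction sketch. The import path easy_vision.python.protos.decoder_pb2 and the default-valued config message are assumptions not spelled out in this reference; the actual fields of the RNNDecoderWithAttention proto are defined in the framework's decoder proto file:

    from easy_vision.python.core.decoders.rnn_decoders import RNNDecoderWithAttention
    # Assumed location of the generated decoder protos.
    from easy_vision.python.protos import decoder_pb2

    # Decoder hyper-parameters; fields left at their proto defaults here.
    config = decoder_pb2.RNNDecoderWithAttention()

    decoder = RNNDecoderWithAttention(
        config=config,
        vocab_size=96,       # number of characters the decoder can emit (example value)
        time_major=True,     # encoder features are expected as [time, batch, channel]
        is_training=True,
        scope='AttentionDecoder')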
easy_vision.python.core.decoders.transformer_decoder
class easy_vision.python.core.decoders.transformer_decoder.TransformerDecoder(config, vocab_size, is_training=True, scope='TransformerDecoder')

Bases: easy_vision.python.core.decoders.decoder.Decoder

Transformer model decoder.

__init__(config, vocab_size, is_training=True, scope='TransformerDecoder')

Parameters:
- config – a protos.decoder_pb2.TransformerDecoder configuration message
- vocab_size – the number of characters in the output vocabulary
- is_training – whether the decoder is built for training (as opposed to eval/predict)
- scope – variable scope name
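
A construction sketch mirroring the RNN example above; the decoder_pb2 import path and the empty config are again assumptions:

    from easy_vision.python.core.decoders.transformer_decoder import TransformerDecoder
    from easy_vision.python.protos import decoder_pb2  # assumed proto location

    config = decoder_pb2.TransformerDecoder()  # fields left at their proto defaults

    decoder = TransformerDecoder(
        config=config,
        vocab_size=96,       # number of characters the decoder can emit (example value)
        is_training=True,
        scope='TransformerDecoder')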
decode_pass(targets, encoder_outputs, inputs_attention_bias)

Generate logits for each position in the target sequence.

Parameters:
- targets – target values for the output sequence; an int tensor with shape [batch_size, target_length]
- encoder_outputs – continuous representation of the input sequence; a float tensor with shape [batch_size, input_length, hidden_size]
- inputs_attention_bias – a float tensor with shape [batch_size, 1, 1, input_length]

Returns: a float32 tensor with shape [batch_size, target_length, vocab_size]
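
A shape-only sketch of calling decode_pass on the decoder constructed above, using dummy TensorFlow tensors with the documented shapes; the batch, length, and hidden sizes are arbitrary example values:

    import tensorflow as tf

    batch_size, target_length, input_length, hidden_size = 8, 25, 64, 512

    # Dummy inputs with the shapes documented above.
    targets = tf.zeros([batch_size, target_length], dtype=tf.int32)
    encoder_outputs = tf.zeros([batch_size, input_length, hidden_size], dtype=tf.float32)
    inputs_attention_bias = tf.zeros([batch_size, 1, 1, input_length], dtype=tf.float32)

    logits = decoder.decode_pass(targets, encoder_outputs, inputs_attention_bias)
    # logits: float32 tensor of shape [batch_size, target_length, vocab_size]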