Calculates the embeddings for a sequence of Tokens and creates WordpieceEmbeddingsSentence objects from them
A sequence of Tokenized Sentences for which embeddings will be calculated
Defines which output layer to use from the model: word_emb, lstm_outputs1, lstm_outputs2, or elmo. See https://tfhub.dev/google/elmo/3 for reference
A Seq of WordpieceEmbeddingsSentence, one element for each input sentence
word_emb: the character-based word representations with shape [batch_size, max_length, 512].
lstm_outputs1: the first LSTM hidden state with shape [batch_size, max_length, 1024].
lstm_outputs2: the second LSTM hidden state with shape [batch_size, max_length, 1024].
elmo: the weighted sum of the 3 layers, where the weights are trainable, with shape [batch_size, max_length, 1024].
The dimension of the chosen layer
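The layer-to-dimension mapping can be sketched as follows. This is an illustrative helper, not part of the original code: the object and method names are hypothetical, while the dimension values come from the TF Hub page referenced above (word_emb is 512-dimensional; the two LSTM outputs and the weighted-sum elmo layer are 1024-dimensional).

```scala
// Hypothetical helper mapping an ELMo output-layer name to its
// embedding dimension, per https://tfhub.dev/google/elmo/3.
object ElmoLayers {
  val WordEmb = "word_emb"
  val LstmOutputs1 = "lstm_outputs1"
  val LstmOutputs2 = "lstm_outputs2"
  val Elmo = "elmo"

  def layerDimension(layer: String): Int = layer match {
    case WordEmb                            => 512
    case LstmOutputs1 | LstmOutputs2 | Elmo => 1024
    case other =>
      throw new IllegalArgumentException(s"Unknown ELMo layer: $other")
  }
}
```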
Tags a sequence of TokenizedSentences and retrieves the embeddings according to the chosen layer key.
The Tokens for which we calculate embeddings
Specification of the output embedding for Elmo
Elmo's embedding dimension: either 512 or 1024, depending on the chosen layer
The embedding vectors. Each element of the Seq corresponds to one input sentence; each sentence contains an array of its words, and each word is represented by a float array holding its embedding values.
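The nesting described above (Seq of sentences, array of words per sentence, float array per word) can be sketched with simplified stand-in types. The case classes below are illustrative only and omit fields the real classes may carry:

```scala
// Simplified, hypothetical stand-ins for the real annotation classes.
case class TokenPieceEmbeddings(wordpiece: String, embeddings: Array[Float])
case class WordpieceEmbeddingsSentence(tokens: Array[TokenPieceEmbeddings])

val dim = 1024 // e.g. the "elmo" layer
val result: Seq[WordpieceEmbeddingsSentence] = Seq(
  WordpieceEmbeddingsSentence(Array(
    TokenPieceEmbeddings("Hello", Array.fill(dim)(0.0f)),
    TokenPieceEmbeddings("world", Array.fill(dim)(0.0f))
  ))
)
// result.length                              -> number of input sentences
// result.head.tokens.length                  -> words in the first sentence
// result.head.tokens.head.embeddings.length  -> embedding dimension
```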
Elmo model wrapper backed by a TensorflowWrapper
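Assuming these comments come from Spark NLP's `ElmoEmbeddings` annotator, a minimal configuration sketch might look like the following (pretrained-model download and pipeline assembly omitted):

```scala
import com.johnsnowlabs.nlp.embeddings.ElmoEmbeddings

// Sketch, assuming the Spark NLP ElmoEmbeddings API: selects the
// "elmo" output layer (the 1024-dimensional trainable weighted sum).
val elmo = ElmoEmbeddings.pretrained()
  .setInputCols("sentence", "token")
  .setOutputCol("embeddings")
  .setPoolingLayer("elmo")
```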