Description
Word embeddings lookup annotator that maps tokens to 300-dimensional vectors. This is a pretrained word2vec model (`w2v_cc_300d`) for Italian.
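Conceptually, the annotator is a dictionary lookup: each token resolves to a fixed-size vector, and out-of-vocabulary tokens fall back to a zero vector. The sketch below is purely illustrative — the tokens and 4-dimensional vectors are invented; the real model stores 300-dimensional vectors and is looked up case-insensitively (see "Case sensitive: false" in Model Information).

```python
# Toy embeddings lookup, standing in for the real 300-d Italian model.
# Vocabulary and vector values here are invented for illustration.
vectors = {
    "casa": [0.1, 0.3, -0.2, 0.5],
    "cane": [0.4, -0.1, 0.0, 0.2],
}
dim = 4  # the real model uses dim = 300

def embed(tokens):
    # Case-insensitive lookup; unknown tokens map to a zero vector.
    return [vectors.get(t.lower(), [0.0] * dim) for t in tokens]

print(embed(["Casa", "gatto"]))  # known token + out-of-vocabulary token
```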
Predicted Entities
How to use
import sparknlp
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, WordEmbeddingsModel
from pyspark.ml import Pipeline

documentAssembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

tokenizer = Tokenizer() \
    .setInputCols(["document"]) \
    .setOutputCol("token")

embeddings = WordEmbeddingsModel.pretrained("w2v_cc_300d", "it") \
    .setInputCols(["document", "token"]) \
    .setOutputCol("embeddings")

pipeline = Pipeline(stages=[documentAssembler, tokenizer, embeddings])
import com.johnsnowlabs.nlp.base.DocumentAssembler
import com.johnsnowlabs.nlp.annotator._
import org.apache.spark.ml.Pipeline

val documentAssembler = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

val tokenizer = new Tokenizer()
  .setInputCols("document")
  .setOutputCol("token")

val embeddings = WordEmbeddingsModel.pretrained("w2v_cc_300d", "it")
  .setInputCols(Array("document", "token"))
  .setOutputCol("embeddings")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings))
import nlu
nlu.load("it.embed.word2vec").predict("""Put your text here.""")
Model Information
| Model Name: | w2v_cc_300d |
| Type: | embeddings |
| Compatibility: | Spark NLP 3.4.1+ |
| License: | Open Source |
| Edition: | Official |
| Input Labels: | [document, token] |
| Output Labels: | [embeddings] |
| Language: | it |
| Size: | 1.2 GB |
| Case sensitive: | false |
| Dimension: | 300 |
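Downstream tasks typically compare the 300-dimensional vectors with cosine similarity. A self-contained sketch, using invented low-dimensional vectors in place of the model's real output:

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Invented 2-d vectors standing in for 300-d embeddings.
print(cosine([1.0, 0.0], [1.0, 0.0]))  # same direction -> 1.0
print(cosine([1.0, 0.0], [0.0, 1.0]))  # orthogonal -> 0.0
```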