Explain Document pipeline for Spanish (explain_document_lg)


The explain_document_lg is a pretrained pipeline that processes text with a series of basic annotators and recognizes named entities. It performs most of the common text-processing tasks (sentence detection, tokenization, lemmatization, part-of-speech tagging, word embeddings, NER) on a text column of your DataFrame.


How to use

from sparknlp.pretrained import PretrainedPipeline
pipeline = PretrainedPipeline('explain_document_lg', lang='es')
annotations = pipeline.fullAnnotate("Hola de John Snow Labs! ")[0]

import com.johnsnowlabs.nlp.pretrained.PretrainedPipeline

val pipeline = new PretrainedPipeline("explain_document_lg", lang = "es")
val result = pipeline.fullAnnotate("Hola de John Snow Labs! ")(0)

import nlu
text = ["Hola de John Snow Labs! "]
result_df = nlu.load('es.explain.lg').predict(text)


|    | document                     | sentence                    | token                                   | lemma                                   | pos                                        | embeddings                   | ner                                   | entities            |
|---:|:-----------------------------|:----------------------------|:----------------------------------------|:----------------------------------------|:-------------------------------------------|:-----------------------------|:--------------------------------------|:--------------------|
|  0 | ['Hola de John Snow Labs! '] | ['Hola de John Snow Labs!'] | ['Hola', 'de', 'John', 'Snow', 'Labs!'] | ['Hola', 'de', 'John', 'Snow', 'Labs!'] | ['PART', 'ADP', 'PROPN', 'PROPN', 'PROPN'] | [[-0.016199000179767,.,...]] | ['O', 'O', 'B-PER', 'I-PER', 'I-PER'] | ['John Snow Labs!'] |
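In the output above, the `ner` column holds per-token IOB tags (`B-PER` opens a PERSON chunk, `I-PER` continues it, `O` is outside any entity), which the pipeline merges into the `entities` column. A minimal standalone sketch of that merge, using a hypothetical `chunk_entities` helper and no Spark, with the tokens and tags taken from the row above:

```python
def chunk_entities(tokens, tags):
    """Roll IOB NER tags up into whitespace-joined entity chunks."""
    chunks, current = [], []
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            # A B- tag starts a new chunk, closing any open one.
            if current:
                chunks.append(" ".join(current))
            current = [token]
        elif tag.startswith("I-") and current:
            # An I- tag extends the open chunk.
            current.append(token)
        else:
            # An O tag (or stray I-) closes any open chunk.
            if current:
                chunks.append(" ".join(current))
                current = []
    if current:
        chunks.append(" ".join(current))
    return chunks

tokens = ["Hola", "de", "John", "Snow", "Labs!"]
tags = ["O", "O", "B-PER", "I-PER", "I-PER"]
print(chunk_entities(tokens, tags))  # ['John Snow Labs!']
```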

Model Information

Model Name: explain_document_lg
Type: pipeline
Compatibility: Spark NLP 3.0.0+
License: Open Source
Edition: Official
Language: es