Explain Document Pipeline for Spanish

Description

The explain_document_sm is a pretrained pipeline that runs a simple sequence of basic annotators over your text. It covers the most common text processing tasks on your DataFrame: sentence detection, tokenization, lemmatization, part-of-speech tagging, word embeddings, and named entity recognition.


How to use


from sparknlp.pretrained import PretrainedPipeline

pipeline = PretrainedPipeline('explain_document_sm', lang='es')
annotations = pipeline.fullAnnotate("Hola de John Snow Labs! ")[0]
annotations.keys()
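
Each key returned by fullAnnotate maps to a list of Spark NLP Annotation objects. As a small sketch (assuming the standard Annotation object with a result field), the recognized entity strings can be read out like this:

# Sketch: pull the entity text out of the 'entities' annotations.
print([entity.result for entity in annotations['entities']])
# For the sentence above this yields ['John Snow', 'Labs!'] (see Results below).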


import com.johnsnowlabs.nlp.pretrained.PretrainedPipeline

val pipeline = new PretrainedPipeline("explain_document_sm", lang = "es")
val result = pipeline.fullAnnotate("Hola de John Snow Labs! ")(0)



import nlu

text = ["Hola de John Snow Labs! "]
result_df = nlu.load('es.explain').predict(text)
result_df
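
The pipeline can also be applied to a Spark DataFrame, as mentioned in the description. A minimal sketch of that path, assuming an active SparkSession named spark and that the pipeline reads its input from a column named "text" (both assumptions here, not taken from this page):

from sparknlp.pretrained import PretrainedPipeline

# Sketch: run the pretrained pipeline over a Spark DataFrame.
pipeline = PretrainedPipeline('explain_document_sm', lang='es')
data = spark.createDataFrame([["Hola de John Snow Labs! "]]).toDF("text")
result = pipeline.transform(data)
result.select("token.result", "ner.result").show(truncate=False)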

Results

|    | document                     | sentence                    | token                                   | lemma                                   | pos                                        | embeddings                   | ner                                    | entities               |
|---:|:-----------------------------|:----------------------------|:----------------------------------------|:----------------------------------------|:-------------------------------------------|:-----------------------------|:---------------------------------------|:-----------------------|
|  0 | ['Hola de John Snow Labs! '] | ['Hola de John Snow Labs!'] | ['Hola', 'de', 'John', 'Snow', 'Labs!'] | ['Hola', 'de', 'John', 'Snow', 'Labs!'] | ['PART', 'ADP', 'PROPN', 'PROPN', 'PROPN'] | [[0.1754499971866607,.,...]] | ['O', 'O', 'B-PER', 'I-PER', 'B-MISC'] | ['John Snow', 'Labs!'] |

Model Information

Model Name: explain_document_sm
Type: pipeline
Compatibility: Spark NLP 3.0.0+
License: Open Source
Edition: Official
Language: es