sparknlp.annotator.classifier_dl.classifier_dl#

Contains classes for ClassifierDL.

Module Contents#

Classes#

ClassifierDLApproach

Trains a ClassifierDL for generic Multi-class Text Classification.

ClassifierDLModel

ClassifierDL for generic Multi-class Text Classification.

class ClassifierDLApproach[source]#

Trains a ClassifierDL for generic Multi-class Text Classification.

ClassifierDL uses the state-of-the-art Universal Sentence Encoder as input for text classification. The ClassifierDL annotator uses a deep learning model (DNN) built in TensorFlow and supports up to 100 classes.

For instantiated/pretrained models, see ClassifierDLModel.

For extended examples of usage, see the Spark NLP Workshop.

Input Annotation types: SENTENCE_EMBEDDINGS

Output Annotation type: CATEGORY

Parameters:
lr

Learning rate, by default 0.005

batchSize

Batch size, by default 64

dropout

Dropout coefficient, by default 0.5

maxEpochs

Maximum number of epochs to train, by default 30

configProtoBytes

ConfigProto from TensorFlow, serialized into a byte array.

validationSplit

Proportion of the training dataset to be validated against the model on each epoch. The value should be between 0.0 and 1.0; by default it is 0.0, meaning validation is off.

enableOutputLogs

Whether to use stdout in addition to Spark logs, by default False

outputLogsPath

Folder path to save training logs

labelColumn

Column with one label per document

verbose

Level of verbosity during training

randomSeed

Random seed for shuffling

See also

MultiClassifierDLApproach

for multi-label classification

SentimentDLApproach

for sentiment analysis

Notes

  • This annotator accepts a label column containing a single item per row, of type String, Int, Float, or Double.

  • UniversalSentenceEncoder, Transformer-based embeddings, or SentenceEmbeddings can be used as the inputCol, as shown in the sketch below.
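
For example, the UniversalSentenceEncoder stage in the example below could be swapped for Bert sentence embeddings. A minimal sketch (it assumes the default pretrained BertSentenceEmbeddings model is available on the Models Hub):

>>> bertEmbeddings = BertSentenceEmbeddings.pretrained() \
...     .setInputCols(["document"]) \
...     .setOutputCol("sentence_embeddings")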

Examples

>>> import sparknlp
>>> from sparknlp.base import *
>>> from sparknlp.annotator import *
>>> from pyspark.ml import Pipeline

In this example, the training data "sentiment.csv" has the form:

text,label
This movie is the best movie I have watched ever! In my opinion this movie can win an award.,0
This was a terrible movie! The acting was bad really bad!,1
...

Then training can be done like so:

>>> smallCorpus = spark.read.option("header","True").csv("src/test/resources/classifier/sentiment.csv")
>>> documentAssembler = DocumentAssembler() \
...     .setInputCol("text") \
...     .setOutputCol("document")
>>> useEmbeddings = UniversalSentenceEncoder.pretrained() \
...     .setInputCols("document") \
...     .setOutputCol("sentence_embeddings")
>>> docClassifier = ClassifierDLApproach() \
...     .setInputCols("sentence_embeddings") \
...     .setOutputCol("category") \
...     .setLabelColumn("label") \
...     .setBatchSize(64) \
...     .setMaxEpochs(20) \
...     .setLr(5e-3) \
...     .setDropout(0.5)
>>> pipeline = Pipeline().setStages([
...     documentAssembler,
...     useEmbeddings,
...     docClassifier
... ])
>>> pipelineModel = pipeline.fit(smallCorpus)
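
The fitted classifier can then be saved and restored as a ClassifierDLModel via the standard Spark ML persistence API; a minimal sketch (the path is illustrative):

>>> pipelineModel.stages[-1].write().overwrite().save("./tmp_classifierdl")
>>> classifierModel = ClassifierDLModel.load("./tmp_classifierdl")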
lr[source]#
batchSize[source]#
dropout[source]#
maxEpochs[source]#
configProtoBytes[source]#
validationSplit[source]#
enableOutputLogs[source]#
outputLogsPath[source]#
labelColumn[source]#
verbose[source]#
randomSeed[source]#
setVerbose(self, value)[source]#

Sets level of verbosity during training

Parameters:
value : int

Level of verbosity

setRandomSeed(self, seed)[source]#

Sets random seed for shuffling

Parameters:
seed : int

Random seed for shuffling

setLabelColumn(self, value)[source]#

Sets name of column for data labels

Parameters:
value : str

Column for data labels

setConfigProtoBytes(self, b)[source]#

Sets the ConfigProto from TensorFlow, serialized into a byte array.

Parameters:
b : List[int]

ConfigProto from TensorFlow, serialized into a byte array
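
A serialized ConfigProto can be produced with TensorFlow itself; a minimal sketch (it assumes TensorFlow is installed, and allow_soft_placement is just an illustrative option):

>>> import tensorflow as tf
>>> config = tf.compat.v1.ConfigProto(allow_soft_placement=True)
>>> docClassifier = ClassifierDLApproach() \
...     .setConfigProtoBytes(list(config.SerializeToString()))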

setLr(self, v)[source]#

Sets the learning rate, by default 0.005

Parameters:
v : float

Learning rate

setBatchSize(self, v)[source]#

Sets batch size, by default 64.

Parameters:
v : int

Batch size

setDropout(self, v)[source]#

Sets dropout coefficient, by default 0.5

Parameters:
v : float

Dropout coefficient

setMaxEpochs(self, epochs)[source]#

Sets maximum number of epochs to train, by default 30

Parameters:
epochs : int

Maximum number of epochs to train

setValidationSplit(self, v)[source]#

Sets the proportion of the training dataset to be validated against the model on each epoch. The value should be between 0.0 and 1.0; by default it is 0.0, meaning validation is off.

Parameters:
v : float

Proportion of training dataset to be validated

setEnableOutputLogs(self, value)[source]#

Sets whether to use stdout in addition to Spark logs, by default False

Parameters:
value : bool

Whether to use stdout in addition to Spark logs

setOutputLogsPath(self, p)[source]#

Sets folder path to save training logs

Parameters:
p : str

Folder path to save training logs
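
Putting the training-related setters above together, a run with validation and log output could be configured as follows; a minimal sketch (the log folder path is illustrative):

>>> docClassifier = ClassifierDLApproach() \
...     .setInputCols(["sentence_embeddings"]) \
...     .setOutputCol("category") \
...     .setLabelColumn("label") \
...     .setMaxEpochs(20) \
...     .setValidationSplit(0.2) \
...     .setEnableOutputLogs(True) \
...     .setOutputLogsPath("./classifier_logs") \
...     .setRandomSeed(42)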

class ClassifierDLModel(classname='com.johnsnowlabs.nlp.annotators.classifier.dl.ClassifierDLModel', java_model=None)[source]#

ClassifierDL for generic Multi-class Text Classification.

ClassifierDL uses the state-of-the-art Universal Sentence Encoder as input for text classification. The ClassifierDL annotator uses a deep learning model (DNN) built in TensorFlow and supports up to 100 classes.

This is the instantiated model of the ClassifierDLApproach. For training your own model, please see the documentation of that class.

Pretrained models can be loaded with pretrained() of the companion object:

>>> classifierDL = ClassifierDLModel.pretrained() \
...     .setInputCols(["sentence_embeddings"]) \
...     .setOutputCol("classification")

The default model is "classifierdl_use_trec6", if no name is provided. It uses embeddings from the UniversalSentenceEncoder and is trained on the TREC-6 dataset.

For available pretrained models please see the Models Hub.

For extended examples of usage, see the Spark NLP Workshop.

Input Annotation types: SENTENCE_EMBEDDINGS

Output Annotation type: CATEGORY

Parameters:
configProtoBytes

ConfigProto from TensorFlow, serialized into a byte array.

classes

Tags used to train this ClassifierDLModel

See also

MultiClassifierDLModel

for multi-label classification

SentimentDLModel

for sentiment analysis

Examples

>>> import sparknlp
>>> from sparknlp.base import *
>>> from sparknlp.annotator import *
>>> from pyspark.ml import Pipeline
>>> documentAssembler = DocumentAssembler() \
...     .setInputCol("text") \
...     .setOutputCol("document")
>>> sentence = SentenceDetector() \
...     .setInputCols("document") \
...     .setOutputCol("sentence")
>>> useEmbeddings = UniversalSentenceEncoder.pretrained() \
...     .setInputCols("document") \
...     .setOutputCol("sentence_embeddings")
>>> sarcasmDL = ClassifierDLModel.pretrained("classifierdl_use_sarcasm") \
...     .setInputCols("sentence_embeddings") \
...     .setOutputCol("sarcasm")
>>> pipeline = Pipeline() \
...     .setStages([
...       documentAssembler,
...       sentence,
...       useEmbeddings,
...       sarcasmDL
...     ])
>>> data = spark.createDataFrame([
...     ["I'm ready!"],
...     ["If I could put into words how much I love waking up at 6 am on Mondays I would."]
... ]).toDF("text")
>>> result = pipeline.fit(data).transform(data)
>>> result.selectExpr("explode(arrays_zip(sentence, sarcasm)) as out") \
...     .selectExpr("out.sentence.result as sentence", "out.sarcasm.result as sarcasm") \
...     .show(truncate=False)
+-------------------------------------------------------------------------------+-------+
|sentence                                                                       |sarcasm|
+-------------------------------------------------------------------------------+-------+
|I'm ready!                                                                     |normal |
|If I could put into words how much I love waking up at 6 am on Mondays I would.|sarcasm|
+-------------------------------------------------------------------------------+-------+
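
For quick inference on single strings, the fitted pipeline can also be wrapped in a LightPipeline; a minimal sketch (the output mirrors the table above):

>>> from sparknlp.base import LightPipeline
>>> light = LightPipeline(pipeline.fit(data))
>>> light.annotate("I'm ready!")["sarcasm"]
['normal']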
name = ClassifierDLModel[source]#
configProtoBytes[source]#
classes[source]#
setConfigProtoBytes(self, b)[source]#

Sets the ConfigProto from TensorFlow, serialized into a byte array.

Parameters:
b : List[int]

ConfigProto from TensorFlow, serialized into a byte array

static pretrained(name='classifierdl_use_trec6', lang='en', remote_loc=None)[source]#

Downloads and loads a pretrained model.

Parameters:
name : str, optional

Name of the pretrained model, by default “classifierdl_use_trec6”

lang : str, optional

Language of the pretrained model, by default “en”

remote_loc : str, optional

Optional remote address of the resource, by default None. Will use Spark NLP's repositories otherwise.

Returns:
ClassifierDLModel

The restored model
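
For instance, the default model can be requested explicitly by name and language; a minimal sketch using the defaults documented above:

>>> classifier = ClassifierDLModel.pretrained("classifierdl_use_trec6", lang="en") \
...     .setInputCols(["sentence_embeddings"]) \
...     .setOutputCol("class")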