sparknlp.annotator.classifier_dl.multi_classifier_dl#

Contains classes for MultiClassifierDL.

Module Contents#

Classes#

MultiClassifierDLApproach

Trains a MultiClassifierDL for Multi-label Text Classification.

MultiClassifierDLModel

MultiClassifierDL for Multi-label Text Classification.

class MultiClassifierDLApproach[source]#

Trains a MultiClassifierDL for Multi-label Text Classification.

MultiClassifierDL uses a Bidirectional GRU with a convolutional model that we have built inside TensorFlow and supports up to 100 classes.

In machine learning, multi-label classification and the strongly related problem of multi-output classification are variants of the classification problem where multiple labels may be assigned to each instance. Multi-label classification is a generalization of multiclass classification, which is the single-label problem of categorizing instances into precisely one of more than two classes; in the multi-label problem there is no constraint on how many of the classes the instance can be assigned to. Formally, multi-label classification is the problem of finding a model that maps inputs x to binary vectors y (assigning a value of 0 or 1 for each element (label) in y).
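
To make the binary-vector formulation concrete, the following minimal Python sketch (the label universe and documents are hypothetical) shows how an array of string labels maps to the binary vector y described above:

>>> labels = ["toxic", "obscene", "insult"]  # hypothetical label universe
>>> def to_binary_vector(doc_labels):
...     # 1 if the label applies to the document, 0 otherwise
...     return [1 if label in doc_labels else 0 for label in labels]
>>> to_binary_vector(["obscene", "insult"])
[0, 1, 1]
>>> to_binary_vector([])  # any number of labels may apply, including none
[0, 0, 0]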

For instantiated/pretrained models, see MultiClassifierDLModel.

The input to MultiClassifierDL is sentence embeddings, such as the state-of-the-art UniversalSentenceEncoder, BertSentenceEmbeddings, SentenceEmbeddings, or other sentence embeddings.

For extended examples of usage, see the Spark NLP Workshop.

Input Annotation types: SENTENCE_EMBEDDINGS

Output Annotation type: CATEGORY

Parameters:
lr

Learning Rate, by default 0.001

batchSize

Batch size, by default 64

maxEpochs

Maximum number of epochs to train, by default 10

configProtoBytes

ConfigProto from tensorflow, serialized into byte array.

validationSplit

Choose the proportion of the training dataset to be validated against the model on each epoch. The value should be between 0.0 and 1.0, by default 0.0 (validation is off)

enableOutputLogs

Whether to use stdout in addition to Spark logs, by default False

outputLogsPath

Folder path to save training logs

labelColumn

Column with the labels of each document

verbose

Level of verbosity during training

randomSeed

Random seed, by default 44

shufflePerEpoch

Whether to shuffle the training data on each epoch, by default False

threshold

The minimum threshold for each label to be accepted, by default 0.5

See also

ClassifierDLApproach

for single-class classification

SentimentDLApproach

for sentiment analysis

Notes

  • This annotator requires the labels to be provided as an array of strings.

  • UniversalSentenceEncoder, BertSentenceEmbeddings, SentenceEmbeddings or other sentence embeddings can be used for the inputCol.

Examples

>>> import sparknlp
>>> from sparknlp.base import *
>>> from sparknlp.annotator import *
>>> from pyspark.ml import Pipeline

In this example, the training data has the form:

+----------------+--------------------+--------------------+
|              id|                text|              labels|
+----------------+--------------------+--------------------+
|ed58abb40640f983|PN NewsYou mean ... |             [toxic]|
|a1237f726b5f5d89|Dude.  Place the ...|   [obscene, insult]|
|24b0d6c8733c2abe|Thanks  - thanks ...|            [insult]|
|8c4478fb239bcfc0|" Gee, 5 minutes ...|[toxic, obscene, ...|
+----------------+--------------------+--------------------+

Process training data to create text with associated array of labels:

>>> trainDataset.printSchema()
root
|-- id: string (nullable = true)
|-- text: string (nullable = true)
|-- labels: array (nullable = true)
|    |-- element: string (containsNull = true)
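
How such a trainDataset is produced is left open above. As one hedged possibility, a DataFrame with this schema could be built from a CSV file whose labels column holds comma-separated label strings (the file name and delimiter here are assumptions):

>>> from pyspark.sql import functions as F
>>> trainDataset = spark.read \
...     .option("header", True) \
...     .csv("toxic_train.csv") \
...     .withColumn("labels", F.split(F.col("labels"), ","))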

Then create pipeline for training:

>>> documentAssembler = DocumentAssembler() \
...     .setInputCol("text") \
...     .setOutputCol("document") \
...     .setCleanupMode("shrink")
>>> embeddings = UniversalSentenceEncoder.pretrained() \
...     .setInputCols("document") \
...     .setOutputCol("embeddings")
>>> docClassifier = MultiClassifierDLApproach() \
...     .setInputCols("embeddings") \
...     .setOutputCol("category") \
...     .setLabelColumn("labels") \
...     .setBatchSize(128) \
...     .setMaxEpochs(10) \
...     .setLr(1e-3) \
...     .setThreshold(0.5) \
...     .setValidationSplit(0.1)
>>> pipeline = Pipeline().setStages([
...     documentAssembler,
...     embeddings,
...     docClassifier
... ])
>>> pipelineModel = pipeline.fit(trainDataset)
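
After fitting, the trained classifier stage can be saved for later reuse. A minimal sketch, assuming the classifier is the last stage of the fitted pipeline and ./tmp_multi_classifier_model is a writable path:

>>> pipelineModel.stages[-1].write().overwrite().save("./tmp_multi_classifier_model")
>>> trainedClassifier = MultiClassifierDLModel.load("./tmp_multi_classifier_model") \
...     .setInputCols(["embeddings"]) \
...     .setOutputCol("category")
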
lr[source]#
batchSize[source]#
maxEpochs[source]#
configProtoBytes[source]#
validationSplit[source]#
enableOutputLogs[source]#
outputLogsPath[source]#
labelColumn[source]#
verbose[source]#
randomSeed[source]#
shufflePerEpoch[source]#
threshold[source]#
setVerbose(self, v)[source]#

Sets level of verbosity during training.

Parameters:
v : int

Level of verbosity

setRandomSeed(self, seed)[source]#

Sets random seed for shuffling.

Parameters:
seed : int

Random seed for shuffling

setLabelColumn(self, v)[source]#

Sets name of column for data labels.

Parameters:
v : str

Column for data labels

setConfigProtoBytes(self, v)[source]#

Sets configProto from tensorflow, serialized into byte array.

Parameters:
v : List[str]

ConfigProto from tensorflow, serialized into byte array

setLr(self, v)[source]#

Sets Learning Rate, by default 0.001.

Parameters:
v : float

Learning Rate

setBatchSize(self, v)[source]#

Sets batch size, by default 64.

Parameters:
v : int

Batch size

setMaxEpochs(self, v)[source]#

Sets maximum number of epochs to train, by default 10.

Parameters:
v : int

Maximum number of epochs to train

setValidationSplit(self, v)[source]#

Sets the proportion of the training dataset to be validated against the model on each epoch, by default 0.0 (off). The value should be between 0.0 and 1.0.

Parameters:
v : float

Proportion of training dataset to be validated

setEnableOutputLogs(self, v)[source]#

Sets whether to use stdout in addition to Spark logs, by default False.

Parameters:
v : bool

Whether to use stdout in addition to Spark logs

setOutputLogsPath(self, v)[source]#

Sets folder path to save training logs.

Parameters:
v : str

Folder path to save training logs

setShufflePerEpoch(self, v)[source]#

Sets whether to shuffle the training data on each epoch, by default False.

Parameters:
v : bool

Whether to shuffle the training data on each epoch

setThreshold(self, v)[source]#

Sets minimum threshold for each label to be accepted, by default 0.5.

Parameters:
v : float

The minimum threshold for each label to be accepted, by default 0.5

class MultiClassifierDLModel(classname='com.johnsnowlabs.nlp.annotators.classifier.dl.MultiClassifierDLModel', java_model=None)[source]#

MultiClassifierDL for Multi-label Text Classification.

MultiClassifierDL uses a Bidirectional GRU with a convolutional model built inside TensorFlow and supports up to 100 classes.

In machine learning, multi-label classification and the strongly related problem of multi-output classification are variants of the classification problem where multiple labels may be assigned to each instance. Multi-label classification is a generalization of multiclass classification, which is the single-label problem of categorizing instances into precisely one of more than two classes; in the multi-label problem there is no constraint on how many of the classes the instance can be assigned to. Formally, multi-label classification is the problem of finding a model that maps inputs x to binary vectors y (assigning a value of 0 or 1 for each element (label) in y).

The input to MultiClassifierDL is sentence embeddings, such as the state-of-the-art UniversalSentenceEncoder, BertSentenceEmbeddings, SentenceEmbeddings, or other sentence embeddings.

This is the instantiated model of the MultiClassifierDLApproach. For training your own model, please see the documentation of that class.

Pretrained models can be loaded with pretrained() of the companion object:

>>> multiClassifier = MultiClassifierDLModel.pretrained() \
...     .setInputCols(["sentence_embeddings"]) \
...     .setOutputCol("categories")

The default model is "multiclassifierdl_use_toxic", if no name is provided. It uses embeddings from the UniversalSentenceEncoder and classifies toxic comments.

The data is based on the Jigsaw Toxic Comment Classification Challenge. For available pretrained models please see the Models Hub.

For extended examples of usage, see the Spark NLP Workshop.

Input Annotation types: SENTENCE_EMBEDDINGS

Output Annotation type: CATEGORY

Parameters:
configProtoBytes

ConfigProto from tensorflow, serialized into byte array.

threshold

The minimum threshold for each label to be accepted, by default 0.5

classes

The labels used to train this MultiClassifierDLModel

See also

ClassifierDLModel

for single-class classification

SentimentDLModel

for sentiment analysis

Examples

>>> import sparknlp
>>> from sparknlp.base import *
>>> from sparknlp.annotator import *
>>> from pyspark.ml import Pipeline
>>> documentAssembler = DocumentAssembler() \
...     .setInputCol("text") \
...     .setOutputCol("document")
>>> useEmbeddings = UniversalSentenceEncoder.pretrained() \
...     .setInputCols("document") \
...     .setOutputCol("sentence_embeddings")
>>> multiClassifierDl = MultiClassifierDLModel.pretrained() \
...     .setInputCols("sentence_embeddings") \
...     .setOutputCol("classifications")
>>> pipeline = Pipeline() \
...     .setStages([
...         documentAssembler,
...         useEmbeddings,
...         multiClassifierDl
...     ])
>>> data = spark.createDataFrame([
...     ["This is pretty good stuff!"],
...     ["Wtf kind of crap is this"]
... ]).toDF("text")
>>> result = pipeline.fit(data).transform(data)
>>> result.select("text", "classifications.result").show(truncate=False)
+--------------------------+----------------+
|text                      |result          |
+--------------------------+----------------+
|This is pretty good stuff!|[]              |
|Wtf kind of crap is this  |[toxic, obscene]|
+--------------------------+----------------+
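
To inspect which labels the model was trained on and the per-label scores behind each result, the classes parameter and the annotation metadata can be queried. A hedged sketch; the getClasses() accessor and the metadata layout are assumed to be available in your Spark NLP version, and the printed label order is illustrative:

>>> multiClassifierDl.getClasses()  # assumed accessor for the classes parameter below
['toxic', 'severe_toxic', 'obscene', 'threat', 'insult', 'identity_hate']
>>> result.selectExpr("explode(classifications) AS c") \
...     .selectExpr("c.result", "c.metadata") \
...     .show(truncate=False)
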
name = MultiClassifierDLModel[source]#
configProtoBytes[source]#
threshold[source]#
classes[source]#
setThreshold(self, v)[source]#

Sets minimum threshold for each label to be accepted, by default 0.5.

Parameters:
v : float

The minimum threshold for each label to be accepted, by default 0.5

setConfigProtoBytes(self, b)[source]#

Sets configProto from tensorflow, serialized into byte array.

Parameters:
b : List[int]

ConfigProto from tensorflow, serialized into byte array

static pretrained(name='multiclassifierdl_use_toxic', lang='en', remote_loc=None)[source]#

Downloads and loads a pretrained model.

Parameters:
name : str, optional

Name of the pretrained model, by default “multiclassifierdl_use_toxic”

lang : str, optional

Language of the pretrained model, by default “en”

remote_loc : str, optional

Optional remote address of the resource, by default None. Will use Spark NLP's repositories otherwise.

Returns:
MultiClassifierDLModel

The restored model
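
For example, the default toxic-comment model can be requested explicitly by name and language, mirroring the defaults documented above:

>>> multiClassifier = MultiClassifierDLModel.pretrained("multiclassifierdl_use_toxic", "en") \
...     .setInputCols(["sentence_embeddings"]) \
...     .setOutputCol("categories")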