sparknlp.annotator.RoBertaForTokenClassification

class sparknlp.annotator.RoBertaForTokenClassification(classname='com.johnsnowlabs.nlp.annotators.classifier.dl.RoBertaForTokenClassification', java_model=None)[source]

Bases: sparknlp.common.AnnotatorModel, sparknlp.common.HasCaseSensitiveProperties, sparknlp.common.HasBatchedAnnotate

RoBertaForTokenClassification can load RoBERTa models with a token classification head on top (a linear layer on top of the hidden-states output), e.g. for Named Entity Recognition (NER) tasks.

Pretrained models can be loaded with pretrained() of the companion object:

>>> token_classifier = RoBertaForTokenClassification.pretrained() \
...     .setInputCols(["token", "document"]) \
...     .setOutputCol("label")

The default model is "roberta_base_token_classifier_conll03", if no name is provided.

For available pretrained models please see the Models Hub.

Models from the HuggingFace 🤗 Transformers library are also compatible with Spark NLP 🚀. To see which models are compatible and how to import them see Import Transformers into Spark NLP 🚀.
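
As a hedged sketch of that import path (the folder path is hypothetical, and the preceding HuggingFace export step is assumed to have produced a layout Spark NLP understands), an exported model can be loaded with loadSavedModel() and then configured and saved like any other annotator:

>>> # Hypothetical path to a model exported from HuggingFace following the
>>> # import guide; the folder layout must match what Spark NLP expects.
>>> imported = RoBertaForTokenClassification.loadSavedModel(
...     "/tmp/exported_roberta_token_classifier", spark) \
...     .setInputCols(["token", "document"]) \
...     .setOutputCol("label")
>>> # Persist the imported model for later reuse with load().
>>> imported.write().overwrite().save("/tmp/roberta_token_classifier_spark_nlp")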

Input Annotation types: DOCUMENT, TOKEN

Output Annotation type: NAMED_ENTITY

Parameters
batchSize

Batch size. Larger values allow faster processing but require more memory, by default 8

caseSensitive

Whether to ignore case in tokens for embeddings matching, by default True

configProtoBytes

ConfigProto from tensorflow, serialized into byte array.

maxSentenceLength

Max sentence length to process, by default 128
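
These parameters can be set on the loaded model with the corresponding setters documented below; a minimal sketch (the chosen values are illustrative, not recommendations):

>>> token_classifier = RoBertaForTokenClassification.pretrained() \
...     .setInputCols(["token", "document"]) \
...     .setOutputCol("label") \
...     .setBatchSize(8) \
...     .setCaseSensitive(True) \
...     .setMaxSentenceLength(128)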

Examples

>>> import sparknlp
>>> from sparknlp.base import *
>>> from sparknlp.annotator import *
>>> from pyspark.ml import Pipeline
>>> documentAssembler = DocumentAssembler() \
...     .setInputCol("text") \
...     .setOutputCol("document")
>>> tokenizer = Tokenizer() \
...     .setInputCols(["document"]) \
...     .setOutputCol("token")
>>> tokenClassifier = RoBertaForTokenClassification.pretrained() \
...     .setInputCols(["token", "document"]) \
...     .setOutputCol("label") \
...     .setCaseSensitive(True)
>>> pipeline = Pipeline().setStages([
...     documentAssembler,
...     tokenizer,
...     tokenClassifier
... ])
>>> data = spark.createDataFrame([["John Lenon was born in London and lived in Paris. My name is Sarah and I live in London"]]).toDF("text")
>>> result = pipeline.fit(data).transform(data)
>>> result.select("label.result").show(truncate=False)
+------------------------------------------------------------------------------------+
|result                                                                              |
+------------------------------------------------------------------------------------+
|[B-PER, I-PER, O, O, O, B-LOC, O, O, O, B-LOC, O, O, O, O, B-PER, O, O, O, O, B-LOC]|
+------------------------------------------------------------------------------------+
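
Each entry in the output column is a full annotation struct, so character offsets can be inspected alongside the predicted tags. A hedged sketch using standard PySpark functions:

>>> from pyspark.sql.functions import explode
>>> # Flatten the array of annotations into one row per token,
>>> # then pull out the character offsets and the predicted tag.
>>> result.select(explode("label").alias("entity")) \
...     .select("entity.begin", "entity.end", "entity.result") \
...     .show(5)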

Methods

__init__([classname, java_model])

Initialize this instance with a Java model object.

clear(param)

Clears a param from the param map if it has been explicitly set.

copy([extra])

Creates a copy of this instance with the same uid and some extra params.

explainParam(param)

Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.

explainParams()

Returns the documentation of all params with their optionally default values and user-supplied values.

extractParamMap([extra])

Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.

getBatchSize()

Gets current batch size.

getCaseSensitive()

Gets whether to ignore case in tokens for embeddings matching.

getInputCols()

Gets current column names of input annotations.

getLazyAnnotator()

Gets whether Annotator should be evaluated lazily in a RecursivePipeline.

getOrDefault(param)

Gets the value of a param in the user-supplied param map or its default value.

getOutputCol()

Gets output column name of annotations.

getParam(paramName)

Gets a param by its name.

getParamValue(paramName)

Gets the value of a parameter.

hasDefault(param)

Checks whether a param has a default value.

hasParam(paramName)

Tests whether this instance contains a param with a given (string) name.

isDefined(param)

Checks whether a param is explicitly set by user or has a default value.

isSet(param)

Checks whether a param is explicitly set by user.

load(path)

Reads an ML instance from the input path, a shortcut of read().load(path).

loadSavedModel(folder, spark_session)

Loads a locally saved model.

pretrained([name, lang, remote_loc])

Downloads and loads a pretrained model.

read()

Returns an MLReader instance for this class.

save(path)

Save this ML instance to the given path, a shortcut of 'write().save(path)'.

set(param, value)

Sets a parameter in the embedded param map.

setBatchSize(v)

Sets batch size.

setCaseSensitive(value)

Sets whether to ignore case in tokens for embeddings matching.

setConfigProtoBytes(b)

Sets configProto from tensorflow, serialized into byte array.

setInputCols(*value)

Sets column names of input annotations.

setLazyAnnotator(value)

Sets whether Annotator should be evaluated lazily in a RecursivePipeline.

setMaxSentenceLength(value)

Sets max sentence length to process, by default 128.

setOutputCol(value)

Sets output column name of annotations.

setParamValue(paramName)

Sets the value of a parameter.

setParams()

transform(dataset[, params])

Transforms the input dataset with optional parameters.

write()

Returns an MLWriter instance for this ML instance.

Attributes

batchSize

caseSensitive

configProtoBytes

getter_attrs

inputCols

lazyAnnotator

maxSentenceLength

name

outputCol

params

Returns all params ordered by name.

clear(param)

Clears a param from the param map if it has been explicitly set.

copy(extra=None)

Creates a copy of this instance with the same uid and some extra params. This implementation first calls Params.copy and then makes a copy of the companion Java pipeline component with extra params. So both the Python wrapper and the Java pipeline component get copied.

Parameters

extra – Extra parameters to copy to the new instance

Returns

Copy of this instance

explainParam(param)

Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.

explainParams()

Returns the documentation of all params with their optionally default values and user-supplied values.

extractParamMap(extra=None)

Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.

Parameters

extra – extra param values

Returns

merged param map

getBatchSize()

Gets current batch size.

Returns
int

Current batch size

getCaseSensitive()

Gets whether to ignore case in tokens for embeddings matching.

Returns
bool

Whether to ignore case in tokens for embeddings matching

getInputCols()

Gets current column names of input annotations.

getLazyAnnotator()

Gets whether Annotator should be evaluated lazily in a RecursivePipeline.

getOrDefault(param)

Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set.

getOutputCol()

Gets output column name of annotations.

getParam(paramName)

Gets a param by its name.

getParamValue(paramName)

Gets the value of a parameter.

Parameters
paramName : str

Name of the parameter

hasDefault(param)

Checks whether a param has a default value.

hasParam(paramName)

Tests whether this instance contains a param with a given (string) name.

isDefined(param)

Checks whether a param is explicitly set by user or has a default value.

isSet(param)

Checks whether a param is explicitly set by user.

classmethod load(path)

Reads an ML instance from the input path, a shortcut of read().load(path).
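
A minimal save-and-restore sketch (the path is hypothetical):

>>> tokenClassifier.save("/tmp/roberta_token_classifier_model")
>>> restored = RoBertaForTokenClassification.load("/tmp/roberta_token_classifier_model")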

static loadSavedModel(folder, spark_session)[source]

Loads a locally saved model.

Parameters
folder : str

Folder of the saved model

spark_session : pyspark.sql.SparkSession

The current SparkSession

Returns
RoBertaForTokenClassification

The restored model

property params

Returns all params ordered by name. The default implementation uses dir() to get all attributes of type Param.

static pretrained(name='roberta_base_token_classifier_conll03', lang='en', remote_loc=None)[source]

Downloads and loads a pretrained model.

Parameters
name : str, optional

Name of the pretrained model, by default “roberta_base_token_classifier_conll03”

lang : str, optional

Language of the pretrained model, by default “en”

remote_loc : str, optional

Optional remote address of the resource, by default None. Will use Spark NLP's repositories otherwise.

Returns
RoBertaForTokenClassification

The restored model
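
The defaults can also be spelled out explicitly; for example, loading the default English model by name:

>>> token_classifier = RoBertaForTokenClassification.pretrained(
...     "roberta_base_token_classifier_conll03", lang="en") \
...     .setInputCols(["token", "document"]) \
...     .setOutputCol("label")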

classmethod read()

Returns an MLReader instance for this class.

save(path)

Save this ML instance to the given path, a shortcut of ‘write().save(path)’.

set(param, value)

Sets a parameter in the embedded param map.

setBatchSize(v)

Sets batch size.

Parameters
v : int

Batch size

setCaseSensitive(value)

Sets whether to ignore case in tokens for embeddings matching.

Parameters
value : bool

Whether to ignore case in tokens for embeddings matching

setConfigProtoBytes(b)[source]

Sets configProto from tensorflow, serialized into byte array.

Parameters
b : List[str]

ConfigProto from tensorflow, serialized into byte array

setInputCols(*value)

Sets column names of input annotations.

Parameters
*value : str

Input columns for the annotator

setLazyAnnotator(value)

Sets whether Annotator should be evaluated lazily in a RecursivePipeline.

Parameters
value : bool

Whether Annotator should be evaluated lazily in a RecursivePipeline

setMaxSentenceLength(value)[source]

Sets max sentence length to process, by default 128.

Parameters
value : int

Max sentence length to process

setOutputCol(value)

Sets output column name of annotations.

Parameters
value : str

Name of output column

setParamValue(paramName)

Sets the value of a parameter.

Parameters
paramName : str

Name of the parameter

transform(dataset, params=None)

Transforms the input dataset with optional parameters.

Parameters
  • dataset – input dataset, which is an instance of pyspark.sql.DataFrame

  • params – an optional param map that overrides embedded params.

Returns

transformed dataset

New in version 1.3.0.

uid

A unique id for the object.

write()

Returns an MLWriter instance for this ML instance.