sparknlp_jsl.annotator.classification.medical_bert_for_token_classifier#

Module Contents#

Classes#

MedicalBertForTokenClassifier

MedicalBertForTokenClassifier can load Bert Models with a token classification head on top (a linear layer on top of the hidden-states output), e.g. for Named-Entity-Recognition (NER) tasks.

class MedicalBertForTokenClassifier(classname='com.johnsnowlabs.nlp.annotators.classification.MedicalBertForTokenClassifier', java_model=None)#

Bases: sparknlp_jsl.common.AnnotatorModelInternal, sparknlp_jsl.common.HasCaseSensitiveProperties, sparknlp_jsl.common.HasBatchedAnnotate, sparknlp_jsl.common.HasEngine

MedicalBertForTokenClassifier can load Bert Models with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks.

Pretrained models can be loaded with pretrained() of the companion object:

>>> embeddings = MedicalBertForTokenClassifier.pretrained() \
...     .setInputCols(["token", "document"]) \
...     .setOutputCol("label")

If no name is provided, the default model "bert_token_classifier_ner_bionlp" is used.

For available pretrained models please see the Models Hub.

Models from the HuggingFace 🤗 Transformers library are also compatible with Spark NLP 🚀. To see which models are compatible and how to import them see Import Transformers into Spark NLP 🚀.

Input Annotation types: DOCUMENT, TOKEN

Output Annotation type: NAMED_ENTITY

Parameters:
  • configProtoBytes – ConfigProto from tensorflow, serialized into byte array.

  • maxSentenceLength – Max sentence length to process, by default 128.

Examples

>>> import sparknlp
>>> from sparknlp.base import *
>>> from sparknlp.annotator import *
>>> from pyspark.ml import Pipeline
>>> documentAssembler = DocumentAssembler() \
...     .setInputCol("text") \
...     .setOutputCol("document")
>>> tokenizer = Tokenizer() \
...     .setInputCols(["document"]) \
...     .setOutputCol("token")
>>> tokenClassifier = MedicalBertForTokenClassifier.pretrained() \
...     .setInputCols(["token", "document"]) \
...     .setOutputCol("label") \
...     .setCaseSensitive(True)
>>> pipeline = Pipeline().setStages([
...     documentAssembler,
...     tokenizer,
...     tokenClassifier
... ])
>>> data = spark.createDataFrame([["Both the erbA IRES and the erbA/myb virus constructs transformed erythroid cells after infection of bone marrow or blastoderm cultures."]]).toDF("text")
>>> result = pipeline.fit(data).transform(data)
>>> result.select("label.result").show(truncate=False)
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|result                                                                                                                                                               |
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|[O, O, B-Organism, I-Organism, O, O, B-Organism, I-Organism, O, O, B-Cell, I-Cell, O, O, O, B-Multi-tissue_structure, I-Multi-tissue_structure, O, B-Cell, I-Cell, O]|
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------+
batchSize#
caseSensitive#
configProtoBytes#
engine#
getter_attrs = []#
inputAnnotatorTypes#
inputCols#
lazyAnnotator#
maxSentenceLength#
name = MedicalBertForTokenClassifier#
optionalInputAnnotatorTypes = []#
outputAnnotatorType#
outputCol#
skipLPInputColsValidation = True#
clear(param)#

Clears a param from the param map if it has been explicitly set.

copy(extra=None)#

Creates a copy of this instance with the same uid and some extra params. This implementation first calls Params.copy and then makes a copy of the companion Java pipeline component with extra params, so both the Python wrapper and the Java pipeline component get copied.

Parameters:

extra (dict, optional) – Extra parameters to copy to the new instance

Returns:

Copy of this instance

Return type:

JavaParams

explainParam(param)#

Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.

explainParams()#

Returns the documentation of all params with their optional default values and user-supplied values.

extractParamMap(extra=None)#

Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.

Parameters:

extra (dict, optional) – extra param values

Returns:

merged param map

Return type:

dict

getBatchSize()#

Gets current batch size.

Returns:

Current batch size

Return type:

int

getCaseSensitive()#

Gets whether to ignore case in tokens for embeddings matching.

Returns:

Whether to ignore case in tokens for embeddings matching

Return type:

bool

getClasses()#

Returns the labels used to train this model.
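
For example, a model's label set can be inspected after loading it (the labels shown are illustrative and depend on the chosen model):

>>> tokenClassifier = MedicalBertForTokenClassifier.pretrained()
>>> tokenClassifier.getClasses()  # e.g. ['O', 'B-Organism', 'I-Organism', 'B-Cell', ...]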

getEngine()#

Returns:

Deep Learning engine used for this model

Return type:

str

getInputCols()#

Gets current column names of input annotations.

getLazyAnnotator()#

Gets whether Annotator should be evaluated lazily in a RecursivePipeline.

getOrDefault(param)#

Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set.

getOutputCol()#

Gets output column name of annotations.

getParam(paramName)#

Gets a param by its name.

getParamValue(paramName)#

Gets the value of a parameter.

Parameters:

paramName (str) – Name of the parameter

hasDefault(param)#

Checks whether a param has a default value.

hasParam(paramName)#

Tests whether this instance contains a param with a given (string) name.

inputColsValidation(value)#
isDefined(param)#

Checks whether a param is explicitly set by user or has a default value.

isSet(param)#

Checks whether a param is explicitly set by user.

classmethod load(path)#

Reads an ML instance from the input path, a shortcut of read().load(path).

static loadSavedModel(folder, spark_session)#

Loads a locally saved model.

Parameters:
  • folder (str) – Folder that contains the saved model

  • spark_session (pyspark.sql.SparkSession) – The current SparkSession

Returns:

The restored model

Return type:

MedicalBertForTokenClassifier
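
A minimal usage sketch, assuming a model previously exported to a local folder (the path below is hypothetical):

>>> tokenClassifier = MedicalBertForTokenClassifier.loadSavedModel("/models/exported_bert_ner", spark) \
...     .setInputCols(["token", "document"]) \
...     .setOutputCol("label")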

static loadSavedModelOpenSource(bertForTokenClassifierPath, tfModelPath, spark_session)#

Loads a locally saved model.

Parameters:
  • bertForTokenClassifierPath (str) – Folder of the bertForTokenClassifier

  • tfModelPath (str) – Folder that contains the TF model

  • spark_session (pyspark.sql.SparkSession) – The current SparkSession

Returns:

The restored model

Return type:

MedicalBertForTokenClassifier
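
A sketch along the same lines, with both folder paths hypothetical:

>>> tokenClassifier = MedicalBertForTokenClassifier.loadSavedModelOpenSource(
...     "/models/bert_for_token_classifier",
...     "/models/tf_model",
...     spark)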

static pretrained(name='bert_token_classifier_ner_bionlp', lang='en', remote_loc='clinical/models')#

Download a pre-trained MedicalBertForTokenClassifier.

Parameters:
  • name (str) – Name of the pre-trained model.

  • lang (str) – Language of the pre-trained model.

  • remote_loc (str) – Remote location of the pre-trained model. If None, use the open-source location. Other values are “clinical/models”, “finance/models”, or “legal/models”.

Returns:

A pre-trained MedicalBertForTokenClassifier.

Return type:

MedicalBertForTokenClassifier
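
For example, downloading the default model explicitly by name (equivalent to calling pretrained() with no arguments):

>>> tokenClassifier = MedicalBertForTokenClassifier.pretrained(
...     "bert_token_classifier_ner_bionlp", "en", "clinical/models")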

classmethod read()#

Returns an MLReader instance for this class.

save(path)#

Save this ML instance to the given path, a shortcut of write().save(path).
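
A minimal save/load round trip, using an illustrative local path:

>>> tokenClassifier.save("/tmp/medical_bert_token_classifier_model")
>>> restored = MedicalBertForTokenClassifier.load("/tmp/medical_bert_token_classifier_model")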

set(param, value)#

Sets a parameter in the embedded param map.

setBatchSize(v)#

Sets batch size.

Parameters:

v (int) – Batch size

setCaseSensitive(value)#

Sets whether to ignore case in tokens for embeddings matching.

Parameters:

value (bool) – Whether to ignore case in tokens for embeddings matching

setConfigProtoBytes(b)#

Sets configProto from tensorflow, serialized into byte array.

Parameters:

b (List[str]) – ConfigProto from tensorflow, serialized into byte array

setForceInputTypeValidation(etfm)#
setInputCols(*value)#

Sets column names of input annotations.

Parameters:

*value (List[str]) – Input columns for the annotator

setLazyAnnotator(value)#

Sets whether Annotator should be evaluated lazily in a RecursivePipeline.

Parameters:

value (bool) – Whether Annotator should be evaluated lazily in a RecursivePipeline

setMaxSentenceLength(value)#

Sets max sentence length to process, by default 128.

Parameters:

value (int) – Max sentence length to process

setOutputCol(value)#

Sets output column name of annotations.

Parameters:

value (str) – Name of output column

setParamValue(paramName)#

Sets the value of a parameter.

Parameters:

paramName (str) – Name of the parameter

setParams()#
transform(dataset, params=None)#

Transforms the input dataset with optional parameters.

New in version 1.3.0.

Parameters:
  • dataset (pyspark.sql.DataFrame) – input dataset

  • params (dict, optional) – an optional param map that overrides embedded params.

Returns:

transformed dataset

Return type:

pyspark.sql.DataFrame
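
For example, transform() can be called on the classifier directly, given a DataFrame that already carries the document and token annotation columns (documentAssembler, tokenizer, and data as defined in the Examples section above):

>>> preprocessed = Pipeline().setStages([documentAssembler, tokenizer]).fit(data).transform(data)
>>> result = tokenClassifier.transform(preprocessed)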

write()#

Returns an MLWriter instance for this ML instance.