sparknlp_jsl.annotator.MedicalNerModel#
- class sparknlp_jsl.annotator.MedicalNerModel(classname='com.johnsnowlabs.nlp.annotators.ner.MedicalNerModel', java_model=None)[source]#
Bases: AnnotatorModel, HasStorageRef, HasBatchedAnnotate
This Named Entity Recognition annotator is a generic NER model based on neural networks. The neural network architecture is Char CNNs - BiLSTM - CRF, which achieves state-of-the-art results on most datasets.
This is the instantiated model of the NerDLApproach. For training your own model, please see the documentation of that class. Pretrained models can be loaded with pretrained() of the companion object:
>>> nerModel = MedicalNerModel.pretrained() \
...     .setInputCols(["sentence", "token", "embeddings"]) \
...     .setOutputCol("ner")
The default model is "ner_dl", if no name is provided. For available pretrained models please see the Models Hub. Additionally, pretrained pipelines are available for this module; see Pipelines.
Note that some pretrained models require specific types of embeddings, depending on which they were trained on. For example, the default model "ner_dl" requires the WordEmbeddings model "glove_100d". For extended examples of usage, see the Spark NLP Workshop.
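To make the embeddings requirement concrete, here is a minimal sketch (assuming the open-source "glove_100d" WordEmbeddingsModel is downloadable in your environment) that pairs the model with the embeddings it was trained on:
>>> from sparknlp.annotator import WordEmbeddingsModel
>>> # Load the exact embeddings the pretrained NER model expects; a mismatched
>>> # embeddings model leads to a storageRef error at runtime.
>>> embeddings = WordEmbeddingsModel.pretrained("glove_100d") \
...     .setInputCols(["sentence", "token"]) \
...     .setOutputCol("embeddings")
>>> nerModel = MedicalNerModel.pretrained() \
...     .setInputCols(["sentence", "token", "embeddings"]) \
...     .setOutputCol("ner")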
Input Annotation types: DOCUMENT, TOKEN, WORD_EMBEDDINGS
Output Annotation type: NAMED_ENTITY
- Parameters:
- batchSize
Size of every batch, by default 8
- configProtoBytes
ConfigProto from tensorflow, serialized into byte array.
- includeConfidence
Whether to include confidence scores in annotation metadata, by default False
- includeAllConfidenceScores
Whether to include all confidence scores in annotation metadata or just the score of the predicted tag, by default False
- inferenceBatchSize
Number of sentences to process in a single batch during inference
- classes
Tags used to train this NerDLModel
- labelCasing
Sets all labels of the NER model to upper or lower case. Accepted values: upper or lower
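As a minimal sketch of how these parameters are set in practice (assuming a pretrained model is available), the setters can be chained on the loaded model; the values below are illustrative only:
>>> nerTagger = MedicalNerModel.pretrained() \
...     .setInputCols(["sentence", "token", "embeddings"]) \
...     .setOutputCol("ner") \
...     .setIncludeConfidence(True) \
...     .setLabelCasing("upper") \
...     .setBatchSize(8)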
Examples
>>> import sparknlp
>>> from sparknlp.base import *
>>> from sparknlp.common import *
>>> from sparknlp.annotator import *
>>> from sparknlp.training import *
>>> import sparknlp_jsl
>>> from sparknlp_jsl.base import *
>>> from sparknlp_jsl.annotator import *
>>> from pyspark.ml import Pipeline
>>> documentAssembler = DocumentAssembler() \
...     .setInputCol("text") \
...     .setOutputCol("document")
>>> sentence = SentenceDetector() \
...     .setInputCols(["document"]) \
...     .setOutputCol("sentence")
>>> tokenizer = Tokenizer() \
...     .setInputCols(["sentence"]) \
...     .setOutputCol("token")
>>> embeddings = WordEmbeddingsModel.pretrained() \
...     .setInputCols(["sentence", "token"]) \
...     .setOutputCol("bert")
>>> nerTagger = MedicalNerModel.pretrained() \
...     .setInputCols(["sentence", "token", "bert"]) \
...     .setOutputCol("ner")
>>> pipeline = Pipeline().setStages([
...     documentAssembler,
...     sentence,
...     tokenizer,
...     embeddings,
...     nerTagger
... ])
>>> data = spark.createDataFrame([["U.N. official Ekeus heads for Baghdad."]]).toDF("text")
>>> result = pipeline.fit(data).transform(data)
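A short follow-up sketch (assuming the pipeline above has run) shows one way to inspect the predicted tags; NER annotations carry the matched token under the "word" metadata key:
>>> result.selectExpr("explode(ner) AS entity") \
...     .selectExpr("entity.result", "entity.metadata['word']") \
...     .show(truncate=False)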
Methods
__init__([classname, java_model]): Initialize this instance with a Java model object.
clear(param): Clears a param from the param map if it has been explicitly set.
copy([extra]): Creates a copy of this instance with the same uid and some extra params.
explainParam(param): Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.
explainParams(): Returns the documentation of all params with their optionally default values and user-supplied values.
extractParamMap([extra]): Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.
getBatchSize(): Gets current batch size.
getInputCols(): Gets current column names of input annotations.
getLazyAnnotator(): Gets whether Annotator should be evaluated lazily in a RecursivePipeline.
getOrDefault(param): Gets the value of a param in the user-supplied param map or its default value.
getOutputCol(): Gets output column name of annotations.
getParam(paramName): Gets a param by its name.
getParamValue(paramName): Gets the value of a parameter.
getStorageRef(): Gets unique reference name for identification.
getTrainingClassDistribution()
hasDefault(param): Checks whether a param has a default value.
hasParam(paramName): Tests whether this instance contains a param with a given (string) name.
isDefined(param): Checks whether a param is explicitly set by user or has a default value.
isSet(param): Checks whether a param is explicitly set by user.
load(path): Reads an ML instance from the input path, a shortcut of read().load(path).
loadSavedModel(ner_model_path, folder, ...)
pretrained([name, lang, remote_loc])
read(): Returns an MLReader instance for this class.
save(path): Save this ML instance to the given path, a shortcut of write().save(path).
set(param, value): Sets a parameter in the embedded param map.
setBatchSize(v): Sets batch size.
setConfigProtoBytes(b): Sets configProto from tensorflow, serialized into byte array.
setIncludeConfidence(value): Sets whether to include confidence scores in annotation metadata, by default False.
setInferenceBatchSize(value): Sets the number of sentences to process in a single batch during inference.
setInputCols(*value): Sets column names of input annotations.
setLabelCasing(value): Sets all labels of the NER model to upper or lower case.
setLazyAnnotator(value): Sets whether Annotator should be evaluated lazily in a RecursivePipeline.
setOutputCol(value): Sets output column name of annotations.
setParamValue(paramName): Sets the value of a parameter.
setParams()
setStorageRef(value): Sets unique reference name for identification.
transform(dataset[, params]): Transforms the input dataset with optional parameters.
write(): Returns an MLWriter instance for this ML instance.
Attributes
batchSize
classes
configProtoBytes
getter_attrs
includeAllConfidenceScores
includeConfidence
inferenceBatchSize
inputCols
labelCasing
lazyAnnotator
name
outputCol
params
storageRef
trainingClassDistribution
- clear(param)#
Clears a param from the param map if it has been explicitly set.
- copy(extra=None)#
Creates a copy of this instance with the same uid and some extra params. This implementation first calls Params.copy and then makes a copy of the companion Java pipeline component with extra params, so both the Python wrapper and the Java pipeline component get copied.
- Parameters:
extra – Extra parameters to copy to the new instance
- Returns:
Copy of this instance
- explainParam(param)#
Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.
- explainParams()#
Returns the documentation of all params with their optionally default values and user-supplied values.
- extractParamMap(extra=None)#
Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.
- Parameters:
extra – extra param values
- Returns:
merged param map
- getBatchSize()#
Gets current batch size.
- Returns:
- int
Current batch size
- getInputCols()#
Gets current column names of input annotations.
- getLazyAnnotator()#
Gets whether Annotator should be evaluated lazily in a RecursivePipeline.
- getOrDefault(param)#
Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set.
- getOutputCol()#
Gets output column name of annotations.
- getParam(paramName)#
Gets a param by its name.
- getParamValue(paramName)#
Gets the value of a parameter.
- Parameters:
- paramName : str
Name of the parameter
- getStorageRef()#
Gets unique reference name for identification.
- Returns:
- str
Unique reference name for identification
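Because the model mixes in HasStorageRef, this reference must match the storageRef of the embeddings stage that feeds it. A minimal sketch (assuming nerModel and embeddings were loaded as in the examples above) compares the two:
>>> # Both values must agree, or the NER stage cannot resolve its embeddings.
>>> nerModel.getStorageRef()
>>> embeddings.getStorageRef()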
- hasDefault(param)#
Checks whether a param has a default value.
- hasParam(paramName)#
Tests whether this instance contains a param with a given (string) name.
- isDefined(param)#
Checks whether a param is explicitly set by user or has a default value.
- isSet(param)#
Checks whether a param is explicitly set by user.
- classmethod load(path)#
Reads an ML instance from the input path, a shortcut of read().load(path).
- property params#
Returns all params ordered by name. The default implementation uses dir() to get all attributes of type Param.
- classmethod read()#
Returns an MLReader instance for this class.
- save(path)#
Save this ML instance to the given path, a shortcut of ‘write().save(path)’.
- set(param, value)#
Sets a parameter in the embedded param map.
- setBatchSize(v)#
Sets batch size.
- Parameters:
- v : int
Batch size
- setConfigProtoBytes(b)[source]#
Sets configProto from tensorflow, serialized into byte array.
- Parameters:
- b : List[str]
ConfigProto from tensorflow, serialized into byte array
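A hedged sketch of producing such a byte array (this assumes the TensorFlow Python package is installed; the session option shown is illustrative, not a recommendation):
>>> # Serialize a TensorFlow ConfigProto and pass its bytes to the annotator.
>>> import tensorflow as tf
>>> config = tf.compat.v1.ConfigProto(allow_soft_placement=True)
>>> nerModel.setConfigProtoBytes(list(config.SerializeToString()))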
- setIncludeConfidence(value)[source]#
Sets whether to include confidence scores in annotation metadata, by default False.
- Parameters:
- value : bool
Whether to include the confidence value in the output.
- setInferenceBatchSize(value)[source]#
Sets the number of sentences to process in a single batch during inference.
- Parameters:
- value : int
Number of sentences to process in a single batch during inference
- setInputCols(*value)#
Sets column names of input annotations.
- Parameters:
- *value : str
Input columns for the annotator
- setLabelCasing(value)[source]#
Sets all labels of the NER model to upper or lower case.
- Parameters:
- value : str
Casing to apply to the labels. Accepted values: upper or lower
- setLazyAnnotator(value)#
Sets whether Annotator should be evaluated lazily in a RecursivePipeline.
- Parameters:
- value : bool
Whether Annotator should be evaluated lazily in a RecursivePipeline
- setOutputCol(value)#
Sets output column name of annotations.
- Parameters:
- value : str
Name of output column
- setParamValue(paramName)#
Sets the value of a parameter.
- Parameters:
- paramName : str
Name of the parameter
- setStorageRef(value)#
Sets unique reference name for identification.
- Parameters:
- value : str
Unique reference name for identification
- transform(dataset, params=None)#
Transforms the input dataset with optional parameters.
- Parameters:
dataset – input dataset, which is an instance of pyspark.sql.DataFrame
params – an optional param map that overrides embedded params.
- Returns:
transformed dataset
New in version 1.3.0.
- uid#
A unique id for the object.
- write()#
Returns an MLWriter instance for this ML instance.