sparknlp_jsl.annotator.deid.replacer
Module Contents#
Classes#
Replacer — Replaces entities in the original text with new ones.
- class Replacer(classname='com.johnsnowlabs.nlp.annotators.deid.Replacer', java_model=None)#
Bases:
sparknlp_jsl.common.AnnotatorModelInternal
Replaces entities in the original text with new ones.
This class allows you to replace entities in the original text with the ones obtained from, for example, DeIdentificationModel or DateNormalizer.
- useReplacement#
Enables or disables the replacement of entities. Default: True.
- Type:
bool
Examples:
>>> documentAssembler = DocumentAssembler() \
...     .setInputCol("text") \
...     .setOutputCol("sentence")
>>> tokenizer = Tokenizer() \
...     .setInputCols("sentence") \
...     .setOutputCol("token")
>>> word_embeddings = WordEmbeddingsModel.pretrained("embeddings_clinical", "en", "clinical/models") \
...     .setInputCols(["sentence", "token"]) \
...     .setOutputCol("embeddings")
>>> clinical_ner = MedicalNerModel.pretrained("ner_deid_generic_augmented", "en", "clinical/models") \
...     .setInputCols(["sentence", "token", "embeddings"]) \
...     .setOutputCol("ner")
>>> ner_converter_name = NerConverterInternal() \
...     .setInputCols(["sentence", "token", "ner"]) \
...     .setOutputCol("ner_chunk")
>>> nameChunkObfuscator = NameChunkObfuscatorApproach() \
...     .setInputCols("ner_chunk") \
...     .setOutputCol("replacement") \
...     .setRefFileFormat("csv") \
...     .setObfuscateRefFile("names_test.txt") \
...     .setRefSep("#")
>>> replacer_name = Replacer() \
...     .setInputCols("replacement", "sentence") \
...     .setOutputCol("obfuscated_document_name") \
...     .setUseReplacement(True)
>>> nlpPipeline = Pipeline(stages=[
...     documentAssembler,
...     tokenizer,
...     word_embeddings,
...     clinical_ner,
...     ner_converter_name,
...     nameChunkObfuscator,
...     replacer_name,
... ])
>>> empty_data = spark.createDataFrame([[""]]).toDF("text")
>>> model_chunk_obfuscator = nlpPipeline.fit(empty_data)
>>> sample_text = '''John Davies is a 62 y.o. patient admitted. Mr. Davies was seen by attending physician Dr. Lorand and was scheduled for emergency assessment.'''
>>> lmodel = LightPipeline(model_chunk_obfuscator)
>>> res = lmodel.fullAnnotate(sample_text)
>>> print("Original text   : ", res[0]['sentence'][0].result)
>>> print("Obfuscated text : ", res[0]['obfuscated_document_name'][0].result)
Original text   :  John Davies is a 62 y.o. patient admitted. Mr. Davies was seen by attending physician Dr. Lorand and was scheduled for emergency assessment.
Obfuscated text :  Fitzpatrick is a <AGE> y.o. patient admitted. Mr. Bowman was seen by attending physician Dr. Acosta and was scheduled for emergency assessment.
- getter_attrs = []#
- inputAnnotatorTypes#
- inputCols#
- lazyAnnotator#
- name = Replacer#
- optionalInputAnnotatorTypes = []#
- outputAnnotatorType#
- outputCol#
- skipLPInputColsValidation = True#
- useReplacement#
- clear(param)#
Clears a param from the param map if it has been explicitly set.
- copy(extra=None)#
Creates a copy of this instance with the same uid and some extra params. This implementation first calls Params.copy and then makes a copy of the companion Java pipeline component with extra params, so both the Python wrapper and the Java pipeline component get copied.
- Parameters:
extra (dict, optional) – Extra parameters to copy to the new instance
- Returns:
Copy of this instance
- Return type:
JavaParams
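As an illustration, a minimal sketch of copying a configured instance while overriding a param in the copy; the column names and the overridden value here are hypothetical:
>>> replacer = Replacer().setInputCols("replacement", "sentence").setOutputCol("obfuscated")
>>> replacer_copy = replacer.copy({replacer.useReplacement: False})
>>> replacer_copy.getUseReplacement()
False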
- explainParam(param)#
Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.
- explainParams()#
Returns the documentation of all params with their optionally default values and user-supplied values.
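For example, a simple sketch of inspecting parameter documentation on a fresh instance:
>>> replacer = Replacer()
>>> print(replacer.explainParam("useReplacement"))
>>> print(replacer.explainParams())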
- extractParamMap(extra=None)#
Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.
- Parameters:
extra (dict, optional) – extra param values
- Returns:
merged param map
- Return type:
dict
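A short sketch of the merge ordering, assuming a configured replacer instance; the extra value overrides the user-supplied or default one:
>>> replacer = Replacer().setUseReplacement(True)
>>> param_map = replacer.extractParamMap({replacer.useReplacement: False})
>>> param_map[replacer.useReplacement]
False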
- getInputCols()#
Gets current column names of input annotations.
- getLazyAnnotator()#
Gets whether Annotator should be evaluated lazily in a RecursivePipeline.
- getOrDefault(param)#
Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set.
- getOutputCol()#
Gets output column name of annotations.
- getParam(paramName)#
Gets a param by its name.
- getParamValue(paramName)#
Gets the value of a parameter.
- Parameters:
paramName (str) – Name of the parameter
- getUseReplacement()#
Gets the value of useReplacement or its default value.
- hasDefault(param)#
Checks whether a param has a default value.
- hasParam(paramName)#
Tests whether this instance contains a param with a given (string) name.
- inputColsValidation(value)#
- isDefined(param)#
Checks whether a param is explicitly set by user or has a default value.
- isSet(param)#
Checks whether a param is explicitly set by user.
- classmethod load(path)#
Reads an ML instance from the input path, a shortcut of read().load(path).
- classmethod read()#
Returns an MLReader instance for this class.
- save(path)#
Save this ML instance to the given path, a shortcut of write().save(path).
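Persistence follows the usual Spark ML pattern; a minimal sketch using a hypothetical local path:
>>> replacer.write().overwrite().save("/tmp/replacer_model")
>>> loaded_replacer = Replacer.load("/tmp/replacer_model")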
- set(param, value)#
Sets a parameter in the embedded param map.
- setForceInputTypeValidation(etfm)#
- setInputCols(*value)#
Sets column names of input annotations.
- Parameters:
*value (List[str]) – Input columns for the annotator
- setLazyAnnotator(value)#
Sets whether Annotator should be evaluated lazily in a RecursivePipeline.
- Parameters:
value (bool) – Whether Annotator should be evaluated lazily in a RecursivePipeline
- setOutputCol(value)#
Sets output column name of annotations.
- Parameters:
value (str) – Name of output column
- setParamValue(paramName)#
Sets the value of a parameter.
- Parameters:
paramName (str) – Name of the parameter
- setParams()#
- setUseReplacement(value: bool)#
Enables or disables the replacement of entities.
- Parameters:
value (bool) – True to replace entities, False otherwise.
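A brief sketch pairing the setter with its getter:
>>> replacer = Replacer().setUseReplacement(False)
>>> replacer.getUseReplacement()
False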
- transform(dataset, params=None)#
Transforms the input dataset with optional parameters.
New in version 1.3.0.
- Parameters:
dataset (pyspark.sql.DataFrame) – input dataset
params (dict, optional) – an optional param map that overrides embedded params
- Returns:
transformed dataset
- Return type:
pyspark.sql.DataFrame
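A hedged sketch, reusing the fitted model_chunk_obfuscator and sample_text from the Examples section above, of applying the pipeline model to a DataFrame instead of a LightPipeline:
>>> df = spark.createDataFrame([[sample_text]]).toDF("text")
>>> out = model_chunk_obfuscator.transform(df)
>>> out.select("obfuscated_document_name.result").show(truncate=False)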
- write()#
Returns an MLWriter instance for this ML instance.