sparknlp_jsl.finance.sequence_generation.qa_ner_generator#

Module Contents#

Classes#

FinanceNerQuestionGenerator

Generates questions from NER chunks by pairing chunks of two configurable lists of entity types with a question pronoun and an optional question mark.

class FinanceNerQuestionGenerator(classname='com.johnsnowlabs.finance.sequence_generation.FinanceNerQuestionGenerator', java_model=None)#

Bases: sparknlp_jsl.annotator.NerQuestionGenerator

Generates questions from NER chunks. Chunks whose entity types appear in entities1 are grouped with chunks whose entity types appear in entities2 according to a strategy (Paired or Combined), and each group is turned into a question built from the configured question pronoun and, optionally, a trailing question mark.

Input Annotation types: CHUNK

Output Annotation type: DOCUMENT

Parameters:
  • entities1 – List of entity types whose chunks appear first in the generated question

  • entities2 – List of entity types whose chunks appear second in the generated question

  • questionPronoun – Pronoun used to start the question, e.g. 'When', 'Where', 'Who', 'What' (defaults to "")

  • questionMark – Whether to append a question mark at the end of the question (defaults to False)

  • strategyType – Strategy used to group the chunks, either Paired (default) or Combined

Examples

>>> import sparknlp
>>> from sparknlp.base import *
>>> from sparknlp_jsl.common import *
>>> from sparknlp.annotator import *
>>> from sparknlp.training import *
>>> import sparknlp_jsl
>>> from sparknlp_jsl.base import *
>>> from sparknlp_jsl.annotator import *
>>> from pyspark.ml import Pipeline
>>> documenter = DocumentAssembler() \
...     .setInputCol("text") \
...     .setOutputCol("documents")
>>> sentence_detector = SentenceDetector() \
...     .setInputCols("documents") \
...     .setOutputCol("sentences")
>>> tokenizer = Tokenizer() \
...     .setInputCols("sentences") \
...     .setOutputCol("tokens")
>>> embeddings = WordEmbeddingsModel.pretrained("embeddings_clinical", "en", "clinical/models") \
...     .setInputCols(["sentences", "tokens"]) \
...     .setOutputCol("embeddings")
>>> ner_model = MedicalNerModel.pretrained("ner_posology_large", "en", "clinical/models") \
...     .setInputCols(["sentences", "tokens", "embeddings"]) \
...     .setOutputCol("ner")
>>> ner_converter = NerConverterInternal() \
...     .setInputCols(["sentences", "tokens", "ner"]) \
...     .setOutputCol("ner_chunks")
>>> qa_generator = FinanceNerQuestionGenerator() \
...     .setInputCols(["ner_chunks"]) \
...     .setOutputCol("questions") \
...     .setQuestionPronoun("When was") \
...     .setEntities1(["DRUG"]) \
...     .setEntities2(["STRENGTH"]) \
...     .setStrategyType("Paired") \
...     .setQuestionMark(True)
>>> data = spark.createDataFrame([["The patient was given Warfarina Lusa and amlodipine 10 MG."]]).toDF("text")
>>> pipeline = Pipeline().setStages([
...     documenter,
...     sentence_detector,
...     tokenizer,
...     embeddings,
...     ner_model,
...     ner_converter,
...     qa_generator])
>>> results = pipeline.fit(data).transform(data)
>>> results.select("questions").show(truncate=False)
entities1#
entities2#
getter_attrs = []#
inputAnnotatorTypes#
inputCols#
lazyAnnotator#
name = 'NerQuestionGenerator'#
optionalInputAnnotatorTypes = []#
outputAnnotatorType = 'document'#
outputCol#
questionMark#
questionPronoun#
skipLPInputColsValidation = True#
strategyType#
uid = ''#
clear(param: pyspark.ml.param.Param) None#

Clears a param from the param map if it has been explicitly set.

copy(extra: pyspark.ml._typing.ParamMap | None = None) JP#

Creates a copy of this instance with the same uid and some extra params. This implementation first calls Params.copy and then makes a copy of the companion Java pipeline component with extra params, so both the Python wrapper and the Java pipeline component get copied.

Parameters:

extra (dict, optional) – Extra parameters to copy to the new instance

Returns:

Copy of this instance

Return type:

JavaParams

explainParam(param: str | Param) str#

Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.

explainParams() str#

Returns the documentation of all params with their optionally default values and user-supplied values.

extractParamMap(extra: pyspark.ml._typing.ParamMap | None = None) pyspark.ml._typing.ParamMap#

Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.

Parameters:

extra (dict, optional) – extra param values

Returns:

merged param map

Return type:

dict
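
For illustration, a small sketch of this precedence using params defined on this annotator (the values are arbitrary): questionMark is user-supplied, so it overrides its default, while the extra map overrides the default of questionPronoun.

>>> qa_generator = FinanceNerQuestionGenerator().setQuestionMark(True)
>>> param_map = qa_generator.extractParamMap({qa_generator.questionPronoun: "Who"})
>>> param_map[qa_generator.questionMark]    # user-supplied value wins over the default (False)
True
>>> param_map[qa_generator.questionPronoun]    # extra value wins over the default ("")
'Who'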

getInputCols()#

Gets current column names of input annotations.

getLazyAnnotator()#

Gets whether Annotator should be evaluated lazily in a RecursivePipeline.

getOrDefault(param: str) Any#
getOrDefault(param: Param[T]) T

Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set.

getOutputCol()#

Gets output column name of annotations.

getParam(paramName: str) Param#

Gets a param by its name.

getParamValue(paramName)#

Gets the value of a parameter.

Parameters:

paramName (str) – Name of the parameter

hasDefault(param: str | Param[Any]) bool#

Checks whether a param has a default value.

hasParam(paramName: str) bool#

Tests whether this instance contains a param with a given (string) name.

inputColsValidation(value)#
isDefined(param: str | Param[Any]) bool#

Checks whether a param is explicitly set by user or has a default value.

isSet(param: str | Param[Any]) bool#

Checks whether a param is explicitly set by user.

classmethod load(path: str) RL#

Reads an ML instance from the input path, a shortcut of read().load(path).

classmethod read()#

Returns an MLReader instance for this class.

save(path: str) None#

Save this ML instance to the given path, a shortcut of ‘write().save(path)’.

set(param: Param, value: Any) None#

Sets a parameter in the embedded param map.

setEntities1(entities: list)#

Sets the list of entity types that appear first in the question.

Parameters:

entities (list) – List of entity types.

setEntities2(entities)#

Sets the list of entity types that appear second in the question.

Parameters:

entities (list) – List of entity types.
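
For illustration, a minimal sketch setting both entity lists together; the labels below are hypothetical and must match the labels produced by the upstream NER model:

>>> qa_generator = FinanceNerQuestionGenerator() \
...     .setEntities1(["ORG"]) \
...     .setEntities2(["PRODUCT"])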

setForceInputTypeValidation(etfm)#
setInputCols(*value)#

Sets column names of input annotations.

Parameters:

*value (List[str]) – Input columns for the annotator

setLazyAnnotator(value)#

Sets whether Annotator should be evaluated lazily in a RecursivePipeline.

Parameters:

value (bool) – Whether Annotator should be evaluated lazily in a RecursivePipeline

setOutputCol(value)#

Sets output column name of annotations.

Parameters:

value (str) – Name of output column

setParamValue(paramName)#

Sets the value of a parameter.

Parameters:

paramName (str) – Name of the parameter

setParams()#
setQuestionMark(value: bool)#

Sets whether we want to add a question mark at the end of the question.

Defaults to False.

Parameters:

value (bool) – True if we want to add a question mark at the end of the question, False otherwise.

setQuestionPronoun(pronoun)#

Sets the pronoun to be used in the question.

E.g., ‘When’, ‘Where’, ‘Why’, ‘How’, ‘Who’, ‘What’. Defaults to empty string (“”).

Parameters:

pronoun (str) – The pronoun to be used.
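
A minimal sketch combining the pronoun with the question-mark flag (values are illustrative; the pronoun is inserted into each generated question and the flag controls the trailing '?'):

>>> qa_generator = FinanceNerQuestionGenerator() \
...     .setQuestionPronoun("When") \
...     .setQuestionMark(True)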

setStrategyType(value: str)#

Sets the strategy to be used in the process, either Paired or Combined.

If set to Paired (default), a one-vs-one strategy is applied: the number of chunks in Entity 1 must match the number of chunks in Entity 2, and the first chunk of Entity 1 is grouped with the first chunk of Entity 2, the second with the second, and so on.

If set to Combined, a one-vs-all strategy is applied: the number of chunks in Entity 1 does not need to match the number of chunks in Entity 2, and each chunk in Entity 1 is grouped with every chunk in Entity 2, as the sketch below illustrates.

Parameters:

value (str) – The strategy to be used.
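
The difference between the two strategies can be illustrated with a small pure-Python sketch (this mirrors the grouping behavior described above, not the annotator's actual implementation):

>>> from itertools import product
>>> e1_chunks = ["Warfarina Lusa", "amlodipine"]
>>> e2_chunks = ["5 MG", "10 MG"]
>>> list(zip(e1_chunks, e2_chunks))        # Paired: one-vs-one, matched by position
[('Warfarina Lusa', '5 MG'), ('amlodipine', '10 MG')]
>>> list(product(e1_chunks, e2_chunks))    # Combined: one-vs-all, cross product
[('Warfarina Lusa', '5 MG'), ('Warfarina Lusa', '10 MG'), ('amlodipine', '5 MG'), ('amlodipine', '10 MG')]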

transform(dataset: pyspark.sql.dataframe.DataFrame, params: pyspark.ml._typing.ParamMap | None = None) pyspark.sql.dataframe.DataFrame#

Transforms the input dataset with optional parameters.

New in version 1.3.0.

Parameters:
  • dataset (pyspark.sql.DataFrame) – input dataset

  • params (dict, optional) – an optional param map that overrides embedded params.

Returns:

transformed dataset

Return type:

pyspark.sql.DataFrame

write() JavaMLWriter#

Returns an MLWriter instance for this ML instance.