sparknlp_jsl.annotator.chunker.chunk_converter
#
Module Contents#
Classes#
ChunkConverter | Convert chunks from RegexMatcher to chunks with an entity in the metadata.
- class ChunkConverter(classname='com.johnsnowlabs.nlp.annotators.chunker.ChunkConverter', java_model=None)#
Bases: sparknlp_jsl.common.AnnotatorModelInternal, sparknlp_jsl.annotator.source_tracking_metadata_params.SourceTrackingMetadataParams
Convert chunks from RegexMatcher to chunks with an entity in the metadata. Uses the identifier or field as the entity.
Input Annotation types: CHUNK
Output Annotation type: CHUNK
Examples
>>> test_data = spark.createDataFrame(
...     [
...         (
...             1,
...             "My first sentence with the first rule. This is my second sentence with ceremonies rule.",
...         ),
...     ]
... ).toDF("id", "text")
>>> document_assembler = DocumentAssembler().setInputCol("text").setOutputCol("document")
>>> sentence_detector = (
...     SentenceDetector().setInputCols(["document"]).setOutputCol("sentence")
... )
>>> regex_matcher = (
...     RegexMatcher()
...     .setInputCols("sentence")
...     .setOutputCol("regex")
...     .setExternalRules(
...         path="../src/test/resources/regex-matcher/rules.txt", delimiter=","
...     )
... )
>>> chunkConverter = ChunkConverter().setInputCols("regex").setOutputCol("chunk")
>>> pipeline = Pipeline(
...     stages=[
...         document_assembler,
...         sentence_detector,
...         regex_matcher,
...         chunkConverter,
...     ]
... )
>>> model = pipeline.fit(test_data)
>>> outdf = model.transform(test_data)
+------------------------------------------------------------------------------------------------+
|col                                                                                              |
+------------------------------------------------------------------------------------------------+
|[chunk, 23, 31, the first, [identifier -> NAME, sentence -> 0, chunk -> 0, entity -> NAME], []]  |
|[chunk, 71, 80, ceremonies, [identifier -> NAME, sentence -> 1, chunk -> 0, entity -> NAME], []] |
+------------------------------------------------------------------------------------------------+
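The table above shows the exploded chunk annotations; a minimal display sketch (the explode step is an assumption, since the example itself does not show how the table was printed):

>>> from pyspark.sql import functions as F
>>> outdf.select(F.explode("chunk").alias("col")).show(truncate=False)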
- allPossibleFieldsToStandardize#
- getter_attrs = []#
- includeOutputColumn#
- includeStandardField#
- inputAnnotatorTypes#
- inputCols#
- lazyAnnotator#
- name = 'ChunkConverter'#
- optionalInputAnnotatorTypes = []#
- outputAnnotatorType#
- outputCol#
- outputColumnKey#
- resetSentenceIndices#
- skipLPInputColsValidation = True#
- standardFieldKey#
- uid#
- clear(param: pyspark.ml.param.Param) None #
Clears a param from the param map if it has been explicitly set.
- copy(extra: pyspark.ml._typing.ParamMap | None = None) JP #
Creates a copy of this instance with the same uid and some extra params. This implementation first calls Params.copy and then makes a copy of the companion Java pipeline component with extra params, so both the Python wrapper and the Java pipeline component get copied.
- Parameters:
extra (dict, optional) – Extra parameters to copy to the new instance
- Returns:
Copy of this instance
- Return type:
JavaParams
- explainParam(param: str | Param) str #
Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.
- explainParams() str #
Returns the documentation of all params with their optionally default values and user-supplied values.
- extractParamMap(extra: pyspark.ml._typing.ParamMap | None = None) pyspark.ml._typing.ParamMap #
Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.
- Parameters:
extra (dict, optional) – extra param values
- Returns:
merged param map
- Return type:
dict
- getInputCols()#
Gets current column names of input annotations.
- getLazyAnnotator()#
Gets whether Annotator should be evaluated lazily in a RecursivePipeline.
- getOrDefault(param: str) Any #
- getOrDefault(param: Param[T]) T
Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set.
- getOutputCol()#
Gets output column name of annotations.
- getParam(paramName: str) Param #
Gets a param by its name.
- getParamValue(paramName)#
Gets the value of a parameter.
- Parameters:
paramName (str) – Name of the parameter
- hasDefault(param: str | Param[Any]) bool #
Checks whether a param has a default value.
- hasParam(paramName: str) bool #
Tests whether this instance contains a param with a given (string) name.
- inputColsValidation(value)#
- isDefined(param: str | Param[Any]) bool #
Checks whether a param is explicitly set by user or has a default value.
- isSet(param: str | Param[Any]) bool #
Checks whether a param is explicitly set by user.
- classmethod load(path: str) RL #
Reads an ML instance from the input path, a shortcut of read().load(path).
- classmethod read()#
Returns an MLReader instance for this class.
- save(path: str) None #
Save this ML instance to the given path, a shortcut of ‘write().save(path)’.
- set(param: Param, value: Any) None #
Sets a parameter in the embedded param map.
- setAllPossibleFieldsToStandardize(fields)#
Sets the array of all possible fields containing the value to write into the standard field, ordered by priority.
- Parameters:
fields (list) – All possible fields containing the value to write into the standard field, ordered by priority
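An illustrative sketch (the field names and their ordering below are assumptions, not defaults from the source):

>>> chunk_converter = (
...     ChunkConverter()
...     .setInputCols("regex")
...     .setOutputCol("chunk")
...     .setIncludeStandardField(True)  # write the standard field into each chunk's metadata
...     .setAllPossibleFieldsToStandardize(["identifier", "field"])  # checked in priority order
... )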
- setForceInputTypeValidation(etfm)#
- setIncludeOutputColumn(p)#
Sets whether to include a metadata key/value to specify the output column name for the annotation
- Parameters:
p (bool) – whether to include a metadata key/value to specify the output column name for the annotation
- setIncludeStandardField(p)#
Sets whether to include a metadata key/value with the standard field for the annotation
- Parameters:
p (bool) – whether to include a metadata key/value with the standard field for the annotation
- setInputCols(*value)#
Sets column names of input annotations.
- Parameters:
*value (List[str]) – Input columns for the annotator
- setLazyAnnotator(value)#
Sets whether Annotator should be evaluated lazily in a RecursivePipeline.
- Parameters:
value (bool) – Whether Annotator should be evaluated lazily in a RecursivePipeline
- setOutputCol(value)#
Sets output column name of annotations.
- Parameters:
value (str) – Name of output column
- setOutputColumnKey(s)#
Sets the key name for the source column value.
- Parameters:
s (str) – key name for the source column value
- setParamValue(paramName)#
Sets the value of a parameter.
- Parameters:
paramName (str) – Name of the parameter
- setParams()#
- setResetSentenceIndices(value)#
Sets whether to reset sentence indices, treating the entire output as if it originates from a single document.
When set to True, the sentence key in each entity's metadata is assigned a value of 0, effectively treating the entire output as if it originates from a single document, regardless of the original sentence boundaries. Default: False.
- Parameters:
value (bool) – If set to true, sentence indices will be reset to treat the entire output as if it originates from a single document.
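A one-line illustrative sketch:

>>> chunk_converter = ChunkConverter().setInputCols("regex").setOutputCol("chunk").setResetSentenceIndices(True)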
- setStandardFieldKey(s)#
Sets the key name for the standard field value.
- Parameters:
s (str) – Key name for the standard field value
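An illustrative sketch of customizing the metadata key names (both key strings below are assumptions):

>>> chunk_converter = (
...     ChunkConverter()
...     .setInputCols("regex")
...     .setOutputCol("chunk")
...     .setIncludeOutputColumn(True)
...     .setOutputColumnKey("source_column")    # illustrative key for the source column name
...     .setIncludeStandardField(True)
...     .setStandardFieldKey("standard_value")  # illustrative key for the standard field
... )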
- transform(dataset: pyspark.sql.dataframe.DataFrame, params: pyspark.ml._typing.ParamMap | None = None) pyspark.sql.dataframe.DataFrame #
Transforms the input dataset with optional parameters.
New in version 1.3.0.
- Parameters:
dataset (pyspark.sql.DataFrame) – input dataset
params (dict, optional) – an optional param map that overrides embedded params.
- Returns:
transformed dataset
- Return type:
pyspark.sql.DataFrame
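A minimal sketch (regex_df is assumed to already hold the "regex" chunk annotations from the RegexMatcher stage; the override map on the second call is optional and the column name is illustrative):

>>> converted = chunkConverter.transform(regex_df)
>>> converted = chunkConverter.transform(regex_df, {chunkConverter.outputCol: "converted_chunk"})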
- write() JavaMLWriter #
Returns an MLWriter instance for this ML instance.