sparknlp_jsl.annotator.chunker.chunk_converter#
Module Contents#
Classes#
ChunkConverter – Convert chunks from RegexMatcher into chunks with an entity in the metadata.
- class ChunkConverter(classname='com.johnsnowlabs.nlp.annotators.chunker.ChunkConverter', java_model=None)#
Bases:
sparknlp_jsl.common.AnnotatorModelInternal
Convert chunks from RegexMatcher into chunks with an entity in the metadata. The identifier or field is used as the entity.
Input Annotation types: DOCUMENT, CHUNK
Output Annotation type: CHUNK
Examples
>>> from pyspark.ml import Pipeline
>>> from sparknlp.base import DocumentAssembler
>>> from sparknlp.annotator import SentenceDetector, RegexMatcher
>>> from sparknlp_jsl.annotator import ChunkConverter
>>> test_data = spark.createDataFrame(
...     [
...         (
...             1,
...             "My first sentence with the first rule. This is my second sentence with ceremonies rule.",
...         ),
...     ]).toDF("id", "text")
>>> document_assembler = DocumentAssembler().setInputCol("text").setOutputCol("document")
>>> sentence_detector = (
...     SentenceDetector().setInputCols(["document"]).setOutputCol("sentence")
... )
>>> regex_matcher = (
...     RegexMatcher()
...     .setInputCols("sentence")
...     .setOutputCol("regex")
...     .setExternalRules(
...         path="../src/test/resources/regex-matcher/rules.txt", delimiter=","
...     ))
>>> chunkConverter = ChunkConverter().setInputCols("regex").setOutputCol("chunk")
>>> pipeline = Pipeline(
...     stages=[
...         document_assembler,
...         sentence_detector,
...         regex_matcher,
...         chunkConverter,
...     ])
>>> model = pipeline.fit(test_data)
>>> outdf = model.transform(test_data)
+------------------------------------------------------------------------------------------------+
|col                                                                                              |
+------------------------------------------------------------------------------------------------+
|[chunk, 23, 31, the first, [identifier -> NAME, sentence -> 0, chunk -> 0, entity -> NAME], []]  |
|[chunk, 71, 80, ceremonies, [identifier -> NAME, sentence -> 1, chunk -> 0, entity -> NAME], []] |
+------------------------------------------------------------------------------------------------+
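The table above shows the exploded contents of the chunk output column; a minimal sketch of how such a view can be produced (the explode call is an assumption, not part of the original example):

>>> from pyspark.sql import functions as F
>>> # One row per converted chunk; the entity key in the metadata is copied
>>> # from the rule identifier, as described above.
>>> outdf.select(F.explode("chunk").alias("col")).show(truncate=False)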
- getter_attrs = []#
- inputAnnotatorTypes#
- inputCols#
- lazyAnnotator#
- name = ChunkConverter#
- optionalInputAnnotatorTypes = []#
- outputAnnotatorType#
- outputCol#
- skipLPInputColsValidation = True#
- clear(param)#
Clears a param from the param map if it has been explicitly set.
- copy(extra=None)#
Creates a copy of this instance with the same uid and some extra params. This implementation first calls Params.copy and then makes a copy of the companion Java pipeline component with extra params, so both the Python wrapper and the Java pipeline component get copied.
- Parameters:
extra (dict, optional) – Extra parameters to copy to the new instance
- Returns:
Copy of this instance
- Return type:
JavaParams
- explainParam(param)#
Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.
- explainParams()#
Returns the documentation of all params with their optionally default values and user-supplied values.
- extractParamMap(extra=None)#
Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.
- Parameters:
extra (dict, optional) – extra param values
- Returns:
merged param map
- Return type:
dict
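A minimal sketch of this precedence, assuming a ChunkConverter instance; the "converted_chunk" value is illustrative only:

>>> converter = ChunkConverter().setInputCols("regex").setOutputCol("chunk")
>>> # The user-supplied "chunk" overrides the default; the extra map overrides both.
>>> pmap = converter.extractParamMap(extra={converter.outputCol: "converted_chunk"})
>>> pmap[converter.outputCol]
'converted_chunk'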
- getInputCols()#
Gets current column names of input annotations.
- getLazyAnnotator()#
Gets whether Annotator should be evaluated lazily in a RecursivePipeline.
- getOrDefault(param)#
Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set.
- getOutputCol()#
Gets output column name of annotations.
- getParam(paramName)#
Gets a param by its name.
- getParamValue(paramName)#
Gets the value of a parameter.
- Parameters:
paramName (str) – Name of the parameter
- hasDefault(param)#
Checks whether a param has a default value.
- hasParam(paramName)#
Tests whether this instance contains a param with a given (string) name.
- inputColsValidation(value)#
- isDefined(param)#
Checks whether a param is explicitly set by user or has a default value.
- isSet(param)#
Checks whether a param is explicitly set by user.
- classmethod load(path)#
Reads an ML instance from the input path, a shortcut of read().load(path).
- classmethod read()#
Returns an MLReader instance for this class.
- save(path)#
Save this ML instance to the given path, a shortcut of ‘write().save(path)’.
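A minimal save/load round trip sketch; the path below is illustrative only:

>>> converter = ChunkConverter().setInputCols("regex").setOutputCol("chunk")
>>> converter.save("/tmp/chunk_converter")                  # shortcut for write().save(path)
>>> restored = ChunkConverter.load("/tmp/chunk_converter")  # shortcut for read().load(path)
>>> restored.getOutputCol()
'chunk'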
- set(param, value)#
Sets a parameter in the embedded param map.
- setForceInputTypeValidation(etfm)#
- setInputCols(*value)#
Sets column names of input annotations.
- Parameters:
*value (List[str]) – Input columns for the annotator
- setLazyAnnotator(value)#
Sets whether Annotator should be evaluated lazily in a RecursivePipeline.
- Parameters:
value (bool) – Whether Annotator should be evaluated lazily in a RecursivePipeline
- setOutputCol(value)#
Sets output column name of annotations.
- Parameters:
value (str) – Name of output column
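A short usage sketch of the two column setters, mirroring the pipeline example above:

>>> chunk_converter = (
...     ChunkConverter()
...     .setInputCols("regex")    # column produced upstream by RegexMatcher
...     .setOutputCol("chunk")    # column that will hold the converted CHUNK annotations
... )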
- setParamValue(paramName)#
Sets the value of a parameter.
- Parameters:
paramName (str) – Name of the parameter
- setParams()#
- transform(dataset, params=None)#
Transforms the input dataset with optional parameters.
New in version 1.3.0.
- Parameters:
dataset (pyspark.sql.DataFrame) – input dataset
params (dict, optional) – an optional param map that overrides embedded params
- Returns:
transformed dataset
- Return type:
pyspark.sql.DataFrame
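A minimal sketch of the optional params map, assuming annotated_df already contains the input column produced by the upstream stages; the override value is illustrative only:

>>> converted = chunkConverter.transform(
...     annotated_df,
...     params={chunkConverter.outputCol: "chunk_override"},  # per-call override of the output column
... )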
- write()#
Returns an MLWriter instance for this ML instance.