sparknlp_jsl.annotator.regex.regex_matcher#
Contains classes for the RegexMatcherInternal.
Module Contents#
Classes#
RegexMatcherInternal – Uses rules to match a set of regular expressions and associate them with a provided entity.
RegexMatcherInternalModel – Instantiated model of the RegexMatcherInternal.
- class RegexMatcherInternal#
Bases: sparknlp_jsl.common.AnnotatorApproachInternal, sparknlp_jsl.annotator.MergeCommonParams
Uses rules to match a set of regular expressions and associate them with a provided entity.
A rule consists of a regex pattern and an entity, delimited by a character of choice. An example could be "\d{4}/\d\d/\d\d,date", which will match strings like "1970/01/01" to the entity "date".
Rules must be provided either with setRules() (followed by setDelimiter()) or from an external file.
To use an external file, a dictionary of predefined regular expressions must be provided with setExternalRules(). The dictionary can be set in the form of a delimited text file.
Input Annotation types: DOCUMENT
Output Annotation type: CHUNK
- Parameters:
strategy – Can be either MATCH_FIRST|MATCH_ALL|MATCH_COMPLETE, by default “MATCH_ALL”
rules – Regex rules to match the entity with
delimiter – Delimiter for rules provided with setRules
externalRules – External resource containing the rules; needs ‘delimiter’ set in options
Examples
>>> import sparknlp
>>> from sparknlp.base import *
>>> from sparknlp.annotator import *
>>> from pyspark.ml import Pipeline
In this example, the rules.txt has the form of:

the\s\w+, followed by 'the'
ceremonies, ceremony

where each regex is separated from its entity by the delimiter ",".

>>> documentAssembler = DocumentAssembler().setInputCol("text").setOutputCol("document")
>>> sentence = SentenceDetector().setInputCols(["document"]).setOutputCol("sentence")
>>> regexMatcher = RegexMatcherInternal() \
...     .setExternalRules("src/test/resources/regex-matcher/rules.txt", ",") \
...     .setInputCols(["sentence"]) \
...     .setOutputCol("regex") \
...     .setStrategy("MATCH_ALL")
>>> pipeline = Pipeline().setStages([documentAssembler, sentence, regexMatcher])
>>> data = spark.createDataFrame([[
...     "My first sentence with the first rule. This is my second sentence with ceremonies rule."
... ]]).toDF("text")
>>> results = pipeline.fit(data).transform(data)
>>> results.selectExpr("explode(regex) as result").show(truncate=False)
+--------------------------------------------------------------------------------------------+
|result                                                                                       |
+--------------------------------------------------------------------------------------------+
|[chunk, 23, 31, the first, [entity -> followed by 'the', sentence -> 0, chunk -> 0], []]     |
|[chunk, 71, 80, ceremonies, [entity -> ceremony, sentence -> 1, chunk -> 0], []]             |
+--------------------------------------------------------------------------------------------+
- delimiter#
- externalRules#
- getter_attrs = []#
- inputAnnotatorTypes#
- inputCols#
- lazyAnnotator#
- mergeOverlapping#
- optionalInputAnnotatorTypes = []#
- outputAnnotatorType = 'chunk'#
- outputCol#
- rules#
- skipLPInputColsValidation = True#
- strategy#
- uid = ''#
- clear(param: pyspark.ml.param.Param) None#
Clears a param from the param map if it has been explicitly set.
- copy(extra: pyspark.ml._typing.ParamMap | None = None) JP#
Creates a copy of this instance with the same uid and some extra params. This implementation first calls Params.copy and then makes a copy of the companion Java pipeline component with extra params. So both the Python wrapper and the Java pipeline component get copied.
- Parameters:
extra (dict, optional) – Extra parameters to copy to the new instance
- Returns:
Copy of this instance
- Return type:
JavaParams
- explainParam(param: str | Param) str#
Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.
- explainParams() str#
Returns the documentation of all params with their optionally default values and user-supplied values.
- extractParamMap(extra: pyspark.ml._typing.ParamMap | None = None) pyspark.ml._typing.ParamMap#
Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.
- Parameters:
extra (dict, optional) – extra param values
- Returns:
merged param map
- Return type:
dict
- fit(dataset: pyspark.sql.dataframe.DataFrame, params: pyspark.ml._typing.ParamMap | None = ...) M#
- fit(dataset: pyspark.sql.dataframe.DataFrame, params: List[pyspark.ml._typing.ParamMap] | Tuple[pyspark.ml._typing.ParamMap]) List[M]
Fits a model to the input dataset with optional parameters.
New in version 1.3.0.
- Parameters:
dataset (pyspark.sql.DataFrame) – input dataset.
params (dict or list or tuple, optional) – an optional param map that overrides embedded params. If a list/tuple of param maps is given, this calls fit on each param map and returns a list of models.
- Returns:
fitted model(s)
- Return type:
Transformer or a list of Transformer
- fitMultiple(dataset: pyspark.sql.dataframe.DataFrame, paramMaps: Sequence[pyspark.ml._typing.ParamMap]) Iterator[Tuple[int, M]]#
Fits a model to the input dataset for each param map in paramMaps.
New in version 2.3.0.
- Parameters:
dataset (pyspark.sql.DataFrame) – input dataset.
paramMaps (collections.abc.Sequence) – A Sequence of param maps.
- Returns:
A thread safe iterable which contains one model for each param map. Each call to next(modelIterator) will return (index, model) where model was fit using paramMaps[index]. index values may not be sequential.
- Return type:
_FitMultipleIterator
- getInputCols()#
Gets current column names of input annotations.
- getLazyAnnotator()#
Gets whether Annotator should be evaluated lazily in a RecursivePipeline.
- getOrDefault(param: str) Any#
- getOrDefault(param: Param[T]) T
Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set.
- getOutputCol()#
Gets output column name of annotations.
- getParam(paramName: str) Param#
Gets a param by its name.
- getParamValue(paramName)#
Gets the value of a parameter.
- Parameters:
paramName (str) – Name of the parameter
- hasDefault(param: str | Param[Any]) bool#
Checks whether a param has a default value.
- hasParam(paramName: str) bool#
Tests whether this instance contains a param with a given (string) name.
- inputColsValidation(value)#
- isDefined(param: str | Param[Any]) bool#
Checks whether a param is explicitly set by user or has a default value.
- isSet(param: str | Param[Any]) bool#
Checks whether a param is explicitly set by user.
- classmethod load(path: str) RL#
Reads an ML instance from the input path, a shortcut of read().load(path).
- classmethod read()#
Returns an MLReader instance for this class.
- save(path: str) None#
Save this ML instance to the given path, a shortcut of ‘write().save(path)’.
- set(param: Param, value: Any) None#
Sets a parameter in the embedded param map.
- setDelimiter(value)#
Sets the delimiter for rules.
- Parameters:
value (str) – Delimiter for the rules
- setExternalRules(path, delimiter, read_as=ReadAs.TEXT, options={'format': 'text'})#
Sets external resource to rules, needs ‘delimiter’ in options.
Only one of the parameters rules and externalRules may be set.
- Parameters:
path (str) – Path to the source files
delimiter (str) – Delimiter for the dictionary file. Can also be set in options.
read_as (str, optional) – How to read the file, by default ReadAs.TEXT
options (dict, optional) – Options to read the resource, by default {“format”: “text”}
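For instance, a minimal sketch; the file name regex_rules.txt and its contents below are illustrative assumptions, not resources shipped with the library:

regex_rules.txt (one "pattern,entity" pair per line):
\d{4}/\d\d/\d\d,date
\d+\s?mg,dosage

>>> regexMatcher = RegexMatcherInternal() \
...     .setExternalRules("regex_rules.txt", ",") \
...     .setInputCols(["sentence"]) \
...     .setOutputCol("regex")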
- setForceInputTypeValidation(etfm)#
- setInputCols(*value)#
Sets column names of input annotations.
- Parameters:
*value (List[str]) – Input columns for the annotator
- setLazyAnnotator(value)#
Sets whether Annotator should be evaluated lazily in a RecursivePipeline.
- Parameters:
value (bool) – Whether Annotator should be evaluated lazily in a RecursivePipeline
- setMergeOverlapping(value)#
Sets whether to merge overlapping chunks. Defaults to True.
- Parameters:
value (bool) – Whether to merge overlapping chunks, by default True
- setOutputCol(value)#
Sets output column name of annotations.
- Parameters:
value (str) – Name of output column
- setParamValue(paramName)#
Sets the value of a parameter.
- Parameters:
paramName (str) – Name of the parameter
- setRules(value)#
Sets the regex rules to match the entity with.
The rules must consist of a regex pattern and an entity for that pattern. The regex pattern and the entity must be delimited by a character that will also have to be set with setDelimiter().
Only one of the parameters rules and externalRules may be set.
Examples
>>> regexMatcher = RegexMatcherInternal() \
...     .setRules(["\d{4}\/\d\d\/\d\d,date", "\d{2}\/\d\d\/\d\d,short_date"]) \
...     .setDelimiter(",") \
...     .setInputCols(["sentence"]) \
...     .setOutputCol("regex") \
...     .setStrategy("MATCH_ALL")
- Parameters:
value (List[str]) – List of rules
- setStrategy(value)#
Sets matching strategy, by default “MATCH_ALL”.
Can be either MATCH_FIRST|MATCH_ALL|MATCH_COMPLETE.
- Parameters:
value (str) – Matching Strategy
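A brief sketch of the accepted values; the behavioral notes in the comments are assumptions inferred from the strategy names, not guarantees from this documentation:

>>> regexMatcher.setStrategy("MATCH_ALL")       # assumed: keep every match of every rule
>>> regexMatcher.setStrategy("MATCH_FIRST")     # assumed: keep only the first match per rule
>>> regexMatcher.setStrategy("MATCH_COMPLETE")  # assumed: keep only matches covering the whole target text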
- write() JavaMLWriter#
Returns an MLWriter instance for this ML instance.
- class RegexMatcherInternalModel(classname='com.johnsnowlabs.nlp.annotators.regex.RegexMatcherInternalModel', java_model=None)#
Bases: sparknlp_jsl.common.AnnotatorModel, sparknlp_jsl.annotator.MergeCommonParams
Instantiated model of the RegexMatcherInternal.
This is the instantiated model of the RegexMatcherInternal. For training your own model, please see the documentation of that class.
Input Annotation types: DOCUMENT
Output Annotation type: CHUNK
- getter_attrs = []#
- inputAnnotatorTypes#
- inputCols#
- lazyAnnotator#
- mergeOverlapping#
- name = 'RegexMatcherInternalModel'#
- optionalInputAnnotatorTypes = []#
- outputAnnotatorType = 'chunk'#
- outputCol#
- uid = ''#
- clear(param: pyspark.ml.param.Param) None#
Clears a param from the param map if it has been explicitly set.
- copy(extra: pyspark.ml._typing.ParamMap | None = None) JP#
Creates a copy of this instance with the same uid and some extra params. This implementation first calls Params.copy and then makes a copy of the companion Java pipeline component with extra params. So both the Python wrapper and the Java pipeline component get copied.
- Parameters:
extra (dict, optional) – Extra parameters to copy to the new instance
- Returns:
Copy of this instance
- Return type:
JavaParams
- explainParam(param: str | Param) str#
Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.
- explainParams() str#
Returns the documentation of all params with their optionally default values and user-supplied values.
- extractParamMap(extra: pyspark.ml._typing.ParamMap | None = None) pyspark.ml._typing.ParamMap#
Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.
- Parameters:
extra (dict, optional) – extra param values
- Returns:
merged param map
- Return type:
dict
- getInputCols()#
Gets current column names of input annotations.
- getLazyAnnotator()#
Gets whether Annotator should be evaluated lazily in a RecursivePipeline.
- getOrDefault(param: str) Any#
- getOrDefault(param: Param[T]) T
Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set.
- getOutputCol()#
Gets output column name of annotations.
- getParam(paramName: str) Param#
Gets a param by its name.
- getParamValue(paramName)#
Gets the value of a parameter.
- Parameters:
paramName (str) – Name of the parameter
- hasDefault(param: str | Param[Any]) bool#
Checks whether a param has a default value.
- hasParam(paramName: str) bool#
Tests whether this instance contains a param with a given (string) name.
- inputColsValidation(value)#
- isDefined(param: str | Param[Any]) bool#
Checks whether a param is explicitly set by user or has a default value.
- isSet(param: str | Param[Any]) bool#
Checks whether a param is explicitly set by user.
- classmethod load(path: str) RL#
Reads an ML instance from the input path, a shortcut of read().load(path).
- static pretrained(name='email_matcher', lang='en', remote_loc='clinical/models')#
Downloads and loads a pretrained model.
- Parameters:
name (str, optional) – Name of the pretrained model, by default “email_matcher”
lang (str, optional) – Language of the pretrained model, by default “en”
remote_loc (str, optional) – Optional remote address of the resource, by default ‘clinical/models’. Will use Spark NLP’s repositories otherwise.
- Returns:
The restored model
- Return type:
RegexMatcherInternalModel
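A minimal usage sketch, assuming the default "email_matcher" model listed above is available in your environment; the example text and column names are illustrative:

>>> from sparknlp.base import DocumentAssembler
>>> from pyspark.ml import Pipeline
>>> documentAssembler = DocumentAssembler().setInputCol("text").setOutputCol("document")
>>> emailMatcher = RegexMatcherInternalModel.pretrained("email_matcher", "en", "clinical/models") \
...     .setInputCols(["document"]) \
...     .setOutputCol("email_chunk")
>>> pipeline = Pipeline().setStages([documentAssembler, emailMatcher])
>>> data = spark.createDataFrame([["Contact us at support@example.com."]]).toDF("text")
>>> pipeline.fit(data).transform(data).selectExpr("explode(email_chunk) as chunk").show(truncate=False)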
- classmethod read()#
Returns an MLReader instance for this class.
- save(path: str) None#
Save this ML instance to the given path, a shortcut of ‘write().save(path)’.
- set(param: Param, value: Any) None#
Sets a parameter in the embedded param map.
- setInputCols(*value)#
Sets column names of input annotations.
- Parameters:
*value (List[str]) – Input columns for the annotator
- setLazyAnnotator(value)#
Sets whether Annotator should be evaluated lazily in a RecursivePipeline.
- Parameters:
value (bool) – Whether Annotator should be evaluated lazily in a RecursivePipeline
- setMergeOverlapping(value)#
Sets whether to merge overlapping chunks. Defaults to True.
- Parameters:
value (bool) – Whether to merge overlapping chunks, by default True
- setOutputCol(value)#
Sets output column name of annotations.
- Parameters:
value (str) – Name of output column
- setParamValue(paramName)#
Sets the value of a parameter.
- Parameters:
paramName (str) – Name of the parameter
- setParams()#
- transform(dataset: pyspark.sql.dataframe.DataFrame, params: pyspark.ml._typing.ParamMap | None = None) pyspark.sql.dataframe.DataFrame#
Transforms the input dataset with optional parameters.
New in version 1.3.0.
- Parameters:
dataset (pyspark.sql.DataFrame) – input dataset
params (dict, optional) – an optional param map that overrides embedded params.
- Returns:
transformed dataset
- Return type:
pyspark.sql.DataFrame
- write() JavaMLWriter#
Returns an MLWriter instance for this ML instance.