sparknlp_jsl.annotator.router#

Module Contents#

Classes#

Router

Converts chunks from RegexMatcher to chunks with an entity in the metadata.

class Router(classname='com.johnsnowlabs.annotator.Router', java_model=None)#

Bases: sparknlp_jsl.common.AnnotatorModelInternal

Converts chunks from RegexMatcher to chunks with an entity in the metadata, using the identifier or field as the entity.

Input Annotation types: ANY

Output Annotation type: ANY

Parameters:
  • inputType – The type of annotation that you want to filter (by default sentence_embeddings). Possible values: document|token|wordpiece|word_embeddings|sentence_embeddings|category|date|sentiment|pos|chunk|named_entity|regex|dependency|labeled_dependency|language|keyword

  • filterFieldsElements – The allowed values for the metadata field that is being filtered on

  • metadataField – The key in the metadata dictionary that you want to filter by (by default entity)

Examples

>>> test_data = spark.createDataFrame(sentences).toDF("text")
>>> document = DocumentAssembler().setInputCol("text").setOutputCol("document")
>>> sentence = SentenceDetector().setInputCols("document").setOutputCol("sentence")
>>> regexMatcher = RegexMatcher().setExternalRules("../src/test/resources/regex-matcher/rules2.txt", ",") \
...     .setInputCols("sentence") \
...     .setOutputCol("regex") \
...     .setStrategy("MATCH_ALL")
>>> chunk2Doc = Chunk2Doc().setInputCols("regex").setOutputCol("doc_chunk")
>>> embeddings = BertSentenceEmbeddings.pretrained("sent_small_bert_L2_128") \
...     .setInputCols("doc_chunk") \
...     .setOutputCol("bert") \
...     .setCaseSensitive(False) \
...     .setMaxSentenceLength(32)
>>> router_name_embeddings = Router() \
...     .setInputType("sentence_embeddings") \
...     .setInputCols("bert") \
...     .setMetadataField("identifier") \
...     .setFilterFieldsElements(["name"]) \
...     .setOutputCol("names_embeddings")    >>> router_city_embeddings = Router() \
...     .setInputType("sentence_embeddings") \
...     .setInputCols(["bert"]) \
...     .setMetadataField("identifier") \
...     .setFilterFieldsElements(["city"]) \
...     .setOutputCol("cities_embeddings")
>>> router_names = Router() \
...     .setInputType("chunk") \
...     .setInputCols("regex") \
...     .setMetadataField("identifier") \
...     .setFilterFieldsElements(["name"]) \
...     .setOutputCol("names_chunks")
>>> pipeline = Pipeline().setStages(
...     [document, sentence, regexMatcher, chunk2Doc, router_names, embeddings, router_name_embeddings,
...      router_city_embeddings])
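
The fitted pipeline then splits the shared embeddings column into the routed columns. A minimal continuation of the example above (assuming sentences contains texts that the regex rules tag with the identifiers name and city):

>>> result = pipeline.fit(test_data).transform(test_data)
>>> result.select("names_chunks.result").show(truncate=False)
>>> result.select("names_embeddings.embeddings", "cities_embeddings.embeddings").show(truncate=False)
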
filterFieldsElements#
getter_attrs = []#
inputAnnotatorTypes#
inputCols#
inputType#
lazyAnnotator#
metadataField#
name = 'Router'#
optionalInputAnnotatorTypes = []#
outputAnnotatorType#
outputCol#
skipLPInputColsValidation = True#
uid#
clear(param: pyspark.ml.param.Param) None#

Clears a param from the param map if it has been explicitly set.

copy(extra: pyspark.ml._typing.ParamMap | None = None) JP#

Creates a copy of this instance with the same uid and some extra params. This implementation first calls Params.copy and then makes a copy of the companion Java pipeline component with extra params. So both the Python wrapper and the Java pipeline component get copied.

Parameters:

extra (dict, optional) – Extra parameters to copy to the new instance

Returns:

Copy of this instance

Return type:

JavaParams

explainParam(param: str | Param) str#

Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.

explainParams() str#

Returns the documentation of all params with their optionally default values and user-supplied values.

extractParamMap(extra: pyspark.ml._typing.ParamMap | None = None) pyspark.ml._typing.ParamMap#

Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.

Parameters:

extra (dict, optional) – extra param values

Returns:

merged param map

Return type:

dict

getInputCols()#

Gets current column names of input annotations.

getLazyAnnotator()#

Gets whether Annotator should be evaluated lazily in a RecursivePipeline.

getOrDefault(param: str) Any#
getOrDefault(param: Param[T]) T

Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set.

getOutputCol()#

Gets output column name of annotations.

getParam(paramName: str) Param#

Gets a param by its name.

getParamValue(paramName)#

Gets the value of a parameter.

Parameters:

paramName (str) – Name of the parameter

hasDefault(param: str | Param[Any]) bool#

Checks whether a param has a default value.

hasParam(paramName: str) bool#

Tests whether this instance contains a param with a given (string) name.

inputColsValidation(value)#
isDefined(param: str | Param[Any]) bool#

Checks whether a param is explicitly set by user or has a default value.

isSet(param: str | Param[Any]) bool#

Checks whether a param is explicitly set by user.

classmethod load(path: str) RL#

Reads an ML instance from the input path, a shortcut of read().load(path).

classmethod read()#

Returns an MLReader instance for this class.

save(path: str) None#

Save this ML instance to the given path, a shortcut of ‘write().save(path)’.

set(param: Param, value: Any) None#

Sets a parameter in the embedded param map.

setFilterFieldsElements(value)#

Sets the allowed values for the metadata field that is being filtered on.

Parameters:

value (list) – The allowed values for the metadata field that is being filtered on
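
For example, a minimal sketch (reusing the identifier metadata field from the pipeline example above) that keeps both name and city matches in a single output column:

>>> router_people_places = Router() \
...     .setInputType("sentence_embeddings") \
...     .setInputCols("bert") \
...     .setMetadataField("identifier") \
...     .setFilterFieldsElements(["name", "city"]) \
...     .setOutputCol("people_places_embeddings")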

setForceInputTypeValidation(etfm)#
setInputCols(*value)#

Sets column names of input annotations.

Parameters:

*value (str) – Input columns for the annotator

setInputType(value)#

Sets the type of annotation that you want to filter (by default sentence_embeddings).

Parameters:

value (str) – The type of annotation that you want to filter (by default sentence_embeddings)

setLazyAnnotator(value)#

Sets whether Annotator should be evaluated lazily in a RecursivePipeline.

Parameters:

value (bool) – Whether Annotator should be evaluated lazily in a RecursivePipeline

setMetadataField(value)#

Sets the key in the metadata dictionary that you want to filter by (by default ‘entity’).

Parameters:

value (str) – The key in the metadata dictionary that you want to filter by (by default ‘entity’)
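
A minimal sketch of filtering on the default entity key, assuming an upstream NER-converter stage writes entity labels into the chunk metadata (the column name ner_chunk and the label PER are illustrative):

>>> person_router = Router() \
...     .setInputType("chunk") \
...     .setInputCols("ner_chunk") \
...     .setMetadataField("entity") \
...     .setFilterFieldsElements(["PER"]) \
...     .setOutputCol("person_chunks")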

setOutputCol(value)#

Sets output column name of annotations.

Parameters:

value (str) – Name of output column

setParamValue(paramName)#

Sets the value of a parameter.

Parameters:

paramName (str) – Name of the parameter

setParams()#
transform(dataset: pyspark.sql.dataframe.DataFrame, params: pyspark.ml._typing.ParamMap | None = None) pyspark.sql.dataframe.DataFrame#

Transforms the input dataset with optional parameters.

New in version 1.3.0.

Parameters:
  • dataset (pyspark.sql.DataFrame) – input dataset

  • params (dict, optional) – an optional param map that overrides embedded params.

Returns:

transformed dataset

Return type:

pyspark.sql.DataFrame

write() JavaMLWriter#

Returns an MLWriter instance for this ML instance.