sparknlp.annotator.TypedDependencyParserModel#
- class sparknlp.annotator.TypedDependencyParserModel(classname='com.johnsnowlabs.nlp.annotators.parser.typdep.TypedDependencyParserModel', java_model=None)[source]#
Bases: sparknlp.common.AnnotatorModel
Labeled parser that finds a grammatical relation between two words in a sentence. Its input is either a CoNLL2009 or ConllU dataset.
Dependency parsers provide information about word relationships. For example, dependency parsing can tell you what the subjects and objects of a verb are, as well as which words are modifying (describing) the subject. This can help you find precise answers to specific questions.
The parser requires the dependent tokens to be extracted beforehand, e.g. with DependencyParser.
Pretrained models can be loaded with pretrained() of the companion object:

>>> typedDependencyParser = TypedDependencyParserModel.pretrained() \
...     .setInputCols(["dependency", "pos", "token"]) \
...     .setOutputCol("dependency_type")
The default model is "dependency_typed_conllu", if no name is provided. For available pretrained models please see the Models Hub.
For extended examples of usage, see the Spark NLP Workshop.
Input Annotation types: TOKEN, POS, DEPENDENCY
Output Annotation type: LABELED_DEPENDENCY
- Parameters
- None
Examples
>>> import sparknlp
>>> from sparknlp.base import *
>>> from sparknlp.annotator import *
>>> from pyspark.ml import Pipeline
>>> documentAssembler = DocumentAssembler() \
...     .setInputCol("text") \
...     .setOutputCol("document")
>>> sentence = SentenceDetector() \
...     .setInputCols(["document"]) \
...     .setOutputCol("sentence")
>>> tokenizer = Tokenizer() \
...     .setInputCols(["sentence"]) \
...     .setOutputCol("token")
>>> posTagger = PerceptronModel.pretrained() \
...     .setInputCols(["sentence", "token"]) \
...     .setOutputCol("pos")
>>> dependencyParser = DependencyParserModel.pretrained() \
...     .setInputCols(["sentence", "pos", "token"]) \
...     .setOutputCol("dependency")
>>> typedDependencyParser = TypedDependencyParserModel.pretrained() \
...     .setInputCols(["dependency", "pos", "token"]) \
...     .setOutputCol("dependency_type")
>>> pipeline = Pipeline().setStages([
...     documentAssembler,
...     sentence,
...     tokenizer,
...     posTagger,
...     dependencyParser,
...     typedDependencyParser
... ])
>>> data = spark.createDataFrame([[
...     "Unions representing workers at Turner Newall say they are 'disappointed' after talks with stricken parent " +
...     "firm Federal Mogul."
... ]]).toDF("text")
>>> result = pipeline.fit(data).transform(data)
>>> result.selectExpr("explode(arrays_zip(token.result, dependency.result, dependency_type.result)) as cols") \
...     .selectExpr("cols['0'] as token", "cols['1'] as dependency", "cols['2'] as dependency_type") \
...     .show(8, truncate = False)
+------------+------------+---------------+
|token       |dependency  |dependency_type|
+------------+------------+---------------+
|Unions      |ROOT        |root           |
|representing|workers     |amod           |
|workers     |Unions      |flat           |
|at          |Turner      |case           |
|Turner      |workers     |flat           |
|Newall      |say         |nsubj          |
|say         |Unions      |parataxis      |
|they        |disappointed|nsubj          |
+------------+------------+---------------+
Methods
__init__([classname, java_model]): Initialize this instance with a Java model object.
clear(param): Clears a param from the param map if it has been explicitly set.
copy([extra]): Creates a copy of this instance with the same uid and some extra params.
explainParam(param): Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.
explainParams(): Returns the documentation of all params with their optional default values and user-supplied values.
extractParamMap([extra]): Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.
getInputCols(): Gets current column names of input annotations.
getLazyAnnotator(): Gets whether Annotator should be evaluated lazily in a RecursivePipeline.
getOrDefault(param): Gets the value of a param in the user-supplied param map or its default value.
getOutputCol(): Gets output column name of annotations.
getParam(paramName): Gets a param by its name.
getParamValue(paramName): Gets the value of a parameter.
hasDefault(param): Checks whether a param has a default value.
hasParam(paramName): Tests whether this instance contains a param with a given (string) name.
isDefined(param): Checks whether a param is explicitly set by user or has a default value.
isSet(param): Checks whether a param is explicitly set by user.
load(path): Reads an ML instance from the input path, a shortcut of read().load(path).
pretrained([name, lang, remote_loc]): Downloads and loads a pretrained model.
read(): Returns an MLReader instance for this class.
save(path): Save this ML instance to the given path, a shortcut of 'write().save(path)'.
set(param, value): Sets a parameter in the embedded param map.
setInputCols(*value): Sets column names of input annotations.
setLazyAnnotator(value): Sets whether Annotator should be evaluated lazily in a RecursivePipeline.
setOutputCol(value): Sets output column name of annotations.
setParamValue(paramName): Sets the value of a parameter.
setParams()
transform(dataset[, params]): Transforms the input dataset with optional parameters.
write(): Returns an MLWriter instance for this ML instance.
Attributes
conllFormat
getter_attrs
inputCols
lazyAnnotator
name
outputCol
params: Returns all params ordered by name.
trainDependencyPipe
trainOptions
trainParameters
- clear(param)#
Clears a param from the param map if it has been explicitly set.
- copy(extra=None)#
Creates a copy of this instance with the same uid and some extra params. This implementation first calls Params.copy and then makes a copy of the companion Java pipeline component with extra params, so both the Python wrapper and the Java pipeline component get copied.
- Parameters
extra – Extra parameters to copy to the new instance
- Returns
Copy of this instance
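As a sketch, the extra params can be passed as a param map keyed by the instance's own Param objects; the output column name used here is illustrative:

>>> copied = typedDependencyParser.copy({typedDependencyParser.outputCol: "dependency_type_copy"})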
- explainParam(param)#
Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.
- explainParams()#
Returns the documentation of all params with their optional default values and user-supplied values.
- extractParamMap(extra=None)#
Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.
- Parameters
extra – extra param values
- Returns
merged param map
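For example, a minimal sketch for inspecting the effective configuration of a loaded model, assuming the returned param map behaves like a standard dict:

>>> for param, value in typedDependencyParser.extractParamMap().items():
...     print(param.name, "->", value)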
- getInputCols()#
Gets current column names of input annotations.
- getLazyAnnotator()#
Gets whether Annotator should be evaluated lazily in a RecursivePipeline.
- getOrDefault(param)#
Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set.
- getOutputCol()#
Gets output column name of annotations.
- getParam(paramName)#
Gets a param by its name.
- getParamValue(paramName)#
Gets the value of a parameter.
- Parameters
- paramName : str
Name of the parameter
- hasDefault(param)#
Checks whether a param has a default value.
- hasParam(paramName)#
Tests whether this instance contains a param with a given (string) name.
- isDefined(param)#
Checks whether a param is explicitly set by user or has a default value.
- isSet(param)#
Checks whether a param is explicitly set by user.
- classmethod load(path)#
Reads an ML instance from the input path, a shortcut of read().load(path).
- property params#
Returns all params ordered by name. The default implementation uses dir() to get all attributes of type Param.
- static pretrained(name='dependency_typed_conllu', lang='en', remote_loc=None)[source]#
Downloads and loads a pretrained model.
- Parameters
- name : str, optional
Name of the pretrained model, by default “dependency_typed_conllu”
- lang : str, optional
Language of the pretrained model, by default “en”
- remote_loc : str, optional
Optional remote address of the resource, by default None. Will use Spark NLP's repositories otherwise.
- Returns
- TypedDependencyParserModel
The restored model
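For instance, the default call shown earlier is equivalent to spelling out the model name and language explicitly:

>>> typedDependencyParser = TypedDependencyParserModel.pretrained("dependency_typed_conllu", "en") \
...     .setInputCols(["dependency", "pos", "token"]) \
...     .setOutputCol("dependency_type")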
- classmethod read()#
Returns an MLReader instance for this class.
- save(path)#
Save this ML instance to the given path, a shortcut of ‘write().save(path)’.
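A minimal sketch of persisting a downloaded model to disk and restoring it later; the path is illustrative:

>>> typedDependencyParser.save("/tmp/typed_dependency_parser_model")
>>> restored = TypedDependencyParserModel.load("/tmp/typed_dependency_parser_model")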
- set(param, value)#
Sets a parameter in the embedded param map.
- setInputCols(*value)#
Sets column names of input annotations.
- Parameters
- *value : str
Input columns for the annotator
- setLazyAnnotator(value)#
Sets whether Annotator should be evaluated lazily in a RecursivePipeline.
- Parameters
- value : bool
Whether Annotator should be evaluated lazily in a RecursivePipeline
- setOutputCol(value)#
Sets output column name of annotations.
- Parameters
- value : str
Name of output column
- setParamValue(paramName)#
Sets the value of a parameter.
- Parameters
- paramName : str
Name of the parameter
- transform(dataset, params=None)#
Transforms the input dataset with optional parameters.
- Parameters
dataset – input dataset, which is an instance of pyspark.sql.DataFrame
params – an optional param map that overrides embedded params.
- Returns
transformed dataset
New in version 1.3.0.
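As a sketch, the model can also be applied directly to a DataFrame that already contains the required "token", "pos" and "dependency" annotation columns; here annotated_df is assumed to be the output of the earlier pipeline stages up to the dependency parser:

>>> typed = typedDependencyParser.transform(annotated_df)
>>> typed.selectExpr("dependency_type.result").show(truncate=False)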
- uid#
A unique id for the object.
- write()#
Returns an MLWriter instance for this ML instance.