sparknlp_jsl.annotator.ChunkKeyPhraseExtraction#

class sparknlp_jsl.annotator.ChunkKeyPhraseExtraction(classname='com.johnsnowlabs.nlp.embeddings.ChunkKeyPhraseExtraction', java_model=None)[source]#

Bases: BertSentenceEmbeddings

Chunk KeyPhrase Extraction uses Bert Sentence Embeddings to determine the most relevant key phrases describing a text. The input to the model consists of chunk annotations and sentence or document annotations. The model compares the chunks against the corresponding sentences/documents and selects the chunks which are most representative of the broader text context (i.e. the document or the sentence they belong to). The key phrase candidates (i.e. the input chunks) can be generated in various ways, e.g. by NGramGenerator, TextMatcher or NerConverter. The model operates either at sentence level (selecting the most descriptive chunks from the sentences they belong to) or at document level. In the latter case, the key phrases are selected to represent all of the input document annotations.

Input Annotation types: DOCUMENT, CHUNK

Output Annotation type: CHUNK

Parameters:
topN

The number of key phrases to select.

selectMostDifferent

Finds the topN * 2 key phrases and then selects topN of them, such that they are the most different from each other.

divergence

The divergence value determines how different from each other the extracted key phrases are. Uses Maximal Marginal Relevance (MMR). MMR should not be used in conjunction with selectMostDifferent as they aim to achieve the same goal, but in different ways.

documentLevelProcessing

Extract key phrases from the whole document or from particular sentences which the chunks refer to.

concatenateSentences

Concatenate the input sentence/document annotations before computing their embeddings.

Examples

>>> documenter = sparknlp.DocumentAssembler() \
...     .setInputCol("text") \
...     .setOutputCol("document")
>>> sentencer = sparknlp.annotators.SentenceDetector() \
...     .setInputCols(["document"]) \
...     .setOutputCol("sentences")
>>> tokenizer = sparknlp.annotators.Tokenizer() \
...     .setInputCols(["document"]) \
...     .setOutputCol("tokens")
>>> embeddings = sparknlp.annotators.WordEmbeddingsModel() \
...     .pretrained("embeddings_clinical", "en", "clinical/models") \
...     .setInputCols(["document", "tokens"]) \
...     .setOutputCol("embeddings")
>>> ner_tagger = MedicalNerModel() \
...     .pretrained("ner_jsl_slim", "en", "clinical/models") \
...     .setInputCols(["sentences", "tokens", "embeddings"]) \
...     .setOutputCol("ner_tags")
>>> ner_converter = NerConverter() \
...     .setInputCols("sentences", "tokens", "ner_tags") \
...     .setOutputCol("ner_chunks")
>>> key_phrase_extractor = ChunkKeyPhraseExtraction \
...     .pretrained() \
...     .setTopN(1) \
...     .setDocumentLevelProcessing(False) \
...     .setDivergence(0.4) \
...     .setInputCols(["sentences", "ner_chunks"]) \
...     .setOutputCol("ner_chunk_key_phrases")
>>> pipeline = sparknlp.base.Pipeline() \
...     .setStages([documenter, sentencer, tokenizer, embeddings, ner_tagger, ner_converter, key_phrase_extractor])
>>> data = spark.createDataFrame([["Her Diabetes has become type 2 in the last year with her Diabetes.He complains of swelling in his right forearm."]]).toDF("text")
>>> results = pipeline.fit(data).transform(data)
>>> results \
...     .selectExpr("explode(ner_chunk_key_phrases) AS key_phrase") \
...     .selectExpr(
...         "key_phrase.result",
...         "key_phrase.metadata.entity",
...         "key_phrase.metadata.DocumentSimilarity",
...         "key_phrase.metadata.MMRScore") \
...     .show(truncate=False)

+-----------------------------+------------------+-------------------+
|result                       |DocumentSimilarity|MMRScore           |
+-----------------------------+------------------+-------------------+
|gestational diabetes mellitus|0.7391447825527298|0.44348688715422274|
|28-year-old                  |0.4366776288430703|0.13577881610104517|
|type two diabetes mellitus   |0.7323921930094919|0.085800103824974  |
+-----------------------------+------------------+-------------------+
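The candidate chunks do not have to come from an NER model. The following is a minimal sketch (not part of the original example) of feeding NGramGenerator chunks to the same extractor at document level; the column names ngram_chunks and doc_key_phrases, as well as the setN(3) choice, are illustrative assumptions, and the module paths follow the example above.

>>> ngram_generator = sparknlp.annotators.NGramGenerator() \
...     .setInputCols(["tokens"]) \
...     .setOutputCol("ngram_chunks") \
...     .setN(3)
>>> doc_key_phrase_extractor = ChunkKeyPhraseExtraction.pretrained() \
...     .setTopN(3) \
...     .setDocumentLevelProcessing(True) \
...     .setConcatenateSentences(True) \
...     .setInputCols(["document", "ngram_chunks"]) \
...     .setOutputCol("doc_key_phrases")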

Methods

__init__([classname, java_model])

Initialize this instance with a Java model object.

clear(param)

Clears a param from the param map if it has been explicitly set.

copy([extra])

Creates a copy of this instance with the same uid and some extra params.

explainParam(param)

Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.

explainParams()

Returns the documentation of all params with their optionally default values and user-supplied values.

extractParamMap([extra])

Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.

getBatchSize()

Gets current batch size.

getCaseSensitive()

Gets whether to ignore case in tokens for embeddings matching.

getDimension()

Gets embeddings dimension.

getInputCols()

Gets current column names of input annotations.

getLazyAnnotator()

Gets whether Annotator should be evaluated lazily in a RecursivePipeline.

getOrDefault(param)

Gets the value of a param in the user-supplied param map or its default value.

getOutputCol()

Gets output column name of annotations.

getParam(paramName)

Gets a param by its name.

getParamValue(paramName)

Gets the value of a parameter.

getStorageRef()

Gets unique reference name for identification.

hasDefault(param)

Checks whether a param has a default value.

hasParam(paramName)

Tests whether this instance contains a param with a given (string) name.

isDefined(param)

Checks whether a param is explicitly set by user or has a default value.

isSet(param)

Checks whether a param is explicitly set by user.

load(path)

Reads an ML instance from the input path, a shortcut of read().load(path).

loadSavedModel(folder, spark_session)

Loads a locally saved model.

pretrained([name, lang, remote_loc])

Downloads and loads a pretrained model.

read()

Returns an MLReader instance for this class.

save(path)

Save this ML instance to the given path, a shortcut of 'write().save(path)'.

set(param, value)

Sets a parameter in the embedded param map.

setBatchSize(v)

Sets batch size.

setCaseSensitive(value)

Sets whether to ignore case in tokens for embeddings matching.

setConcatenateSentences(value)

Concatenate the input sentence/document annotations before computing their embeddings.

setConfigProtoBytes(b)

Sets configProto from tensorflow, serialized into byte array.

setDimension(value)

Sets embeddings dimension.

setDivergence(value)

Set the level of divergence of the extracted key phrases. The value should be in the interval [0, 1].

setDocumentLevelProcessing(value)

Extract key phrases from the whole document or from particular sentences which the chunks refer to.

setDropPunctuation(value)

This parameter determines whether to remove punctuation marks from the input chunks.

setInputCols(*value)

Sets column names of input annotations.

setIsLong(value)

Sets whether to use Long type instead of Int type for inputs buffer.

setLazyAnnotator(value)

Sets whether Annotator should be evaluated lazily in a RecursivePipeline.

setMaxSentenceLength(value)

Sets max sentence length to process.

setOutputCol(value)

Sets output column name of annotations.

setParamValue(paramName)

Sets the value of a parameter.

setParams()

setSelectMostDifferent(value)

Let the model return the top N key phrases which are the most different from each other.

setStorageRef(value)

Sets unique reference name for identification.

setTopN(value)

Set the number of key phrases to extract.

transform(dataset[, params])

Transforms the input dataset with optional parameters.

write()

Returns an MLWriter instance for this ML instance.

Attributes

batchSize

caseSensitive

concatenateSentences

configProtoBytes

dimension

divergence

documentLevelProcessing

dropPunctuation

getter_attrs

inputCols

isLong

lazyAnnotator

maxSentenceLength

name

outputCol

params

Returns all params ordered by name.

selectMostDifferent

storageRef

topN

clear(param)#

Clears a param from the param map if it has been explicitly set.

copy(extra=None)#

Creates a copy of this instance with the same uid and some extra params. This implementation first calls Params.copy and then make a copy of the companion Java pipeline component with extra params. So both the Python wrapper and the Java pipeline component get copied.

Parameters:

extra – Extra parameters to copy to the new instance

Returns:

Copy of this instance

explainParam(param)#

Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.

explainParams()#

Returns the documentation of all params with their optionally default values and user-supplied values.

extractParamMap(extra=None)#

Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.

Parameters:

extra – extra param values

Returns:

merged param map

getBatchSize()#

Gets current batch size.

Returns:
int

Current batch size

getCaseSensitive()#

Gets whether to ignore case in tokens for embeddings matching.

Returns:
bool

Whether to ignore case in tokens for embeddings matching

getDimension()#

Gets embeddings dimension.

getInputCols()#

Gets current column names of input annotations.

getLazyAnnotator()#

Gets whether Annotator should be evaluated lazily in a RecursivePipeline.

getOrDefault(param)#

Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set.

getOutputCol()#

Gets output column name of annotations.

getParam(paramName)#

Gets a param by its name.

getParamValue(paramName)#

Gets the value of a parameter.

Parameters:
paramName : str

Name of the parameter

getStorageRef()#

Gets unique reference name for identification.

Returns:
str

Unique reference name for identification

hasDefault(param)#

Checks whether a param has a default value.

hasParam(paramName)#

Tests whether this instance contains a param with a given (string) name.

isDefined(param)#

Checks whether a param is explicitly set by user or has a default value.

isSet(param)#

Checks whether a param is explicitly set by user.

classmethod load(path)#

Reads an ML instance from the input path, a shortcut of read().load(path).

static loadSavedModel(folder, spark_session)#

Loads a locally saved model.

Parameters:
folder : str

Folder of the saved model

spark_session : pyspark.sql.SparkSession

The current SparkSession

Returns:
BertSentenceEmbeddings

The restored model

property params#

Returns all params ordered by name. The default implementation uses dir() to get all attributes of type Param.

static pretrained(name='sbert_jsl_medium_uncased', lang='en', remote_loc='clinical/models')[source]#

Downloads and loads a pretrained model.

Parameters:
name : str, optional

Name of the pretrained model, by default "sbert_jsl_medium_uncased"

lang : str, optional

Language of the pretrained model, by default “en”

remote_loc : str, optional

Optional remote address of the resource, by default "clinical/models". Will use Spark NLP's repositories otherwise.

Returns:
ChunkKeyPhraseExtraction

The restored model
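For reference, loading the default model explicitly is equivalent to calling pretrained() with no arguments; the minimal sketch below simply restates the defaults from the signature above.

>>> key_phrase_extractor = ChunkKeyPhraseExtraction.pretrained("sbert_jsl_medium_uncased", "en", "clinical/models")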

classmethod read()#

Returns an MLReader instance for this class.

save(path)#

Save this ML instance to the given path, a shortcut of ‘write().save(path)’.

set(param, value)#

Sets a parameter in the embedded param map.

setBatchSize(v)#

Sets batch size.

Parameters:
v : int

Batch size

setCaseSensitive(value)#

Sets whether to ignore case in tokens for embeddings matching.

Parameters:
value : bool

Whether to ignore case in tokens for embeddings matching

setConcatenateSentences(value)[source]#

Concatenate the input sentence/document annotations before computing their embeddings. This parameter is only used if documentLevelProcessing is true. If concatenateSentences is set to true, the model will concatenate the document/sentence input annotations and compute a single embedding. If it is false, the model will compute the embedding of each sentence separately and then average the resulting embedding vectors. The default value is 'false'.

Parameters:
value : boolean

Whether to concatenate the input sentence/document annotations in order to compute the embedding of the whole document.
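A minimal sketch of the two modes, assuming an extractor wired up as in the pipeline example above (the column names are illustrative):

>>> # Compute a single embedding over the concatenated sentences (illustrative configuration)
>>> extractor = ChunkKeyPhraseExtraction.pretrained() \
...     .setInputCols(["sentences", "ner_chunks"]) \
...     .setOutputCol("key_phrases") \
...     .setDocumentLevelProcessing(True) \
...     .setConcatenateSentences(True)
>>> # Setting it to False instead averages the per-sentence embedding vectors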

setConfigProtoBytes(b)#

Sets configProto from tensorflow, serialized into byte array.

Parameters:
b : List[int]

ConfigProto from tensorflow, serialized into byte array

setDimension(value)#

Sets embeddings dimension.

Parameters:
value : int

Embeddings dimension

setDivergence(value)[source]#

Set the level of divergence of the extracted key phrases. The value should be in the interval [0, 1].

This parameter should not be used if setSelectMostDifferent is true - the two parameters aim to achieve the same goal in different ways. The default is 0, i.e. there is no constraint on the order of the key phrases extracted.

Parameters:
value : float

Divergence value
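A minimal sketch of requesting more diverse key phrases via MMR, assuming the sentence-level setup from the example above (the topN and divergence values are illustrative):

>>> extractor = ChunkKeyPhraseExtraction.pretrained() \
...     .setInputCols(["sentences", "ner_chunks"]) \
...     .setOutputCol("key_phrases") \
...     .setTopN(5) \
...     .setDivergence(0.7)  # higher values push the selected phrases further apart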

setDocumentLevelProcessing(value)[source]#

Extract key phrases from the whole document or from particular sentences which the chunks refer to. The default value is 'false'.

Parameters:
value : boolean

Whether to extract key phrases from the whole document (all sentences).

setDropPunctuation(value)[source]#

This parameter determines whether to remove punctuation marks from the input chunks. Chunks coming from NER models are not affected. The default value is ‘true’.

Parameters:
value : boolean

Whether to remove punctuation marks from input chunks.

setInputCols(*value)#

Sets column names of input annotations.

Parameters:
*value : str

Input columns for the annotator

setIsLong(value)#

Sets whether to use Long type instead of Int type for inputs buffer.

Some Bert models require Long instead of Int.

Parameters:
value : bool

Whether to use Long type instead of Int type for inputs buffer

setLazyAnnotator(value)#

Sets whether Annotator should be evaluated lazily in a RecursivePipeline.

Parameters:
value : bool

Whether Annotator should be evaluated lazily in a RecursivePipeline

setMaxSentenceLength(value)#

Sets max sentence length to process.

Parameters:
value : int

Max sentence length to process

setOutputCol(value)#

Sets output column name of annotations.

Parameters:
value : str

Name of output column

setParamValue(paramName)#

Sets the value of a parameter.

Parameters:
paramName : str

Name of the parameter

setSelectMostDifferent(value)[source]#

Let the model return the top N key phrases which are the most different from each other. Using this parameter only makes sense if the divergence parameter is set to 0. The default value is 'false'.

Parameters:
value : boolean

Whether to select the most different key phrases or not.
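A minimal sketch, again assuming the sentence-level setup from the example above; divergence is left at its default of 0 so that it does not conflict with this parameter:

>>> extractor = ChunkKeyPhraseExtraction.pretrained() \
...     .setInputCols(["sentences", "ner_chunks"]) \
...     .setOutputCol("key_phrases") \
...     .setTopN(3) \
...     .setSelectMostDifferent(True)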

setStorageRef(value)#

Sets unique reference name for identification.

Parameters:
value : str

Unique reference name for identification

setTopN(value)[source]#

Set the number of key phrases to extract. The default value is 3.

Parameters:
value : int

Number of key phrases to extract.

transform(dataset, params=None)#

Transforms the input dataset with optional parameters.

Parameters:
  • dataset – input dataset, which is an instance of pyspark.sql.DataFrame

  • params – an optional param map that overrides embedded params.

Returns:

transformed dataset

New in version 1.3.0.

uid#

A unique id for the object.

write()#

Returns an MLWriter instance for this ML instance.