sparknlp.annotator.YakeKeywordExtraction

class sparknlp.annotator.YakeKeywordExtraction[source]

Bases: sparknlp.common.AnnotatorModel

Yake is an unsupervised, corpus-independent, domain- and language-independent, single-document keyword extraction algorithm.

Extracting keywords from texts has become a challenge for individuals and organizations as information grows in complexity and size. The need to automate this task so that text can be processed in a timely and adequate manner has led to the emergence of automatic keyword extraction tools. Yake is a novel feature-based system for multilingual keyword extraction that supports texts of different sizes, domains, and languages. Unlike other approaches, Yake does not rely on dictionaries or thesauri, nor is it trained on any corpus. Instead, it follows an unsupervised approach that builds on features extracted from the text itself, making it applicable to documents written in different languages without the need for further knowledge. This can be beneficial for a large number of tasks and for the many situations where access to training corpora is limited or restricted.

The algorithm makes use of the position of sentences and tokens. Therefore, to use the annotator, the text should first be sent through a sentence boundary detector and then a tokenizer (see the pipeline in the Examples section below).

See the Parameters section for tweakable parameters to get the best result from the annotator; a configuration sketch follows the parameter list.

Note that each keyword is assigned a score greater than 0 (the lower the score, the better the keyword). Therefore, to filter the keywords, an upper bound for the score can be set with setThreshold().

For extended examples of usage, see the Spark NLP Workshop.

Input Annotation types: TOKEN

Output Annotation type: CHUNK

Parameters
minNGrams

Minimum N-grams a keyword should have, by default 2

maxNGrams

Maximum N-grams a keyword should have, by default 3

threshold

Keyword Score threshold, by default -1

windowSize

Window size for Co-Occurrence, by default 3

nKeywords

Number of Keywords to extract, by default 30

stopWords

The words to be filtered out, by default English stop words from Spark ML
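
A sketch of configuring these parameters together (all values are illustrative, not recommended settings):

>>> keywords = YakeKeywordExtraction() \
...     .setInputCols(["token"]) \
...     .setOutputCol("keywords") \
...     .setMinNGrams(1) \
...     .setMaxNGrams(3) \
...     .setWindowSize(3) \
...     .setNKeywords(20) \
...     .setThreshold(0.7)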

References

Campos, R., Mangaravite, V., Pasquali, A., Jorge, A., Nunes, C. and Jatowt, A. (2020). YAKE! Keyword Extraction from Single Documents using Multiple Local Features. Information Sciences, Vol. 509, pp. 257-289. Elsevier.

Paper abstract:

As the amount of generated information grows, reading and summarizing texts of large collections turns into a challenging task. Many documents do not come with descriptive terms, thus requiring humans to generate keywords on-the-fly. The need to automate this kind of task demands the development of keyword extraction systems with the ability to automatically identify keywords within the text. One approach is to resort to machine-learning algorithms. These, however, depend on large annotated text corpora, which are not always available. An alternative solution is to consider an unsupervised approach. In this article, we describe YAKE!, a light-weight unsupervised automatic keyword extraction method which rests on statistical text features extracted from single documents to select the most relevant keywords of a text. Our system does not need to be trained on a particular set of documents, nor does it depend on dictionaries, external corpora, text size, language, or domain. To demonstrate the merits and significance of YAKE!, we compare it against ten state-of-the-art unsupervised approaches and one supervised method. Experimental results carried out on top of twenty datasets show that YAKE! significantly outperforms other unsupervised methods on texts of different sizes, languages, and domains.

Examples

>>> import sparknlp
>>> from sparknlp.base import *
>>> from sparknlp.annotator import *
>>> from pyspark.ml import Pipeline
>>> documentAssembler = DocumentAssembler() \
...     .setInputCol("text") \
...     .setOutputCol("document")
>>> sentenceDetector = SentenceDetector() \
...     .setInputCols(["document"]) \
...     .setOutputCol("sentence")
>>> token = Tokenizer() \
...     .setInputCols(["sentence"]) \
...     .setOutputCol("token") \
...     .setContextChars(["(", "]", "?", "!", ".", ","])
>>> keywords = YakeKeywordExtraction() \
...     .setInputCols(["token"]) \
...     .setOutputCol("keywords") \
...     .setThreshold(0.6) \
...     .setMinNGrams(2) \
...     .setNKeywords(10)
>>> pipeline = Pipeline().setStages([
...     documentAssembler,
...     sentenceDetector,
...     token,
...     keywords
... ])
>>> data = spark.createDataFrame([[
...     "Sources tell us that Google is acquiring Kaggle, a platform that hosts data science and machine learning competitions. Details about the transaction remain somewhat vague, but given that Google is hosting its Cloud Next conference in San Francisco this week, the official announcement could come as early as tomorrow. Reached by phone, Kaggle co-founder CEO Anthony Goldbloom declined to deny that the acquisition is happening. Google itself declined 'to comment on rumors'. Kaggle, which has about half a million data scientists on its platform, was founded by Goldbloom  and Ben Hamner in 2010. The service got an early start and even though it has a few competitors like DrivenData, TopCoder and HackerRank, it has managed to stay well ahead of them by focusing on its specific niche. The service is basically the de facto home for running data science and machine learning competitions. With Kaggle, Google is buying one of the largest and most active communities for data scientists - and with that, it will get increased mindshare in this community, too (though it already has plenty of that thanks to Tensorflow and other projects). Kaggle has a bit of a history with Google, too, but that's pretty recent. Earlier this month, Google and Kaggle teamed up to host a $100,000 machine learning competition around classifying YouTube videos. That competition had some deep integrations with the Google Cloud Platform, too. Our understanding is that Google will keep the service running - likely under its current name. While the acquisition is probably more about Kaggle's community than technology, Kaggle did build some interesting tools for hosting its competition and 'kernels', too. On Kaggle, kernels are basically the source code for analyzing data sets and developers can share this code on the platform (the company previously called them 'scripts'). Like similar competition-centric sites, Kaggle also runs a job board, too. It's unclear what Google will do with that part of the service. According to Crunchbase, Kaggle raised $12.5 million (though PitchBook says it's $12.75) since its   launch in 2010. Investors in Kaggle include Index Ventures, SV Angel, Max Levchin, NaRavikant, Google chie economist Hal Varian, Khosla Ventures and Yuri Milner"
... ]]).toDF("text")
>>> result = pipeline.fit(data).transform(data)

Combine the result and score (contained in keywords.metadata)

>>> scores = result \
...     .selectExpr("explode(arrays_zip(keywords.result, keywords.metadata)) as resultTuples") \
...     .selectExpr("resultTuples['0'] as keyword", "resultTuples['1'].score as score")

Order ascending, as lower scores mean higher importance

>>> scores.orderBy("score").show(5, truncate = False)
+---------------------+-------------------+
|keyword              |score              |
+---------------------+-------------------+
|google cloud         |0.32051516486864573|
|google cloud platform|0.37786450577630676|
|ceo anthony goldbloom|0.39922830978423146|
|san francisco        |0.40224744669493756|
|anthony goldbloom    |0.41584827825302534|
+---------------------+-------------------+
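
The keywords can also be filtered after extraction by applying the score bound to the scores DataFrame built above (a sketch; the 0.5 cutoff is purely illustrative, and the score values taken from metadata are strings, so they are cast before comparison):

>>> scores.where("cast(score as double) < 0.5").show(truncate=False)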

Methods

__init__()

Initialize this instance with a Java model object.

clear(param)

Clears a param from the param map if it has been explicitly set.

copy([extra])

Creates a copy of this instance with the same uid and some extra params.

explainParam(param)

Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.

explainParams()

Returns the documentation of all params with their optionally default values and user-supplied values.

extractParamMap([extra])

Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.

getInputCols()

Gets current column names of input annotations.

getLazyAnnotator()

Gets whether Annotator should be evaluated lazily in a RecursivePipeline.

getOrDefault(param)

Gets the value of a param in the user-supplied param map or its default value.

getOutputCol()

Gets output column name of annotations.

getParam(paramName)

Gets a param by its name.

getParamValue(paramName)

Gets the value of a parameter.

getStopWords()

Gets the words to be filtered out, by default English stop words from Spark ML.

hasDefault(param)

Checks whether a param has a default value.

hasParam(paramName)

Tests whether this instance contains a param with a given (string) name.

isDefined(param)

Checks whether a param is explicitly set by user or has a default value.

isSet(param)

Checks whether a param is explicitly set by user.

load(path)

Reads an ML instance from the input path, a shortcut of read().load(path).

loadDefaultStopWords()

Loads the default stop words for the given language.

read()

Returns an MLReader instance for this class.

save(path)

Save this ML instance to the given path, a shortcut of 'write().save(path)'.

set(param, value)

Sets a parameter in the embedded param map.

setInputCols(*value)

Sets column names of input annotations.

setLazyAnnotator(value)

Sets whether Annotator should be evaluated lazily in a RecursivePipeline.

setMaxNGrams(value)

Sets maximum N-grams a keyword should have, by default 3.

setMinNGrams(value)

Sets minimum N-grams a keyword should have, by default 2.

setNKeywords(value)

Sets number of Keywords to extract, by default 30.

setOutputCol(value)

Sets output column name of annotations.

setParamValue(paramName)

Sets the value of a parameter.

setParams()

setStopWords(value)

Sets the words to be filtered out, by default English stop words from Spark ML.

setThreshold(value)

Sets keyword Score threshold, by default -1.

setWindowSize(value)

Sets window size for Co-Occurrence, by default 3.

transform(dataset[, params])

Transforms the input dataset with optional parameters.

write()

Returns an MLWriter instance for this ML instance.

Attributes

getter_attrs

inputCols

lazyAnnotator

maxNGrams

minNGrams

nKeywords

name

outputCol

params

Returns all params ordered by name.

stopWords

threshold

windowSize

clear(param)

Clears a param from the param map if it has been explicitly set.

copy(extra=None)

Creates a copy of this instance with the same uid and some extra params. This implementation first calls Params.copy and then make a copy of the companion Java pipeline component with extra params. So both the Python wrapper and the Java pipeline component get copied.

Parameters

extra – Extra parameters to copy to the new instance

Returns

Copy of this instance

explainParam(param)

Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.

explainParams()

Returns the documentation of all params with their optionally default values and user-supplied values.

extractParamMap(extra=None)

Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.

Parameters

extra – extra param values

Returns

merged param map

getInputCols()

Gets current column names of input annotations.

getLazyAnnotator()

Gets whether Annotator should be evaluated lazily in a RecursivePipeline.

getOrDefault(param)

Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set.

getOutputCol()

Gets output column name of annotations.

getParam(paramName)

Gets a param by its name.

getParamValue(paramName)

Gets the value of a parameter.

Parameters
paramName : str

Name of the parameter

getStopWords()[source]

Gets the words to be filtered out, by default English stop words from Spark ML.

Returns
List[str]

The words to be filtered out

hasDefault(param)

Checks whether a param has a default value.

hasParam(paramName)

Tests whether this instance contains a param with a given (string) name.

isDefined(param)

Checks whether a param is explicitly set by user or has a default value.

isSet(param)

Checks whether a param is explicitly set by user.

classmethod load(path)

Reads an ML instance from the input path, a shortcut of read().load(path).

loadDefaultStopWords(language="english")[source]

Loads the default stop words for the given language.

Supported languages: danish, dutch, english, finnish, french, german, hungarian, italian, norwegian, portuguese, russian, spanish, swedish, turkish

Parameters
language : str, optional

Language stopwords to load, by default “english”
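
For example, to replace the default English list with the built-in stop words of another supported language (a minimal sketch; French is chosen purely as an illustration):

>>> keywords = YakeKeywordExtraction() \
...     .setInputCols(["token"]) \
...     .setOutputCol("keywords")
>>> keywords.loadDefaultStopWords("french")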

property params

Returns all params ordered by name. The default implementation uses dir() to get all attributes of type Param.

classmethod read()

Returns an MLReader instance for this class.

save(path)

Save this ML instance to the given path, a shortcut of ‘write().save(path)’.

set(param, value)

Sets a parameter in the embedded param map.

setInputCols(*value)

Sets column names of input annotations.

Parameters
*value : str

Input columns for the annotator

setLazyAnnotator(value)

Sets whether Annotator should be evaluated lazily in a RecursivePipeline.

Parameters
value : bool

Whether Annotator should be evaluated lazily in a RecursivePipeline

setMaxNGrams(value)[source]

Sets maximum N-grams a keyword should have, by default 3.

Parameters
value : int

Maximum N-grams a keyword should have

setMinNGrams(value)[source]

Sets minimum N-grams a keyword should have, by default 2.

Parameters
value : int

Minimum N-grams a keyword should have

setNKeywords(value)[source]

Sets number of Keywords to extract, by default 30.

Parameters
value : int

Number of Keywords to extract

setOutputCol(value)

Sets output column name of annotations.

Parameters
value : str

Name of output column

setParamValue(paramName)

Sets the value of a parameter.

Parameters
paramName : str

Name of the parameter

setStopWords(value)[source]

Sets the words to be filtered out, by default English stop words from Spark ML.

Parameters
value : List[str]

The words to be filtered out
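
A custom list can also be supplied directly (a sketch; the word list is purely illustrative):

>>> keywords = YakeKeywordExtraction() \
...     .setInputCols(["token"]) \
...     .setOutputCol("keywords") \
...     .setStopWords(["the", "a", "an", "of", "in", "and"])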

setThreshold(value)[source]

Sets keyword Score threshold, by default -1.

Parameters
value : float

Keyword Score threshold, by default -1

setWindowSize(value)[source]

Sets window size for Co-Occurrence, by default 3.

Parameters
value : int

Window size for Co-Occurrence

transform(dataset, params=None)

Transforms the input dataset with optional parameters.

Parameters
  • dataset – input dataset, which is an instance of pyspark.sql.DataFrame

  • params – an optional param map that overrides embedded params.

Returns

transformed dataset

New in version 1.3.0.

uid

A unique id for the object.

write()

Returns an MLWriter instance for this ML instance.