
class YakeKeywordExtraction extends AnnotatorModel[YakeKeywordExtraction] with HasSimpleAnnotate[YakeKeywordExtraction] with YakeParams

Yake is an unsupervised, corpus-independent, domain- and language-independent, single-document keyword extraction algorithm.

Extracting keywords from texts has become a challenge for individuals and organizations as information grows in complexity and size. The need to automate this task so that text can be processed in a timely and adequate manner has led to the emergence of automatic keyword extraction tools. Yake is a novel feature-based system for multi-lingual keyword extraction that supports texts of different sizes, domains, and languages. Unlike other approaches, Yake relies on neither dictionaries nor thesauri, nor is it trained on any corpus. Instead, it follows an unsupervised approach that builds upon features extracted from the text itself, thus making it applicable to documents written in different languages without the need for further knowledge. This is beneficial for a large number of tasks and for the plethora of situations where access to training corpora is either limited or restricted. The algorithm makes use of the position of sentences and tokens. Therefore, to use the annotator, the text should first be sent through a sentence boundary detector and then a tokenizer (see the example below).

See the parameters section for tweakable parameters to get the best result from the annotator.

Note that each keyword is assigned a score greater than 0 (the lower the score, the better the keyword). Therefore, to filter the keywords, an upper bound on the score can be set with setThreshold.
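
For instance, a minimal sketch reusing the setters from the example below, assuming keywords scoring above 0.6 should be dropped:

val filteredKeywords = new YakeKeywordExtraction()
  .setInputCols("token")
  .setOutputCol("keywords")
  .setThreshold(0.6f) // keep only keywords with score < 0.6 (lower is better)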

For extended examples of usage, see the Examples and the YakeTestSpec.

Sources:

Campos, R., Mangaravite, V., Pasquali, A., Jorge, A., Nunes, C. and Jatowt, A. (2020). YAKE! Keyword Extraction from Single Documents using Multiple Local Features. In Information Sciences Journal. Elsevier, Vol 509, pp 257-289

Paper abstract:

As the amount of generated information grows, reading and summarizing texts of large collections turns into a challenging task. Many documents do not come with descriptive terms, thus requiring humans to generate keywords on-the-fly. The need to automate this kind of task demands the development of keyword extraction systems with the ability to automatically identify keywords within the text. One approach is to resort to machine-learning algorithms. These, however, depend on large annotated text corpora, which are not always available. An alternative solution is to consider an unsupervised approach. In this article, we describe YAKE!, a light-weight unsupervised automatic keyword extraction method which rests on statistical text features extracted from single documents to select the most relevant keywords of a text. Our system does not need to be trained on a particular set of documents, nor does it depend on dictionaries, external corpora, text size, language, or domain. To demonstrate the merits and significance of YAKE!, we compare it against ten state-of-the-art unsupervised approaches and one supervised method. Experimental results carried out on top of twenty datasets show that YAKE! significantly outperforms other unsupervised methods on texts of different sizes, languages, and domains.

Example

import spark.implicits._
import com.johnsnowlabs.nlp.base.DocumentAssembler
import com.johnsnowlabs.nlp.annotator.{SentenceDetector, Tokenizer}
import com.johnsnowlabs.nlp.annotators.keyword.yake.YakeKeywordExtraction
import org.apache.spark.ml.Pipeline

val documentAssembler = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

val sentenceDetector = new SentenceDetector()
  .setInputCols("document")
  .setOutputCol("sentence")

val token = new Tokenizer()
  .setInputCols("sentence")
  .setOutputCol("token")
  .setContextChars(Array("(", ")", "?", "!", ".", ","))

val keywords = new YakeKeywordExtraction()
  .setInputCols("token")
  .setOutputCol("keywords")
  .setThreshold(0.6f)
  .setMinNGrams(2)
  .setNKeywords(10)

val pipeline = new Pipeline().setStages(Array(
  documentAssembler,
  sentenceDetector,
  token,
  keywords
))

val data = Seq(
  "Sources tell us that Google is acquiring Kaggle, a platform that hosts data science and machine learning competitions. Details about the transaction remain somewhat vague, but given that Google is hosting its Cloud Next conference in San Francisco this week, the official announcement could come as early as tomorrow. Reached by phone, Kaggle co-founder CEO Anthony Goldbloom declined to deny that the acquisition is happening. Google itself declined 'to comment on rumors'. Kaggle, which has about half a million data scientists on its platform, was founded by Goldbloom  and Ben Hamner in 2010. The service got an early start and even though it has a few competitors like DrivenData, TopCoder and HackerRank, it has managed to stay well ahead of them by focusing on its specific niche. The service is basically the de facto home for running data science and machine learning competitions. With Kaggle, Google is buying one of the largest and most active communities for data scientists - and with that, it will get increased mindshare in this community, too (though it already has plenty of that thanks to Tensorflow and other projects). Kaggle has a bit of a history with Google, too, but that's pretty recent. Earlier this month, Google and Kaggle teamed up to host a $100,000 machine learning competition around classifying YouTube videos. That competition had some deep integrations with the Google Cloud Platform, too. Our understanding is that Google will keep the service running - likely under its current name. While the acquisition is probably more about Kaggle's community than technology, Kaggle did build some interesting tools for hosting its competition and 'kernels', too. On Kaggle, kernels are basically the source code for analyzing data sets and developers can share this code on the platform (the company previously called them 'scripts'). Like similar competition-centric sites, Kaggle also runs a job board, too. It's unclear what Google will do with that part of the service. According to Crunchbase, Kaggle raised $12.5 million (though PitchBook says it's $12.75) since its   launch in 2010. Investors in Kaggle include Index Ventures, SV Angel, Max Levchin, Naval Ravikant, Google chief economist Hal Varian, Khosla Ventures and Yuri Milner"
).toDF("text")
val result = pipeline.fit(data).transform(data)

// combine the result and score (contained in keywords.metadata)
val scores = result
  .selectExpr("explode(arrays_zip(keywords.result, keywords.metadata)) as resultTuples")
  .select($"resultTuples.0" as "keyword", $"resultTuples.1.score")

// Order ascending, as lower scores mean higher importance
scores.orderBy("score").show(5, truncate = false)
+---------------------+-------------------+
|keyword              |score              |
+---------------------+-------------------+
|google cloud         |0.32051516486864573|
|google cloud platform|0.37786450577630676|
|ceo anthony goldbloom|0.39922830978423146|
|san francisco        |0.40224744669493756|
|anthony goldbloom    |0.41584827825302534|
+---------------------+-------------------+
Linear Supertypes

YakeParams, HasSimpleAnnotate[YakeKeywordExtraction], AnnotatorModel[YakeKeywordExtraction], CanBeLazy, RawAnnotator[YakeKeywordExtraction], HasOutputAnnotationCol, HasInputAnnotationCols, HasOutputAnnotatorType, ParamsAndFeaturesWritable, HasFeatures, DefaultParamsWritable, MLWritable, Model[YakeKeywordExtraction], Transformer, PipelineStage, Logging, Params, Serializable, Serializable, Identifiable, AnyRef, Any

Instance Constructors

  1. new YakeKeywordExtraction()

    Annotator reference id. Used to identify elements in metadata or to refer to this annotator type

  2. new YakeKeywordExtraction(uid: String)

Type Members

  1. type AnnotationContent = Seq[Row]

    Internal types to show Rows as a relevant StructType. Should be deleted once Spark releases UserDefinedTypes to @developerAPI.

    Attributes
    protected
    Definition Classes
    AnnotatorModel
  2. type AnnotatorType = String
    Definition Classes
    HasOutputAnnotatorType

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def $[T](param: Param[T]): T
    Attributes
    protected
    Definition Classes
    Params
  4. def $$[T](feature: StructFeature[T]): T
    Attributes
    protected
    Definition Classes
    HasFeatures
  5. def $$[K, V](feature: MapFeature[K, V]): Map[K, V]
    Attributes
    protected
    Definition Classes
    HasFeatures
  6. def $$[T](feature: SetFeature[T]): Set[T]
    Attributes
    protected
    Definition Classes
    HasFeatures
  7. def $$[T](feature: ArrayFeature[T]): Array[T]
    Attributes
    protected
    Definition Classes
    HasFeatures
  8. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  9. def _transform(dataset: Dataset[_], recursivePipeline: Option[PipelineModel]): DataFrame
    Attributes
    protected
    Definition Classes
    AnnotatorModel
  10. def afterAnnotate(dataset: DataFrame): DataFrame
    Attributes
    protected
    Definition Classes
    AnnotatorModel
  11. def annotate(annotations: Seq[Annotation]): Seq[Annotation]

    Takes a document and annotations and produces new annotations of this annotator's annotation type.

    annotations

    Annotations that correspond to inputAnnotationCols generated by previous annotators if any

    returns

    any number of annotations processed for every input annotation. Not necessarily a one-to-one relationship

    Definition Classes
    YakeKeywordExtraction → HasSimpleAnnotate
  12. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  13. def assignTags(resultFlattenIndexed: Array[(String, Int)]): Array[(String, Int, Int, String)]
  14. def beforeAnnotate(dataset: Dataset[_]): Dataset[_]
    Attributes
    protected
    Definition Classes
    AnnotatorModel
  15. def calculateTokenScores(basicStats: Array[(String, Int)], coOccurLeftAggregate: Map[String, Map[String, Int]], coOccurRightAggregate: Map[String, Map[String, Int]]): Iterable[Token]

    Calculate token scores given statistics. Refer to the YAKE paper; a sketch combining these terms follows the parameter list below.

    T_Position = ln(ln(3 + Median(Sentence Index)))
    T_Case = max(TF(U(t)), TF(A(t))) / ln(TF(t))
    TF_Norm = TF(t) / (MeanTF + 1 * σ)
    T_Rel = 1 + (DL + DR) * TF(t) / MaxTF
    T_Sentence = SF(t) / #Sentences
    TS = (T_Position * T_Rel) / (T_Case + (TF_Norm + T_Sentence) / T_Rel)

    basicStats

    Basic stats

    coOccurLeftAggregate

    Left Co Occurrence

    coOccurRightAggregate

    Right Co Occurrence
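
    A minimal sketch (not the library's internal code) of how the component scores above combine into the final term score TS, assuming the five components have already been computed per token:

    // Hypothetical helper for illustration: combines precomputed component
    // scores into TS exactly as the formula above defines it.
    def termScore(
        tPosition: Double, // T_Position
        tCase: Double,     // T_Case
        tfNorm: Double,    // TF_Norm
        tRel: Double,      // T_Rel
        tSentence: Double  // T_Sentence
    ): Double =
      (tPosition * tRel) / (tCase + (tfNorm + tSentence) / tRel)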

  16. final def checkSchema(schema: StructType, inputAnnotatorType: String): Boolean
    Attributes
    protected
    Definition Classes
    HasInputAnnotationCols
  17. final def clear(param: Param[_]): YakeKeywordExtraction.this.type
    Definition Classes
    Params
  18. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  19. def copy(extra: ParamMap): YakeKeywordExtraction

    Requirement for annotator copies.

    Definition Classes
    RawAnnotator → Model → Transformer → PipelineStage → Params
  20. def copyValues[T <: Params](to: T, extra: ParamMap): T
    Attributes
    protected
    Definition Classes
    Params
  21. final def defaultCopy[T <: Params](extra: ParamMap): T
    Attributes
    protected
    Definition Classes
    Params
  22. def dfAnnotate: UserDefinedFunction

    Wraps annotate to happen inside SparkSQL user defined functions in order to act with org.apache.spark.sql.Column

    returns

    udf function to be applied to inputCols using this annotator's annotate function as part of ML transformation

    Definition Classes
    HasSimpleAnnotate
  23. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  24. def equals(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  25. def explainParam(param: Param[_]): String
    Definition Classes
    Params
  26. def explainParams(): String
    Definition Classes
    Params
  27. def extraValidate(structType: StructType): Boolean
    Attributes
    protected
    Definition Classes
    RawAnnotator
  28. def extraValidateMsg: String

    Override for additional custom schema checks

    Attributes
    protected
    Definition Classes
    RawAnnotator
  29. final def extractParamMap(): ParamMap
    Definition Classes
    Params
  30. final def extractParamMap(extra: ParamMap): ParamMap
    Definition Classes
    Params
  31. val features: ArrayBuffer[Feature[_, _, _]]
    Definition Classes
    HasFeatures
  32. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  33. def get[T](feature: StructFeature[T]): Option[T]
    Attributes
    protected
    Definition Classes
    HasFeatures
  34. def get[K, V](feature: MapFeature[K, V]): Option[Map[K, V]]
    Attributes
    protected
    Definition Classes
    HasFeatures
  35. def get[T](feature: SetFeature[T]): Option[Set[T]]
    Attributes
    protected
    Definition Classes
    HasFeatures
  36. def get[T](feature: ArrayFeature[T]): Option[Array[T]]
    Attributes
    protected
    Definition Classes
    HasFeatures
  37. final def get[T](param: Param[T]): Option[T]
    Definition Classes
    Params
  38. def getBasicStats(result: Array[Annotation]): Array[(String, Int)]

    Calculates basic statistics, like the total sentences in the document, and assigns a tag for each token.

    result

    Document to annotate as array of tokens with sentence metadata

    returns

    Dataframe with columns SentenceID, token, totalSentences, tag

  39. def getCandidateKeywords(sentences: Array[(String, Int, Int, String)]): Map[String, Int]

    Generate candidate keywords

    sentences

    sentences as a list

    returns

    candidate keywords
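
    A hedged sketch of this kind of candidate generation (not the library's exact code): slide over each tokenized sentence and keep the contiguous n-grams, within the minNGrams/maxNGrams bounds, that neither start nor end with a stop word.

    // Illustrative only; minN, maxN and stopWords mirror the annotator's
    // minNGrams, maxNGrams and stopWords parameters.
    def candidateNGrams(
        sentence: Seq[String],
        stopWords: Set[String],
        minN: Int = 1,
        maxN: Int = 3
    ): Seq[Seq[String]] =
      for {
        n      <- minN to maxN
        window <- sentence.sliding(n).toSeq
        if window.size == n &&
          !stopWords.contains(window.head.toLowerCase) &&
          !stopWords.contains(window.last.toLowerCase)
      } yield window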

  40. final def getClass(): Class[_]
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  41. def getCoOccurrence(sentences: ListBuffer[ListBuffer[String]], left: Boolean): Map[String, Map[String, Int]]

    Calculate co-occurrence for the left or right context, given a window size.

    sentences

    DataFrame with tokens

    returns

    Co Occurrence for token x from left to right as a Map
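
    A hedged sketch of windowed co-occurrence counting (not the library's exact code): for each token, count the tokens that appear up to windowSize positions to its left within the same sentence.

    import scala.collection.mutable

    // Illustrative only; windowSize mirrors the annotator's parameter.
    def leftCoOccurrence(
        sentences: Seq[Seq[String]],
        windowSize: Int = 3
    ): Map[String, Map[String, Int]] = {
      val counts = mutable.Map.empty[String, mutable.Map[String, Int]]
      for {
        sentence   <- sentences
        (token, i) <- sentence.zipWithIndex
        neighbor   <- sentence.slice(math.max(0, i - windowSize), i)
      } {
        val inner = counts.getOrElseUpdate(token, mutable.Map.empty)
        inner(neighbor) = inner.getOrElse(neighbor, 0) + 1
      }
      counts.map { case (token, inner) => (token, inner.toMap) }.toMap
    }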

  42. final def getDefault[T](param: Param[T]): Option[T]
    Definition Classes
    Params
  43. def getInputCols: Array[String]

    returns

    input annotations columns currently used

    Definition Classes
    HasInputAnnotationCols
  44. def getKeywords(candidate: Map[String, Int], tokens: Iterable[Token]): ListMap[String, Double]

    Extract keywords

    candidate

    candidate keywords

    tokens

    tokens with scores

    returns

    keywords
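
    The paper's candidate scoring (Campos et al., 2020) can be sketched as follows; this is an assumption drawn from the paper, not a copy of the library's code. A candidate's score divides the product of its token scores by its frequency times one plus the sum of its token scores, so frequent candidates built from low-scoring (good) tokens rank best:

    // Illustrative only: S(kw) = Π S(t) / (TF(kw) * (1 + Σ S(t))).
    // Lower scores indicate better keywords.
    def keywordScore(tokenScores: Seq[Double], keywordFrequency: Int): Double =
      tokenScores.product / (keywordFrequency * (1 + tokenScores.sum))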

  45. def getLazyAnnotator: Boolean
    Definition Classes
    CanBeLazy
  46. final def getOrDefault[T](param: Param[T]): T
    Definition Classes
    Params
  47. final def getOutputCol: String

    Gets the annotation column name that is going to be generated.

    Definition Classes
    HasOutputAnnotationCol
  48. def getParam(paramName: String): Param[Any]
    Definition Classes
    Params
  49. def getSentences(tokenizedArray: Array[Annotation]): ListBuffer[ListBuffer[String]]

    Separate sentences given tokens with sentence metadata

    tokenizedArray

    Tokens with sentence metadata

    returns

    separated sentences
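
    A hedged sketch of this grouping (not the library's exact code), assuming each token Annotation carries its sentence index under the metadata key "sentence", as Spark NLP tokenizers emit:

    import scala.collection.mutable.ListBuffer
    import com.johnsnowlabs.nlp.Annotation

    // Illustrative only: bucket token texts by their sentence index.
    def groupBySentence(tokens: Array[Annotation]): ListBuffer[ListBuffer[String]] = {
      val result = ListBuffer.empty[ListBuffer[String]]
      tokens
        .groupBy(_.metadata.getOrElse("sentence", "0").toInt)
        .toSeq
        .sortBy { case (sentenceIndex, _) => sentenceIndex }
        .foreach { case (_, sentenceTokens) =>
          result += ListBuffer(sentenceTokens.map(_.result): _*)
        }
      result
    }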

  50. def getStopWords: Array[String]

    Definition Classes
    YakeParams
  51. final def hasDefault[T](param: Param[T]): Boolean
    Definition Classes
    Params
  52. def hasParam(paramName: String): Boolean
    Definition Classes
    Params
  53. def hasParent: Boolean
    Definition Classes
    Model
  54. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  55. def initializeLogIfNecessary(isInterpreter: Boolean, silent: Boolean): Boolean
    Attributes
    protected
    Definition Classes
    Logging
  56. def initializeLogIfNecessary(isInterpreter: Boolean): Unit
    Attributes
    protected
    Definition Classes
    Logging
  57. val inputAnnotatorTypes: Array[AnnotatorType]

    Input Annotator Types: TOKEN

    Definition Classes
    YakeKeywordExtraction → HasInputAnnotationCols
  58. final val inputCols: StringArrayParam

    Columns that contain annotations necessary to run this annotator. AnnotatorType is used both as input and output columns if not specified.

    Attributes
    protected
    Definition Classes
    HasInputAnnotationCols
  59. final def isDefined(param: Param[_]): Boolean
    Definition Classes
    Params
  60. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  61. final def isSet(param: Param[_]): Boolean
    Definition Classes
    Params
  62. def isTraceEnabled(): Boolean
    Attributes
    protected
    Definition Classes
    Logging
  63. val lazyAnnotator: BooleanParam
    Definition Classes
    CanBeLazy
  64. def log: Logger
    Attributes
    protected
    Definition Classes
    Logging
  65. def logDebug(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  66. def logDebug(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  67. def logError(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  68. def logError(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  69. def logInfo(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  70. def logInfo(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  71. def logName: String
    Attributes
    protected
    Definition Classes
    Logging
  72. def logTrace(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  73. def logTrace(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  74. def logWarning(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  75. def logWarning(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  76. val maxNGrams: IntParam

    Maximum N-grams a keyword should have (Default: 3).

    Definition Classes
    YakeParams
  77. val minNGrams: IntParam

    Minimum N-grams a keyword should have (Default: 1).

    Definition Classes
    YakeParams
  78. def msgHelper(schema: StructType): String
    Attributes
    protected
    Definition Classes
    HasInputAnnotationCols
  79. val nKeywords: IntParam

    Number of Keywords to extract (Default: 30).

    Definition Classes
    YakeParams
  80. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  81. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  82. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  83. def onWrite(path: String, spark: SparkSession): Unit
    Attributes
    protected
    Definition Classes
    ParamsAndFeaturesWritable
  84. val optionalInputAnnotatorTypes: Array[String]
    Definition Classes
    HasInputAnnotationCols
  85. val outputAnnotatorType: AnnotatorType

    Output Annotator Types: CHUNK

    Definition Classes
    YakeKeywordExtraction → HasOutputAnnotatorType
  86. final val outputCol: Param[String]
    Attributes
    protected
    Definition Classes
    HasOutputAnnotationCol
  87. lazy val params: Array[Param[_]]
    Definition Classes
    Params
  88. var parent: Estimator[YakeKeywordExtraction]
    Definition Classes
    Model
  89. def processSentences(annotations: Seq[Annotation]): Seq[Annotation]

    Execute the YAKE algorithm for each sentence

    annotations

    token array to annotate

    returns

    annotated token array

  90. def save(path: String): Unit
    Definition Classes
    MLWritable
    Annotations
    @Since( "1.6.0" ) @throws( ... )
  91. def set[T](feature: StructFeature[T], value: T): YakeKeywordExtraction.this.type
    Attributes
    protected
    Definition Classes
    HasFeatures
  92. def set[K, V](feature: MapFeature[K, V], value: Map[K, V]): YakeKeywordExtraction.this.type
    Attributes
    protected
    Definition Classes
    HasFeatures
  93. def set[T](feature: SetFeature[T], value: Set[T]): YakeKeywordExtraction.this.type
    Attributes
    protected
    Definition Classes
    HasFeatures
  94. def set[T](feature: ArrayFeature[T], value: Array[T]): YakeKeywordExtraction.this.type
    Attributes
    protected
    Definition Classes
    HasFeatures
  95. final def set(paramPair: ParamPair[_]): YakeKeywordExtraction.this.type
    Attributes
    protected
    Definition Classes
    Params
  96. final def set(param: String, value: Any): YakeKeywordExtraction.this.type
    Attributes
    protected
    Definition Classes
    Params
  97. final def set[T](param: Param[T], value: T): YakeKeywordExtraction.this.type
    Definition Classes
    Params
  98. def setDefault[T](feature: StructFeature[T], value: () ⇒ T): YakeKeywordExtraction.this.type
    Attributes
    protected
    Definition Classes
    HasFeatures
  99. def setDefault[K, V](feature: MapFeature[K, V], value: () ⇒ Map[K, V]): YakeKeywordExtraction.this.type
    Attributes
    protected
    Definition Classes
    HasFeatures
  100. def setDefault[T](feature: SetFeature[T], value: () ⇒ Set[T]): YakeKeywordExtraction.this.type
    Attributes
    protected
    Definition Classes
    HasFeatures
  101. def setDefault[T](feature: ArrayFeature[T], value: () ⇒ Array[T]): YakeKeywordExtraction.this.type
    Attributes
    protected
    Definition Classes
    HasFeatures
  102. final def setDefault(paramPairs: ParamPair[_]*): YakeKeywordExtraction.this.type
    Attributes
    protected
    Definition Classes
    Params
  103. final def setDefault[T](param: Param[T], value: T): YakeKeywordExtraction.this.type
    Attributes
    protected[org.apache.spark.ml]
    Definition Classes
    Params
  104. final def setInputCols(value: String*): YakeKeywordExtraction.this.type
    Definition Classes
    HasInputAnnotationCols
  105. def setInputCols(value: Array[String]): YakeKeywordExtraction.this.type

    Overrides required annotators column if different than default

    Definition Classes
    HasInputAnnotationCols
  106. def setLazyAnnotator(value: Boolean): YakeKeywordExtraction.this.type
    Definition Classes
    CanBeLazy
  107. def setMaxNGrams(value: Int): YakeKeywordExtraction.this.type

    Definition Classes
    YakeParams
  108. def setMinNGrams(value: Int): YakeKeywordExtraction.this.type

    Definition Classes
    YakeParams
  109. def setNKeywords(value: Int): YakeKeywordExtraction.this.type

    Definition Classes
    YakeParams
  110. final def setOutputCol(value: String): YakeKeywordExtraction.this.type

    Overrides annotation column name when transforming

    Definition Classes
    HasOutputAnnotationCol
  111. def setParent(parent: Estimator[YakeKeywordExtraction]): YakeKeywordExtraction
    Definition Classes
    Model
  112. def setStopWords(value: Array[String]): YakeKeywordExtraction.this.type

    Definition Classes
    YakeParams
  113. def setThreshold(value: Float): YakeKeywordExtraction.this.type

    Definition Classes
    YakeParams
  114. def setWindowSize(value: Int): YakeKeywordExtraction.this.type

    Definition Classes
    YakeParams
  115. val stopWords: StringArrayParam

    The words to be filtered out (Default: English stop words from MLlib)

    Definition Classes
    YakeParams
  116. final def synchronized[T0](arg0: ⇒ T0): T0
    Definition Classes
    AnyRef
  117. val threshold: FloatParam

    Threshold to filter keywords (Default: -1, i.e. disabled). Each keyword will be given a keyword score greater than 0 (the lower the score, the better the keyword). This sets the upper bound for the keyword score.

    Definition Classes
    YakeParams
  118. def toString(): String
    Definition Classes
    Identifiable → AnyRef → Any
  119. final def transform(dataset: Dataset[_]): DataFrame

    Given that the requirements are met, this applies the ML transformation within a Pipeline or standalone. Output annotation will be generated as a new column; previous annotations are still available separately. Metadata is built at schema level to record annotation structural information outside its content.

    dataset

    Dataset[Row]

    Definition Classes
    AnnotatorModel → Transformer
  120. def transform(dataset: Dataset[_], paramMap: ParamMap): DataFrame
    Definition Classes
    Transformer
    Annotations
    @Since( "2.0.0" )
  121. def transform(dataset: Dataset[_], firstParamPair: ParamPair[_], otherParamPairs: ParamPair[_]*): DataFrame
    Definition Classes
    Transformer
    Annotations
    @Since( "2.0.0" ) @varargs()
  122. final def transformSchema(schema: StructType): StructType

    Requirement for pipeline transformation validation. It is called on fit().

    Definition Classes
    RawAnnotator → PipelineStage
  123. def transformSchema(schema: StructType, logging: Boolean): StructType
    Attributes
    protected
    Definition Classes
    PipelineStage
    Annotations
    @DeveloperApi()
  124. val uid: String
    Definition Classes
    YakeKeywordExtraction → Identifiable
  125. def validate(schema: StructType): Boolean

    Takes a Dataset and checks to see if all the required annotation types are present.

    schema

    to be validated

    returns

    True if all the required types are present, else false

    Attributes
    protected
    Definition Classes
    RawAnnotator
  126. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  127. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  128. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  129. val windowSize: IntParam

    Window size for Co-Occurrence (Default: 3). Yake will construct a co-occurrence matrix. You can set the window size for the co-occurrence matrix construction with this parameter. Example: windowSize=2 will look at two words to both left and right of a candidate word.

    Definition Classes
    YakeParams
  130. def wrapColumnMetadata(col: Column): Column
    Attributes
    protected
    Definition Classes
    RawAnnotator
  131. def write: MLWriter
    Definition Classes
    ParamsAndFeaturesWritable → DefaultParamsWritable → MLWritable

