sparknlp.annotator.ws.word_segmenter#

Contains classes for the WordSegmenter.

Module Contents#

Classes#

WordSegmenterApproach

Trains a WordSegmenter which tokenizes non-English or non-whitespace separated texts.

WordSegmenterModel

WordSegmenter which tokenizes non-English or non-whitespace separated texts.

class WordSegmenterApproach[source]#

Trains a WordSegmenter which tokenizes non-English or non-whitespace separated texts.

Many languages, such as Korean, Japanese or Chinese, are not whitespace separated; their sentences are a concatenation of many symbols. Without understanding the language, splitting the words into their corresponding tokens is impossible. The WordSegmenter is trained to understand these languages and split them into semantically correct parts.

This annotator is based on the paper Chinese Word Segmentation as Character Tagging [1]. Word segmentation is treated as a tagging problem, where each character is tagged with one of four labels: LL (left boundary), RR (right boundary), MM (middle) or LR (word by itself). The label depends on the character's position within its word. Characters tagged LL combine with the character to their right, characters tagged RR combine with the character to their left, characters tagged MM form the middle of a word and combine with both sides, and characters tagged LR are words by themselves.

Example (from [1], Example 3(a) (raw), 3(b) (tagged), 3(c) (translation)):
  • 上海 计划 到 本 世纪 末 实现 人均 国内 生产 总值 五千 美元

  • 上/LL 海/RR 计/LL 划/RR 到/LR 本/LR 世/LL 纪/RR 末/LR 实/LL 现/RR 人/LL 均/RR 国/LL 内/RR 生/LL 产/RR 总/LL 值/RR 五/LL 千/RR 美/LL 元/RR

  • Shanghai plans to reach the goal of 5,000 dollars in per capita GDP by the end of the century.

For instantiated/pretrained models, see WordSegmenterModel.

To train your own model, a training dataset consisting of Part-Of-Speech tags is required. The data has to be loaded into a dataframe, where a column contains Annotations of type POS. This column can be set with setPosColumn().

Tip: The helper class POS might be useful to read training data into data frames.

For extended examples of usage, see the Examples.

Parameters:
posCol

column of Array of POS tags that match tokens

nIterations

Number of training iterations; more iterations converge to better accuracy, by default 5

frequencyThreshold

Minimum number of times a tag must appear on a word for it to be marked as frequent, by default 5

ambiguityThreshold

Fraction of the total number of words that must be covered for a tag to be marked as frequent, by default 0.97

enableRegexTokenizer

Whether to use RegexTokenizer before segmentation. Useful for multilingual text

toLowercase

Indicates whether to convert all characters to lowercase before tokenizing. Used only when enableRegexTokenizer is true

pattern

regex pattern used for tokenizing. Used only when enableRegexTokenizer is true

References

[1] Xue, Nianwen. “Chinese Word Segmentation as Character Tagging.” International Journal of Computational Linguistics & Chinese Language Processing, Volume 8, Number 1, February 2003: Special Issue on Word Formation and Chinese Language Processing, 2003, pp. 29-48. ACLWeb, https://aclanthology.org/O03-4002.

Input Annotation types: DOCUMENT

Output Annotation type: TOKEN

Examples

In this example, "chinese_train.utf8" is in the form of (each <character> stands for one Chinese character, paired with its tag):

<character>|LL <character>|RR <character>|LL <character>|RR <character>|LL <character>|RR

and is loaded with the POS class to create a dataframe of POS type Annotations.

>>> import sparknlp
>>> from sparknlp.base import *
>>> from sparknlp.annotator import *
>>> from pyspark.ml import Pipeline
>>> documentAssembler = DocumentAssembler() \
...     .setInputCol("text") \
...     .setOutputCol("document")
>>> wordSegmenter = WordSegmenterApproach() \
...     .setInputCols(["document"]) \
...     .setOutputCol("token") \
...     .setPosColumn("tags") \
...     .setNIterations(5)
>>> pipeline = Pipeline().setStages([
...     documentAssembler,
...     wordSegmenter
... ])
>>> trainingDataSet = POS().readDataset(
...     spark,
...     "src/test/resources/word-segmenter/chinese_train.utf8"
... )
>>> pipelineModel = pipeline.fit(trainingDataSet)
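
As a minimal continuation of this example (assuming the fitted pipelineModel and the SparkSession spark from above), the trained segmenter could then be applied to new, unsegmented text. The sample sentence reuses the raw example from [1] shown earlier and is only illustrative:

>>> data = spark.createDataFrame([["上海计划到本世纪末实现人均国内生产总值五千美元"]]).toDF("text")
>>> result = pipelineModel.transform(data)
>>> result.select("token.result").show(truncate=False)
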
setPosColumn(value)[source]#

Sets column name for array of POS tags that match tokens.

Parameters:
valuestr

Name of the column

setNIterations(value)[source]#

Sets the number of training iterations; more iterations converge to better accuracy, by default 5.

Parameters:
valueint

Number of iterations

setFrequencyThreshold(value)[source]#

Sets the minimum number of times a tag must appear on a word for it to be marked as frequent, by default 5.

Parameters:
valueint

Frequency threshold to be marked as frequent

setAmbiguityThreshold(value)[source]#

Sets the fraction of the total number of words that must be covered for a tag to be marked as frequent, by default 0.97.

Parameters:
valuefloat

Fraction of the total number of words that must be covered for a tag to be marked as frequent
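
The training hyperparameters above can be combined on a single instance; a brief sketch (the values are illustrative, not tuned recommendations):

>>> wordSegmenter = WordSegmenterApproach() \
...     .setInputCols(["document"]) \
...     .setOutputCol("token") \
...     .setPosColumn("tags") \
...     .setNIterations(10) \
...     .setFrequencyThreshold(10) \
...     .setAmbiguityThreshold(0.95)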

getNIterations()[source]#

Gets the number of training iterations.

Returns:
int

Number of iterations

getFrequencyThreshold()[source]#

Gets the minimum number of times a tag must appear on a word for it to be marked as frequent.

Returns:
int

Frequency threshold to be marked as frequent

getAmbiguityThreshold()[source]#

Gets the fraction of the total number of words that must be covered for a tag to be marked as frequent.

Returns:
float

Fraction of the total number of words that must be covered for a tag to be marked as frequent

setEnableRegexTokenizer(value)[source]#

Sets whether to use RegexTokenizer before segmentation. Useful for multilingual text.

Parameters:
valuebool

Whether to use RegexTokenizer before segmentation

setToLowercase(value)[source]#

Sets whether to convert all characters to lowercase before tokenizing, by default False.

Parameters:
valuebool

Whether to convert all characters to lowercase before tokenizing

setPattern(value)[source]#

Sets the regex pattern used for tokenizing, by default \s+.

Parameters:
valuestr

Regex pattern used for tokenizing
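
When the regex pre-tokenizer is needed (for example on multilingual text), the three related options can be set together; a brief sketch using the setters documented above, with illustrative values:

>>> wordSegmenter = WordSegmenterApproach() \
...     .setInputCols(["document"]) \
...     .setOutputCol("token") \
...     .setPosColumn("tags") \
...     .setEnableRegexTokenizer(True) \
...     .setToLowercase(True) \
...     .setPattern("\\s+")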

class WordSegmenterModel(classname='com.johnsnowlabs.nlp.annotators.ws.WordSegmenterModel', java_model=None)[source]#

WordSegmenter which tokenizes non-English or non-whitespace separated texts.

Many languages, such as Korean, Japanese or Chinese, are not whitespace separated; their sentences are a concatenation of many symbols. Without understanding the language, splitting the words into their corresponding tokens is impossible. The WordSegmenter is trained to understand these languages and split them into semantically correct parts.

This is the instantiated model of the WordSegmenterApproach. For training your own model, please see the documentation of that class.

Pretrained models can be loaded with pretrained() of the companion object:

>>> wordSegmenter = WordSegmenterModel.pretrained() \
...     .setInputCols(["document"]) \
...     .setOutputCol("words_segmented")

The default model is "wordseg_pku" and the default language is "zh", if no values are provided. For available pretrained models please see the Models Hub.
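
The same defaults can also be requested explicitly by name and language (here using the documented default model name "wordseg_pku" and language "zh"):

>>> wordSegmenterZh = WordSegmenterModel.pretrained("wordseg_pku", "zh") \
...     .setInputCols(["document"]) \
...     .setOutputCol("words_segmented")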

For extended examples of usage, see the Examples.

Input Annotation types: DOCUMENT

Output Annotation type: TOKEN

Parameters:
None

Examples

>>> import sparknlp
>>> from sparknlp.base import *
>>> from sparknlp.annotator import *
>>> from pyspark.ml import Pipeline
>>> documentAssembler = DocumentAssembler() \
...     .setInputCol("text") \
...     .setOutputCol("document")
>>> wordSegmenter = WordSegmenterModel.pretrained() \
...     .setInputCols(["document"]) \
...     .setOutputCol("token")
>>> pipeline = Pipeline().setStages([
...     documentAssembler,
...     wordSegmenter
... ])
>>> data = spark.createDataFrame([["然而,這樣的處理也衍生了一些問題。"]]).toDF("text")
>>> result = pipeline.fit(data).transform(data)
>>> result.select("token.result").show(truncate=False)
+--------------------------------------------------------+
|result                                                  |
+--------------------------------------------------------+
|[然而, ,, 這樣, 的, 處理, 也, 衍生, 了, 一些, 問題, 。      ]|
+--------------------------------------------------------+
setEnableRegexTokenizer(value)[source]#

Sets whether to use RegexTokenizer before segmentation. Useful for multilingual text.

Parameters:
valuebool

Whether to use RegexTokenizer before segmentation

setToLowercase(value)[source]#

Sets whether to convert all characters to lowercase before tokenizing, by default False.

Parameters:
valuebool

Whether to convert all characters to lowercase before tokenizing

setPattern(value)[source]#

Sets the regex pattern used for tokenizing, by default \s+.

Parameters:
valuestr

Regex pattern used for tokenizing

static pretrained(name='wordseg_pku', lang='zh', remote_loc=None)[source]#

Downloads and loads a pretrained model.

Parameters:
namestr, optional

Name of the pretrained model, by default “wordseg_pku”

langstr, optional

Language of the pretrained model, by default “zh”

remote_locstr, optional

Optional remote address of the resource, by default None. Will use Spark NLP's repositories otherwise.

Returns:
WordSegmenterModel

The restored model