Word Segmenter for Japanese

Description

WordSegmenterModel (WSM) is based on a maximum entropy probability model that detects word boundaries in Japanese text. Japanese text is written without white space between words, so a computer-based application cannot know a priori which sequences of ideograms form words. Many natural language processing tasks, such as part-of-speech (POS) tagging and named entity recognition (NER), require word segmentation as an initial step.
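
As a rough illustration of the underlying idea (a toy sketch, not Spark NLP's actual implementation; the training data, features, and helper names below are invented), boundary detection can be framed as a binary classification at each gap between adjacent characters, with a maximum entropy model scoring each gap:

# Illustrative sketch only: every gap between adjacent characters is a
# binary decision (word boundary or not), classified by a maximum entropy
# model (logistic regression) over simple character-window features.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

def gap_features(chars, i):
    # Features describing the gap between chars[i] and chars[i + 1].
    return {"left": chars[i], "right": chars[i + 1], "pair": chars[i] + chars[i + 1]}

# Tiny hand-segmented training sample (hypothetical data).
segmented = [["これ", "は", "例", "です"], ["例", "です"]]
X, y = [], []
for words in segmented:
    chars = "".join(words)
    cuts, pos = set(), 0
    for w in words[:-1]:
        pos += len(w)
        cuts.add(pos)  # a word boundary sits before character index pos
    for i in range(len(chars) - 1):
        X.append(gap_features(chars, i))
        y.append(1 if (i + 1) in cuts else 0)

vec = DictVectorizer()
clf = LogisticRegression(max_iter=1000).fit(vec.fit_transform(X), y)

def segment(text):
    # Cut the text wherever the classifier predicts a boundary.
    words, start = [], 0
    for i in range(len(text) - 1):
        if clf.predict(vec.transform([gap_features(text, i)]))[0] == 1:
            words.append(text[start:i + 1])
            start = i + 1
    return words + [text[start:]]

print(segment("これは例です"))  # boundaries learned from the toy sample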


How to use


from sparknlp.base import DocumentAssembler
from sparknlp.annotator import WordSegmenterModel
from pyspark.ml import Pipeline

document_assembler = DocumentAssembler().setInputCol("text").setOutputCol("document")
word_segmenter = WordSegmenterModel.pretrained("wordseg_gsd_ud", "ja") \
    .setInputCols(["document"]) \
    .setOutputCol("token")
pipeline = Pipeline(stages=[document_assembler, word_segmenter])
ws_model = pipeline.fit(spark.createDataFrame([[""]]).toDF("text"))
example = spark.createDataFrame([["ジョンスノーラボからこんにちは!"]], ["text"])
result = ws_model.transform(example)
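
The segmented words end up in the token annotation column; one way to flatten them for inspection (a usage sketch continuing the pipeline above, using standard Spark SQL) is:

result.selectExpr("explode(token.result) AS token").show(truncate=False)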


import com.johnsnowlabs.nlp.DocumentAssembler
import com.johnsnowlabs.nlp.annotator.WordSegmenterModel
import org.apache.spark.ml.Pipeline
import spark.implicits._

val document_assembler = new DocumentAssembler().setInputCol("text").setOutputCol("document")
val word_segmenter = WordSegmenterModel.pretrained("wordseg_gsd_ud", "ja")
  .setInputCols(Array("document"))
  .setOutputCol("token")
val pipeline = new Pipeline().setStages(Array(document_assembler, word_segmenter))
val data = Seq("ジョンスノーラボからこんにちは!").toDF("text")
val result = pipeline.fit(data).transform(data)


import nlu
text = ["ジョンスノーラボからこんにちは!"]
token_df = nlu.load('ja.segment_words').predict(text)
token_df

Results


0     ジョンス
1        ノ
2        ー
3        ラ
4        ボ
5       から
6       こん
7        に
8        ち
9        は
10       !
Name: token, dtype: object

Model Information

Model Name: wordseg_gsd_ud
Compatibility: Spark NLP 3.0.0+
License: Open Source
Edition: Official
Input Labels: [document]
Output Labels: [words_segmented]
Language: ja