sparknlp.annotator.MarianTransformer

class sparknlp.annotator.MarianTransformer(classname='com.johnsnowlabs.nlp.annotators.seq2seq.MarianTransformer', java_model=None)[source]

Bases: sparknlp.common.AnnotatorModel, sparknlp.common.HasBatchedAnnotate

MarianTransformer: Fast Neural Machine Translation

Marian is an efficient, free Neural Machine Translation framework written in pure C++ with minimal dependencies. It is mainly being developed by the Microsoft Translator team. Many academic (most notably the University of Edinburgh and in the past the Adam Mickiewicz University in Poznań) and commercial contributors help with its development. MarianTransformer uses the models trained by MarianNMT.

It is currently the engine behind the Microsoft Translator Neural Machine Translation services and is being deployed by many companies, organizations, and research projects.

Pretrained models can be loaded with pretrained() of the companion object:

>>> marian = MarianTransformer.pretrained() \
...     .setInputCols(["sentence"]) \
...     .setOutputCol("translation")

If no values are provided, the default model is "opus_mt_en_fr" and the default language is "xx" (meaning multilingual).
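This default can also be requested explicitly, using the model name and language stated above:

>>> marian = MarianTransformer.pretrained("opus_mt_en_fr", "xx") \
...     .setInputCols(["sentence"]) \
...     .setOutputCol("translation")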

For available pretrained models please see the Models Hub.

For extended examples of usage, see the Spark NLP Workshop.

Input Annotation types: DOCUMENT

Output Annotation type: DOCUMENT

Parameters
batchSize

Size of every batch, by default 8

configProtoBytes

ConfigProto from TensorFlow, serialized into a byte array.

langId

Language identifier to prepend for multilingual models, e.g. “>>fr<<”, by default “”

maxInputLength

Controls the maximum length for encoder inputs (source language texts), by default 40

maxOutputLength

Controls the maximum length for decoder outputs (target language texts), by default 40
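These parameters are set through the corresponding setters and can be combined on a single instance; a minimal sketch with illustrative values:

>>> marian = MarianTransformer.pretrained() \
...     .setInputCols(["sentence"]) \
...     .setOutputCol("translation") \
...     .setBatchSize(8) \
...     .setMaxInputLength(40) \
...     .setMaxOutputLength(40)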

Notes

This is a very computationally expensive module, especially on longer sequences. The use of an accelerator such as a GPU is recommended.

References

MarianNMT at GitHub

Marian: Fast Neural Machine Translation in C++

Paper Abstract:

We present Marian, an efficient and self-contained Neural Machine Translation framework with an integrated automatic differentiation engine based on dynamic computation graphs. Marian is written entirely in C++. We describe the design of the encoder-decoder framework and demonstrate that a research-friendly toolkit can achieve high training and translation speed.

Examples

>>> import sparknlp
>>> from sparknlp.base import *
>>> from sparknlp.annotator import *
>>> from pyspark.ml import Pipeline
>>> documentAssembler = DocumentAssembler() \
...     .setInputCol("text") \
...     .setOutputCol("document")
>>> sentence = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
...     .setInputCols("document") \
...     .setOutputCol("sentence")
>>> marian = MarianTransformer.pretrained() \
...     .setInputCols("sentence") \
...     .setOutputCol("translation") \
...     .setMaxInputLength(30)
>>> pipeline = Pipeline() \
...     .setStages([
...       documentAssembler,
...       sentence,
...       marian
...     ])
>>> data = spark.createDataFrame([["What is the capital of France? We should know this in french."]]).toDF("text")
>>> result = pipeline.fit(data).transform(data)
>>> result.selectExpr("explode(translation.result) as result").show(truncate=False)
+-------------------------------------+
|result                               |
+-------------------------------------+
|Quelle est la capitale de la France ?|
|On devrait le savoir en français.    |
+-------------------------------------+
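For ad-hoc translation of single strings outside of a DataFrame, the fitted pipeline can also be wrapped in a LightPipeline; a minimal sketch, reusing the pipeline and data from the example above:

>>> from sparknlp.base import LightPipeline
>>> light = LightPipeline(pipeline.fit(data))
>>> light.annotate("What is the capital of France?")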

Methods

__init__([classname, java_model])

Initialize this instance with a Java model object.

clear(param)

Clears a param from the param map if it has been explicitly set.

copy([extra])

Creates a copy of this instance with the same uid and some extra params.

explainParam(param)

Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.

explainParams()

Returns the documentation of all params with their optionally default values and user-supplied values.

extractParamMap([extra])

Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.

getBatchSize()

Gets current batch size.

getInputCols()

Gets current column names of input annotations.

getLazyAnnotator()

Gets whether Annotator should be evaluated lazily in a RecursivePipeline.

getOrDefault(param)

Gets the value of a param in the user-supplied param map or its default value.

getOutputCol()

Gets output column name of annotations.

getParam(paramName)

Gets a param by its name.

getParamValue(paramName)

Gets the value of a parameter.

hasDefault(param)

Checks whether a param has a default value.

hasParam(paramName)

Tests whether this instance contains a param with a given (string) name.

isDefined(param)

Checks whether a param is explicitly set by user or has a default value.

isSet(param)

Checks whether a param is explicitly set by user.

load(path)

Reads an ML instance from the input path, a shortcut of read().load(path).

loadSavedModel(folder, spark_session)

Loads a locally saved model.

pretrained([name, lang, remote_loc])

Downloads and loads a pretrained model.

read()

Returns an MLReader instance for this class.

save(path)

Save this ML instance to the given path, a shortcut of 'write().save(path)'.

set(param, value)

Sets a parameter in the embedded param map.

setBatchSize(v)

Sets batch size.

setConfigProtoBytes(b)

Sets ConfigProto from TensorFlow, serialized into a byte array.

setInputCols(*value)

Sets column names of input annotations.

setLangId(value)

Sets the language identifier to prepend for multilingual models.

setLazyAnnotator(value)

Sets whether Annotator should be evaluated lazily in a RecursivePipeline.

setMaxInputLength(value)

Sets the maximum length for encoder inputs (source language texts), by default 40.

setMaxOutputLength(value)

Sets the maximum length for decoder outputs (target language texts), by default 40.

setOutputCol(value)

Sets output column name of annotations.

setParamValue(paramName)

Sets the value of a parameter.

setParams()

Sets params for this instance.

transform(dataset[, params])

Transforms the input dataset with optional parameters.

write()

Returns an MLWriter instance for this ML instance.

Attributes

batchSize

configProtoBytes

getter_attrs

inputCols

langId

lazyAnnotator

maxInputLength

maxOutputLength

name

outputCol

params

Returns all params ordered by name.

clear(param)

Clears a param from the param map if it has been explicitly set.

copy(extra=None)

Creates a copy of this instance with the same uid and some extra params. This implementation first calls Params.copy and then make a copy of the companion Java pipeline component with extra params. So both the Python wrapper and the Java pipeline component get copied.

Parameters

extra – Extra parameters to copy to the new instance

Returns

Copy of this instance

explainParam(param)

Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.

explainParams()

Returns the documentation of all params with their optionally default values and user-supplied values.

extractParamMap(extra=None)

Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.

Parameters

extra – extra param values

Returns

merged param map

getBatchSize()

Gets current batch size.

Returns
int

Current batch size

getInputCols()

Gets current column names of input annotations.

getLazyAnnotator()

Gets whether Annotator should be evaluated lazily in a RecursivePipeline.

getOrDefault(param)

Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set.

getOutputCol()

Gets output column name of annotations.

getParam(paramName)

Gets a param by its name.

getParamValue(paramName)

Gets the value of a parameter.

Parameters
paramName : str

Name of the parameter

hasDefault(param)

Checks whether a param has a default value.

hasParam(paramName)

Tests whether this instance contains a param with a given (string) name.

isDefined(param)

Checks whether a param is explicitly set by user or has a default value.

isSet(param)

Checks whether a param is explicitly set by user.

classmethod load(path)

Reads an ML instance from the input path, a shortcut of read().load(path).
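As with other Spark ML instances, a model can be saved to disk and later restored with load(); a minimal sketch, where the path is hypothetical:

>>> marian.write().overwrite().save("/tmp/marian_en_fr")  # hypothetical path
>>> restored = MarianTransformer.load("/tmp/marian_en_fr")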

static loadSavedModel(folder, spark_session)[source]

Loads a locally saved model.

Parameters
folder : str

Folder of the saved model

spark_session : pyspark.sql.SparkSession

The current SparkSession

Returns
MarianTransformer

The restored model
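A minimal sketch, assuming a compatible Marian model has already been exported to a local folder (the path is hypothetical):

>>> marian = MarianTransformer.loadSavedModel("/models/opus-mt-en-fr", spark) \
...     .setInputCols(["sentence"]) \
...     .setOutputCol("translation")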

property params

Returns all params ordered by name. The default implementation uses dir() to get all attributes of type Param.

static pretrained(name='opus_mt_en_fr', lang='xx', remote_loc=None)[source]

Downloads and loads a pretrained model.

Parameters
name : str, optional

Name of the pretrained model, by default “opus_mt_en_fr”

lang : str, optional

Language of the pretrained model, by default “xx”

remote_loc : str, optional

Optional remote address of the resource, by default None. If not set, Spark NLP’s repositories will be used.

Returns
MarianTransformer

The restored model
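Other language pairs follow the same call; a minimal sketch, where the model name is an assumption that should be checked against the Models Hub:

>>> marian_de = MarianTransformer.pretrained("opus_mt_en_de", "xx") \
...     .setInputCols(["sentence"]) \
...     .setOutputCol("translation")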

classmethod read()

Returns an MLReader instance for this class.

save(path)

Save this ML instance to the given path, a shortcut of ‘write().save(path)’.

set(param, value)

Sets a parameter in the embedded param map.

setBatchSize(v)

Sets batch size.

Parameters
v : int

Batch size

setConfigProtoBytes(b)[source]

Sets ConfigProto from TensorFlow, serialized into a byte array.

Parameters
b : List[str]

ConfigProto from TensorFlow, serialized into a byte array

setInputCols(*value)

Sets column names of input annotations.

Parameters
*value : str

Input columns for the annotator

setLangId(value)[source]

Sets the language identifier to prepend for multilingual models, e.g. “>>fr<<”, by default “”.

Parameters
value : str

Language identifier to prepend, e.g. “>>fr<<”
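For multilingual checkpoints that serve several target languages, the token is set on the loaded model; a minimal sketch, where the model name and token are assumptions to verify for the chosen checkpoint:

>>> marian_multi = MarianTransformer.pretrained("opus_mt_en_ROMANCE", "xx") \
...     .setInputCols(["sentence"]) \
...     .setOutputCol("translation") \
...     .setLangId(">>fr<<")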

setLazyAnnotator(value)

Sets whether Annotator should be evaluated lazily in a RecursivePipeline.

Parameters
value : bool

Whether Annotator should be evaluated lazily in a RecursivePipeline

setMaxInputLength(value)[source]

Sets the maximum length for encoder inputs (source language texts), by default 40.

Parameters
value : int

The maximum length for encoder inputs (source language texts)

setMaxOutputLength(value)[source]

Sets the maximum length for decoder outputs (target language texts), by default 40.

Parameters
value : int

The maximum length for decoder outputs (target language texts)

setOutputCol(value)

Sets output column name of annotations.

Parameters
value : str

Name of output column

setParamValue(paramName)

Sets the value of a parameter.

Parameters
paramName : str

Name of the parameter

transform(dataset, params=None)

Transforms the input dataset with optional parameters.

Parameters
  • dataset – input dataset, which is an instance of pyspark.sql.DataFrame

  • params – an optional param map that overrides embedded params.

Returns

transformed dataset

New in version 1.3.0.

uid

A unique id for the object.

write()

Returns an MLWriter instance for this ML instance.