sparknlp_jsl.annotator.ner.zero_shot_ner#
Module Contents#
Classes#
| ZeroShotNerModel | Zero shot named entity recognition based on RoBertaForQuestionAnswering. |
- class ZeroShotNerModel(classname='com.johnsnowlabs.nlp.annotators.ner.ZeroShotNerModel', java_model=None)#
- Bases: sparknlp.annotator.classifier_dl.RoBertaForQuestionAnswering, sparknlp_jsl.common.HasEngine

  Zero shot named entity recognition based on RoBertaForQuestionAnswering.

  - Input Annotation types: DOCUMENT, TOKEN
  - Output Annotation type: NAMED_ENTITY

  - Parameters:
- entityDefinitions – A dictionary with definitions of named entities. The keys of the dictionary are the entity labels and the values are lists of questions. For example:

  >>> {
  ...     "CITY": ["Which city?", "Which town?"],
  ...     "NAME": ["What is her name?", "What is his name?"]
  ... }
- predictionThreshold – Minimal confidence score to encode an entity (default: 0.01)
- ignoreEntities – A list of entity labels which are discarded from the output. 
 
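A minimal configuration sketch of how these parameters map onto setter calls (the entity definitions and threshold value here are illustrative; the model path matches the example below):

>>> zero_shot_ner = ZeroShotNerModel() \
...     .load("/models/sparknlp/zero_shot_ner") \
...     .setEntityDefinitions({"CITY": ["Which city?", "Which town?"]}) \
...     .setPredictionThreshold(0.01)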
- Examples

>>> document_assembler = DocumentAssembler() \
...     .setInputCol("text") \
...     .setOutputCol("document")
>>> sentence_detector = annotators.SentenceDetector() \
...     .setInputCols(["document"]) \
...     .setOutputCol("sentence")
>>> tokenizer = annotators.Tokenizer() \
...     .setInputCols(["sentence"]) \
...     .setOutputCol("token")
>>> zero_shot_ner = ZeroShotNerModel() \
...     .load("/models/sparknlp/zero_shot_ner") \
...     .setEntityDefinitions(
...         {
...             "NAME": ["What is his name?", "What is my name?", "What is her name?"],
...             "CITY": ["Which city?", "Which is the city?"]
...         }) \
...     .setInputCols(["sentence", "token"]) \
...     .setOutputCol("zero_shot_ner")
>>> data = spark.createDataFrame(
...     [["My name is Clara, I live in New York and Hellen lives in Paris."]]
... ).toDF("text")
>>> Pipeline() \
...     .setStages([document_assembler, sentence_detector, tokenizer, zero_shot_ner]) \
...     .fit(data) \
...     .transform(data) \
...     .selectExpr("document", "explode(zero_shot_ner) AS entity") \
...     .select(
...         "document.result",
...         "entity.result",
...         "entity.metadata.word",
...         "entity.metadata.confidence",
...         "entity.metadata.question") \
...     .show(truncate=False)
+-----------------------------------------------------------------+------+------+----------+------------------+
|result                                                           |result|word  |confidence|question          |
+-----------------------------------------------------------------+------+------+----------+------------------+
|[My name is Clara, I live in New York and Hellen lives in Paris.]|B-CITY|Paris |0.5328949 |Which is the city?|
|[My name is Clara, I live in New York and Hellen lives in Paris.]|B-NAME|Clara |0.9360068 |What is my name?  |
|[My name is Clara, I live in New York and Hellen lives in Paris.]|B-CITY|New   |0.83294415|Which city?       |
|[My name is Clara, I live in New York and Hellen lives in Paris.]|I-CITY|York  |0.83294415|Which city?       |
|[My name is Clara, I live in New York and Hellen lives in Paris.]|B-NAME|Hellen|0.45366877|What is her name? |
+-----------------------------------------------------------------+------+------+----------+------------------+

- batchSize#
 - caseSensitive#
 - coalesceSentences#
 - configProtoBytes#
 - engine#
 - getter_attrs = []#
 - ignoreEntities#
 - inputAnnotatorTypes#
 - inputCols#
 - lazyAnnotator#
 - maxSentenceLength#
 - max_length_limit = 512#
 - name = 'ZeroShotNerModel'#
 - optionalInputAnnotatorTypes = []#
 - outputAnnotatorType = 'named_entity'#
 - outputCol#
 - predictionThreshold#
 - uid = ''#
 - clear(param: pyspark.ml.param.Param) None#
- Clears a param from the param map if it has been explicitly set. 
 - copy(extra: pyspark.ml._typing.ParamMap | None = None) JP#
- Creates a copy of this instance with the same uid and some extra params. This implementation first calls Params.copy and then makes a copy of the companion Java pipeline component with extra params. So both the Python wrapper and the Java pipeline component get copied.
- Parameters:
- extra (dict, optional) – Extra parameters to copy to the new instance 
- Returns:
- Copy of this instance 
- Return type:
- JavaParams
 
 - explainParam(param: str | Param) str#
- Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string. 
 - explainParams() str#
- Returns the documentation of all params with their optional default values and user-supplied values.
 - extractParamMap(extra: pyspark.ml._typing.ParamMap | None = None) pyspark.ml._typing.ParamMap#
- Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if conflicts exist, i.e., with ordering: default param values < user-supplied values < extra.
- Parameters:
- extra (dict, optional) – extra param values 
- Returns:
- merged param map 
- Return type:
- dict 
 
 - getBatchSize()#
- Gets current batch size.
- Returns:
- Current batch size 
- Return type:
- int 
 
 - getCaseSensitive()#
- Gets whether to ignore case in tokens for embeddings matching.
- Returns:
- Whether to ignore case in tokens for embeddings matching 
- Return type:
- bool 
 
 - getClasses()#
- Returns the list of entities which are recognized. 
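As an illustration, once entity definitions have been set (see setEntityDefinitions below), the getter returns the defined labels. The output here is a sketch and its ordering may differ:

>>> zero_shot_ner.getClasses()
['NAME', 'CITY']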
 - getEngine()#
- Returns:
- Deep Learning engine used for this model
- Return type:
- str 
 
 - getInputCols()#
- Gets current column names of input annotations. 
 - getLazyAnnotator()#
- Gets whether Annotator should be evaluated lazily in a RecursivePipeline. 
 - getMaxSentenceLength()#
- Gets the max sentence length of the model.
- Returns:
- Max sentence length to process 
- Return type:
- int 
 
 - getOrDefault(param: str) Any#
- getOrDefault(param: Param[T]) T
- Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set. 
 - getOutputCol()#
- Gets output column name of annotations. 
 - getParam(paramName: str) Param#
- Gets a param by its name. 
 - getParamValue(paramName)#
- Gets the value of a parameter.
- Parameters:
- paramName (str) – Name of the parameter 
 
 - hasDefault(param: str | Param[Any]) bool#
- Checks whether a param has a default value. 
 - hasParam(paramName: str) bool#
- Tests whether this instance contains a param with a given (string) name. 
 - inputColsValidation(value)#
 - isDefined(param: str | Param[Any]) bool#
- Checks whether a param is explicitly set by user or has a default value. 
 - isSet(param: str | Param[Any]) bool#
- Checks whether a param is explicitly set by user. 
 - static load(path: str)#
- Loads a pre-trained ZeroShotNerModel from a local path.
- Parameters:
- path (str) – Path to the pre-trained model. 
- Returns:
- A pre-trained ZeroShotNerModel. 
- Return type:
- ZeroShotNerModel

 - static loadSavedModel(folder, spark_session)#
- Loads a locally saved model.
- Parameters:
- folder (str) – Folder of the saved model 
- spark_session (pyspark.sql.SparkSession) – The current SparkSession 
 
- Returns:
- The restored model 
- Return type:
- ZeroShotNerModel
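A minimal usage sketch, assuming a model has been exported to a local folder (the folder path is hypothetical, and spark is the current SparkSession, as in the Examples section):

>>> zero_shot_ner = ZeroShotNerModel.loadSavedModel("/tmp/zero_shot_ner_export", spark) \
...     .setInputCols(["sentence", "token"]) \
...     .setOutputCol("zero_shot_ner")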
 
 - static pretrained(name='zero_shot_ner_roberta', lang='en', remote_loc='clinical/models')#
- Downloads a pre-trained ZeroShotNerModel.
- Parameters:
- name (str) – Name of the pre-trained model, by default “zero_shot_ner_roberta” 
- lang (str) – Language of the pre-trained model, by default “en” 
- remote_loc (str) – Remote location of the pre-trained model, by default “clinical/models”. If None, the open-source location is used. Other values are “clinical/models”, “finance/models”, or “legal/models”. 
 
- Returns:
- A pre-trained ZeroShotNerModel. 
- Return type:
- ZeroShotNerModel

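For example, a sketch that downloads the model using the documented defaults (the entity definitions here are illustrative):

>>> zero_shot_ner = ZeroShotNerModel.pretrained("zero_shot_ner_roberta", "en", "clinical/models") \
...     .setEntityDefinitions({"NAME": ["What is his name?"]}) \
...     .setInputCols(["sentence", "token"]) \
...     .setOutputCol("zero_shot_ner")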
 - classmethod read()#
- Returns an MLReader instance for this class. 
 - save(path: str) None#
- Save this ML instance to the given path, a shortcut of ‘write().save(path)’. 
 - set(param: Param, value: Any) None#
- Sets a parameter in the embedded param map. 
 - setBatchSize(v)#
- Sets batch size.
- Parameters:
- v (int) – Batch size 
 
 - setCaseSensitive(value)#
- Sets whether to ignore case in tokens for embeddings matching.
- Parameters:
- value (bool) – Whether to ignore case in tokens for embeddings matching 
 
 - setConfigProtoBytes(b)#
- Sets configProto from TensorFlow, serialized into a byte array.
- Parameters:
- b (List[int]) – ConfigProto from tensorflow, serialized into byte array 
 
 - setEntityDefinitions(definitions: dict)#
- Sets entity definitions.
- Parameters:
- definitions (dict[str, list[str]]) – A dictionary mapping each entity label to a list of questions that define it 
 
 - setInputCols(*value)#
- Sets column names of input annotations.
- Parameters:
- *value (List[str]) – Input columns for the annotator 
 
 - setLazyAnnotator(value)#
- Sets whether Annotator should be evaluated lazily in a RecursivePipeline.
- Parameters:
- value (bool) – Whether Annotator should be evaluated lazily in a RecursivePipeline 
 
 - setMaxSentenceLength(value)#
- Sets max sentence length to process.
- Note that a maximum limit exists depending on the model. If you are working with long single sequences, consider splitting up the input first with another annotator, e.g. SentenceDetector.
- Parameters:
- value (int) – Max sentence length to process 
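For instance, a sketch that caps the sentence length at the documented max_length_limit of 512:

>>> zero_shot_ner.setMaxSentenceLength(512)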
 
 - setOutputCol(value)#
- Sets output column name of annotations.
- Parameters:
- value (str) – Name of output column 
 
 - setParamValue(paramName)#
- Sets the value of a parameter.
- Parameters:
- paramName (str) – Name of the parameter 
 
 - setParams()#
 - setPredictionThreshold(threshold: float)#
- Sets the minimal confidence score to encode an entity.
- Parameters:
- threshold (float) – Minimal confidence score to encode an entity (default: 0.01) 
 
 - transform(dataset: pyspark.sql.dataframe.DataFrame, params: pyspark.ml._typing.ParamMap | None = None) pyspark.sql.dataframe.DataFrame#
- Transforms the input dataset with optional parameters.
- New in version 1.3.0.
- Parameters:
- dataset (pyspark.sql.DataFrame) – Input dataset
- params (dict, optional) – an optional param map that overrides embedded params. 
 
- Returns:
- transformed dataset 
- Return type:
- pyspark.sql.DataFrame

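As shown in the Examples section above, transform is usually invoked through a fitted Pipeline. Calling it directly on the annotator looks like the following sketch, assuming a hypothetical annotated_data frame that already contains the sentence and token columns:

>>> result = zero_shot_ner.transform(annotated_data)
>>> result.selectExpr("explode(zero_shot_ner.result)").show(truncate=False)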
 - write() JavaMLWriter#
- Returns an MLWriter instance for this ML instance.