sparknlp_jsl.annotator.generic_classifier.generic_classifier
#
Module Contents#
Classes#
- GenericClassifierApproach: Trains a TensorFlow model for generic classification of feature vectors. It takes FEATURE_VECTOR annotations from FeaturesAssembler as input, classifies them and outputs CATEGORY annotations.
- GenericClassifierModel: Generic classifier of feature vectors. It takes FEATURE_VECTOR annotations from FeaturesAssembler as input, classifies them and outputs CATEGORY annotations.
- class GenericClassifierApproach(classname='com.johnsnowlabs.nlp.annotators.generic_classifier.GenericClassifierApproach')#
Bases:
sparknlp_jsl.common.AnnotatorApproachInternal
,sparknlp_jsl.common.HasEngine
,sparknlp_jsl.annotator.handle_exception_params.HandleExceptionParams
Trains a TensorFlow model for generic classification of feature vectors. It takes FEATURE_VECTOR annotations from FeaturesAssembler as input, classifies them and outputs CATEGORY annotations.
Input Annotation types
Output Annotation type
FEATURE_VECTOR
CATEGORY
- Parameters:
labelColumn – Column with one label per document
batchSize – Size for each batch in the optimization process
epochsN – Number of epochs for the optimization process
learningRate – Learning rate for the optimization process
dropout – Dropout at the output of each layer
validationSplit – Validation split - how much data to use for validation
modelFile – File name to load the model from
fixImbalance – A flag indicating whether to balance the training set
multiClass – Return only the label with the highest confidence score or all labels
featureScaling – Feature scaling method. Possible values are ‘zscore’, ‘minmax’ or empty (no scaling)
outputLogsPath – Path to folder where logs will be saved. If no path is specified, the logs won’t be stored on disk. The path can be a local file path, a distributed file path (HDFS, DBFS), or cloud storage (S3).
Examples
>>> import sparknlp
>>> from sparknlp.base import *
>>> from sparknlp_jsl.common import *
>>> from sparknlp.annotator import *
>>> from sparknlp.training import *
>>> import sparknlp_jsl
>>> from sparknlp_jsl.base import *
>>> from sparknlp_jsl.annotator import *
>>> from pyspark.ml import Pipeline
>>> features_asm = FeaturesAssembler() \
...     .setInputCols(["feature_1", "feature_2", "...", "feature_n"]) \
...     .setOutputCol("features")
>>> gen_clf = GenericClassifierApproach() \
...     .setLabelColumn("target") \
...     .setInputCols(["features"]) \
...     .setOutputCol("prediction") \
...     .setModelFile("/path/to/graph_file.pb") \
...     .setEpochsNumber(50) \
...     .setBatchSize(100) \
...     .setFeatureScaling("zscore") \
...     .setLearningRate(0.001) \
...     .setFixImbalance(True) \
...     .setOutputLogsPath("logs") \
...     .setValidationSplit(0.2)  # keep 20% of the data for validation purposes
>>> pipeline = Pipeline().setStages([
...     features_asm,
...     gen_clf
... ])
>>> clf_model = pipeline.fit(data)
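After fitting, the resulting PipelineModel can score new rows. The snippet below is a minimal sketch, not part of the original example: it assumes a DataFrame new_data with the same feature_1 … feature_n columns used for training, and reads the predicted label from the prediction column configured above.
>>> # hypothetical new_data with the same feature columns as the training data
>>> predictions = clf_model.transform(new_data)
>>> predictions.selectExpr("prediction.result").show(truncate=False)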
- batchSize#
- doExceptionHandling#
- dropout#
- engine#
- epochsN#
- featureScaling#
- fixImbalance#
- getter_attrs = []#
- inputAnnotatorTypes#
- inputCols#
- labelColumn#
- lazyAnnotator#
- learningRate#
- modelFile#
- multiClass#
- optionalInputAnnotatorTypes = []#
- outputAnnotatorType#
- outputCol#
- outputLogsPath#
- skipLPInputColsValidation = True#
- validationSplit#
- clear(param)#
Clears a param from the param map if it has been explicitly set.
- copy(extra=None)#
Creates a copy of this instance with the same uid and some extra params. This implementation first calls Params.copy and then make a copy of the companion Java pipeline component with extra params. So both the Python wrapper and the Java pipeline component get copied.
- Parameters:
extra (dict, optional) – Extra parameters to copy to the new instance
- Returns:
Copy of this instance
- Return type:
JavaParams
- explainParam(param)#
Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.
- explainParams()#
Returns the documentation of all params with their optionally default values and user-supplied values.
- extractParamMap(extra=None)#
Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.
- Parameters:
extra (dict, optional) – extra param values
- Returns:
merged param map
- Return type:
dict
- fit(dataset, params=None)#
Fits a model to the input dataset with optional parameters.
New in version 1.3.0.
- Parameters:
dataset (pyspark.sql.DataFrame) – input dataset.
params (dict or list or tuple, optional) – an optional param map that overrides embedded params. If a list/tuple of param maps is given, this calls fit on each param map and returns a list of models.
- Returns:
fitted model(s)
- Return type:
Transformer or a list of Transformer
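For illustration, a hedged sketch of fitting with a list of param maps, reusing the pipeline, gen_clf and data from the example above; epochsN is the epochs Param listed on this annotator, and each map yields one fitted model:
>>> param_maps = [{gen_clf.epochsN: 20}, {gen_clf.epochsN: 50}]
>>> models = pipeline.fit(data, param_maps)  # returns a list with one PipelineModel per param map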
- fitMultiple(dataset, paramMaps)#
Fits a model to the input dataset for each param map in paramMaps.
New in version 2.3.0.
- Parameters:
dataset (pyspark.sql.DataFrame) – input dataset.
paramMaps (collections.abc.Sequence) – A Sequence of param maps.
- Returns:
A thread safe iterable which contains one model for each param map. Each call to next(modelIterator) will return (index, model) where model was fit using paramMaps[index]. index values may not be sequential.
- Return type:
_FitMultipleIterator
- getEngine()#
- Returns:
Deep Learning engine used for this model
- Return type:
str
- getInputCols()#
Gets current column names of input annotations.
- getLazyAnnotator()#
Gets whether Annotator should be evaluated lazily in a RecursivePipeline.
- getOrDefault(param)#
Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set.
- getOutputCol()#
Gets output column name of annotations.
- getParam(paramName)#
Gets a param by its name.
- getParamValue(paramName)#
Gets the value of a parameter.
- Parameters:
paramName (str) – Name of the parameter
- hasDefault(param)#
Checks whether a param has a default value.
- hasParam(paramName)#
Tests whether this instance contains a param with a given (string) name.
- inputColsValidation(value)#
- isDefined(param)#
Checks whether a param is explicitly set by user or has a default value.
- isSet(param)#
Checks whether a param is explicitly set by user.
- classmethod load(path)#
Reads an ML instance from the input path, a shortcut of read().load(path).
- classmethod read()#
Returns an MLReader instance for this class.
- save(path)#
Save this ML instance to the given path, a shortcut of ‘write().save(path)’.
- set(param, value)#
Sets a parameter in the embedded param map.
- setBatchSize(size: int)#
Size for each batch in the optimization process
- Parameters:
size (int) – Size for each batch in the optimization process
- setDoExceptionHandling(value: bool)#
If True, exceptions are handled. If exception-causing data is passed to the model, an error annotation is emitted that contains the exception message. Processing continues with the next record. This comes with a performance penalty.
- Parameters:
value (bool) – If True, exceptions are handled.
- setDropout(dropout: float)#
Sets dropout at the output of each layer
- Parameters:
dropout (float) – Dropout at the output of each layer
- setEpochsNumber(epochs: int)#
Sets number of epochs for the optimization process
- Parameters:
epochs (int) – Number of epochs for the optimization process
- setFeatureScaling(feature_scaling: str)#
Sets the feature scaling method. Possible values are ‘zscore’, ‘minmax’ or empty (no scaling)
- Parameters:
feature_scaling (str) – Feature scaling method. Possible values are ‘zscore’, ‘minmax’ or empty (no scaling)
- setFixImbalance(fix_imbalance: bool)#
Sets a flag indicating whether to balance the training set.
- Parameters:
fix_imbalance (bool) – A flag indicating whether to balance the training set.
- setForceInputTypeValidation(etfm)#
- setInputCols(*value)#
Sets column names of input annotations.
- Parameters:
*value (List[str]) – Input columns for the annotator
- setLabelCol(label_column: str)#
Sets the column with one label per document.
- Parameters:
label_column (str) – Column with the value we are trying to predict.
- setLazyAnnotator(value)#
Sets whether Annotator should be evaluated lazily in a RecursivePipeline.
- Parameters:
value (bool) – Whether Annotator should be evaluated lazily in a RecursivePipeline
- setLearningRate(learning_rate: float)#
Sets learning rate for the optimization process
- Parameters:
learning_rate (float) – Learning rate for the optimization process
- setModelFile(mode_file: str)#
Sets the file name to load the model from.
- Parameters:
mode_file (str) – File name to load the model from
- setMultiClass(value: bool)#
Sets the model in multi class prediction mode (Default: false)
- Parameters:
value (bool) – Whether to return only the label with the highest confidence score or all labels
- setOutputCol(value)#
Sets output column name of annotations.
- Parameters:
value (str) – Name of output column
- setOutputLogsPath(output_logs_path: str)#
Sets path to folder where logs will be saved. If no path is specified, no logs are generated
- Parameters:
output_logs_path (str) – Path to folder where logs will be saved. If no path is specified, no logs are generated
- setParamValue(paramName)#
Sets the value of a parameter.
- Parameters:
paramName (str) – Name of the parameter
- setValidationSplit(validation_split: float)#
Sets validation split - how much data to use for validation
- Parameters:
validation_split (float) – Validation split - how much data to use for validation
- write()#
Returns an MLWriter instance for this ML instance.
- class GenericClassifierModel(classname='com.johnsnowlabs.nlp.annotators.generic_classifier.GenericClassifierModel', java_model=None)#
Bases:
sparknlp_jsl.common.AnnotatorModelInternal
,sparknlp_jsl.annotator.handle_exception_params.HandleExceptionParams
Generic classifier of feature vectors. It takes FEATURE_VECTOR annotations from FeaturesAssembler as input, classifies them and outputs CATEGORY annotations.
Input Annotation types
Output Annotation type
FEATURE_VECTOR
CATEGORY
- Parameters:
multiClass – Return only the label with the highest confidence score or all labels
featureScaling – Feature scaling method. Possible values are ‘zscore’, ‘minmax’ or empty (no scaling)
Examples
>>> import sparknlp
>>> from sparknlp.base import *
>>> from sparknlp_jsl.common import *
>>> from sparknlp.annotator import *
>>> from sparknlp.training import *
>>> import sparknlp_jsl
>>> from sparknlp_jsl.base import *
>>> from sparknlp_jsl.annotator import *
>>> from pyspark.ml import Pipeline
>>> features_asm = FeaturesAssembler() \
...     .setInputCols(["feature_1", "feature_2", "...", "feature_n"]) \
...     .setOutputCol("features")
>>> gen_clf = GenericClassifierModel.pretrained() \
...     .setInputCols(["features"]) \
...     .setOutputCol("prediction")
>>> pipeline = Pipeline().setStages([
...     features_asm,
...     gen_clf
... ])
>>> clf_model = pipeline.fit(data)
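Once fitted, the pipeline classifies new rows in the usual Spark fashion. The following is a minimal sketch, not part of the original example: it assumes data contains the same numeric feature columns, and that each CATEGORY annotation exposes the predicted label in result and additional details such as confidence scores in metadata.
>>> predictions = clf_model.transform(data)
>>> predictions.selectExpr("explode(prediction) as pred") \
...     .selectExpr("pred.result", "pred.metadata") \
...     .show(truncate=False)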
- classes#
- doExceptionHandling#
- featureScaling#
- getter_attrs = []#
- inputAnnotatorTypes#
- inputCols#
- lazyAnnotator#
- multiClass#
- name = GenericClassifierModel#
- optionalInputAnnotatorTypes = []#
- outputAnnotatorType#
- outputCol#
- skipLPInputColsValidation = True#
- clear(param)#
Clears a param from the param map if it has been explicitly set.
- copy(extra=None)#
Creates a copy of this instance with the same uid and some extra params. This implementation first calls Params.copy and then make a copy of the companion Java pipeline component with extra params. So both the Python wrapper and the Java pipeline component get copied.
- Parameters:
extra (dict, optional) – Extra parameters to copy to the new instance
- Returns:
Copy of this instance
- Return type:
JavaParams
- explainParam(param)#
Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.
- explainParams()#
Returns the documentation of all params with their optionally default values and user-supplied values.
- extractParamMap(extra=None)#
Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.
- Parameters:
extra (dict, optional) – extra param values
- Returns:
merged param map
- Return type:
dict
- getInputCols()#
Gets current column names of input annotations.
- getLazyAnnotator()#
Gets whether Annotator should be evaluated lazily in a RecursivePipeline.
- getOrDefault(param)#
Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set.
- getOutputCol()#
Gets output column name of annotations.
- getParam(paramName)#
Gets a param by its name.
- getParamValue(paramName)#
Gets the value of a parameter.
- Parameters:
paramName (str) – Name of the parameter
- hasDefault(param)#
Checks whether a param has a default value.
- hasParam(paramName)#
Tests whether this instance contains a param with a given (string) name.
- inputColsValidation(value)#
- isDefined(param)#
Checks whether a param is explicitly set by user or has a default value.
- isSet(param)#
Checks whether a param is explicitly set by user.
- classmethod load(path)#
Reads an ML instance from the input path, a shortcut of read().load(path).
- static pretrained(name='genericclassifier_sdoh_housing_insecurity_sbiobert_cased_mli', lang='en', remote_loc='clinical/models')#
Downloads and loads a pretrained model.
- Parameters:
name (str, optional) – Name of the pretrained model, by default “genericclassifier_sdoh_housing_insecurity_sbiobert_cased_mli”
lang (str, optional) – Language of the pretrained model, by default “en”
remote_loc (str, optional) – Optional remote address of the resource, by default “clinical/models”. Will use Spark NLP’s repositories otherwise.
- Returns:
The restored model
- Return type:
GenericClassifierModel
- classmethod read()#
Returns an MLReader instance for this class.
- save(path)#
Save this ML instance to the given path, a shortcut of ‘write().save(path)’.
- set(param, value)#
Sets a parameter in the embedded param map.
- setDoExceptionHandling(value: bool)#
If True, exceptions are handled. If exception-causing data is passed to the model, an error annotation is emitted that contains the exception message. Processing continues with the next record. This comes with a performance penalty.
- Parameters:
value (bool) – If True, exceptions are handled.
- setFeatureScaling(feature_scaling: str)#
Sets Feature scaling method. Possible values are ‘zscore’, ‘minmax’ or empty (no scaling)
- Parameters:
feature_scaling (str) – Feature scaling method. Possible values are ‘zscore’, ‘minmax’ or empty (no scaling)
- setForceInputTypeValidation(etfm)#
- setInputCols(*value)#
Sets column names of input annotations.
- Parameters:
*value (List[str]) – Input columns for the annotator
- setLazyAnnotator(value)#
Sets whether Annotator should be evaluated lazily in a RecursivePipeline.
- Parameters:
value (bool) – Whether Annotator should be evaluated lazily in a RecursivePipeline
- setMultiClass(value: bool)#
Sets the model in multi class prediction mode (Default: false)
- Parameters:
value (bool) – Whether to return only the label with the highest confidence score or all labels
- setOutputCol(value)#
Sets output column name of annotations.
- Parameters:
value (str) – Name of output column
- setParamValue(paramName)#
Sets the value of a parameter.
- Parameters:
paramName (str) – Name of the parameter
- setParams()#
- transform(dataset, params=None)#
Transforms the input dataset with optional parameters.
New in version 1.3.0.
- Parameters:
dataset (pyspark.sql.DataFrame) – input dataset.
params (dict, optional) – an optional param map that overrides embedded params.
- Returns:
transformed dataset
- Return type:
pyspark.sql.DataFrame
- write()#
Returns an MLWriter instance for this ML instance.