Utility & Helper Modules

 

ALAB (Annotation Lab) Interface Module

Spark NLP for Healthcare provides functionality to interact with the Annotation Lab using easy-to-use functions. Annotation Lab is a tool for multi-modal data annotation; it allows annotation teams to collaborate efficiently to generate training data for ML models and/or to validate automatic annotations generated by those models.


The ALAB interaction module lets you use the Annotation Lab programmatically. Complete code can be found in the Complete ALAB Module SparkNLP JSL. The module supports the following functionalities:

  • Generating a CoNLL-formatted file from the annotation JSON for training an NER model.
  • Generating a CSV/Excel-formatted file from the annotation JSON for training classification, assertion, and relation extraction models.
  • Building a pre-annotation JSON file using Spark NLP pipelines, saving it as JSON, and uploading the pre-annotations to a project.
  • Interacting with an ALAB instance and setting up ALAB projects.
  • Getting the list of all projects in the ALAB instance.
  • Creating new projects.
  • Deleting projects.
  • Setting and editing the configuration of projects.
  • Accessing/getting the configuration of any existing project.
  • Uploading tasks to a project.
  • Deleting tasks of a project.

Start Module

# import the module
from sparknlp_jsl.alab import AnnotationLab
alab = AnnotationLab()

Generate Data for Training a Classification Model

alab.get_classification_data(

# required: path to Annotation Lab JSON export
input_json_path='alab_demo.json',

# optional: set to True to select ground truth completions, False to select latest completions,
# defaults to False
# ground_truth=False
)
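The snippet above does not capture the return value. As a minimal follow-up sketch (assuming the call returns a pandas DataFrame, which is worth verifying in your environment), you can assign the result, preview it, and persist it for a later classification training run:

# Hypothetical follow-up: capture, inspect, and save the classification training data.
classification_df = alab.get_classification_data(input_json_path='alab_demo.json')
print(classification_df.head())
classification_df.to_csv('classification_train.csv', index=False)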

Converting The JSON Export into a CoNLL Format Suitable for Training an NER Model

alab.get_conll_data(

# required: Spark session with spark-nlp-jsl jar
spark=spark,

# required: path to Annotation Lab JSON export
input_json_path="alab_demo.json",

# required: name of the CoNLL file to save
output_name="conll_demo",

# optional: path for CoNLL file saving directory, defaults to 'exported_conll'
# save_dir="exported_conll",

# optional: set to True to select ground truth completions, False to select latest completions, 
# defaults to False
# ground_truth=False,

# optional: labels to exclude from CoNLL; these are all assertion labels and irrelevant NER labels, 
# defaults to empty list
# excluded_labels=['ABSENT'],

# optional: set a pattern to use regex tokenizer, defaults to regular tokenizer if pattern not defined  
# regex_pattern="\\s+|(?=[-.:;*+,$&%\\[\\]])|(?<=[-.:;*+,$&%\\[\\]])"

# optional: list of Annotation Lab task IDs to exclude from CoNLL, defaults to empty list
# excluded_task_ids = [2, 3]

# optional: list of Annotation Lab task titles to exclude from CoNLL, defaults to None
# excluded_task_titles = ['Note 1']
)
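Once the CoNLL file is exported, it can be read back into a Spark DataFrame for NER training using Spark NLP's CoNLL reader. A minimal sketch, assuming the export landed in the default 'exported_conll' directory under the name 'conll_demo.conll' (adjust the path to your actual output):

from sparknlp.training import CoNLL

# Read the exported CoNLL file into a Spark DataFrame with document, sentence, token, pos, and label columns.
training_data = CoNLL().readDataset(spark, 'exported_conll/conll_demo.conll')
training_data.show(5)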

Converting The JSON Export into a Dataframe Suitable for Training an Assertion Model

alab.get_assertion_data(

# required: SparkSession with spark-nlp-jsl jar
spark=spark,

# required: path to Annotation Lab JSON export
input_json_path = 'alab_demo.json',

# required: annotated assertion labels to train on
assertion_labels = ['ABSENT'],

# required: relevant NER labels that are assigned assertion labels
relevant_ner_labels = ['PROBLEM', 'TREATMENT'],

# optional: set to True to select ground truth completions, False to select latest completions, 
# defaults to False
# ground_truth = False,

# optional: assertion label to assign to entities that have no assertion labels, defaults to None
# unannotated_label = 'PRESENT',

# optional: set a pattern to use regex tokenizer, defaults to regular tokenizer if pattern not defined
# regex_pattern = "\\s+|(?=[-.:;*+,$&%\\[\\]])|(?<=[-.:;*+,$&%\\[\\]])",

# optional: set the strategy to control the number of occurrences of the unannotated assertion label 
# in the output dataframe, options are 'weighted' or 'counts', 'weighted' allows to sample using a
# fraction, 'counts' allows to sample using absolute counts, defaults to None
# unannotated_label_strategy = None,

# optional: dictionary in the format {'ENTITY_LABEL': sample_weight_or_counts} to control the number of 
# occurrences of the unannotated assertion label in the output dataframe, where 'ENTITY_LABEL' are the 
# NER labels that are assigned the unannotated assertion label, and sample_weight_or_counts should be 
# between 0 and 1 if `unannotated_label_strategy` is 'weighted' or between 0 and the max number of 
# occurrences of that NER label if `unannotated_label_strategy` is 'counts'
# unannotated_label_strategy_dict = {'PROBLEM': 0.5, 'TREATMENT': 0.5},

# optional: list of Annotation Lab task IDs to exclude from output dataframe, defaults to None
# excluded_task_ids = [2, 3]

# optional: list of Annotation Lab task titles to exclude from output dataframe, defaults to None
# excluded_task_titles = ['Note 1']
)
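The resulting dataframe can then be moved into Spark for an assertion training pipeline. A minimal sketch, assuming the returned object is a pandas DataFrame (verify in your environment):

assertion_df = alab.get_assertion_data(
    spark=spark,
    input_json_path='alab_demo.json',
    assertion_labels=['ABSENT'],
    relevant_ner_labels=['PROBLEM', 'TREATMENT'])

# Convert the pandas DataFrame to a Spark DataFrame so downstream Spark NLP stages can consume it.
assertion_spark_df = spark.createDataFrame(assertion_df)
assertion_spark_df.show(5, truncate=False)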

Converting The JSON Export into a Dataframe Suitable for Training a Relation Extraction Model

alab.get_relation_extraction_data(

# required: Spark session with spark-nlp-jsl jar
spark=spark,

# required: path to Annotation Lab JSON export
input_json_path='alab_demo.json',

# optional: set to True to select ground truth completions, False to select latest completions,
# defaults to False
ground_truth=True,

# optional: set to True to assign a relation label between entities where no relation was annotated,
# defaults to False
negative_relations=True,

# optional: all assertion labels that were annotated in the Annotation Lab, defaults to None
assertion_labels=['ABSENT'],

# optional: plausible pairs of entities for relations, separated by a '-', use the same casing as the 
# annotations, include only one relation direction, defaults to all possible pairs of annotated entities
relation_pairs=['DATE-PROBLEM','TREATMENT-PROBLEM','TEST-PROBLEM'],

# optional: set the strategy to control the number of occurrences of the negative relation label 
# in the output dataframe, options are 'weighted' or 'counts', 'weighted' allows to sample using a
# fraction, 'counts' allows to sample using absolute counts, defaults to None
negative_relation_strategy='weighted',

# optional: dictionary in the format {'ENTITY1-ENTITY2': sample_weight_or_counts} to control the number of 
# occurrences of negative relations in the output dataframe for each entity pair, where 'ENTITY1-ENTITY2' 
# represent the pairs of entities for relations separated by a `-` (include only one relation direction), 
# and sample_weight_or_counts should be between 0 and 1 if `negative_relation_strategy` is 'weighted' or
# between 0 and the max number of occurrences of negative relations if `negative_relation_strategy` is 
# 'counts', defaults to None
negative_relation_strategy_dict = {'DATE-PROBLEM': 0.1, 'TREATMENT-PROBLEM': 0.5, 'TEST-PROBLEM': 0.2},

# optional: list of Annotation Lab task IDs to exclude from output dataframe, defaults to None
# excluded_task_ids = [2, 3]

# optional: list of Annotation Lab task titles to exclude from output dataframe, defaults to None
# excluded_task_titles = ['Note 1']
)
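A simple sanity check and train/test split before relation extraction training might look like the sketch below (again assuming a pandas DataFrame is returned; verify in your environment):

rel_df = alab.get_relation_extraction_data(
    spark=spark,
    input_json_path='alab_demo.json',
    ground_truth=True,
    negative_relations=True)

# 80/20 split of the annotated relation instances.
train_df = rel_df.sample(frac=0.8, random_state=42)
test_df = rel_df.drop(train_df.index)
print(len(train_df), len(test_df))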

Generate JSON Containing Pre-annotations Using a Spark NLP Pipeline
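The generate_preannotations call below expects the output of a Spark NLP pipeline in the results variable. The following sketch shows one way such results might be produced; the model names ('embeddings_clinical', 'ner_clinical') are only examples, and any pipeline whose output columns match the column names passed below will do:

from pyspark.ml import Pipeline
from sparknlp.base import DocumentAssembler, LightPipeline
from sparknlp.annotator import SentenceDetector, Tokenizer, WordEmbeddingsModel
from sparknlp_jsl.annotator import MedicalNerModel, NerConverterInternal

document_assembler = DocumentAssembler().setInputCol('text').setOutputCol('document')
sentence_detector = SentenceDetector().setInputCols(['document']).setOutputCol('sentence')
tokenizer = Tokenizer().setInputCols(['sentence']).setOutputCol('token')
embeddings = WordEmbeddingsModel.pretrained('embeddings_clinical', 'en', 'clinical/models') \
    .setInputCols(['sentence', 'token']).setOutputCol('embeddings')
ner_model = MedicalNerModel.pretrained('ner_clinical', 'en', 'clinical/models') \
    .setInputCols(['sentence', 'token', 'embeddings']).setOutputCol('ner')
ner_converter = NerConverterInternal() \
    .setInputCols(['sentence', 'token', 'ner']).setOutputCol('ner_chunk')

pipeline = Pipeline(stages=[document_assembler, sentence_detector, tokenizer, embeddings, ner_model, ner_converter])
empty_df = spark.createDataFrame([['']]).toDF('text')
light_model = LightPipeline(pipeline.fit(empty_df))

# One string per task; fullAnnotate returns one result per input string.
results = light_model.fullAnnotate(['The patient was prescribed metformin for type 2 diabetes.'])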

pre_annotations, summary = alab.generate_preannotations(

# required: list of results from a Spark NLP pipeline (e.g., the output of LightPipeline.fullAnnotate()).
all_results = results,

# required: output column name of the 'DocumentAssembler' stage - used to get the original document string.
document_column = 'document',

# required: column name(s) of NER model(s). Note: multiple NER models can be used, but make sure their results don't overlap.
# Or use 'ChunkMergeApproach' to combine results from multiple NER models.
ner_columns = ['ner_chunk'],

# optional: column name(s) of assertion model(s). Note: multiple assertion models can be used, but make sure their results don't overlap.
# assertion_columns = ['assertion_res'],

# optional: column name(s) of relation extraction model(s). Note: multiple relation extraction models can be used, but make sure their results don't overlap.
# relations_columns = ['relations_clinical', 'relations_pos'],

# optional: This can be defined to identify which pipeline/user/model was used to get predictions.
# Default: 'model'
# user_name = 'model',

# optional: Option to assign custom titles to tasks. By default, tasks will be titled as 'task_#'
# titles_list = [],

# optional: if there are already tasks in the project, this ID offset can be used to make sure the default titles 'task_#' do not overlap.
# When uploading a batch after the first one, this can be set to the number of tasks currently present in the project.
# This number is added to each task's ID and title.
# id_offset=0
)
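The returned objects can be persisted before uploading, which is handy for inspection or versioning. A minimal sketch, assuming pre_annotations is JSON-serializable (e.g., a list or dict as produced above):

import json

# Save the generated pre-annotations so they can be reviewed or re-uploaded later.
with open('pre_annotations.json', 'w') as f:
    json.dump(pre_annotations, f)

# The summary object reports what was generated for each task.
print(summary)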

Interacting with Annotation Lab

alab = AnnotationLab()

username=''
password=''
client_secret=''
annotationlab_url=''

alab.set_credentials(

# required: username
username=username,

# required: password
password=password,

# required: secret for your Annotation Lab instance (every Annotation Lab installation has a different secret)
client_secret=client_secret,

# required: http(s) URL of your Annotation Lab instance
annotationlab_url=annotationlab_url)

Get All Visible Projects

alab.get_all_projects()

Create a New Project

alab.create_project(

# required: unique name of project
project_name = 'alab_demo',

# optional: description of the project. Default: empty string
project_description='',

# optional: sampling option for tasks. Default: random
project_sampling='',

# optional: annotation guidelines of the project
project_instruction='')

Delete a Project

alab.delete_project(

# required: unique name of project
project_name = 'alab_demo',

# optional: confirmation for deletion. Default: False - will ask for confirmation. If set to True, the project will be deleted directly.
confirm=False)

Upload Tasks to a Project
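The task_list argument is simply a list of raw document strings, one per task. A minimal sketch of building it from plain-text files in a local folder (the folder name is only an example):

import glob

# Each .txt file becomes one task; the file contents are used as the task text.
task_list = []
for path in sorted(glob.glob('clinical_notes/*.txt')):
    with open(path, 'r', encoding='utf-8') as f:
        task_list.append(f.read())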

alab.upload_tasks(

# required: name of project to upload tasks to
project_name='alab_demo',

# required: list of examples / tasks as strings (one string is one task).
task_list=task_list,

# optional: Option to assign custom titles to tasks. By default, tasks will be titled as 'task_#'
title_list = [],

# optional: if there are already tasks in the project, this ID offset can be used to make sure the default titles 'task_#' do not overlap.
# When uploading a batch after the first one, this can be set to the number of tasks currently present in the project.
# This number is added to each task's ID and title.
id_offset=0)

Delete Tasks from a Project

alab.delete_tasks(

# required: name of the project to delete tasks from
project_name='alab_demo',

# required: list of ids of tasks.
# note: you can get task ids from the above step. Look for 'task_ids' key.
task_ids=[1, 2],

# optional: confirmation for deletion. Default: False - will ask for confirmation. If set to True, the tasks will be deleted directly.
confirm=False)

Upload Pre-annotations to Annotation Lab

alab.upload_preannotations(

# required: name of project to upload annotations to
project_name = 'alab_demo',

# required: preannotation JSON
preannotations = pre_annotations)