Annotation Lab Release Notes 2.1.0


2.1.0

Highlights

  • A new project configuration “Visual NER Labeling” was added, which provides the skeleton for text annotation on scanned images.
  • Project Owners and Project Managers can now train open-source models as well.
  • As a single-page application (SPA), Annotation Lab continues to improve the performance of its UI components and navigation.
  • Overall application performance has increased through security fixes, bug fixes, and general optimizations.
  • More models and embeddings are included in the Annotation Lab image used for deployments. This reduces the burden on system admins when installing in air-gapped or enterprise environments.
  • Relations can now be added more easily.
  • Project Owners and Managers can see the correct status of tasks, taking their own completions into account.
  • Security Fixes. We treat security issues as the highest priority. On every release, we run our artifacts and images through a series of security tests (static code analysis, PenTest, image vulnerability tests, AWS AMI scan tests). This version resolves a few critical issues recently identified in the Python Docker image we use, which has been upgraded to a higher version. Along with this upgrade, we have also refactored our codebase to pass our standard static code analysis.

Bug fixes

  • An issue with uploaded models was fixed; any uploaded model can now be loaded in the Project Config and used for preannotation.
  • Issues with the error messages shown when uploading a valid Spark OCR license, and when trying to train NER models while the Spark OCR license was expired, are now fixed.
  • The issue with exporting annotations in COCO format for image projects was fixed. Project Owners and Managers can now export the COCO format, which also includes the images used for annotation.
  • Bug reports related to unexpected scrolling on the Labeling page, issues in the Swagger documentation, and typos in some hover texts are now fixed.
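For reference, a COCO export is a single JSON document whose top-level arrays link annotations to images and categories by id. The sketch below shows that shape and a quick consistency check; the field names follow the public COCO conventions, but the file names and values are hypothetical, not taken from a real Annotation Lab export.

```python
import json

# Hypothetical COCO-style export: "images", "categories", and "annotations"
# are the standard top-level arrays; ids cross-reference between them.
export = {
    "images": [
        {"id": 1, "file_name": "scan_001.png", "width": 1240, "height": 1754}
    ],
    "categories": [
        {"id": 1, "name": "DATE"}
    ],
    "annotations": [
        # bbox is [x, y, width, height] in pixel coordinates
        {"id": 1, "image_id": 1, "category_id": 1, "bbox": [100, 200, 150, 40]}
    ],
}

# Sanity check: every annotation must reference a known image and category.
image_ids = {img["id"] for img in export["images"]}
category_ids = {cat["id"] for cat in export["categories"]}
for ann in export["annotations"]:
    assert ann["image_id"] in image_ids
    assert ann["category_id"] in category_ids

serialized = json.dumps(export, indent=2)
```

A consumer can load the exported JSON with `json.load` and run the same cross-reference check before feeding the data into a training pipeline.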
