This model uses a BERT base architecture initialized from https://tfhub.dev/google/experts/bert/wiki_books/1 and fine-tuned on MNLI. The architecture matches the original BERT base, but the training and export scheme have been updated based on more recent learnings.
This model is intended for a variety of English NLP tasks. Because the pre-training data consists of relatively formal text, the model may not generalize well to more colloquial text such as social media posts or chat messages.
This model is fine-tuned on MNLI and is recommended for natural language inference tasks. The MNLI fine-tuning task is a textual entailment task whose data spans a range of genres of spoken and written text.
## How to use
In Python:

```python
from pyspark.ml import Pipeline
from sparknlp.annotator import BertSentenceEmbeddings

# document_assembler and sentence_detector are earlier pipeline stages
# (defined in the full example below).
sent_embeddings = BertSentenceEmbeddings.pretrained("sent_bert_wiki_books_mnli", "en") \
    .setInputCols("sentence") \
    .setOutputCol("bert_sentence")

nlp_pipeline = Pipeline(stages=[document_assembler, sentence_detector, sent_embeddings])
```
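For context, here is a minimal end-to-end sketch of the same pipeline, assuming a Spark NLP session started with `sparknlp.start()`. The `document_assembler` and `sentence_detector` definitions are conventional stage setups added for illustration, not part of the original snippet:

```python
import sparknlp
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import SentenceDetector, BertSentenceEmbeddings
from pyspark.ml import Pipeline

spark = sparknlp.start()

# Turn raw text into Spark NLP `document` annotations.
document_assembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

# Split documents into sentences for sentence-level embeddings.
sentence_detector = SentenceDetector() \
    .setInputCols(["document"]) \
    .setOutputCol("sentence")

sent_embeddings = BertSentenceEmbeddings.pretrained("sent_bert_wiki_books_mnli", "en") \
    .setInputCols("sentence") \
    .setOutputCol("bert_sentence")

nlp_pipeline = Pipeline(stages=[document_assembler, sentence_detector, sent_embeddings])

# Fit on a small DataFrame and inspect the resulting sentence embeddings.
data = spark.createDataFrame([["I love NLP"]]).toDF("text")
result = nlp_pipeline.fit(data).transform(data)
result.selectExpr("explode(bert_sentence.embeddings) as embedding").show(truncate=False)
```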
The same pipeline in Scala:

```scala
import com.johnsnowlabs.nlp.annotator._
import org.apache.spark.ml.Pipeline

// document_assembler and sentence_detector are earlier pipeline stages,
// defined as in the Python example.
val sent_embeddings = BertSentenceEmbeddings.pretrained("sent_bert_wiki_books_mnli", "en")
  .setInputCols("sentence")
  .setOutputCol("bert_sentence")

val pipeline = new Pipeline().setStages(Array(document_assembler, sentence_detector, sent_embeddings))
```
And via the NLU package:

```python
import nlu

text = ["I love NLP"]
sent_embeddings_df = nlu.load('en.embed_sentence.bert.wiki_books_mnli').predict(text, output_level='sentence')
sent_embeddings_df
```
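Since the model is recommended for natural language inference, a common downstream pattern is to compare premise and hypothesis embeddings. The following is an illustrative sketch only: random vectors stand in for real embeddings, and `cosine_similarity` is a hypothetical helper, not part of Spark NLP or NLU:

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# In practice these 768-dimensional vectors would come from the
# `bert_sentence` column of the pipeline output; random vectors
# stand in here so the sketch runs on its own.
rng = np.random.default_rng(0)
premise_vec = rng.normal(size=768)
hypothesis_vec = rng.normal(size=768)

print(cosine_similarity(premise_vec, hypothesis_vec))
```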
|Compatibility:|Spark NLP 3.2.0+|
This model has been imported from https://tfhub.dev/google/experts/bert/wiki_books/mnli/2.