"""
In this example we train a semantic search model to search through Wikipedia
articles about programming articles & technologies.
We use the text paragraphs from the following Wikipedia articles:
Assembly language, C , C Sharp , C++, Go , Java , JavaScript, Keras, Laravel, MATLAB, Matplotlib, MongoDB, MySQL, Natural Language Toolkit, NumPy, pandas (software), Perl, PHP, PostgreSQL, Python , PyTorch, R , React, Rust , Scala , scikit-learn, SciPy, Swift , TensorFlow, Vue.js
In:
1_programming_query_generation.py - We generate queries for all paragraphs from these articles
2_programming_train_bi-encoder.py - We train a SentenceTransformer bi-encoder with these generated queries. This results in a model we can then use for semantic search (for the given Wikipedia articles).
3_programming_semantic_search.py - Shows how the trained model can be used for semantic search
"""
from sentence_transformers import SentenceTransformer, util
import gzip
import json
import os
# Load the model we trained in 2_programming_train_bi-encoder.py
model = SentenceTransformer("output/programming-model")
# Load the corpus
docs = []
corpus_filepath = "wiki-programmming-20210101.jsonl.gz"
if not os.path.exists(corpus_filepath):
    util.http_get("https://sbert.net/datasets/wiki-programmming-20210101.jsonl.gz", corpus_filepath)

with gzip.open(corpus_filepath, "rt") as fIn:
    for line in fIn:
        data = json.loads(line.strip())
        title = data["title"]
        for p in data["paragraphs"]:
            if len(p) > 100:  # Only take paragraphs with more than 100 chars
                docs.append((title, p))

paragraph_emb = model.encode([d[1] for d in docs], convert_to_tensor=True)

print("Available Wikipedia Articles:")
print(", ".join(sorted(list(set([d[0] for d in docs])))))

# Example for semantic search
while True:
    query = input("Query: ")
    query_emb = model.encode(query, convert_to_tensor=True)

    hits = util.semantic_search(query_emb, paragraph_emb, top_k=3)[0]
    for hit in hits:
        doc = docs[hit["corpus_id"]]
        print("{:.2f}\t{}\t\t{}".format(hit["score"], doc[0], doc[1]))

    print("\n=================\n")
# GenQ
In our paper [BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models](https://arxiv.org/abs/2104.08663) we presented a method to adapt a model for [asymmetric semantic search](../../applications/semantic-search/) to a corpus without labeled training data.
## Background
In [asymmetric semantic search](../../applications/semantic-search/), the user provides a (short) query like some keywords or a question. We then want to retrieve a longer text passage that provides the answer.
For example:
```
query: What is Python?
passage to retrieve: Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects.
```
We showed how to train such models when sufficient training data (query & relevant passage pairs) is available: [Training MS MARCO dataset](../../training/ms_marco)
In this tutorial, we show how to train such models if **no training data is available**, i.e., if you don't have thousands of labeled query & relevant passage pairs.
## Overview
We use **synthetic query generation** to achieve our goal: we start with the passages from our document collection and, for each passage, generate possible queries users might ask / might search for.
![Query Generation](https://raw.githubusercontent.com/UKPLab/sentence-transformers/master/docs/img/query-generation.png)
For example, we have the following text passage:
```
Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects.
```
We pass this passage through a specially trained [T5 model](https://arxiv.org/abs/1910.10683) which generates possible queries for us. For the above passage, it might generate these queries:
- What is python
- definition python
- what language uses whitespaces
We then use these generated queries to create our training set:
```
(What is python, Python is an interpreted...)
(definition python, Python is an interpreted...)
(what language uses whitespaces, Python is an interpreted...)
```
And train our SentenceTransformer bi-encoder with it.
## Query Generation
In [BeIR](https://huggingface.co/BeIR) we provide different models that can be used for query generation. In this example, we use the T5 model that was trained by [docTTTTTquery](https://github.com/castorini/docTTTTTquery):
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
import torch
tokenizer = T5Tokenizer.from_pretrained("BeIR/query-gen-msmarco-t5-large-v1")
model = T5ForConditionalGeneration.from_pretrained("BeIR/query-gen-msmarco-t5-large-v1")
model.eval()
para = "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."
input_ids = tokenizer.encode(para, return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(
        input_ids=input_ids,
        max_length=64,
        do_sample=True,
        top_p=0.95,
        num_return_sequences=3,
    )

print("Paragraph:")
print(para)

print("\nGenerated Queries:")
for i in range(len(outputs)):
    query = tokenizer.decode(outputs[i], skip_special_tokens=True)
    print(f"{i + 1}: {query}")
```
In the above code, we use [Top-p (nucleus) sampling](https://huggingface.co/blog/how-to-generate), which randomly picks the next word from a set of likely words. As a consequence, the model generates different queries each time it runs.
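If you prefer reproducible queries instead, you could switch to a deterministic decoding strategy such as beam search. A minimal sketch, reusing `model`, `tokenizer`, and `input_ids` from above (`num_beams=5` is an arbitrary illustrative value, not one prescribed by this tutorial):

```python
# Deterministic alternative: beam search returns the same queries on every run.
# num_return_sequences must be <= num_beams.
with torch.no_grad():
    outputs = model.generate(
        input_ids=input_ids,
        max_length=64,
        num_beams=5,
        num_return_sequences=3,
    )
```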
## Bi-Encoder Training
With the generated queries, we can then train a bi-encoder using [MultipleNegativesRankingLoss](https://www.sbert.net/docs/package_reference/losses.html#multiplenegativesrankingloss).
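A minimal sketch of that training step (the base model, batch size, and epoch count below are illustrative choices, and `train_examples` would be built from all of the generated (query, passage) pairs):

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Illustrative base model; any suitable transformer checkpoint can be used here
model = SentenceTransformer("distilbert-base-uncased")

# Each generated (query, passage) pair becomes one positive training example.
# MultipleNegativesRankingLoss treats all other passages in a batch as negatives.
train_examples = [
    InputExample(texts=["What is python", "Python is an interpreted..."]),
    InputExample(texts=["definition python", "Python is an interpreted..."]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.MultipleNegativesRankingLoss(model)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=100)
```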
## Full Example
We train a semantic search model to search through Wikipedia articles about programming languages & technologies.

We use the text paragraphs from the following Wikipedia articles:
Assembly language, C, C#, C++, Go, Java, JavaScript, Keras, Laravel, MATLAB, Matplotlib, MongoDB, MySQL, Natural Language Toolkit, NumPy, pandas (software), Perl, PHP, PostgreSQL, Python, PyTorch, R, React, Rust, Scala, scikit-learn, SciPy, Swift, TensorFlow, Vue.js
In:
- [1_programming_query_generation.py](1_programming_query_generation.py) - We generate queries for all paragraphs from these articles
- [2_programming_train_bi-encoder.py](2_programming_train_bi-encoder.py) - We train a SentenceTransformer bi-encoder with these generated queries. This results in a model we can then use for semantic search (for the given Wikipedia articles).
- [3_programming_semantic_search.py](3_programming_semantic_search.py) - Shows how the trained model can be used for semantic search.
import torch
import numpy as np
import random
from transformers import T5Tokenizer, T5ForConditionalGeneration
# Set all seeds to make output deterministic
torch.manual_seed(0)
np.random.seed(0)
random.seed(0)
# Paragraphs for which we want to generate queries
paragraphs = [
"Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects.",
'Python is dynamically-typed and garbage-collected. It supports multiple programming paradigms, including structured (particularly, procedural), object-oriented and functional programming. Python is often described as a "batteries included" language due to its comprehensive standard library.',
"Python was created in the late 1980s, and first released in 1991, by Guido van Rossum as a successor to the ABC programming language. Python 2.0, released in 2000, introduced new features, such as list comprehensions, and a garbage collection system with reference counting, and was discontinued with version 2.7 in 2020. Python 3.0, released in 2008, was a major revision of the language that is not completely backward-compatible and much Python 2 code does not run unmodified on Python 3. With Python 2's end-of-life (and pip having dropped support in 2021), only Python 3.6.x and later are supported, with older versions still supporting e.g. Windows 7 (and old installers not restricted to 64-bit Windows).",
"Python interpreters are supported for mainstream operating systems and available for a few more (and in the past supported many more). A global community of programmers develops and maintains CPython, a free and open-source reference implementation. A non-profit organization, the Python Software Foundation, manages and directs resources for Python and CPython development.",
"As of January 2021, Python ranks third in TIOBE’s index of most popular programming languages, behind C and Java, having previously gained second place and their award for the most popularity gain for 2020.",
"Java is a class-based, object-oriented programming language that is designed to have as few implementation dependencies as possible. It is a general-purpose programming language intended to let application developers write once, run anywhere (WORA), meaning that compiled Java code can run on all platforms that support Java without the need for recompilation. Java applications are typically compiled to bytecode that can run on any Java virtual machine (JVM) regardless of the underlying computer architecture. The syntax of Java is similar to C and C++, but has fewer low-level facilities than either of them. The Java runtime provides dynamic capabilities (such as reflection and runtime code modification) that are typically not available in traditional compiled languages. As of 2019, Java was one of the most popular programming languages in use according to GitHub, particularly for client-server web applications, with a reported 9 million developers.",
"Java was originally developed by James Gosling at Sun Microsystems (which has since been acquired by Oracle) and released in 1995 as a core component of Sun Microsystems' Java platform. The original and reference implementation Java compilers, virtual machines, and class libraries were originally released by Sun under proprietary licenses. As of May 2007, in compliance with the specifications of the Java Community Process, Sun had relicensed most of its Java technologies under the GNU General Public License. Oracle offers its own HotSpot Java Virtual Machine, however the official reference implementation is the OpenJDK JVM which is free open source software and used by most developers and is the default JVM for almost all Linux distributions.",
"As of September 2020, the latest version is Java 15, with Java 11, a currently supported long-term support (LTS) version, released on September 25, 2018. Oracle released the last zero-cost public update for the legacy version Java 8 LTS in January 2019 for commercial use, although it will otherwise still support Java 8 with public updates for personal use indefinitely. Other vendors have begun to offer zero-cost builds of OpenJDK 8 and 11 that are still receiving security and other upgrades.",
"Oracle (and others) highly recommend uninstalling outdated versions of Java because of serious risks due to unresolved security issues. Since Java 9, 10, 12, 13, and 14 are no longer supported, Oracle advises its users to immediately transition to the latest version (currently Java 15) or an LTS release.",
]
# For available models for query generation, see: https://huggingface.co/BeIR/
# Here, we use a T5-large model that was trained on the MS MARCO dataset
tokenizer = T5Tokenizer.from_pretrained("BeIR/query-gen-msmarco-t5-large-v1")
model = T5ForConditionalGeneration.from_pretrained("BeIR/query-gen-msmarco-t5-large-v1")
model.eval()
# Select the device
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
# Iterate over the paragraphs and generate a few queries for each
with torch.no_grad():
    for para in paragraphs:
        input_ids = tokenizer.encode(para, return_tensors="pt").to(device)
        outputs = model.generate(
            input_ids=input_ids, max_length=64, do_sample=True, top_p=0.95, num_return_sequences=3
        )

        print("\nParagraph:")
        print(para)

        print("\nGenerated Queries:")
        for i in range(len(outputs)):
            query = tokenizer.decode(outputs[i], skip_special_tokens=True)
            print(f"{i + 1}: {query}")
"""
Output of the script:
Paragraph:
Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects.
Generated Queries:
1: what is python language used for
2: what is python programming
3: what language do i use for scripts
Paragraph:
Python is dynamically-typed and garbage-collected. It supports multiple programming paradigms, including structured (particularly, procedural), object-oriented and functional programming. Python is often described as a "batteries included" language due to its comprehensive standard library.
Generated Queries:
1: what is python language
2: what programming paradigms do python support
3: what programming languages use python
Paragraph:
Python was created in the late 1980s, and first released in 1991, by Guido van Rossum as a successor to the ABC programming language. Python 2.0, released in 2000, introduced new features, such as list comprehensions, and a garbage collection system with reference counting, and was discontinued with version 2.7 in 2020. Python 3.0, released in 2008, was a major revision of the language that is not completely backward-compatible and much Python 2 code does not run unmodified on Python 3. With Python 2's end-of-life (and pip having dropped support in 2021), only Python 3.6.x and later are supported, with older versions still supporting e.g. Windows 7 (and old installers not restricted to 64-bit Windows).
Generated Queries:
1: what year did python start
2: when does the next python update release
3: when did python come out?
Paragraph:
Python interpreters are supported for mainstream operating systems and available for a few more (and in the past supported many more). A global community of programmers develops and maintains CPython, a free and open-source reference implementation. A non-profit organization, the Python Software Foundation, manages and directs resources for Python and CPython development.
Generated Queries:
1: what platform is python available on
2: what is python used for
3: what is python?
Paragraph:
As of January 2021, Python ranks third in TIOBE’s index of most popular programming languages, behind C and Java, having previously gained second place and their award for the most popularity gain for 2020.
Generated Queries:
1: what is the most used programming language in the world
2: what is python language
3: what is the most popular programming language in the world?
Paragraph:
Java is a class-based, object-oriented programming language that is designed to have as few implementation dependencies as possible. It is a general-purpose programming language intended to let application developers write once, run anywhere (WORA), meaning that compiled Java code can run on all platforms that support Java without the need for recompilation. Java applications are typically compiled to bytecode that can run on any Java virtual machine (JVM) regardless of the underlying computer architecture. The syntax of Java is similar to C and C++, but has fewer low-level facilities than either of them. The Java runtime provides dynamic capabilities (such as reflection and runtime code modification) that are typically not available in traditional compiled languages. As of 2019, Java was one of the most popular programming languages in use according to GitHub, particularly for client-server web applications, with a reported 9 million developers.
Generated Queries:
1: java how java works
2: what language is similar to java
3: what is java language
Paragraph:
Java was originally developed by James Gosling at Sun Microsystems (which has since been acquired by Oracle) and released in 1995 as a core component of Sun Microsystems' Java platform. The original and reference implementation Java compilers, virtual machines, and class libraries were originally released by Sun under proprietary licenses. As of May 2007, in compliance with the specifications of the Java Community Process, Sun had relicensed most of its Java technologies under the GNU General Public License. Oracle offers its own HotSpot Java Virtual Machine, however the official reference implementation is the OpenJDK JVM which is free open source software and used by most developers and is the default JVM for almost all Linux distributions.
Generated Queries:
1: what is java created by
2: when was java introduced to linux
3: who developed java?
Paragraph:
As of September 2020, the latest version is Java 15, with Java 11, a currently supported long-term support (LTS) version, released on September 25, 2018. Oracle released the last zero-cost public update for the legacy version Java 8 LTS in January 2019 for commercial use, although it will otherwise still support Java 8 with public updates for personal use indefinitely. Other vendors have begun to offer zero-cost builds of OpenJDK 8 and 11 that are still receiving security and other upgrades.
Generated Queries:
1: what is the latest version of java
2: what is the latest java version
3: what is the latest version of java
Paragraph:
Oracle (and others) highly recommend uninstalling outdated versions of Java because of serious risks due to unresolved security issues. Since Java 9, 10, 12, 13, and 14 are no longer supported, Oracle advises its users to immediately transition to the latest version (currently Java 15) or an LTS release.
Generated Queries:
1: why is oracle not supported
2: what version is oracle used in
3: which java version is obsolete
"""
import os
import math
import json
import logging
import argparse
import torch
from datetime import datetime
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, LoggingHandler, losses, util, InputExample
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator
#### Just some code to print debug information to stdout
logging.basicConfig(
format="%(asctime)s - %(message)s", datefmt="%Y-%m-%d %H:%M:%S", level=logging.INFO, handlers=[LoggingHandler()]
)
parser = argparse.ArgumentParser()
parser.add_argument('--data_path', type=str, default='./datasets/tmp.txt', help='Input txt path')
parser.add_argument('--train_batch_size', type=int, default=16)
parser.add_argument('--num_epochs', type=int, default=10)
parser.add_argument('--model_name_or_path', type=str, default="all-MiniLM-L6-v2")
parser.add_argument('--model_save_path', type=str, default="output/training_sbert_" + datetime.now().strftime("%Y-%m-%d_%H-%M-%S"), help='Output folder')
parser.add_argument('--lr', type=float, default=2e-05)
args = parser.parse_args()
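# Example single-process invocation of this script (assuming it is saved as finetune.py, as the
# launch script further below does; the data path is just the default placeholder):
#   python finetune.py --data_path ./datasets/tmp.txt --train_batch_size 16 --num_epochs 10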
if __name__ == "__main__":
    sts_dataset_path = args.data_path

    # Check that the dataset exists; exit if not
    if not os.path.exists(sts_dataset_path):
        print("Dataset {} does not exist!".format(sts_dataset_path))
        exit()

    model_name_or_path = args.model_name_or_path
    train_batch_size = args.train_batch_size
    num_epochs = args.num_epochs
    model_save_path = args.model_save_path

    # Load a pre-trained sentence transformer model
    model = SentenceTransformer(model_name_or_path, device='cuda')

    # Convert the dataset to a DataLoader ready for training
    logging.info("Read train dataset")

    # Read the dataset; every 5th example goes into the dev set
    train_samples = []
    dev_samples = []
    with open(sts_dataset_path, "r", encoding="utf8") as fIn:
        count = 0
        for lineinfo in fIn.readlines():
            row = json.loads(lineinfo)
            score = float(row["score"])  # Score is expected to already be in the range 0 ... 1
            inp_example = InputExample(texts=[row["sentence1"], row["sentence2"]], label=score)
            if (count + 1) % 5 == 0:
                dev_samples.append(inp_example)
            else:
                train_samples.append(inp_example)
            count += 1
    logging.info("Finished reading data.")

    train_dataloader = DataLoader(train_samples, shuffle=True, batch_size=train_batch_size)
    train_loss = losses.CosineSimilarityLoss(model=model)

    # Development set: Measure correlation between cosine score and gold labels
    logging.info("Read dev dataset")
    evaluator = EmbeddingSimilarityEvaluator.from_input_examples(dev_samples, name="sts-dev")

    # Configure the training
    warmup_steps = math.ceil(len(train_dataloader) * num_epochs * 0.1)  # 10% of train data for warm-up
    logging.info("Warmup-steps: {}".format(warmup_steps))

    print("Start training ...")
    # Train the model
    model.fit(
        train_objectives=[(train_dataloader, train_loss)],
        evaluator=evaluator,
        epochs=num_epochs,
        evaluation_steps=1000,
        warmup_steps=warmup_steps,
        optimizer_params={'lr': args.lr},
        output_path=model_save_path,
    )
    logging.info("Finetuning finished")

    ##############################################################################
    #
    # Load the stored model and evaluate its performance on the held-out dev split
    #
    ##############################################################################
    model = SentenceTransformer(model_save_path)
    # No separate test split was created, so we reuse the dev samples here
    test_evaluator = EmbeddingSimilarityEvaluator.from_input_examples(dev_samples, name="sts-test")
    test_evaluator(model, output_path=model_save_path)
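# Each line of the input file is expected to be one JSON object with "sentence1", "sentence2" and a
# "score" that is already in the 0 ... 1 range, e.g. (made-up sentences for illustration):
#   {"sentence1": "A cat sits on the mat.", "sentence2": "A cat is sitting on a mat.", "score": 0.9}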
#!/bin/bash
echo "Export params ..."
export HIP_VISIBLE_DEVICES=0,1,2,3 # Adjust to the GPU ids and count used for training
export HSA_FORCE_FINE_GRAIN_PCIE=1
export USE_MIOPEN_BATCHNORM=1
echo "Training start ..."
python -m torch.distributed.launch --use_env --nproc_per_node=4 --master_port=4321 finetune.py \
--data_path ./datasets/tmp.txt \
--train_batch_size 32 \
--num_epochs 10
import os
import json
import argparse
paras = argparse.ArgumentParser()
paras.add_argument('--root_path', type=str, default='./datasets/simple_wikipedia_v1')
args = paras.parse_args()
def load_sentence_info(file_path):
    try:
        # Assumed implementation; adjust the concrete logic to the actual data format
        with open(file_path, 'r', encoding='utf-8') as file:
            sentences = file.readlines()
        # Strip leading and trailing whitespace from every sentence
        return [sentence.strip() for sentence in sentences]
    except Exception as e:
        # On an exception, print the error message and re-raise
        print(f"Failed to load sentence info from {file_path}: {e}")
        raise


def main():
    root_path = args.root_path
    if not os.path.exists(root_path):
        print(f"{root_path} does not exist, please check it")
        return
    # Try to load the sentence info
    try:
        sentence1 = load_sentence_info(os.path.join(root_path, 'wiki.simple'))
        sentence2 = load_sentence_info(os.path.join(root_path, 'wiki.unsimplified'))
    except Exception as e:
        print(f"Error loading sentence info: {e}")
        return
    # Check that the two datasets have the same length
    if len(sentence1) != len(sentence2):
        print('The simple_wikipedia_v1 data lengths are not equal, please check the files')
        return
    # Write the loaded sentence pairs to a file
    with open(os.path.join(root_path, 'simple_wiki_pair.txt'), 'w', encoding='utf-8') as wfile:
        for indx in range(len(sentence1)):
            wfile.write(json.dumps({'sentence1': sentence1[indx], 'sentence2': sentence2[indx]}, ensure_ascii=False) + '\n')
    return


if __name__ == '__main__':
    main()
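# Each line written to simple_wiki_pair.txt is one JSON object, e.g. (made-up sentences for illustration):
#   {"sentence1": "The cat sat on the mat.", "sentence2": "The cat seated itself upon the mat."}
# Note that finetune.py above additionally expects a "score" field per line, which this script does
# not produce; a score would have to be added before these pairs can be used for training.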
SentenceTransformers Documentation
=================================================
SentenceTransformers is a Python framework for state-of-the-art sentence, text and image embeddings. The initial work is described in our paper `Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks <https://arxiv.org/abs/1908.10084>`_.
You can use this framework to compute sentence / text embeddings for more than 100 languages. These embeddings can then be compared e.g. with cosine-similarity to find sentences with a similar meaning. This can be useful for `semantic textual similarity <docs/usage/semantic_textual_similarity.html>`_, `semantic search <examples/applications/semantic-search/README.html>`_, or `paraphrase mining <examples/applications/paraphrase-mining/README.html>`_.
The framework is based on `PyTorch <https://pytorch.org/>`_ and `Transformers <https://huggingface.co/transformers/>`_ and offers a large collection of `pre-trained models <docs/pretrained_models.html>`_ tuned for various tasks. Further, it is easy to `fine-tune your own models <docs/training/overview.html>`_.
Installation
=================================================
You can install it using pip:
.. code-block:: bash

   pip install -U sentence-transformers
We recommend **Python 3.8** or higher, and at least **PyTorch 1.11.0**. See `installation <docs/installation.html>`_ for further installation options, especially if you want to use a GPU.
Usage
=================================================
The usage is as simple as:
.. code-block:: python

   from sentence_transformers import SentenceTransformer

   model = SentenceTransformer("all-MiniLM-L6-v2")

   # Our sentences to encode
   sentences = [
       "This framework generates embeddings for each input sentence",
       "Sentences are passed as a list of string.",
       "The quick brown fox jumps over the lazy dog.",
   ]

   # Sentences are encoded by calling model.encode()
   embeddings = model.encode(sentences)

   # Print the embeddings
   for sentence, embedding in zip(sentences, embeddings):
       print("Sentence:", sentence)
       print("Embedding:", embedding)
       print("")
Performance
=========================
Our models are evaluated extensively and achieve state-of-the-art performance on various tasks. Further, the code is tuned to provide the highest possible speed. Have a look at `Pre-Trained Models <docs/pretrained_models.html>`_ for an overview of available models and the respective performance on different tasks.
Contact
=========================
Contact person: Tom Aarsen, tom.aarsen@huggingface.co
Don't hesitate to open an issue on the `repository <https://github.com/UKPLab/sentence-transformers>`_ if something is broken (and it shouldn't be) or if you have further questions.
*This repository contains experimental software and is published for the sole purpose of giving additional background details on the respective publication.*
Citing & Authors
=========================
If you find this repository helpful, feel free to cite our publication `Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks <https://arxiv.org/abs/1908.10084>`_:
.. code-block:: bibtex

   @inproceedings{reimers-2019-sentence-bert,
       title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
       author = "Reimers, Nils and Gurevych, Iryna",
       booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
       month = "11",
       year = "2019",
       publisher = "Association for Computational Linguistics",
       url = "https://arxiv.org/abs/1908.10084",
   }

If you use one of the multilingual models, feel free to cite our publication `Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation <https://arxiv.org/abs/2004.09813>`_:

.. code-block:: bibtex

   @inproceedings{reimers-2020-multilingual-sentence-bert,
       title = "Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation",
       author = "Reimers, Nils and Gurevych, Iryna",
       booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing",
       month = "11",
       year = "2020",
       publisher = "Association for Computational Linguistics",
       url = "https://arxiv.org/abs/2004.09813",
   }

If you use the code for `data augmentation <https://github.com/UKPLab/sentence-transformers/tree/master/examples/training/data_augmentation>`_, feel free to cite our publication `Augmented SBERT: Data Augmentation Method for Improving Bi-Encoders for Pairwise Sentence Scoring Tasks <https://arxiv.org/abs/2010.08240>`_:

.. code-block:: bibtex

   @inproceedings{thakur-2020-AugSBERT,
       title = "Augmented {SBERT}: Data Augmentation Method for Improving Bi-Encoders for Pairwise Sentence Scoring Tasks",
       author = "Thakur, Nandan and Reimers, Nils and Daxenberger, Johannes and Gurevych, Iryna",
       booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
       month = jun,
       year = "2021",
       address = "Online",
       publisher = "Association for Computational Linguistics",
       url = "https://www.aclweb.org/anthology/2021.naacl-main.28",
       pages = "296--310",
   }
.. toctree::
   :maxdepth: 2
   :caption: Overview

   docs/installation
   docs/quickstart
   docs/pretrained_models
   docs/pretrained_cross-encoders
   docs/publications
   docs/hugging_face

.. toctree::
   :maxdepth: 2
   :caption: Usage

   examples/applications/computing-embeddings/README
   docs/usage/semantic_textual_similarity
   examples/applications/embedding-quantization/README
   examples/applications/semantic-search/README
   examples/applications/retrieve_rerank/README
   examples/applications/clustering/README
   examples/applications/paraphrase-mining/README
   examples/applications/parallel-sentence-mining/README
   examples/applications/cross-encoder/README
   examples/applications/image-search/README

.. toctree::
   :maxdepth: 2
   :caption: Training

   docs/training/overview
   docs/training/loss_overview
   examples/training/matryoshka/README
   examples/training/adaptive_layer/README
   examples/training/multilingual/README
   examples/training/distillation/README
   examples/training/cross-encoder/README
   examples/training/data_augmentation/README
   examples/training/datasets/README

.. toctree::
   :maxdepth: 2
   :caption: Training Examples

   examples/training/sts/README
   examples/training/nli/README
   examples/training/paraphrases/README
   examples/training/quora_duplicate_questions/README
   examples/training/ms_marco/README

.. toctree::
   :maxdepth: 2
   :caption: Unsupervised Learning

   examples/unsupervised_learning/README
   examples/domain_adaptation/README

.. toctree::
   :maxdepth: 1
   :caption: Package Reference

   docs/package_reference/SentenceTransformer
   docs/package_reference/util
   docs/package_reference/quantization
   docs/package_reference/models
   docs/package_reference/losses
   docs/package_reference/evaluation
   docs/package_reference/datasets
   docs/package_reference/cross_encoder
import os
import json
import argparse
from sentence_transformers import SentenceTransformer, util
parser = argparse.ArgumentParser()
parser.add_argument('--data_path', type=str, help='txt path')
parser.add_argument('--threshold_score', type=float, default=0.8)
parser.add_argument('--model_name_or_path', type=str, default="all-MiniLM-L6-v2")
parser.add_argument('--save_path', type=str, default='./results')
args = parser.parse_args()
def write_txt(infos, save_root_path='./results', save_name='pos'):
    if not os.path.exists(save_root_path):
        os.makedirs(save_root_path)
    save_path = os.path.join(save_root_path, save_name + '.txt')
    with open(save_path, 'w', encoding='utf-8') as wfile:
        for info in infos:
            wfile.write(json.dumps(info, ensure_ascii=False) + '\n')


if __name__ == "__main__":
    txt_path = args.data_path
    model_name_or_path = args.model_name_or_path
    threshold_score = args.threshold_score
    model = SentenceTransformer(model_name_or_path)

    neg_sentence = []
    pos_sentence = []
    with open(txt_path, 'r', encoding='utf-8') as rfile:
        for line in rfile.readlines():
            print('Dealing with:', line.strip())
            json_info = json.loads(line)

            # Sentences are encoded by calling model.encode()
            label_emb = model.encode(json_info.get("labels"))
            pred_emb = model.encode(json_info.get("predict"))
            cos_sim = util.cos_sim(label_emb, pred_emb)
            json_info["score"] = cos_sim.item()
            print("Cosine-Similarity:", cos_sim.item())

            if json_info["score"] >= threshold_score:
                pos_sentence.append(json_info)
            else:
                neg_sentence.append(json_info)

    save_root_path = args.save_path
    # Save the results and scores to txt files
    write_txt(pos_sentence, save_root_path, 'pos')
    write_txt(neg_sentence, save_root_path, 'neg')
    print('Done. The accuracy (fraction of pairs above the threshold) is {}'.format(
        len(pos_sentence) / (len(pos_sentence) + len(neg_sentence))))
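# Example invocation (assuming this script is saved as score.py; paths and threshold are
# illustrative, and the input file needs one JSON object per line with "labels" and "predict" fields):
#   python score.py --data_path ./predictions.txt --threshold_score 0.8 --model_name_or_path all-MiniLM-L6-v2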
# Unique model identifier
modelCode=656
# Model name
modelName=sentence-bert_pytorch
# Model description
modelDescription=A modification of the pretrained BERT network that uses siamese and triplet network structures to derive semantically meaningful sentence embeddings, which can be compared using cosine similarity
# Application scenarios
appScenario=Inference, Training, NLP, Education, Cybersecurity, Government
# Framework type
frameType=PyTorch
[pytest]
testpaths =
    tests
addopts = --strict-markers -m "not slow"
markers =
    slow: marks tests as slow
transformers>=4.34.0,<5.0.0
tqdm
numpy
scikit-learn
scipy
huggingface-hub>=0.15.1
Pillow
lint.ignore-init-module-imports = true
line-length = 119
# Skip `E731` (do not assign a lambda expression, use a def)
lint.ignore = ["E731"]
[lint.per-file-ignores]
# Ignore `E402` (import violations) in all examples
"examples/**" = ["E402"]
import logging
import tqdm
class LoggingHandler(logging.Handler):
    def __init__(self, level=logging.NOTSET):
        super().__init__(level)

    def emit(self, record):
        try:
            msg = self.format(record)
            tqdm.tqdm.write(msg)
            self.flush()
        except (KeyboardInterrupt, SystemExit):
            raise
        except Exception:
            self.handleError(record)
def install_logger(given_logger, level=logging.WARNING, fmt="%(levelname)s:%(name)s:%(message)s"):
    """Configures the given logger; format, logging level, style, etc"""
    import coloredlogs

    def add_notice_log_level():
        """Creates a new 'notice' logging level"""
        # inspired by:
        # https://stackoverflow.com/questions/2183233/how-to-add-a-custom-loglevel-to-pythons-logging-facility
        NOTICE_LEVEL_NUM = 25
        logging.addLevelName(NOTICE_LEVEL_NUM, "NOTICE")

        def notice(self, message, *args, **kws):
            if self.isEnabledFor(NOTICE_LEVEL_NUM):
                self._log(NOTICE_LEVEL_NUM, message, args, **kws)

        logging.Logger.notice = notice

    # Add an extra logging level above INFO and below WARNING
    add_notice_log_level()

    # More style info at:
    # https://coloredlogs.readthedocs.io/en/latest/api.html
    field_styles = coloredlogs.DEFAULT_FIELD_STYLES.copy()
    field_styles["asctime"] = {}
    level_styles = coloredlogs.DEFAULT_LEVEL_STYLES.copy()
    level_styles["debug"] = {"color": "white", "faint": True}
    level_styles["notice"] = {"color": "cyan", "bold": True}
    coloredlogs.install(
        logger=given_logger,
        level=level,
        use_chroot=False,
        fmt=fmt,
        level_styles=level_styles,
        field_styles=field_styles,
    )
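# Minimal usage sketch (mirrors how the training script above wires LoggingHandler into logging):
#   import logging
#   logging.basicConfig(format="%(asctime)s - %(message)s", level=logging.INFO, handlers=[LoggingHandler()])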
from contextlib import contextmanager
import json
import logging
import os
import shutil
from collections import OrderedDict
import warnings
from typing import List, Dict, Literal, Tuple, Iterable, Type, Union, Callable, Optional, TYPE_CHECKING
import numpy as np
from numpy import ndarray
import transformers
from transformers import is_torch_npu_available
from huggingface_hub import HfApi
import torch
from torch import nn, Tensor, device
from torch.optim import Optimizer
from torch.utils.data import DataLoader
import torch.multiprocessing as mp
from tqdm.autonotebook import trange
import math
import queue
import tempfile
from . import __MODEL_HUB_ORGANIZATION__
from .evaluation import SentenceEvaluator
from .util import (
import_from_string,
batch_to_device,
fullname,
is_sentence_transformer_model,
load_dir_path,
load_file_path,
save_to_hub_args_decorator,
get_device_name,
truncate_embeddings,
)
from .quantization import quantize_embeddings
from .models import Transformer, Pooling, Normalize
from .model_card_templates import ModelCardTemplate
from . import __version__
logger = logging.getLogger(__name__)
if TYPE_CHECKING:
from sentence_transformers.readers import InputExample
class SentenceTransformer(nn.Sequential):
"""
Loads or creates a SentenceTransformer model that can be used to map sentences / text to embeddings.
:param model_name_or_path: If it is a filepath on disc, it loads the model from that path. If it is not a path,
it first tries to download a pre-trained SentenceTransformer model. If that fails, tries to construct a model
from the Hugging Face Hub with that name.
:param modules: A list of torch Modules that should be called sequentially, can be used to create custom
SentenceTransformer models from scratch.
:param device: Device (like "cuda", "cpu", "mps", "npu") that should be used for computation. If None, checks if a GPU
can be used.
:param prompts: A dictionary with prompts for the model. The key is the prompt name, the value is the prompt text.
The prompt text will be prepended before any text to encode. For example:
`{"query": "query: ", "passage": "passage: "}` or `{"clustering": "Identify the main category based on the
titles in "}`.
:param default_prompt_name: The name of the prompt that should be used by default. If not set,
no prompt will be applied.
:param cache_folder: Path to store models. Can also be set by the SENTENCE_TRANSFORMERS_HOME environment variable.
:param revision: The specific model version to use. It can be a branch name, a tag name, or a commit id,
for a stored model on Hugging Face.
:param trust_remote_code: Whether or not to allow for custom models defined on the Hub in their own modeling files.
This option should only be set to True for repositories you trust and in which you have read the code, as it
will execute code present on the Hub on your local machine.
:param token: Hugging Face authentication token to download private models.
:param truncate_dim: The dimension to truncate sentence embeddings to. `None` does no truncation. Truncation is
only applicable during inference when `.encode` is called.
"""
def __init__(
self,
model_name_or_path: Optional[str] = None,
modules: Optional[Iterable[nn.Module]] = None,
device: Optional[str] = None,
prompts: Optional[Dict[str, str]] = None,
default_prompt_name: Optional[str] = None,
cache_folder: Optional[str] = None,
trust_remote_code: bool = False,
revision: Optional[str] = None,
token: Optional[Union[bool, str]] = None,
use_auth_token: Optional[Union[bool, str]] = None,
truncate_dim: Optional[int] = None,
):
# Note: self._load_sbert_model can also update `self.prompts` and `self.default_prompt_name`
self.prompts = prompts or {}
self.default_prompt_name = default_prompt_name
self.truncate_dim = truncate_dim
self._model_card_vars = {}
self._model_card_text = None
self._model_config = {}
if use_auth_token is not None:
warnings.warn(
"The `use_auth_token` argument is deprecated and will be removed in v3 of SentenceTransformers.",
FutureWarning,
)
if token is not None:
raise ValueError(
"`token` and `use_auth_token` are both specified. Please set only the argument `token`."
)
token = use_auth_token
if cache_folder is None:
cache_folder = os.getenv("SENTENCE_TRANSFORMERS_HOME")
if model_name_or_path is not None and model_name_or_path != "":
logger.info("Load pretrained SentenceTransformer: {}".format(model_name_or_path))
# Old models that don't belong to any organization
basic_transformer_models = [
"albert-base-v1",
"albert-base-v2",
"albert-large-v1",
"albert-large-v2",
"albert-xlarge-v1",
"albert-xlarge-v2",
"albert-xxlarge-v1",
"albert-xxlarge-v2",
"bert-base-cased-finetuned-mrpc",
"bert-base-cased",
"bert-base-chinese",
"bert-base-german-cased",
"bert-base-german-dbmdz-cased",
"bert-base-german-dbmdz-uncased",
"bert-base-multilingual-cased",
"bert-base-multilingual-uncased",
"bert-base-uncased",
"bert-large-cased-whole-word-masking-finetuned-squad",
"bert-large-cased-whole-word-masking",
"bert-large-cased",
"bert-large-uncased-whole-word-masking-finetuned-squad",
"bert-large-uncased-whole-word-masking",
"bert-large-uncased",
"camembert-base",
"ctrl",
"distilbert-base-cased-distilled-squad",
"distilbert-base-cased",
"distilbert-base-german-cased",
"distilbert-base-multilingual-cased",
"distilbert-base-uncased-distilled-squad",
"distilbert-base-uncased-finetuned-sst-2-english",
"distilbert-base-uncased",
"distilgpt2",
"distilroberta-base",
"gpt2-large",
"gpt2-medium",
"gpt2-xl",
"gpt2",
"openai-gpt",
"roberta-base-openai-detector",
"roberta-base",
"roberta-large-mnli",
"roberta-large-openai-detector",
"roberta-large",
"t5-11b",
"t5-3b",
"t5-base",
"t5-large",
"t5-small",
"transfo-xl-wt103",
"xlm-clm-ende-1024",
"xlm-clm-enfr-1024",
"xlm-mlm-100-1280",
"xlm-mlm-17-1280",
"xlm-mlm-en-2048",
"xlm-mlm-ende-1024",
"xlm-mlm-enfr-1024",
"xlm-mlm-enro-1024",
"xlm-mlm-tlm-xnli15-1024",
"xlm-mlm-xnli15-1024",
"xlm-roberta-base",
"xlm-roberta-large-finetuned-conll02-dutch",
"xlm-roberta-large-finetuned-conll02-spanish",
"xlm-roberta-large-finetuned-conll03-english",
"xlm-roberta-large-finetuned-conll03-german",
"xlm-roberta-large",
"xlnet-base-cased",
"xlnet-large-cased",
]
if not os.path.exists(model_name_or_path):
# Not a path, load from hub
if "\\" in model_name_or_path or model_name_or_path.count("/") > 1:
raise ValueError("Path {} not found".format(model_name_or_path))
if "/" not in model_name_or_path and model_name_or_path.lower() not in basic_transformer_models:
# A model from sentence-transformers
model_name_or_path = __MODEL_HUB_ORGANIZATION__ + "/" + model_name_or_path
if is_sentence_transformer_model(model_name_or_path, token, cache_folder=cache_folder, revision=revision):
modules = self._load_sbert_model(
model_name_or_path,
token=token,
cache_folder=cache_folder,
revision=revision,
trust_remote_code=trust_remote_code,
)
else:
modules = self._load_auto_model(
model_name_or_path,
token=token,
cache_folder=cache_folder,
revision=revision,
trust_remote_code=trust_remote_code,
)
if modules is not None and not isinstance(modules, OrderedDict):
modules = OrderedDict([(str(idx), module) for idx, module in enumerate(modules)])
super().__init__(modules)
if device is None:
device = get_device_name()
logger.info("Use pytorch device_name: {}".format(device))
self.to(device)
self.is_hpu_graph_enabled = False
if self.default_prompt_name is not None and self.default_prompt_name not in self.prompts:
raise ValueError(
f"Default prompt name '{self.default_prompt_name}' not found in the configured prompts "
f"dictionary with keys {list(self.prompts.keys())!r}."
)
if self.prompts:
logger.info(f"{len(self.prompts)} prompts are loaded, with the keys: {list(self.prompts.keys())}")
if self.default_prompt_name:
logger.warning(
f"Default prompt name is set to '{self.default_prompt_name}'. "
"This prompt will be applied to all `encode()` calls, except if `encode()` "
"is called with `prompt` or `prompt_name` parameters."
)
# Ideally, INSTRUCTOR models should set `include_prompt=False` in their pooling configuration, but
# that would be a breaking change for users currently using the InstructorEmbedding project.
# So, instead we hardcode setting it for the main INSTRUCTOR models, and otherwise give a warning if we
# suspect the user is using an INSTRUCTOR model.
if model_name_or_path in ("hkunlp/instructor-base", "hkunlp/instructor-large", "hkunlp/instructor-xl"):
self.set_pooling_include_prompt(include_prompt=False)
elif (
model_name_or_path
and "/" in model_name_or_path
and "instructor" in model_name_or_path.split("/")[1].lower()
):
if any([module.include_prompt for module in self if isinstance(module, Pooling)]):
logger.warning(
"Instructor models require `include_prompt=False` in the pooling configuration. "
"Either update the model configuration or call `model.set_pooling_include_prompt(False)` after loading the model."
)
def encode(
self,
sentences: Union[str, List[str]],
prompt_name: Optional[str] = None,
prompt: Optional[str] = None,
batch_size: int = 32,
show_progress_bar: bool = None,
output_value: Optional[Literal["sentence_embedding", "token_embeddings"]] = "sentence_embedding",
precision: Literal["float32", "int8", "uint8", "binary", "ubinary"] = "float32",
convert_to_numpy: bool = True,
convert_to_tensor: bool = False,
device: str = None,
normalize_embeddings: bool = False,
) -> Union[List[Tensor], ndarray, Tensor]:
"""
Computes sentence embeddings.
:param sentences: the sentences to embed.
:param prompt_name: The name of the prompt to use for encoding. Must be a key in the `prompts` dictionary,
which is either set in the constructor or loaded from the model configuration. For example if
`prompt_name` is ``"query"`` and the `prompts` is ``{"query": "query: ", ...}``, then the sentence "What
is the capital of France?" will be encoded as "query: What is the capital of France?" because the sentence
is appended to the prompt. If `prompt` is also set, this argument is ignored.
:param prompt: The prompt to use for encoding. For example, if the prompt is ``"query: "``, then the
sentence "What is the capital of France?" will be encoded as "query: What is the capital of France?"
because the sentence is appended to the prompt. If `prompt` is set, `prompt_name` is ignored.
:param batch_size: the batch size used for the computation.
:param show_progress_bar: Whether to output a progress bar when encoding sentences.
:param output_value: The type of embeddings to return: "sentence_embedding" to get sentence embeddings,
"token_embeddings" to get wordpiece token embeddings, and `None`, to get all output values. Defaults
to "sentence_embedding".
:param precision: The precision to use for the embeddings. Can be "float32", "int8", "uint8", "binary", or
"ubinary". All non-float32 precisions are quantized embeddings. Quantized embeddings are smaller in
size and faster to compute, but may have a lower accuracy. They are useful for reducing the size
of the embeddings of a corpus for semantic search, among other tasks. Defaults to "float32".
:param convert_to_numpy: Whether the output should be a list of numpy vectors. If False, it is a list of PyTorch tensors.
:param convert_to_tensor: Whether the output should be one large tensor. Overwrites `convert_to_numpy`.
:param device: Which `torch.device` to use for the computation.
:param normalize_embeddings: Whether to normalize returned vectors to have length 1. In that case,
the faster dot-product (util.dot_score) instead of cosine similarity can be used.
:return: By default, a 2d numpy array with shape [num_inputs, output_dimension] is returned. If only one string
input is provided, then the output is a 1d array with shape [output_dimension]. If `convert_to_tensor`, a
torch Tensor is returned instead. If `self.truncate_dim <= output_dimension` then output_dimension is
`self.truncate_dim`.
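
A minimal usage sketch (the model name is an illustrative choice)::

    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")
    embeddings = model.encode(["Hello world", "How are you?"])
    print(embeddings.shape)  # (2, <embedding dimension>)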
"""
if self.device.type == "hpu" and not self.is_hpu_graph_enabled:
import habana_frameworks.torch as ht
ht.hpu.wrap_in_hpu_graph(self, disable_tensor_cache=True)
self.is_hpu_graph_enabled = True
self.eval()
if show_progress_bar is None:
show_progress_bar = (
logger.getEffectiveLevel() == logging.INFO or logger.getEffectiveLevel() == logging.DEBUG
)
if convert_to_tensor:
convert_to_numpy = False
if output_value != "sentence_embedding":
convert_to_tensor = False
convert_to_numpy = False
input_was_string = False
if isinstance(sentences, str) or not hasattr(
sentences, "__len__"
): # Cast an individual sentence to a list with length 1
sentences = [sentences]
input_was_string = True
if prompt is None:
if prompt_name is not None:
try:
prompt = self.prompts[prompt_name]
except KeyError:
raise ValueError(
f"Prompt name '{prompt_name}' not found in the configured prompts dictionary with keys {list(self.prompts.keys())!r}."
)
elif self.default_prompt_name is not None:
prompt = self.prompts.get(self.default_prompt_name, None)
else:
if prompt_name is not None:
logger.warning(
"Encode with either a `prompt`, a `prompt_name`, or neither, but not both. "
"Ignoring the `prompt_name` in favor of `prompt`."
)
extra_features = {}
if prompt is not None:
sentences = [prompt + sentence for sentence in sentences]
# Some models (e.g. INSTRUCTOR, GRIT) require removing the prompt before pooling
# Tracking the prompt length allows us to remove the prompt during pooling
tokenized_prompt = self.tokenize([prompt])
if "input_ids" in tokenized_prompt:
extra_features["prompt_length"] = tokenized_prompt["input_ids"].shape[-1] - 1
if device is None:
device = self.device
self.to(device)
all_embeddings = []
length_sorted_idx = np.argsort([-self._text_length(sen) for sen in sentences])
sentences_sorted = [sentences[idx] for idx in length_sorted_idx]
for start_index in trange(0, len(sentences), batch_size, desc="Batches", disable=not show_progress_bar):
sentences_batch = sentences_sorted[start_index : start_index + batch_size]
features = self.tokenize(sentences_batch)
features = batch_to_device(features, device)
features.update(extra_features)
with torch.no_grad():
out_features = self.forward(features)
out_features["sentence_embedding"] = truncate_embeddings(
out_features["sentence_embedding"], self.truncate_dim
)
if output_value == "token_embeddings":
embeddings = []
for token_emb, attention in zip(out_features[output_value], out_features["attention_mask"]):
last_mask_id = len(attention) - 1
while last_mask_id > 0 and attention[last_mask_id].item() == 0:
last_mask_id -= 1
embeddings.append(token_emb[0 : last_mask_id + 1])
elif output_value is None: # Return all outputs
embeddings = []
for sent_idx in range(len(out_features["sentence_embedding"])):
row = {name: out_features[name][sent_idx] for name in out_features}
embeddings.append(row)
else: # Sentence embeddings
embeddings = out_features[output_value]
embeddings = embeddings.detach()
if normalize_embeddings:
embeddings = torch.nn.functional.normalize(embeddings, p=2, dim=1)
# fixes for #522 and #487 to avoid oom problems on gpu with large datasets
if convert_to_numpy:
embeddings = embeddings.cpu()
all_embeddings.extend(embeddings)
all_embeddings = [all_embeddings[idx] for idx in np.argsort(length_sorted_idx)]
if precision and precision != "float32":
all_embeddings = quantize_embeddings(all_embeddings, precision=precision)
if convert_to_tensor:
if len(all_embeddings):
if isinstance(all_embeddings, np.ndarray):
all_embeddings = torch.from_numpy(all_embeddings)
else:
all_embeddings = torch.stack(all_embeddings)
else:
all_embeddings = torch.Tensor()
elif convert_to_numpy:
if not isinstance(all_embeddings, np.ndarray):
all_embeddings = np.asarray([emb.numpy() for emb in all_embeddings])
elif isinstance(all_embeddings, np.ndarray):
all_embeddings = [torch.from_numpy(embedding) for embedding in all_embeddings]
if input_was_string:
all_embeddings = all_embeddings[0]
return all_embeddings
def start_multi_process_pool(self, target_devices: List[str] = None):
"""
Starts a multi-process pool to process the encoding with several independent processes.
This method is recommended if you want to encode on multiple GPUs or CPUs. It is advised
to start only one process per GPU. This method works together with encode_multi_process
and stop_multi_process_pool.
:param target_devices: PyTorch target devices, e.g. ["cuda:0", "cuda:1", ...], ["npu:0", "npu:1", ...] or
["cpu", "cpu", "cpu", "cpu"]. If target_devices is None and CUDA/NPU is available, then all available
CUDA/NPU devices will be used. If target_devices is None and CUDA/NPU is not available, then 4 CPU
devices will be used.
:return: Returns a dict with the target processes, an input queue and an output queue.
"""
if target_devices is None:
if torch.cuda.is_available():
target_devices = ["cuda:{}".format(i) for i in range(torch.cuda.device_count())]
elif is_torch_npu_available():
target_devices = ["npu:{}".format(i) for i in range(torch.npu.device_count())]
else:
logger.info("CUDA/NPU is not available. Starting 4 CPU workers")
target_devices = ["cpu"] * 4
logger.info("Start multi-process pool on devices: {}".format(", ".join(map(str, target_devices))))
self.to("cpu")
self.share_memory()
ctx = mp.get_context("spawn")
input_queue = ctx.Queue()
output_queue = ctx.Queue()
processes = []
for device_id in target_devices:
p = ctx.Process(
target=SentenceTransformer._encode_multi_process_worker,
args=(device_id, self, input_queue, output_queue),
daemon=True,
)
p.start()
processes.append(p)
return {"input": input_queue, "output": output_queue, "processes": processes}
@staticmethod
def stop_multi_process_pool(pool):
"""
Stops all processes started with start_multi_process_pool
"""
for p in pool["processes"]:
p.terminate()
for p in pool["processes"]:
p.join()
p.close()
pool["input"].close()
pool["output"].close()
def encode_multi_process(
self,
sentences: List[str],
pool: Dict[str, object],
prompt_name: Optional[str] = None,
prompt: Optional[str] = None,
batch_size: int = 32,
chunk_size: int = None,
normalize_embeddings: bool = False,
):
"""
This method allows running encode() on multiple GPUs. The sentences are chunked into smaller packages
and sent to individual processes, which encode them on the different GPUs. This method is only suitable
for encoding large sets of sentences.
:param sentences: List of sentences
:param pool: A pool of workers started with SentenceTransformer.start_multi_process_pool
:param prompt_name: The name of the prompt to use for encoding. Must be a key in the `prompts` dictionary,
which is either set in the constructor or loaded from the model configuration. For example if
`prompt_name` is ``"query"`` and the `prompts` is ``{"query": "query: {}", ...}``, then the sentence "What
is the capital of France?" will be encoded as "query: What is the capital of France?". If `prompt` is
also set, this argument is ignored.
:param prompt: The prompt to use for encoding. For example, if the prompt is ``"query: {}"``, then the
sentence "What is the capital of France?" will be encoded as "query: What is the capital of France?".
If `prompt` is set, `prompt_name` is ignored.
:param batch_size: Encode sentences with batch size
:param chunk_size: Sentences are chunked and sent to the individual processes. If None, a sensible size is determined.
:param normalize_embeddings: Whether to normalize returned vectors to have length 1. In that case,
the faster dot-product (util.dot_score) instead of cosine similarity can be used.
:return: 2d numpy array with shape [num_inputs, output_dimension]
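
Example (a minimal sketch of the intended flow; the model name is illustrative, and on some
platforms this must run under an ``if __name__ == "__main__":`` guard)::

    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")
    sentences = ["This is sentence {}".format(i) for i in range(10000)]
    pool = model.start_multi_process_pool()
    embeddings = model.encode_multi_process(sentences, pool)
    model.stop_multi_process_pool(pool)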
"""
if chunk_size is None:
chunk_size = min(math.ceil(len(sentences) / len(pool["processes"]) / 10), 5000)
logger.debug(f"Chunk data into {math.ceil(len(sentences) / chunk_size)} packages of size {chunk_size}")
input_queue = pool["input"]
last_chunk_id = 0
chunk = []
for sentence in sentences:
chunk.append(sentence)
if len(chunk) >= chunk_size:
input_queue.put([last_chunk_id, batch_size, chunk, prompt_name, prompt, normalize_embeddings])
last_chunk_id += 1
chunk = []
if len(chunk) > 0:
input_queue.put([last_chunk_id, batch_size, chunk, prompt_name, prompt, normalize_embeddings])
last_chunk_id += 1
output_queue = pool["output"]
results_list = sorted([output_queue.get() for _ in range(last_chunk_id)], key=lambda x: x[0])
embeddings = np.concatenate([result[1] for result in results_list])
return embeddings
@staticmethod
def _encode_multi_process_worker(target_device: str, model, input_queue, results_queue):
"""
Internal working process to encode sentences in multi-process setup
"""
while True:
try:
chunk_id, batch_size, sentences, prompt_name, prompt, normalize_embeddings = input_queue.get()
embeddings = model.encode(
sentences,
prompt_name=prompt_name,
prompt=prompt,
device=target_device,
show_progress_bar=False,
convert_to_numpy=True,
batch_size=batch_size,
normalize_embeddings=normalize_embeddings,
)
results_queue.put([chunk_id, embeddings])
except queue.Empty:
break
def set_pooling_include_prompt(self, include_prompt: bool) -> None:
"""
Sets the `include_prompt` attribute in the pooling layer in the model, if there is one.
:param include_prompt: Whether to include the prompt in the pooling layer.
"""
for module in self:
if isinstance(module, Pooling):
module.include_prompt = include_prompt
break
def get_max_seq_length(self):
"""
Returns the maximal sequence length that the model accepts. Longer inputs will be truncated.
"""
if hasattr(self._first_module(), "max_seq_length"):
return self._first_module().max_seq_length
return None
def tokenize(self, texts: Union[List[str], List[Dict], List[Tuple[str, str]]]):
"""
Tokenizes the texts
"""
kwargs = {}
# HPU models reach optimal performance if the padding is not dynamic
if self.device.type == "hpu":
kwargs["padding"] = "max_length"
try:
return self._first_module().tokenize(texts, **kwargs)
except TypeError:
# In case some Module does not allow for kwargs in tokenize, we also try without any
return self._first_module().tokenize(texts)
def get_sentence_features(self, *features):
return self._first_module().get_sentence_features(*features)
def get_sentence_embedding_dimension(self):
"""
:return: The number of dimensions in the output of `encode`. If it's not known, it's `None`.
"""
output_dim = None
for mod in reversed(self._modules.values()):
sent_embedding_dim_method = getattr(mod, "get_sentence_embedding_dimension", None)
if callable(sent_embedding_dim_method):
output_dim = sent_embedding_dim_method()
break
if self.truncate_dim is not None:
# The user requested truncation. If they set it to a dim greater than output_dim,
# no truncation will actually happen. So return output_dim instead of self.truncate_dim
return min(output_dim or np.inf, self.truncate_dim)
return output_dim
@contextmanager
def truncate_sentence_embeddings(self, truncate_dim: Optional[int]):
"""
In this context, `model.encode` outputs sentence embeddings truncated at dimension `truncate_dim`.
This may be useful when you are using the same model for different applications where different dimensions
are needed.
:param truncate_dim: The dimension to truncate sentence embeddings to. `None` does no truncation.
Example::
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("model-name")
with model.truncate_sentence_embeddings(truncate_dim=16):
embeddings_truncated = model.encode(["hello there", "hiya"])
assert embeddings_truncated.shape[-1] == 16
"""
original_output_dim = self.truncate_dim
try:
self.truncate_dim = truncate_dim
yield
finally:
self.truncate_dim = original_output_dim
def _first_module(self):
"""Returns the first module of this sequential embedder"""
return self._modules[next(iter(self._modules))]
def _last_module(self):
"""Returns the last module of this sequential embedder"""
return self._modules[next(reversed(self._modules))]
def save(
self,
path: str,
model_name: Optional[str] = None,
create_model_card: bool = True,
train_datasets: Optional[List[str]] = None,
safe_serialization: bool = True,
):
"""
Saves all elements of this sequential sentence embedder into different sub-folders
:param path: Path on disc
:param model_name: Optional model name
:param create_model_card: If True, create a README.md with basic information about this model
:param train_datasets: Optional list with the names of the datasets used to train the model
:param safe_serialization: If true, save the model using safetensors. If false, save the model the traditional PyTorch way
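Example (a minimal save/reload round trip; model name and path are illustrative placeholders)::

    model = SentenceTransformer("all-MiniLM-L6-v2")
    model.save("output/my-model", model_name="my-model")
    # The saved folder (modules.json, config_sentence_transformers.json, ...)
    # can be loaded again via the constructor
    reloaded = SentenceTransformer("output/my-model")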
"""
if path is None:
return
os.makedirs(path, exist_ok=True)
logger.info("Save model to {}".format(path))
modules_config = []
# Save some model info
if "__version__" not in self._model_config:
self._model_config["__version__"] = {
"sentence_transformers": __version__,
"transformers": transformers.__version__,
"pytorch": torch.__version__,
}
with open(os.path.join(path, "config_sentence_transformers.json"), "w") as fOut:
config = self._model_config.copy()
config["prompts"] = self.prompts
config["default_prompt_name"] = self.default_prompt_name
json.dump(config, fOut, indent=2)
# Save modules
for idx, name in enumerate(self._modules):
module = self._modules[name]
if idx == 0 and isinstance(module, Transformer): # Save transformer model in the main folder
model_path = path + "/"
else:
model_path = os.path.join(path, str(idx) + "_" + type(module).__name__)
os.makedirs(model_path, exist_ok=True)
if isinstance(module, Transformer):
module.save(model_path, safe_serialization=safe_serialization)
else:
module.save(model_path)
modules_config.append(
{"idx": idx, "name": name, "path": os.path.basename(model_path), "type": type(module).__module__}
)
with open(os.path.join(path, "modules.json"), "w") as fOut:
json.dump(modules_config, fOut, indent=2)
# Create model card
if create_model_card:
self._create_model_card(path, model_name, train_datasets)
def _create_model_card(
self, path: str, model_name: Optional[str] = None, train_datasets: Optional[List[str]] = None
):
"""
Creates an automatic model card and stores it at the given path
"""
if self._model_card_text is not None and len(self._model_card_text) > 0:
model_card = self._model_card_text
else:
tags = ModelCardTemplate.__TAGS__.copy()
model_card = ModelCardTemplate.__MODEL_CARD__
if (
len(self._modules) == 2
and isinstance(self._first_module(), Transformer)
and isinstance(self._last_module(), Pooling)
and self._last_module().get_pooling_mode_str() in ["cls", "max", "mean"]
):
pooling_module = self._last_module()
pooling_mode = pooling_module.get_pooling_mode_str()
model_card = model_card.replace(
"{USAGE_TRANSFORMERS_SECTION}", ModelCardTemplate.__USAGE_TRANSFORMERS__
)
pooling_fct_name, pooling_fct = ModelCardTemplate.model_card_get_pooling_function(pooling_mode)
model_card = (
model_card.replace("{POOLING_FUNCTION}", pooling_fct)
.replace("{POOLING_FUNCTION_NAME}", pooling_fct_name)
.replace("{POOLING_MODE}", pooling_mode)
)
tags.append("transformers")
# Print full model
model_card = model_card.replace("{FULL_MODEL_STR}", str(self))
# Add tags
model_card = model_card.replace("{TAGS}", "\n".join(["- " + t for t in tags]))
datasets_str = ""
if train_datasets is not None:
datasets_str = "datasets:\n" + "\n".join(["- " + d for d in train_datasets])
model_card = model_card.replace("{DATASETS}", datasets_str)
# Add dim info
self._model_card_vars["{NUM_DIMENSIONS}"] = self.get_sentence_embedding_dimension()
# Replace vars we created while using the model
for name, value in self._model_card_vars.items():
model_card = model_card.replace(name, str(value))
# Replace remaining vars with default values
for name, value in ModelCardTemplate.__DEFAULT_VARS__.items():
model_card = model_card.replace(name, str(value))
if model_name is not None:
model_card = model_card.replace("{MODEL_NAME}", model_name.strip())
with open(os.path.join(path, "README.md"), "w", encoding="utf8") as fOut:
fOut.write(model_card.strip())
@save_to_hub_args_decorator
def save_to_hub(
self,
repo_id: str,
organization: Optional[str] = None,
token: Optional[str] = None,
private: Optional[bool] = None,
safe_serialization: bool = True,
commit_message: str = "Add new SentenceTransformer model.",
local_model_path: Optional[str] = None,
exist_ok: bool = False,
replace_model_card: bool = False,
train_datasets: Optional[List[str]] = None,
) -> str:
"""
DEPRECATED, use `push_to_hub` instead.
Uploads all elements of this Sentence Transformer to a new HuggingFace Hub repository.
:param repo_id: Repository name for your model in the Hub, including the user or organization.
:param token: An authentication token (See https://huggingface.co/settings/token)
:param private: Set to True to host a private model
:param safe_serialization: If true, save the model using safetensors. If false, save the model the traditional PyTorch way
:param commit_message: Message to commit while pushing.
:param local_model_path: Path of the model locally. If set, this file path will be uploaded. Otherwise, the current model will be uploaded
:param exist_ok: If true, saving to an existing repository is OK. If false, saving only to a new repository is possible
:param replace_model_card: If true, replace an existing model card in the hub with the automatically created model card
:param train_datasets: Datasets used to train the model. If set, the datasets will be added to the model card in the Hub.
:param organization: Deprecated. Organization in which you want to push your model or tokenizer (you must be a member of this organization).
:return: The url of the commit of your model in the repository on the Hugging Face Hub.
"""
logger.warning(
"The `save_to_hub` method is deprecated and will be removed in a future version of SentenceTransformers."
" Please use `push_to_hub` instead for future model uploads."
)
if organization:
if "/" not in repo_id:
logger.warning(
f'Providing an `organization` to `save_to_hub` is deprecated, please use `repo_id="{organization}/{repo_id}"` instead.'
)
repo_id = f"{organization}/{repo_id}"
elif repo_id.split("/")[0] != organization:
raise ValueError(
"Providing an `organization` to `save_to_hub` is deprecated, please only use `repo_id`."
)
else:
logger.warning(
f'Providing an `organization` to `save_to_hub` is deprecated, please only use `repo_id="{repo_id}"` instead.'
)
return self.push_to_hub(
repo_id=repo_id,
token=token,
private=private,
safe_serialization=safe_serialization,
commit_message=commit_message,
local_model_path=local_model_path,
exist_ok=exist_ok,
replace_model_card=replace_model_card,
train_datasets=train_datasets,
)
def push_to_hub(
self,
repo_id: str,
token: Optional[str] = None,
private: Optional[bool] = None,
safe_serialization: bool = True,
commit_message: str = "Add new SentenceTransformer model.",
local_model_path: Optional[str] = None,
exist_ok: bool = False,
replace_model_card: bool = False,
train_datasets: Optional[List[str]] = None,
) -> str:
"""
Uploads all elements of this Sentence Transformer to a new HuggingFace Hub repository.
:param repo_id: Repository name for your model in the Hub, including the user or organization.
:param token: An authentication token (See https://huggingface.co/settings/token)
:param private: Set to True to host a private model
:param safe_serialization: If true, save the model using safetensors. If false, save the model the traditional PyTorch way
:param commit_message: Message to commit while pushing.
:param local_model_path: Path of the model locally. If set, this file path will be uploaded. Otherwise, the current model will be uploaded
:param exist_ok: If true, saving to an existing repository is OK. If false, saving only to a new repository is possible
:param replace_model_card: If true, replace an existing model card in the hub with the automatically created model card
:param train_datasets: Datasets used to train the model. If set, the datasets will be added to the model card in the Hub.
:return: The url of the commit of your model in the repository on the Hugging Face Hub.
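Example (a minimal sketch; the repository id is an illustrative placeholder and a valid
Hugging Face token is assumed, e.g. via `huggingface-cli login`)::

    model = SentenceTransformer("all-MiniLM-L6-v2")
    url = model.push_to_hub("my-user/my-model", private=True, exist_ok=True)
    print(url)  # link to the commit on the Hugging Face Hub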
"""
api = HfApi(token=token)
repo_url = api.create_repo(
repo_id=repo_id,
private=private,
repo_type=None,
exist_ok=exist_ok,
)
repo_id = repo_url.repo_id # Update the repo_id in case the old repo_id didn't contain a user or organization
if local_model_path:
folder_url = api.upload_folder(
repo_id=repo_id, folder_path=local_model_path, commit_message=commit_message
)
else:
with tempfile.TemporaryDirectory() as tmp_dir:
create_model_card = replace_model_card or not os.path.exists(os.path.join(tmp_dir, "README.md"))
self.save(
tmp_dir,
model_name=repo_url.repo_id,
create_model_card=create_model_card,
train_datasets=train_datasets,
safe_serialization=safe_serialization,
)
folder_url = api.upload_folder(repo_id=repo_id, folder_path=tmp_dir, commit_message=commit_message)
refs = api.list_repo_refs(repo_id=repo_id)
for branch in refs.branches:
if branch.name == "main":
return f"https://huggingface.co/{repo_id}/commit/{branch.target_commit}"
# This isn't expected to ever be reached.
return folder_url
def smart_batching_collate(self, batch: List["InputExample"]) -> Tuple[List[Dict[str, Tensor]], Tensor]:
"""
Transforms a batch from a SmartBatchingDataset to a batch of tensors for the model
Here, batch is a list of InputExample instances: [InputExample(...), ...]
:param batch:
a batch from a SmartBatchingDataset
:return:
a batch of tensors for the model
"""
texts = [example.texts for example in batch]
sentence_features = [self.tokenize(sentence) for sentence in zip(*texts)]
labels = torch.tensor([example.label for example in batch])
return sentence_features, labels
def _text_length(self, text: Union[List[int], List[List[int]]]):
"""
Helper function to get the length of the input text. Text can be either
a list of ints (which means a single text as input) or a tuple of lists of ints
(representing several text inputs to the model).
"""
if isinstance(text, dict): # {key: value} case
return len(next(iter(text.values())))
elif not hasattr(text, "__len__"): # Object has no len() method
return 1
elif len(text) == 0 or isinstance(text[0], int): # Empty string or list of ints
return len(text)
else:
return sum([len(t) for t in text]) # Sum of length of individual strings
def fit(
self,
train_objectives: Iterable[Tuple[DataLoader, nn.Module]],
evaluator: SentenceEvaluator = None,
epochs: int = 1,
steps_per_epoch=None,
scheduler: str = "WarmupLinear",
warmup_steps: int = 10000,
optimizer_class: Type[Optimizer] = torch.optim.AdamW,
optimizer_params: Dict[str, object] = {"lr": 2e-5},
weight_decay: float = 0.01,
evaluation_steps: int = 0,
output_path: str = None,
save_best_model: bool = True,
max_grad_norm: float = 1,
use_amp: bool = False,
callback: Callable[[float, int, int], None] = None,
show_progress_bar: bool = True,
checkpoint_path: str = None,
checkpoint_save_steps: int = 500,
checkpoint_save_total_limit: int = 0,
):
"""
Train the model with the given training objective
Each training objective is sampled in turn for one batch.
We sample only as many batches from each objective as there are in the smallest one
to ensure equal training on each dataset.
:param train_objectives: Tuples of (DataLoader, LossFunction). Pass more than one for multi-task learning
:param evaluator: An evaluator (sentence_transformers.evaluation) evaluates the model performance during training on held-out dev data. It is used to determine the best model that is saved to disc.
:param epochs: Number of epochs for training
:param steps_per_epoch: Number of training steps per epoch. If set to None (default), one epoch is equal to the size of the smallest DataLoader in train_objectives.
:param scheduler: Learning rate scheduler. Available schedulers: constantlr, warmupconstant, warmuplinear, warmupcosine, warmupcosinewithhardrestarts
:param warmup_steps: Behavior depends on the scheduler. For WarmupLinear (default), the learning rate is increased from 0 up to the maximal learning rate. After these warmup steps, the learning rate is decreased linearly back to zero.
:param optimizer_class: Optimizer
:param optimizer_params: Optimizer parameters
:param weight_decay: Weight decay for model parameters
:param evaluation_steps: If > 0, evaluate the model using the evaluator after this many training steps
:param output_path: Storage path for the model and evaluation files
:param save_best_model: If true, the best model (according to evaluator) is stored at output_path
:param max_grad_norm: Used for gradient clipping: gradients are clipped to this maximal norm.
:param use_amp: Use Automatic Mixed Precision (AMP). Only for Pytorch >= 1.6.0
:param callback: Callback function that is invoked after each evaluation.
It must accept the following three parameters in this order:
`score`, `epoch`, `steps`
:param show_progress_bar: If True, output a tqdm progress bar
:param checkpoint_path: Folder to save checkpoints during training
:param checkpoint_save_steps: Save a checkpoint after this many training steps
:param checkpoint_save_total_limit: Total number of checkpoints to store
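Example (a minimal training sketch; the model name and data are illustrative placeholders)::

    from torch.utils.data import DataLoader
    from sentence_transformers import SentenceTransformer, InputExample, losses

    model = SentenceTransformer("all-MiniLM-L6-v2")
    train_examples = [
        InputExample(texts=["A plane is taking off.", "An air plane is taking off."], label=0.95),
        InputExample(texts=["A man is playing a flute.", "A man is eating pasta."], label=0.1),
    ]
    train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)
    train_loss = losses.CosineSimilarityLoss(model)
    model.fit(
        train_objectives=[(train_dataloader, train_loss)],
        epochs=1,
        warmup_steps=10,
        output_path="output/fit-demo",
    )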
"""
# Add info to model card
# info_loss_functions = "\n".join(["- {} with {} training examples".format(str(loss), len(dataloader)) for dataloader, loss in train_objectives])
info_loss_functions = []
for dataloader, loss in train_objectives:
info_loss_functions.extend(ModelCardTemplate.get_train_objective_info(dataloader, loss))
info_loss_functions = "\n\n".join([text for text in info_loss_functions])
info_fit_parameters = json.dumps(
{
"evaluator": fullname(evaluator),
"epochs": epochs,
"steps_per_epoch": steps_per_epoch,
"scheduler": scheduler,
"warmup_steps": warmup_steps,
"optimizer_class": str(optimizer_class),
"optimizer_params": optimizer_params,
"weight_decay": weight_decay,
"evaluation_steps": evaluation_steps,
"max_grad_norm": max_grad_norm,
},
indent=4,
sort_keys=True,
)
self._model_card_text = None
self._model_card_vars["{TRAINING_SECTION}"] = ModelCardTemplate.__TRAINING_SECTION__.replace(
"{LOSS_FUNCTIONS}", info_loss_functions
).replace("{FIT_PARAMETERS}", info_fit_parameters)
if use_amp:
if is_torch_npu_available():
scaler = torch.npu.amp.GradScaler()
else:
scaler = torch.cuda.amp.GradScaler()
self.to(self.device)
dataloaders = [dataloader for dataloader, _ in train_objectives]
# Use smart batching
for dataloader in dataloaders:
dataloader.collate_fn = self.smart_batching_collate
loss_models = [loss for _, loss in train_objectives]
for loss_model in loss_models:
loss_model.to(self.device)
self.best_score = -9999999
if steps_per_epoch is None or steps_per_epoch == 0:
steps_per_epoch = min([len(dataloader) for dataloader in dataloaders])
num_train_steps = int(steps_per_epoch * epochs)
# Prepare optimizers
optimizers = []
schedulers = []
for loss_model in loss_models:
param_optimizer = list(loss_model.named_parameters())
no_decay = ["bias", "LayerNorm.bias", "LayerNorm.weight"]
optimizer_grouped_parameters = [
{
"params": [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)],
"weight_decay": weight_decay,
},
{"params": [p for n, p in param_optimizer if any(nd in n for nd in no_decay)], "weight_decay": 0.0},
]
optimizer = optimizer_class(optimizer_grouped_parameters, **optimizer_params)
scheduler_obj = self._get_scheduler(
optimizer, scheduler=scheduler, warmup_steps=warmup_steps, t_total=num_train_steps
)
optimizers.append(optimizer)
schedulers.append(scheduler_obj)
global_step = 0
data_iterators = [iter(dataloader) for dataloader in dataloaders]
num_train_objectives = len(train_objectives)
skip_scheduler = False
for epoch in trange(epochs, desc="Epoch", disable=not show_progress_bar):
training_steps = 0
for loss_model in loss_models:
loss_model.zero_grad()
loss_model.train()
for _ in trange(steps_per_epoch, desc="Iteration", smoothing=0.05, disable=not show_progress_bar):
for train_idx in range(num_train_objectives):
loss_model = loss_models[train_idx]
optimizer = optimizers[train_idx]
scheduler = schedulers[train_idx]
data_iterator = data_iterators[train_idx]
try:
data = next(data_iterator)
except StopIteration:
data_iterator = iter(dataloaders[train_idx])
data_iterators[train_idx] = data_iterator
data = next(data_iterator)
features, labels = data
labels = labels.to(self.device)
features = list(map(lambda batch: batch_to_device(batch, self.device), features))
if use_amp:
with torch.autocast(device_type=self.device.type):
loss_value = loss_model(features, labels)
scale_before_step = scaler.get_scale()
scaler.scale(loss_value).backward()
scaler.unscale_(optimizer)
torch.nn.utils.clip_grad_norm_(loss_model.parameters(), max_grad_norm)
scaler.step(optimizer)
scaler.update()
skip_scheduler = scaler.get_scale() != scale_before_step
else:
loss_value = loss_model(features, labels)
loss_value.backward()
torch.nn.utils.clip_grad_norm_(loss_model.parameters(), max_grad_norm)
optimizer.step()
optimizer.zero_grad()
if not skip_scheduler:
scheduler.step()
training_steps += 1
global_step += 1
if evaluation_steps > 0 and training_steps % evaluation_steps == 0:
self._eval_during_training(
evaluator, output_path, save_best_model, epoch, training_steps, callback
)
for loss_model in loss_models:
loss_model.zero_grad()
loss_model.train()
if (
checkpoint_path is not None
and checkpoint_save_steps is not None
and checkpoint_save_steps > 0
and global_step % checkpoint_save_steps == 0
):
self._save_checkpoint(checkpoint_path, checkpoint_save_total_limit, global_step)
self._eval_during_training(evaluator, output_path, save_best_model, epoch, -1, callback)
if evaluator is None and output_path is not None: # No evaluator, but output path: save final model version
self.save(output_path)
if checkpoint_path is not None:
self._save_checkpoint(checkpoint_path, checkpoint_save_total_limit, global_step)
def evaluate(self, evaluator: SentenceEvaluator, output_path: str = None):
"""
Evaluate the model
:param evaluator:
the evaluator
:param output_path:
the evaluator can write the results to this path
"""
if output_path is not None:
os.makedirs(output_path, exist_ok=True)
return evaluator(self, output_path)
def _eval_during_training(self, evaluator, output_path, save_best_model, epoch, steps, callback):
"""Runs evaluation during the training"""
eval_path = output_path
if output_path is not None:
os.makedirs(output_path, exist_ok=True)
eval_path = os.path.join(output_path, "eval")
os.makedirs(eval_path, exist_ok=True)
if evaluator is not None:
score = evaluator(self, output_path=eval_path, epoch=epoch, steps=steps)
if callback is not None:
callback(score, epoch, steps)
if score > self.best_score:
self.best_score = score
if save_best_model:
self.save(output_path)
def _save_checkpoint(self, checkpoint_path, checkpoint_save_total_limit, step):
# Store new checkpoint
self.save(os.path.join(checkpoint_path, str(step)))
# Delete old checkpoints
if checkpoint_save_total_limit is not None and checkpoint_save_total_limit > 0:
old_checkpoints = []
for subdir in os.listdir(checkpoint_path):
if subdir.isdigit():
old_checkpoints.append({"step": int(subdir), "path": os.path.join(checkpoint_path, subdir)})
if len(old_checkpoints) > checkpoint_save_total_limit:
old_checkpoints = sorted(old_checkpoints, key=lambda x: x["step"])
shutil.rmtree(old_checkpoints[0]["path"])
def _load_auto_model(
self,
model_name_or_path: str,
token: Optional[Union[bool, str]],
cache_folder: Optional[str],
revision: Optional[str] = None,
trust_remote_code: bool = False,
):
"""
Creates a simple Transformer + Mean Pooling model and returns the modules
"""
logger.warning(
"No sentence-transformers model found with name {}. Creating a new one with MEAN pooling.".format(
model_name_or_path
)
)
transformer_model = Transformer(
model_name_or_path,
cache_dir=cache_folder,
model_args={"token": token, "trust_remote_code": trust_remote_code, "revision": revision},
tokenizer_args={"token": token, "trust_remote_code": trust_remote_code, "revision": revision},
)
pooling_model = Pooling(transformer_model.get_word_embedding_dimension(), "mean")
return [transformer_model, pooling_model]
def _load_sbert_model(
self,
model_name_or_path: str,
token: Optional[Union[bool, str]],
cache_folder: Optional[str],
revision: Optional[str] = None,
trust_remote_code: bool = False,
):
"""
Loads a full sentence-transformers model
"""
# Check if the config_sentence_transformers.json file exists (exists since v2 of the framework)
config_sentence_transformers_json_path = load_file_path(
model_name_or_path,
"config_sentence_transformers.json",
token=token,
cache_folder=cache_folder,
revision=revision,
)
if config_sentence_transformers_json_path is not None:
with open(config_sentence_transformers_json_path) as fIn:
self._model_config = json.load(fIn)
if (
"__version__" in self._model_config
and "sentence_transformers" in self._model_config["__version__"]
and self._model_config["__version__"]["sentence_transformers"] > __version__
):
logger.warning(
"You try to use a model that was created with version {}, however, your version is {}. This might cause unexpected behavior or errors. In that case, try to update to the latest version.\n\n\n".format(
self._model_config["__version__"]["sentence_transformers"], __version__
)
)
# Set prompts if not already overridden by the __init__ calls
if not self.prompts:
self.prompts = self._model_config.get("prompts", {})
if not self.default_prompt_name:
self.default_prompt_name = self._model_config.get("default_prompt_name", None)
# Check if a readme exists
model_card_path = load_file_path(
model_name_or_path, "README.md", token=token, cache_folder=cache_folder, revision=revision
)
if model_card_path is not None:
try:
with open(model_card_path, encoding="utf8") as fIn:
self._model_card_text = fIn.read()
except Exception:
pass
# Load the modules of sentence transformer
modules_json_path = load_file_path(
model_name_or_path, "modules.json", token=token, cache_folder=cache_folder, revision=revision
)
with open(modules_json_path) as fIn:
modules_config = json.load(fIn)
modules = OrderedDict()
for module_config in modules_config:
module_class = import_from_string(module_config["type"])
# For Transformer, don't load the full directory, rely on `transformers` instead
# But, do load the config file first.
if module_class == Transformer and module_config["path"] == "":
kwargs = {}
for config_name in [
"sentence_bert_config.json",
"sentence_roberta_config.json",
"sentence_distilbert_config.json",
"sentence_camembert_config.json",
"sentence_albert_config.json",
"sentence_xlm-roberta_config.json",
"sentence_xlnet_config.json",
]:
config_path = load_file_path(
model_name_or_path, config_name, token=token, cache_folder=cache_folder, revision=revision
)
if config_path is not None:
with open(config_path) as fIn:
kwargs = json.load(fIn)
break
hub_kwargs = {"token": token, "trust_remote_code": trust_remote_code, "revision": revision}
if "model_args" in kwargs:
kwargs["model_args"].update(hub_kwargs)
else:
kwargs["model_args"] = hub_kwargs
if "tokenizer_args" in kwargs:
kwargs["tokenizer_args"].update(hub_kwargs)
else:
kwargs["tokenizer_args"] = hub_kwargs
module = Transformer(model_name_or_path, cache_dir=cache_folder, **kwargs)
else:
# Normalize does not require any files to be loaded
if module_class == Normalize:
module_path = None
else:
module_path = load_dir_path(
model_name_or_path,
module_config["path"],
token=token,
cache_folder=cache_folder,
revision=revision,
)
module = module_class.load(module_path)
modules[module_config["name"]] = module
return modules
@staticmethod
def load(input_path):
return SentenceTransformer(input_path)
@staticmethod
def _get_scheduler(optimizer, scheduler: str, warmup_steps: int, t_total: int):
"""
Returns the correct learning rate scheduler. Available schedulers: constantlr, warmupconstant, warmuplinear, warmupcosine, warmupcosinewithhardrestarts
"""
scheduler = scheduler.lower()
if scheduler == "constantlr":
return transformers.get_constant_schedule(optimizer)
elif scheduler == "warmupconstant":
return transformers.get_constant_schedule_with_warmup(optimizer, num_warmup_steps=warmup_steps)
elif scheduler == "warmuplinear":
return transformers.get_linear_schedule_with_warmup(
optimizer, num_warmup_steps=warmup_steps, num_training_steps=t_total
)
elif scheduler == "warmupcosine":
return transformers.get_cosine_schedule_with_warmup(
optimizer, num_warmup_steps=warmup_steps, num_training_steps=t_total
)
elif scheduler == "warmupcosinewithhardrestarts":
return transformers.get_cosine_with_hard_restarts_schedule_with_warmup(
optimizer, num_warmup_steps=warmup_steps, num_training_steps=t_total
)
else:
raise ValueError("Unknown scheduler {}".format(scheduler))
@property
def device(self) -> device:
"""
Get torch.device from module, assuming that the whole module has one device.
"""
try:
return next(self.parameters()).device
except StopIteration:
# For nn.DataParallel compatibility in PyTorch 1.5
def find_tensor_attributes(module: nn.Module) -> List[Tuple[str, Tensor]]:
tuples = [(k, v) for k, v in module.__dict__.items() if torch.is_tensor(v)]
return tuples
gen = self._named_members(get_members_fn=find_tensor_attributes)
first_tuple = next(gen)
return first_tuple[1].device
@property
def tokenizer(self):
"""
Property to get the tokenizer that is used by this model
"""
return self._first_module().tokenizer
@tokenizer.setter
def tokenizer(self, value):
"""
Property to set the tokenizer that should be used by this model
"""
self._first_module().tokenizer = value
@property
def max_seq_length(self):
"""
Property to get the maximal input sequence length for the model. Longer inputs will be truncated.
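Example (illustrative; the actual default depends on the model)::

    model = SentenceTransformer("all-MiniLM-L6-v2")
    print(model.max_seq_length)  # e.g. 256
    model.max_seq_length = 128   # longer inputs are now truncated to 128 tokens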
"""
return self._first_module().max_seq_length
@max_seq_length.setter
def max_seq_length(self, value):
"""
Property to set the maximal input sequence length for the model. Longer inputs will be truncated.
"""
self._first_module().max_seq_length = value
@property
def _target_device(self) -> torch.device:
logger.warning(
"`SentenceTransformer._target_device` has been removed, please use `SentenceTransformer.device` instead.",
)
return self.device
@_target_device.setter
def _target_device(self, device: Optional[Union[int, str, torch.device]] = None) -> None:
self.to(device)
__version__ = "2.7.0.dev0"
__MODEL_HUB_ORGANIZATION__ = "sentence-transformers"
from .datasets import SentencesDataset, ParallelSentencesDataset
from .LoggingHandler import LoggingHandler
from .SentenceTransformer import SentenceTransformer
from .readers import InputExample
from .cross_encoder.CrossEncoder import CrossEncoder
from .quantization import quantize_embeddings
__all__ = [
"LoggingHandler",
"SentencesDataset",
"ParallelSentencesDataset",
"SentenceTransformer",
"InputExample",
"CrossEncoder",
"quantize_embeddings",
]
from functools import wraps
from transformers import AutoModelForSequenceClassification, AutoTokenizer, AutoConfig
import numpy as np
import logging
import os
from typing import Dict, Type, Callable, List, Optional
import torch
from torch import nn
from torch.optim import Optimizer
from torch.utils.data import DataLoader
from tqdm.autonotebook import tqdm, trange
from transformers import is_torch_npu_available
from transformers.utils import PushToHubMixin
from .. import SentenceTransformer, util
from ..evaluation import SentenceEvaluator
from ..util import get_device_name
logger = logging.getLogger(__name__)
class CrossEncoder(PushToHubMixin):
"""
A CrossEncoder takes exactly two sentences / texts as input and predicts
a score or a label for this sentence pair. It can, for example, predict the similarity of the sentence pair
on a scale of 0 ... 1.
It does not yield a sentence embedding and does not work for individual sentences.
:param model_name: A model name from Hugging Face Hub that can be loaded with AutoModel, or a path to a local
model. We provide several pre-trained CrossEncoder models that can be used for common tasks.
:param num_labels: Number of labels of the classifier. If 1, the CrossEncoder is a regression model that
outputs a continuous score 0...1. If > 1, it outputs several scores that can be softmaxed to get
probability scores for the different classes.
:param max_length: Max length for input sequences. Longer sequences will be truncated. If None, the
maximal length of the model will be used
:param device: Device that should be used for the model. If None, it will use CUDA if available.
:param tokenizer_args: Arguments passed to AutoTokenizer
:param automodel_args: Arguments passed to AutoModelForSequenceClassification
:param revision: The specific model version to use. It can be a branch name, a tag name, or a commit id,
for a stored model on Hugging Face.
:param default_activation_function: Callable (like nn.Sigmoid) specifying the default activation function that
should be used on top of model.predict(). If None, nn.Sigmoid() will be used if num_labels=1,
else nn.Identity()
:param classifier_dropout: The dropout ratio for the classification head.
"""
def __init__(
self,
model_name: str,
num_labels: int = None,
max_length: int = None,
device: str = None,
tokenizer_args: Dict = {},
automodel_args: Dict = {},
revision: Optional[str] = None,
default_activation_function=None,
classifier_dropout: float = None,
):
self.config = AutoConfig.from_pretrained(model_name, revision=revision)
classifier_trained = True
if self.config.architectures is not None:
classifier_trained = any(
[arch.endswith("ForSequenceClassification") for arch in self.config.architectures]
)
if classifier_dropout is not None:
self.config.classifier_dropout = classifier_dropout
if num_labels is None and not classifier_trained:
num_labels = 1
if num_labels is not None:
self.config.num_labels = num_labels
self.model = AutoModelForSequenceClassification.from_pretrained(
model_name, config=self.config, revision=revision, **automodel_args
)
self.tokenizer = AutoTokenizer.from_pretrained(model_name, revision=revision, **tokenizer_args)
self.max_length = max_length
if device is None:
device = get_device_name()
logger.info("Use pytorch device: {}".format(device))
self._target_device = torch.device(device)
if default_activation_function is not None:
self.default_activation_function = default_activation_function
try:
self.config.sbert_ce_default_activation_function = util.fullname(self.default_activation_function)
except Exception as e:
logger.warning(
"Was not able to update config about the default_activation_function: {}".format(str(e))
)
elif (
hasattr(self.config, "sbert_ce_default_activation_function")
and self.config.sbert_ce_default_activation_function is not None
):
self.default_activation_function = util.import_from_string(
self.config.sbert_ce_default_activation_function
)()
else:
self.default_activation_function = nn.Sigmoid() if self.config.num_labels == 1 else nn.Identity()
def smart_batching_collate(self, batch):
texts = [[] for _ in range(len(batch[0].texts))]
labels = []
for example in batch:
for idx, text in enumerate(example.texts):
texts[idx].append(text.strip())
labels.append(example.label)
tokenized = self.tokenizer(
*texts, padding=True, truncation="longest_first", return_tensors="pt", max_length=self.max_length
)
labels = torch.tensor(labels, dtype=torch.float if self.config.num_labels == 1 else torch.long).to(
self._target_device
)
for name in tokenized:
tokenized[name] = tokenized[name].to(self._target_device)
return tokenized, labels
def smart_batching_collate_text_only(self, batch):
texts = [[] for _ in range(len(batch[0]))]
for example in batch:
for idx, text in enumerate(example):
texts[idx].append(text.strip())
tokenized = self.tokenizer(
*texts, padding=True, truncation="longest_first", return_tensors="pt", max_length=self.max_length
)
for name in tokenized:
tokenized[name] = tokenized[name].to(self._target_device)
return tokenized
def fit(
self,
train_dataloader: DataLoader,
evaluator: SentenceEvaluator = None,
epochs: int = 1,
loss_fct=None,
activation_fct=nn.Identity(),
scheduler: str = "WarmupLinear",
warmup_steps: int = 10000,
optimizer_class: Type[Optimizer] = torch.optim.AdamW,
optimizer_params: Dict[str, object] = {"lr": 2e-5},
weight_decay: float = 0.01,
evaluation_steps: int = 0,
output_path: str = None,
save_best_model: bool = True,
max_grad_norm: float = 1,
use_amp: bool = False,
callback: Callable[[float, int, int], None] = None,
show_progress_bar: bool = True,
):
"""
Train the model with the given training objective on the provided train_dataloader.
:param train_dataloader: DataLoader with training InputExamples
:param evaluator: An evaluator (sentence_transformers.evaluation) evaluates the model performance during training on held-out dev data. It is used to determine the best model that is saved to disc.
:param epochs: Number of epochs for training
:param loss_fct: Which loss function to use for training. If None, will use nn.BCEWithLogitsLoss() if self.config.num_labels == 1 else nn.CrossEntropyLoss()
:param activation_fct: Activation function applied on top of logits output of model.
:param scheduler: Learning rate scheduler. Available schedulers: constantlr, warmupconstant, warmuplinear, warmupcosine, warmupcosinewithhardrestarts
:param warmup_steps: Behavior depends on the scheduler. For WarmupLinear (default), the learning rate is increased from 0 up to the maximal learning rate. After these warmup steps, the learning rate is decreased linearly back to zero.
:param optimizer_class: Optimizer
:param optimizer_params: Optimizer parameters
:param weight_decay: Weight decay for model parameters
:param evaluation_steps: If > 0, evaluate the model using the evaluator after this many training steps
:param output_path: Storage path for the model and evaluation files
:param save_best_model: If true, the best model (according to evaluator) is stored at output_path
:param max_grad_norm: Used for gradient clipping: gradients are clipped to this maximal norm.
:param use_amp: Use Automatic Mixed Precision (AMP). Only for Pytorch >= 1.6.0
:param callback: Callback function that is invoked after each evaluation.
It must accept the following three parameters in this order:
`score`, `epoch`, `steps`
:param show_progress_bar: If True, output a tqdm progress bar
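Example (a minimal training sketch; the model name and data are illustrative placeholders)::

    from torch.utils.data import DataLoader
    from sentence_transformers import CrossEncoder, InputExample

    model = CrossEncoder("distilroberta-base", num_labels=1)
    train_examples = [
        InputExample(texts=["What is Python?", "Python is a programming language."], label=1),
        InputExample(texts=["What is Python?", "Paris is the capital of France."], label=0),
    ]
    train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)
    model.fit(train_dataloader=train_dataloader, epochs=1, warmup_steps=10)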
"""
train_dataloader.collate_fn = self.smart_batching_collate
if use_amp:
if is_torch_npu_available():
scaler = torch.npu.amp.GradScaler()
else:
scaler = torch.cuda.amp.GradScaler()
self.model.to(self._target_device)
if output_path is not None:
os.makedirs(output_path, exist_ok=True)
self.best_score = -9999999
num_train_steps = int(len(train_dataloader) * epochs)
# Prepare optimizers
param_optimizer = list(self.model.named_parameters())
no_decay = ["bias", "LayerNorm.bias", "LayerNorm.weight"]
optimizer_grouped_parameters = [
{
"params": [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)],
"weight_decay": weight_decay,
},
{"params": [p for n, p in param_optimizer if any(nd in n for nd in no_decay)], "weight_decay": 0.0},
]
optimizer = optimizer_class(optimizer_grouped_parameters, **optimizer_params)
if isinstance(scheduler, str):
scheduler = SentenceTransformer._get_scheduler(
optimizer, scheduler=scheduler, warmup_steps=warmup_steps, t_total=num_train_steps
)
if loss_fct is None:
loss_fct = nn.BCEWithLogitsLoss() if self.config.num_labels == 1 else nn.CrossEntropyLoss()
skip_scheduler = False
for epoch in trange(epochs, desc="Epoch", disable=not show_progress_bar):
training_steps = 0
self.model.zero_grad()
self.model.train()
for features, labels in tqdm(
train_dataloader, desc="Iteration", smoothing=0.05, disable=not show_progress_bar
):
if use_amp:
with torch.autocast(device_type=self._target_device.type):
model_predictions = self.model(**features, return_dict=True)
logits = activation_fct(model_predictions.logits)
if self.config.num_labels == 1:
logits = logits.view(-1)
loss_value = loss_fct(logits, labels)
scale_before_step = scaler.get_scale()
scaler.scale(loss_value).backward()
scaler.unscale_(optimizer)
torch.nn.utils.clip_grad_norm_(self.model.parameters(), max_grad_norm)
scaler.step(optimizer)
scaler.update()
skip_scheduler = scaler.get_scale() != scale_before_step
else:
model_predictions = self.model(**features, return_dict=True)
logits = activation_fct(model_predictions.logits)
if self.config.num_labels == 1:
logits = logits.view(-1)
loss_value = loss_fct(logits, labels)
loss_value.backward()
torch.nn.utils.clip_grad_norm_(self.model.parameters(), max_grad_norm)
optimizer.step()
optimizer.zero_grad()
if not skip_scheduler:
scheduler.step()
training_steps += 1
if evaluator is not None and evaluation_steps > 0 and training_steps % evaluation_steps == 0:
self._eval_during_training(
evaluator, output_path, save_best_model, epoch, training_steps, callback
)
self.model.zero_grad()
self.model.train()
if evaluator is not None:
self._eval_during_training(evaluator, output_path, save_best_model, epoch, -1, callback)
def predict(
self,
sentences: List[List[str]],
batch_size: int = 32,
show_progress_bar: bool = None,
num_workers: int = 0,
activation_fct=None,
apply_softmax=False,
convert_to_numpy: bool = True,
convert_to_tensor: bool = False,
):
"""
Performs predictions with the CrossEncoder on the given sentence pairs.
:param sentences: A list of sentence pairs [[Sent1, Sent2], [Sent3, Sent4]]
:param batch_size: Batch size for encoding
:param show_progress_bar: Output progress bar
:param num_workers: Number of workers for tokenization
:param activation_fct: Activation function applied on the logits output of the CrossEncoder. If None, nn.Sigmoid() will be used if num_labels=1, else nn.Identity
:param convert_to_numpy: Convert the output to a numpy matrix.
:param apply_softmax: If there are more than 2 dimensions and apply_softmax=True, applies softmax on the logits output
:param convert_to_tensor: Convert the output to a tensor.
:return: Predictions for the passed sentence pairs
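Example (a minimal sketch; the sentence pairs are illustrative placeholders)::

    model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
    scores = model.predict([
        ["What is Python?", "Python is a programming language."],
        ["What is Python?", "Paris is the capital of France."],
    ])
    print(scores)  # one relevance score per sentence pair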
"""
input_was_string = False
if isinstance(sentences[0], str): # Cast an individual sentence to a list with length 1
sentences = [sentences]
input_was_string = True
inp_dataloader = DataLoader(
sentences,
batch_size=batch_size,
collate_fn=self.smart_batching_collate_text_only,
num_workers=num_workers,
shuffle=False,
)
if show_progress_bar is None:
show_progress_bar = (
logger.getEffectiveLevel() == logging.INFO or logger.getEffectiveLevel() == logging.DEBUG
)
iterator = inp_dataloader
if show_progress_bar:
iterator = tqdm(inp_dataloader, desc="Batches")
if activation_fct is None:
activation_fct = self.default_activation_function
pred_scores = []
self.model.eval()
self.model.to(self._target_device)
with torch.no_grad():
for features in iterator:
model_predictions = self.model(**features, return_dict=True)
logits = activation_fct(model_predictions.logits)
if apply_softmax and len(logits[0]) > 1:
logits = torch.nn.functional.softmax(logits, dim=1)
pred_scores.extend(logits)
if self.config.num_labels == 1:
pred_scores = [score[0] for score in pred_scores]
if convert_to_tensor:
pred_scores = torch.stack(pred_scores)
elif convert_to_numpy:
pred_scores = np.asarray([score.cpu().detach().numpy() for score in pred_scores])
if input_was_string:
pred_scores = pred_scores[0]
return pred_scores
def rank(
self,
query: str,
documents: List[str],
top_k: Optional[int] = None,
return_documents: bool = False,
batch_size: int = 32,
show_progress_bar: bool = None,
num_workers: int = 0,
activation_fct=None,
apply_softmax=False,
convert_to_numpy: bool = True,
convert_to_tensor: bool = False,
) -> List[Dict]:
"""
Performs ranking with the CrossEncoder on the given query and documents. Returns a sorted list with the document indices and scores.
Example:
::
from sentence_transformers import CrossEncoder
model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
query = "Who wrote 'To Kill a Mockingbird'?"
documents = [
"'To Kill a Mockingbird' is a novel by Harper Lee published in 1960. It was immediately successful, winning the Pulitzer Prize, and has become a classic of modern American literature.",
"The novel 'Moby-Dick' was written by Herman Melville and first published in 1851. It is considered a masterpiece of American literature and deals with complex themes of obsession, revenge, and the conflict between good and evil.",
"Harper Lee, an American novelist widely known for her novel 'To Kill a Mockingbird', was born in 1926 in Monroeville, Alabama. She received the Pulitzer Prize for Fiction in 1961.",
"Jane Austen was an English novelist known primarily for her six major novels, which interpret, critique and comment upon the British landed gentry at the end of the 18th century.",
"The 'Harry Potter' series, which consists of seven fantasy novels written by British author J.K. Rowling, is among the most popular and critically acclaimed books of the modern era.",
"'The Great Gatsby', a novel written by American author F. Scott Fitzgerald, was published in 1925. The story is set in the Jazz Age and follows the life of millionaire Jay Gatsby and his pursuit of Daisy Buchanan."
]
model.rank(query, documents, return_documents=True)
::
[{'corpus_id': 0,
'score': 10.67858,
'text': "'To Kill a Mockingbird' is a novel by Harper Lee published in 1960. It was immediately successful, winning the Pulitzer Prize, and has become a classic of modern American literature."},
{'corpus_id': 2,
'score': 9.761677,
'text': "Harper Lee, an American novelist widely known for her novel 'To Kill a Mockingbird', was born in 1926 in Monroeville, Alabama. She received the Pulitzer Prize for Fiction in 1961."},
{'corpus_id': 1,
'score': -3.3099542,
'text': "The novel 'Moby-Dick' was written by Herman Melville and first published in 1851. It is considered a masterpiece of American literature and deals with complex themes of obsession, revenge, and the conflict between good and evil."},
{'corpus_id': 5,
'score': -4.8989105,
'text': "'The Great Gatsby', a novel written by American author F. Scott Fitzgerald, was published in 1925. The story is set in the Jazz Age and follows the life of millionaire Jay Gatsby and his pursuit of Daisy Buchanan."},
{'corpus_id': 4,
'score': -5.082967,
'text': "The 'Harry Potter' series, which consists of seven fantasy novels written by British author J.K. Rowling, is among the most popular and critically acclaimed books of the modern era."}]
:param query: A single query
:param documents: A list of documents
:param top_k: Return the top-k documents. If None, all documents are returned.
:param return_documents: If True, also returns the documents. If False, only returns the indices and scores.
:param batch_size: Batch size for encoding
:param show_progress_bar: Output progress bar
:param num_workers: Number of workers for tokenization
:param activation_fct: Activation function applied on the logits output of the CrossEncoder. If None, nn.Sigmoid() will be used if num_labels=1, else nn.Identity
:param convert_to_numpy: Convert the output to a numpy matrix.
:param apply_softmax: If there are more than 2 dimensions and apply_softmax=True, applies softmax on the logits output
:param convert_to_tensor: Convert the output to a tensor.
:return: A sorted list with the document indices and scores, and optionally also documents.
"""
query_doc_pairs = [[query, doc] for doc in documents]
scores = self.predict(
query_doc_pairs,
batch_size=batch_size,
show_progress_bar=show_progress_bar,
num_workers=num_workers,
activation_fct=activation_fct,
apply_softmax=apply_softmax,
convert_to_numpy=convert_to_numpy,
convert_to_tensor=convert_to_tensor,
)
results = []
for i in range(len(scores)):
if return_documents:
results.append({"corpus_id": i, "score": scores[i], "text": documents[i]})
else:
results.append({"corpus_id": i, "score": scores[i]})
results = sorted(results, key=lambda x: x["score"], reverse=True)
return results[:top_k]
def _eval_during_training(self, evaluator, output_path, save_best_model, epoch, steps, callback):
"""Runs evaluation during the training"""
if evaluator is not None:
score = evaluator(self, output_path=output_path, epoch=epoch, steps=steps)
if callback is not None:
callback(score, epoch, steps)
if score > self.best_score:
self.best_score = score
if save_best_model:
self.save(output_path)
def save(self, path: str, *, safe_serialization: bool = True, **kwargs) -> None:
"""
Saves the model and tokenizer to path; identical to `save_pretrained`
"""
if path is None:
return
logger.info("Save model to {}".format(path))
self.model.save_pretrained(path, safe_serialization=safe_serialization, **kwargs)
self.tokenizer.save_pretrained(path, **kwargs)
def save_pretrained(self, path: str, *, safe_serialization: bool = True, **kwargs) -> None:
"""
Saves the model and tokenizer to path; identical to `save`
"""
return self.save(path, safe_serialization=safe_serialization, **kwargs)
@wraps(PushToHubMixin.push_to_hub)
def push_to_hub(
self,
repo_id: str,
*,
commit_message: Optional[str] = None,
private: Optional[bool] = None,
safe_serialization: bool = True,
tags: Optional[List[str]] = None,
**kwargs,
) -> str:
if isinstance(tags, str):
tags = [tags]
elif tags is None:
tags = []
if "cross-encoder" not in tags:
tags.insert(0, "cross-encoder")
return super().push_to_hub(
repo_id=repo_id,
safe_serialization=safe_serialization,
commit_message=commit_message,
private=private,
tags=tags,
**kwargs,
)
from .CrossEncoder import CrossEncoder
__all__ = ["CrossEncoder"]
import logging
import os
import csv
from typing import List
from ... import InputExample
import numpy as np
logger = logging.getLogger(__name__)
class CEBinaryAccuracyEvaluator:
"""
This evaluator can be used with the CrossEncoder class.
It is designed for CrossEncoders with a single output. It measures the
accuracy of the predicted class vs. the gold labels. It uses a fixed threshold to determine the label (0 vs. 1).
See CEBinaryClassificationEvaluator for an evaluator that automatically determines the optimal threshold.
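Example (a minimal sketch; the model name and data are illustrative placeholders)::

    from sentence_transformers import CrossEncoder, InputExample
    from sentence_transformers.cross_encoder.evaluation import CEBinaryAccuracyEvaluator

    model = CrossEncoder("distilroberta-base", num_labels=1)
    dev_examples = [
        InputExample(texts=["A man eats.", "A person is eating."], label=1),
        InputExample(texts=["A man eats.", "A woman sleeps."], label=0),
    ]
    evaluator = CEBinaryAccuracyEvaluator.from_input_examples(dev_examples, name="dev")
    accuracy = evaluator(model)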
"""
def __init__(
self,
sentence_pairs: List[List[str]],
labels: List[int],
name: str = "",
threshold: float = 0.5,
write_csv: bool = True,
):
self.sentence_pairs = sentence_pairs
self.labels = labels
self.name = name
self.threshold = threshold
self.csv_file = "CEBinaryAccuracyEvaluator" + ("_" + name if name else "") + "_results.csv"
self.csv_headers = ["epoch", "steps", "Accuracy"]
self.write_csv = write_csv
@classmethod
def from_input_examples(cls, examples: List[InputExample], **kwargs):
sentence_pairs = []
labels = []
for example in examples:
sentence_pairs.append(example.texts)
labels.append(example.label)
return cls(sentence_pairs, labels, **kwargs)
def __call__(self, model, output_path: str = None, epoch: int = -1, steps: int = -1) -> float:
if epoch != -1:
if steps == -1:
out_txt = " after epoch {}:".format(epoch)
else:
out_txt = " in epoch {} after {} steps:".format(epoch, steps)
else:
out_txt = ":"
logger.info("CEBinaryAccuracyEvaluator: Evaluating the model on " + self.name + " dataset" + out_txt)
pred_scores = model.predict(self.sentence_pairs, convert_to_numpy=True, show_progress_bar=False)
pred_labels = pred_scores > self.threshold
assert len(pred_labels) == len(self.labels)
acc = np.sum(pred_labels == self.labels) / len(self.labels)
logger.info("Accuracy: {:.2f}".format(acc * 100))
if output_path is not None and self.write_csv:
csv_path = os.path.join(output_path, self.csv_file)
output_file_exists = os.path.isfile(csv_path)
with open(csv_path, mode="a" if output_file_exists else "w", encoding="utf-8") as f:
writer = csv.writer(f)
if not output_file_exists:
writer.writerow(self.csv_headers)
writer.writerow([epoch, steps, acc])
return acc
import logging
from sklearn.metrics import average_precision_score
from typing import List
import numpy as np
import os
import csv
from ... import InputExample
from ...evaluation import BinaryClassificationEvaluator
logger = logging.getLogger(__name__)
class CEBinaryClassificationEvaluator:
"""
This evaluator can be used with the CrossEncoder class. Given sentence pairs and binary labels (0 and 1),
it computes the average precision and the best possible F1 score
"""
def __init__(
self,
sentence_pairs: List[List[str]],
labels: List[int],
name: str = "",
show_progress_bar: bool = None,
write_csv: bool = True,
):
assert len(sentence_pairs) == len(labels)
for label in labels:
assert label == 0 or label == 1
self.sentence_pairs = sentence_pairs
self.labels = np.asarray(labels)
self.name = name
if show_progress_bar is None:
show_progress_bar = (
logger.getEffectiveLevel() == logging.INFO or logger.getEffectiveLevel() == logging.DEBUG
)
self.show_progress_bar = show_progress_bar
self.csv_file = "CEBinaryClassificationEvaluator" + ("_" + name if name else "") + "_results.csv"
self.csv_headers = [
"epoch",
"steps",
"Accuracy",
"Accuracy_Threshold",
"F1",
"F1_Threshold",
"Precision",
"Recall",
"Average_Precision",
]
self.write_csv = write_csv
@classmethod
def from_input_examples(cls, examples: List[InputExample], **kwargs):
sentence_pairs = []
labels = []
for example in examples:
sentence_pairs.append(example.texts)
labels.append(example.label)
return cls(sentence_pairs, labels, **kwargs)
def __call__(self, model, output_path: str = None, epoch: int = -1, steps: int = -1) -> float:
if epoch != -1:
if steps == -1:
out_txt = " after epoch {}:".format(epoch)
else:
out_txt = " in epoch {} after {} steps:".format(epoch, steps)
else:
out_txt = ":"
logger.info("CEBinaryClassificationEvaluator: Evaluating the model on " + self.name + " dataset" + out_txt)
pred_scores = model.predict(
self.sentence_pairs, convert_to_numpy=True, show_progress_bar=self.show_progress_bar
)
acc, acc_threshold = BinaryClassificationEvaluator.find_best_acc_and_threshold(pred_scores, self.labels, True)
f1, precision, recall, f1_threshold = BinaryClassificationEvaluator.find_best_f1_and_threshold(
pred_scores, self.labels, True
)
ap = average_precision_score(self.labels, pred_scores)
logger.info("Accuracy: {:.2f}\t(Threshold: {:.4f})".format(acc * 100, acc_threshold))
logger.info("F1: {:.2f}\t(Threshold: {:.4f})".format(f1 * 100, f1_threshold))
logger.info("Precision: {:.2f}".format(precision * 100))
logger.info("Recall: {:.2f}".format(recall * 100))
logger.info("Average Precision: {:.2f}\n".format(ap * 100))
if output_path is not None and self.write_csv:
csv_path = os.path.join(output_path, self.csv_file)
output_file_exists = os.path.isfile(csv_path)
with open(csv_path, mode="a" if output_file_exists else "w", encoding="utf-8") as f:
writer = csv.writer(f)
if not output_file_exists:
writer.writerow(self.csv_headers)
writer.writerow([epoch, steps, acc, acc_threshold, f1, f1_threshold, precision, recall, ap])
return ap
import logging
from scipy.stats import pearsonr, spearmanr
from typing import List
import os
import csv
from ... import InputExample
logger = logging.getLogger(__name__)
class CECorrelationEvaluator:
"""
This evaluator can be used with the CrossEncoder class. Given sentence pairs and continuous scores,
it computes the Pearson and Spearman correlation between the predicted score for the sentence pair
and the gold score.
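Example (a minimal sketch; the model name and data are illustrative placeholders)::

    from sentence_transformers import CrossEncoder, InputExample
    from sentence_transformers.cross_encoder.evaluation import CECorrelationEvaluator

    model = CrossEncoder("cross-encoder/stsb-distilroberta-base")
    dev_examples = [
        InputExample(texts=["A man is eating.", "A person eats."], label=0.9),
        InputExample(texts=["A man is eating.", "A plane is landing."], label=0.1),
    ]
    evaluator = CECorrelationEvaluator.from_input_examples(dev_examples, name="sts-dev")
    spearman = evaluator(model)  # returns the Spearman correlation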
"""
def __init__(self, sentence_pairs: List[List[str]], scores: List[float], name: str = "", write_csv: bool = True):
self.sentence_pairs = sentence_pairs
self.scores = scores
self.name = name
self.csv_file = "CECorrelationEvaluator" + ("_" + name if name else "") + "_results.csv"
self.csv_headers = ["epoch", "steps", "Pearson_Correlation", "Spearman_Correlation"]
self.write_csv = write_csv
@classmethod
def from_input_examples(cls, examples: List[InputExample], **kwargs):
sentence_pairs = []
scores = []
for example in examples:
sentence_pairs.append(example.texts)
scores.append(example.label)
return cls(sentence_pairs, scores, **kwargs)
def __call__(self, model, output_path: str = None, epoch: int = -1, steps: int = -1) -> float:
if epoch != -1:
if steps == -1:
out_txt = " after epoch {}:".format(epoch)
else:
out_txt = " in epoch {} after {} steps:".format(epoch, steps)
else:
out_txt = ":"
logger.info("CECorrelationEvaluator: Evaluating the model on " + self.name + " dataset" + out_txt)
pred_scores = model.predict(self.sentence_pairs, convert_to_numpy=True, show_progress_bar=False)
eval_pearson, _ = pearsonr(self.scores, pred_scores)
eval_spearman, _ = spearmanr(self.scores, pred_scores)
logger.info("Correlation:\tPearson: {:.4f}\tSpearman: {:.4f}".format(eval_pearson, eval_spearman))
if output_path is not None and self.write_csv:
csv_path = os.path.join(output_path, self.csv_file)
output_file_exists = os.path.isfile(csv_path)
with open(csv_path, mode="a" if output_file_exists else "w", encoding="utf-8") as f:
writer = csv.writer(f)
if not output_file_exists:
writer.writerow(self.csv_headers)
writer.writerow([epoch, steps, eval_pearson, eval_spearman])
return eval_spearman