Instructions to use mrp/SCT_BERT_Small with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- sentence-transformers
How to use mrp/SCT_BERT_Small with sentence-transformers:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("mrp/SCT_BERT_Small")
sentences = [
    "That is a happy person",
    "That is a happy dog",
    "That is a very happy person",
    "Today is a sunny day",
]
embeddings = model.encode(sentences)
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)  # [4, 4]
```

- Transformers
How to use mrp/SCT_BERT_Small with Transformers:
```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("mrp/SCT_BERT_Small", dtype="auto")
```

- Notebooks
- Google Colab
- Kaggle
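Note that `AutoModel` returns per-token hidden states rather than a single sentence embedding; encoders like this one are commonly pooled into one vector per sentence, e.g. by masked mean pooling. A minimal NumPy sketch of that pooling step (the toy token embeddings and attention mask below are illustrative stand-ins, not real model output):

```python
import numpy as np

def mean_pool(token_embeddings: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Average token embeddings per sentence, ignoring padding positions.

    token_embeddings: (batch, seq_len, hidden)
    attention_mask:   (batch, seq_len), 1 for real tokens, 0 for padding
    """
    mask = attention_mask[:, :, None].astype(token_embeddings.dtype)  # (batch, seq_len, 1)
    summed = (token_embeddings * mask).sum(axis=1)                    # (batch, hidden)
    counts = mask.sum(axis=1).clip(min=1e-9)                          # (batch, 1), avoid /0
    return summed / counts

# Toy example: 1 sentence, 3 token positions (last is padding), hidden size 2
tokens = np.array([[[1.0, 2.0], [3.0, 4.0], [9.0, 9.0]]])
mask = np.array([[1, 1, 0]])
print(mean_pool(tokens, mask))  # [[2. 3.]]
```

With a real checkpoint, `token_embeddings` would be `model(**inputs).last_hidden_state` and `attention_mask` would come from the tokenizer.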
This is an SCT (Self-Supervised Cross-View Training) model: it maps sentences to a dense vector space and can be used for tasks like semantic search.
Usage
Using this model is straightforward once you have SCT installed:

```
pip install -U git+https://github.com/mrpeerat/SCT
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('mrp/SCT_BERT_Small')
embeddings = model.encode(sentences)
print(embeddings)
```
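For semantic search, embeddings like these are typically compared with cosine similarity: encode a query, score it against every corpus embedding, and take the highest-scoring sentence. A minimal NumPy sketch (the toy vectors below stand in for real model embeddings):

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity between rows of a and rows of b."""
    a_norm = a / np.linalg.norm(a, axis=1, keepdims=True)
    b_norm = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a_norm @ b_norm.T

# Toy stand-ins for a query embedding and two corpus embeddings
query = np.array([[1.0, 0.0]])
corpus = np.array([[1.0, 0.0], [0.0, 1.0]])

scores = cosine_sim(query, corpus)    # shape (1, 2)
best = int(scores.argmax(axis=1)[0])  # index of the most similar corpus sentence
print(scores, best)  # [[1. 0.]] 0
```

With real embeddings you would pass `model.encode([query_text])` and `model.encode(corpus_texts)` in place of the toy arrays.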
Evaluation Results
For an automated evaluation of this model, see the Sentence Embeddings Benchmark: Semantic Textual Similarity
Citing & Authors
```bibtex
@article{limkonchotiwat-etal-2023-sct,
    title = "An Efficient Self-Supervised Cross-View Training For Sentence Embedding",
    author = "Limkonchotiwat, Peerat and
      Ponwitayarat, Wuttikorn and
      Lowphansirikul, Lalita and
      Udomcharoenchaikit, Can and
      Chuangsuwanich, Ekapol and
      Nutanong, Sarana",
    journal = "Transactions of the Association for Computational Linguistics",
    year = "2023",
    address = "Cambridge, MA",
    publisher = "MIT Press",
}
```