This page shows how to use tlemberger/sd-ner with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use tlemberger/sd-ner with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("token-classification", model="tlemberger/sd-ner")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("tlemberger/sd-ner")
model = AutoModelForTokenClassification.from_pretrained("tlemberger/sd-ner")
```

- Notebooks
- Google Colab
- Kaggle
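The token-classification pipeline returns one tag per token, typically in an IOB-style scheme. As a rough illustration of how such per-token tags get merged into entity spans (the effect of passing `aggregation_strategy="simple"` to the pipeline), here is a minimal sketch in plain Python. The tokens and labels below are invented for the example; `GENEPROD` and `CELL` are illustrative and not a guaranteed part of this model's label set.

```python
# Hypothetical sketch: merge IOB2 token tags into (entity_type, text) spans.
# This mimics what aggregation does to raw token-classification output;
# it is not the Transformers implementation itself.
def group_entities(tokens, tags):
    """Merge consecutive B-/I- tags of the same type into entity spans."""
    entities, current = [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            # A B- tag always starts a new entity, closing any open one.
            if current:
                entities.append(current)
            current = (tag[2:], [token])
        elif tag.startswith("I-") and current and tag[2:] == current[0]:
            # An I- tag of the same type continues the open entity.
            current[1].append(token)
        else:
            # O tag (or a mismatched I-) closes the open entity.
            if current:
                entities.append(current)
            current = None
    if current:
        entities.append(current)
    return [(etype, " ".join(words)) for etype, words in entities]

# Invented example input resembling biomedical NER output:
tokens = ["We", "knocked", "down", "BRCA1", "in", "HeLa", "cells"]
tags = ["O", "O", "O", "B-GENEPROD", "O", "B-CELL", "I-CELL"]
print(group_entities(tokens, tags))
# → [('GENEPROD', 'BRCA1'), ('CELL', 'HeLa cells')]
```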
- Xet hash: 7054902f579ad48a002314e30380caeca698b751cba87582ccd24b0d6c080f44
- Size of remote file: 496 MB
- SHA256: 8f73900b7994ae25c83a59d9d662d1c404e391ad2ebd236850b5f5b4d4b95da6
Xet efficiently stores large files inside Git by splitting them into unique chunks, which deduplicates repeated content and accelerates uploads and downloads.
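To give an intuition for the chunk-splitting idea, here is a toy sketch of content-defined chunking, the general technique behind deduplicated storage like Xet's. This is not Xet's actual algorithm; the window size, mask, and boundary condition are all invented for illustration.

```python
# Toy content-defined chunking sketch (NOT Xet's real algorithm).
# A boundary is declared wherever a simple hash of the last `window`
# bytes satisfies a mask condition, so identical runs of content
# produce identical chunks regardless of their position in the file.
def chunk(data: bytes, window: int = 8, mask: int = 0x3F):
    """Split `data` into chunks at content-determined boundaries."""
    chunks, start = [], 0
    for i in range(window, len(data)):
        # Cheap stand-in for a rolling hash over the sliding window.
        if sum(data[i - window:i]) & mask == 0:
            chunks.append(data[start:i])
            start = i
    chunks.append(data[start:])  # final chunk (the whole input if no boundary)
    return chunks

# Repeated content chunks deterministically and reassembles losslessly.
blob = b"hello world " * 100
parts = chunk(blob)
assert b"".join(parts) == blob
assert chunk(blob) == parts  # same input, same boundaries
```

Because boundaries depend on the bytes themselves rather than fixed offsets, inserting data near the start of a file shifts only nearby chunks, so most of a large file can be reused between versions.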