How to use Shadman-Rohan/outputs with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("token-classification", model="Shadman-Rohan/outputs")

# Load model directly
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("Shadman-Rohan/outputs")
model = AutoModelForTokenClassification.from_pretrained("Shadman-Rohan/outputs")
```
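As a quick check after loading, the pipeline can be called directly on raw text. The snippet below is a minimal sketch: the input sentence is only a placeholder (real inputs should be Bangla, since the base model is csebuetnlp/banglabert), and the label names in the output depend on the model's id2label mapping, which this card does not document.

```python
from transformers import pipeline

# Minimal inference sketch; the example sentence is a placeholder.
pipe = pipeline("token-classification", model="Shadman-Rohan/outputs")

text = "আমি বাংলায় লিখছি।"  # placeholder Bangla sentence

# aggregation_strategy="simple" merges word pieces into whole-word predictions.
for pred in pipe(text, aggregation_strategy="simple"):
    # Each prediction carries a label, a confidence score, and the surface word.
    print(pred["entity_group"], round(pred["score"], 3), pred["word"])
```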
---
tags:
- generated_from_trainer
model-index:
- name: outputs
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# outputs

This model is a fine-tuned version of [csebuetnlp/banglabert](https://huggingface.co/csebuetnlp/banglabert) on an unknown dataset.
It achieves the following results on the evaluation set (a sketch of how the averaged metrics can be recomputed follows the list):
- Loss: 0.2073
- 5 Err Precision: 0.0
- 5 Err Recall: 0.0
- 5 Err F1: 0.0
- 5 Err Number: 34
- Precision: 0.3586
- Recall: 0.2192
- F1: 0.2721
- Number: 9934
- Err Precision: 0.0
- Err Recall: 0.0
- Err F1: 0.0
- Err Number: 285
- Egin Err Precision: 0.9184
- Egin Err Recall: 0.0400
- Egin Err F1: 0.0766
- Egin Err Number: 1126
- El Err Precision: 0.8718
- El Err Recall: 0.1478
- El Err F1: 0.2528
- El Err Number: 1380
- Nd Err Precision: 0.7453
- Nd Err Recall: 0.1995
- Nd Err F1: 0.3147
- Nd Err Number: 1188
- Ne Word Err Precision: 0.6677
- Ne Word Err Recall: 0.5206
- Ne Word Err F1: 0.5850
- Ne Word Err Number: 8247
- Unc Insert Err Precision: 1.0
- Unc Insert Err Recall: 0.0011
- Unc Insert Err F1: 0.0022
- Unc Insert Err Number: 902
- Micro Avg Precision: 0.5309
- Micro Avg Recall: 0.3013
- Micro Avg F1: 0.3844
- Micro Avg Number: 23096
- Macro Avg Precision: 0.5702
- Macro Avg Recall: 0.1410
- Macro Avg F1: 0.1879
- Macro Avg Number: 23096
- Weighted Avg Precision: 0.5669
- Weighted Avg Recall: 0.3013
- Weighted Avg F1: 0.3611
- Weighted Avg Number: 23096
- Overall Accuracy: 0.9419
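The per-label scores and the micro/macro/weighted averages above follow the shape of entity-level evaluation as produced by `seqeval`'s `classification_report`. That library is an assumption here (the card does not state how the metrics were computed), and the tag names in the sketch below are invented for illustration, since the actual label set is not documented.

```python
from seqeval.metrics import classification_report

# Hypothetical illustration: IOB-style tags invented for the example;
# the real label set of Shadman-Rohan/outputs is not documented in this card.
y_true = [["O", "B-ne_word_err", "I-ne_word_err", "O"],
          ["B-el_err", "O", "O"]]
y_pred = [["O", "B-ne_word_err", "O", "O"],
          ["B-el_err", "O", "B-nd_err"]]

# Prints per-label precision/recall/F1/support plus the micro, macro,
# and weighted averages that appear in the list above.
print(classification_report(y_true, y_pred, digits=4))
```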
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure
### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
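For readers who want to reproduce this configuration, the hyperparameters above map onto a Hugging Face `TrainingArguments` object roughly as follows. This is a hedged sketch: the output directory and anything not listed above are assumptions, not values taken from this card.

```python
from transformers import TrainingArguments

# Sketch of a TrainingArguments configuration matching the listed hyperparameters.
# output_dir is an assumption; everything else mirrors the list above.
training_args = TrainingArguments(
    output_dir="outputs",              # assumed
    learning_rate=2e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=2,     # effective train batch size: 16 * 2 = 32
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=1.0,
    adam_beta1=0.9,                    # Adam betas/epsilon as listed above
    adam_beta2=0.999,
    adam_epsilon=1e-08,
)
```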
### Training results

| Training Loss | Epoch | Step | Validation Loss | 5 Err Precision | 5 Err Recall | 5 Err F1 | 5 Err Number | Precision | Recall | F1 | Number | Err Precision | Err Recall | Err F1 | Err Number | Egin Err Precision | Egin Err Recall | Egin Err F1 | Egin Err Number | El Err Precision | El Err Recall | El Err F1 | El Err Number | Nd Err Precision | Nd Err Recall | Nd Err F1 | Nd Err Number | Ne Word Err Precision | Ne Word Err Recall | Ne Word Err F1 | Ne Word Err Number | Unc Insert Err Precision | Unc Insert Err Recall | Unc Insert Err F1 | Unc Insert Err Number | Micro Avg Precision | Micro Avg Recall | Micro Avg F1 | Micro Avg Number | Macro Avg Precision | Macro Avg Recall | Macro Avg F1 | Macro Avg Number | Weighted Avg Precision | Weighted Avg Recall | Weighted Avg F1 | Weighted Avg Number | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------------:|:------------:|:--------:|:------------:|:-----------:|:--------:|:------:|:--------:|:--------------:|:-----------:|:-------:|:-----------:|:------------------:|:---------------:|:-----------:|:---------------:|:----------------:|:-------------:|:---------:|:-------------:|:----------------:|:-------------:|:---------:|:-------------:|:---------------------:|:------------------:|:--------------:|:------------------:|:------------------------:|:---------------------:|:-----------------:|:---------------------:|:-------------------:|:----------------:|:------------:|:----------------:|:-------------------:|:----------------:|:------------:|:----------------:|:----------------------:|:-------------------:|:---------------:|:-------------------:|:----------------:|
| 0.3677 | 1.0 | 575 | 0.2073 | 0.0 | 0.0 | 0.0 | 34 | 0.3586 | 0.2192 | 0.2721 | 9934 | 0.0 | 0.0 | 0.0 | 285 | 0.9184 | 0.0400 | 0.0766 | 1126 | 0.8718 | 0.1478 | 0.2528 | 1380 | 0.7453 | 0.1995 | 0.3147 | 1188 | 0.6677 | 0.5206 | 0.5850 | 8247 | 1.0 | 0.0011 | 0.0022 | 902 | 0.5309 | 0.3013 | 0.3844 | 23096 | 0.5702 | 0.1410 | 0.1879 | 23096 | 0.5669 | 0.3013 | 0.3611 | 23096 | 0.9419 |
### Framework versions

- Transformers 4.25.1
- Pytorch 1.13.1+cu117
- Datasets 2.9.0
- Tokenizers 0.13.2