How to use NLPC-UOM/SinBERT-small with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="NLPC-UOM/SinBERT-small")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("NLPC-UOM/SinBERT-small")
model = AutoModelForMaskedLM.from_pretrained("NLPC-UOM/SinBERT-small")
```
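As a quick sanity check, the pipeline can fill a masked token in a Sinhala sentence. A minimal sketch: the example sentence is illustrative only, and `pipe.tokenizer.mask_token` is used instead of hard-coding a mask string so the tokenizer's actual mask token is inserted:

```python
from transformers import pipeline

pipe = pipeline("fill-mask", model="NLPC-UOM/SinBERT-small")

# Build a masked Sinhala sentence (the sentence itself is just an example)
text = f"මම පොත {pipe.tokenizer.mask_token}."

# Print the top 3 candidate fillers with their scores
for prediction in pipe(text, top_k=3):
    print(prediction["token_str"], prediction["score"])
```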
This is the SinBERT-small model. SinBERT models are pretrained on a large Sinhala monolingual corpus (sin-cc-15M) using RoBERTa. If you use this model, please cite BERTifying Sinhala - A Comprehensive Analysis of Pre-trained Language Models for Sinhala Text Classification, LREC 2022.
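Since the cited paper evaluates these models on Sinhala text classification, a common next step is to fine-tune SinBERT-small with a classification head. A minimal sketch, not the paper's exact setup: the label count and the Sinhala input sentences below are placeholders for your own task and data:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("NLPC-UOM/SinBERT-small")

# Adds a randomly initialized classification head on top of the encoder;
# num_labels=2 is a placeholder for the number of classes in your task
model = AutoModelForSequenceClassification.from_pretrained(
    "NLPC-UOM/SinBERT-small",
    num_labels=2,
)

# Tokenize a small batch of Sinhala sentences (examples are illustrative)
batch = tokenizer(
    ["උදාහරණ වාක්‍යයක්", "තවත් වාක්‍යයක්"],
    padding=True,
    truncation=True,
    return_tensors="pt",
)

with torch.no_grad():
    outputs = model(**batch)

print(outputs.logits.shape)  # (batch_size, num_labels)
```

The head is untrained at this point, so the logits are meaningless until the model is fine-tuned on labeled data.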