Fine-tuned on the rajpurkar/squad_v2 dataset.
How to use aware-ai/mobilebert-squadv2 with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("question-answering", model="aware-ai/mobilebert-squadv2")
```

```python
# Load the model directly
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("aware-ai/mobilebert-squadv2")
model = AutoModelForQuestionAnswering.from_pretrained("aware-ai/mobilebert-squadv2")
```

This model is based on the MobileBERT architecture, making it suitable for mobile and other resource-constrained devices.
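Under the hood, an extractive QA model like this one emits one start logit and one end logit per context token, and the answer is the span that maximizes `start_logit + end_logit` with start ≤ end. A minimal sketch of that decoding step, using made-up toy logits and tokens rather than a real model call:

```python
# Sketch of extractive-QA span decoding. The logits below are
# illustrative values, not output from mobilebert-squadv2.

def best_span(start_logits, end_logits, max_len=15):
    """Return (start, end) maximizing start_logits[s] + end_logits[e], s <= e."""
    best = (0, 0)
    best_score = float("-inf")
    for s, s_logit in enumerate(start_logits):
        for e in range(s, min(s + max_len, len(end_logits))):
            score = s_logit + end_logits[e]
            if score > best_score:
                best_score = score
                best = (s, e)
    return best

tokens = ["The", "capital", "of", "France", "is", "Paris", "."]
start_logits = [0.1, 0.0, 0.0, 0.2, 0.1, 3.0, 0.0]  # toy values
end_logits   = [0.0, 0.1, 0.0, 0.1, 0.0, 2.5, 0.3]

s, e = best_span(start_logits, end_logits)
print(" ".join(tokens[s:e + 1]))  # → Paris
```

The real pipeline additionally maps token indices back to character offsets in the original context string.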
Using the transformers library, first load the model and tokenizer:

```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

model_name = "aware-ai/mobilebert-squadv2"
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
Then use the question-answering pipeline:

```python
qa_engine = pipeline("question-answering", model=model, tokenizer=tokenizer)

QA_input = {
    "question": "your question?",
    "context": "your context ................",
}
res = qa_engine(QA_input)
```
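The pipeline returns a dict with the extracted `answer`, a confidence `score`, and character offsets `start`/`end` into the context. A small sketch of consuming that result, using a hand-written dict in the pipeline's documented output format (the values are illustrative, not from a real model run):

```python
# Illustrative result in the question-answering pipeline's output format;
# not an actual prediction from mobilebert-squadv2.
res = {"score": 0.91, "start": 25, "end": 30, "answer": "Paris"}

context = "The capital of France is Paris, a city on the Seine."

# start/end index into the original context string.
assert context[res["start"]:res["end"]] == res["answer"]

# Since this model targets SQuAD v2, some questions are unanswerable;
# a low score can be treated as "no answer" (threshold is a design choice).
if res["score"] < 0.5:
    print("Low confidence; the question may be unanswerable.")
else:
    print(f"Answer: {res['answer']} (confidence {res['score']:.2f})")
```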