Instructions to use SNOWTEAM/MedicoLLM with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use SNOWTEAM/MedicoLLM with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="SNOWTEAM/MedicoLLM")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("SNOWTEAM/MedicoLLM")
model = AutoModelForCausalLM.from_pretrained("SNOWTEAM/MedicoLLM")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use SNOWTEAM/MedicoLLM with vLLM:
Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "SNOWTEAM/MedicoLLM"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "SNOWTEAM/MedicoLLM",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker:

```shell
docker model run hf.co/SNOWTEAM/MedicoLLM
```
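The curl request above can also be issued from Python. The sketch below only assembles the OpenAI-compatible request; actually sending it (the commented-out lines) assumes a vLLM server is already running on `localhost:8000` as in the snippet above. The `build_chat_request` helper is illustrative, not part of vLLM.

```python
import json
import urllib.request

def build_chat_request(model: str, prompt: str, base_url: str = "http://localhost:8000"):
    """Assemble an OpenAI-compatible chat-completions request for a local server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("SNOWTEAM/MedicoLLM", "What is the capital of France?")

# To actually send the request (requires a running server):
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

Because the server speaks the OpenAI chat-completions protocol, the same request shape works unchanged against the SGLang server below (only the port differs).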
- SGLang
How to use SNOWTEAM/MedicoLLM with SGLang:
Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "SNOWTEAM/MedicoLLM" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "SNOWTEAM/MedicoLLM",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker images:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "SNOWTEAM/MedicoLLM" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "SNOWTEAM/MedicoLLM",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

- Docker Model Runner
How to use SNOWTEAM/MedicoLLM with Docker Model Runner:
```shell
docker model run hf.co/SNOWTEAM/MedicoLLM
```
Overview
SNOWTEAM/medico-mistral is a specialized language model designed for medical applications. It is a transformer-based, decoder-only model built on Mistral 8x7B and fine-tuned through global (full-parameter) adjustments on a corpus of 4.8 million research papers and 10,000 medical books.
Model Description
- Base Model: Mistral 8x7B Instruct
- Model type: Transformer-based decoder-only language model
- Language(s) (NLP): English
Training Dataset
- Dataset Size: 4.8 million research papers and 10,000 medical books.
- Data Diversity: Includes a wide range of medical fields, ensuring comprehensive coverage of medical knowledge.
- Preprocessing:
- Books: We collected 10,000 textbooks from sources such as Open Library, university libraries, and reputable publishers, covering a wide range of medical specialties. For preprocessing, we extracted text content from PDF files and then cleaned the data through de-duplication and content filtering, removing extraneous elements such as URLs, author lists, tables of contents, references, citations, and other superfluous information.
- Papers: Academic papers are a valuable knowledge resource due to their high-quality, cutting-edge medical information. We started with the S2ORC (Lo et al. 2020) dataset, which contains 81.1 million English-language academic papers. From this, we selected biomedical-related papers based on the presence of corresponding PubMed Central (PMC) IDs. This resulted in approximately 4.8 million biomedical papers, totaling over 75 billion tokens.
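As a rough illustration of the cleaning and selection steps described above, the sketch below strips URLs and trailing reference sections, de-duplicates documents, and keeps only records carrying a PMC ID. All function names, record fields, and cleaning rules here are hypothetical; the actual preprocessing pipeline is not published in this card.

```python
import re

def clean_text(text: str) -> str:
    """Remove URLs and a trailing references section (hypothetical cleaning rules)."""
    text = re.sub(r"https?://\S+", "", text)                       # drop URLs
    text = re.split(r"\n(?:References|Bibliography)\b", text)[0]   # drop trailing references
    return re.sub(r"[ \t]+", " ", text).strip()

def select_biomedical(records):
    """Keep de-duplicated records that carry a PubMed Central (PMC) ID."""
    seen, kept = set(), []
    for rec in records:
        if not rec.get("pmc_id"):      # selection criterion: presence of a PMC ID
            continue
        body = clean_text(rec["text"])
        if body in seen:               # de-duplication on the cleaned text
            continue
        seen.add(body)
        kept.append({**rec, "text": body})
    return kept

corpus = [
    {"pmc_id": "PMC123", "text": "Aspirin reduces fever. https://example.org\nReferences\n[1] ..."},
    {"pmc_id": "PMC123", "text": "Aspirin reduces fever. https://example.org\nReferences\n[1] ..."},
    {"pmc_id": None, "text": "Not a biomedical paper."},
]
print(select_biomedical(corpus))
```

Run over toy records like these, the duplicate and the paper without a PMC ID are dropped, leaving a single cleaned document.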
Model Sources
- Repository: https://huggingface.co/SNOWTEAM/medico-mistral
- Paper: [More Information Needed]
- Demo: [More Information Needed]
How to Get Started with the Model
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "SNOWTEAM/medico-mistral"
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained(model_path)

input_text = ""  # your medical prompt here
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(model.device)
output_ids = model.generate(
    input_ids=input_ids,
    max_new_tokens=300,
    pad_token_id=tokenizer.eos_token_id,
)
# Decode only the newly generated tokens, skipping the prompt
output_text = tokenizer.batch_decode(
    output_ids[:, input_ids.shape[1]:], skip_special_tokens=True
)[0]
print(output_text)
```
Training Details
Training Hyperparameters
- Training regime: [More Information Needed]
Speeds, Sizes, Times
[More Information Needed]
Evaluation
Testing Data, Factors & Metrics
Testing Data
[More Information Needed]
Factors
[More Information Needed]
Metrics
[More Information Needed]
Results
[More Information Needed]
Summary
Citation
BibTeX:
[More Information Needed]
APA:
[More Information Needed]
Model Card Authors
[More Information Needed]
Model Card Contact
[More Information Needed]