Instructions to use bugdaryan/Code-Llama-2-13B-instruct-text2sql with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use bugdaryan/Code-Llama-2-13B-instruct-text2sql with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="bugdaryan/Code-Llama-2-13B-instruct-text2sql")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("bugdaryan/Code-Llama-2-13B-instruct-text2sql")
model = AutoModelForCausalLM.from_pretrained("bugdaryan/Code-Llama-2-13B-instruct-text2sql")
```
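Since the model is fine-tuned for text-to-SQL, the pipeline is typically given a database schema and a natural-language question. A minimal sketch follows; the schema, question, and prompt layout here are illustrative placeholders, so check the model card for the exact instruction format the checkpoint was trained on.

```python
# Illustrative text-to-SQL generation sketch. The prompt layout is an assumption;
# see the model card for the exact instruction template.
from transformers import pipeline

pipe = pipeline("text-generation", model="bugdaryan/Code-Llama-2-13B-instruct-text2sql")

schema = "CREATE TABLE employees (id INT, name TEXT, salary INT, department TEXT);"
question = "List the names of employees in the Sales department earning more than 50000."

prompt = (
    "[INST] Write a SQL query to answer the question given the database schema.\n"
    f"Schema: {schema}\n"
    f"Question: {question} [/INST]"
)

result = pipe(prompt, max_new_tokens=128, do_sample=False)
print(result[0]["generated_text"])
```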
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use bugdaryan/Code-Llama-2-13B-instruct-text2sql with vLLM:
Install from pip and serve the model
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "bugdaryan/Code-Llama-2-13B-instruct-text2sql"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "bugdaryan/Code-Llama-2-13B-instruct-text2sql",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```
Use Docker
```bash
docker model run hf.co/bugdaryan/Code-Llama-2-13B-instruct-text2sql
```
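Whether the server is started from pip or Docker, it exposes an OpenAI-compatible API, so it can also be called from Python. A minimal sketch, assuming the `openai` client package is installed and the server is listening on the default local port:

```python
# Minimal sketch: query the local vLLM server through its OpenAI-compatible API.
# Assumes `pip install openai` and a server running on localhost:8000.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # key is ignored by a local server

response = client.completions.create(
    model="bugdaryan/Code-Llama-2-13B-instruct-text2sql",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(response.choices[0].text)
```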
- SGLang
How to use bugdaryan/Code-Llama-2-13B-instruct-text2sql with SGLang:
Install from pip and serve the model
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "bugdaryan/Code-Llama-2-13B-instruct-text2sql" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "bugdaryan/Code-Llama-2-13B-instruct-text2sql",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```
Use Docker images
```bash
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "bugdaryan/Code-Llama-2-13B-instruct-text2sql" \
        --host 0.0.0.0 \
        --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "bugdaryan/Code-Llama-2-13B-instruct-text2sql",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```
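The SGLang server speaks the same OpenAI-compatible completions API on port 30000, so the curl call above maps directly to Python. A small sketch using the `requests` package (the prompt is just a placeholder):

```python
# Minimal sketch: call the SGLang server's OpenAI-compatible completions endpoint.
# Assumes `pip install requests` and a server listening on localhost:30000.
import requests

payload = {
    "model": "bugdaryan/Code-Llama-2-13B-instruct-text2sql",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5,
}
response = requests.post("http://localhost:30000/v1/completions", json=payload)
response.raise_for_status()
print(response.json()["choices"][0]["text"])
```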
- Docker Model Runner
How to use bugdaryan/Code-Llama-2-13B-instruct-text2sql with Docker Model Runner:
```bash
docker model run hf.co/bugdaryan/Code-Llama-2-13B-instruct-text2sql
```
Context length supported and benchmarking with other models?
#2
by Kshitizkhandelwal - opened
What is the context length supported by the model, given that the schema can be relatively large? Also, how does the model compare with other text-to-SQL models like GPT-4 or text-davinci-003?
Hello,
Thank you for your inquiry. Expanding context-length support and conducting a detailed comparison against models like GPT-4 and text-davinci-003 are both on our to-do list. We appreciate your patience as we work on these improvements.
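In the meantime, one way to check what sequence length the checkpoint is configured for is to read `max_position_embeddings` from the model config. A quick sketch; the value reflects the base Code Llama position-embedding setting, not a benchmarked quality limit for long schemas.

```python
# Read the configured maximum sequence length from the checkpoint's config.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("bugdaryan/Code-Llama-2-13B-instruct-text2sql")
print(config.max_position_embeddings)  # position-embedding limit inherited from the base model
```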
bugdaryan changed discussion status to closed