---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen2-VL-2B-Instruct
pipeline_tag: image-text-to-text
library_name: transformers
tags:
- reasoner
- r1
- exp
- diagram
- math
- theorem
- text-generation-inference
---
Instructions to use prithivMLmods/Open-R1-Mini-Experimental with libraries, inference providers, notebooks, and local apps. The sections below show how to get started.
- Libraries
- Transformers
How to use prithivMLmods/Open-R1-Mini-Experimental with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="prithivMLmods/Open-R1-Mini-Experimental")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"}
        ]
    },
]
pipe(text=messages)
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("prithivMLmods/Open-R1-Mini-Experimental")
model = AutoModelForImageTextToText.from_pretrained("prithivMLmods/Open-R1-Mini-Experimental")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"}
        ]
    },
]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use prithivMLmods/Open-R1-Mini-Experimental with vLLM:
Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "prithivMLmods/Open-R1-Mini-Experimental"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "prithivMLmods/Open-R1-Mini-Experimental",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "Describe this image in one sentence."
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
            }
          }
        ]
      }
    ]
  }'
```

Use Docker:

```shell
docker model run hf.co/prithivMLmods/Open-R1-Mini-Experimental
```
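Because the vLLM server exposes an OpenAI-compatible API, it can also be called from the `openai` Python client instead of curl. A minimal sketch, assuming a default local deployment on port 8000 (the `api_key` value is a dummy, since no key is configured):

```python
from openai import OpenAI

# Point the client at the local vLLM server; any non-empty key works when auth is not configured.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="prithivMLmods/Open-R1-Mini-Experimental",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
                    },
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```

The same call works against the SGLang server below by changing `base_url` to port 30000.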
- SGLang
How to use prithivMLmods/Open-R1-Mini-Experimental with SGLang:
Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "prithivMLmods/Open-R1-Mini-Experimental" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "prithivMLmods/Open-R1-Mini-Experimental",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "Describe this image in one sentence."
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
            }
          }
        ]
      }
    ]
  }'
```

Use Docker images:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "prithivMLmods/Open-R1-Mini-Experimental" \
    --host 0.0.0.0 \
    --port 30000
```

Once the container is running, call it with the same curl request shown above.

- Docker Model Runner
How to use prithivMLmods/Open-R1-Mini-Experimental with Docker Model Runner:
```shell
docker model run hf.co/prithivMLmods/Open-R1-Mini-Experimental
```
|  | |
| > [!WARNING] | |
| > **Note:** This model contains artifacts and may perform poorly in some cases. | |
| # **Open-R1-Mini-Experimental** | |
| The **Open-R1-Mini-Experimental** model is a fine-tuned version of Qwen2-VL-2B-Instruct, specifically designed for reasoning tasks, context reasoning, and multi-modal understanding based on the **R1 reasoning logits data**. This model integrates a conversational approach with deep reasoning capabilities to handle complex multi-modal tasks efficiently. | |
# **Key Enhancements**

* **Advanced Contextual Reasoning**: Open-R1-Mini-Experimental is optimized for reasoning tasks, leveraging R1 reasoning logits data to strengthen logical inference and decision-making.
* **Understanding images of various resolutions and aspect ratios**: The model performs strongly on visual understanding benchmarks, including MathVista, DocVQA, RealWorldQA, and MTVQA.
* **Long-Context Video Understanding**: Capable of processing and reasoning over videos of 20 minutes or more for high-quality video-based question answering, content creation, and dialogue (see the sketch after this list).
* **Device Integration**: With strong reasoning and decision-making abilities, the model can be integrated into mobile devices, robots, and automation systems for real-time operation on both visual and textual input.
* **Multilingual Support**: Understands text inside images across many languages, including English, Chinese, Japanese, Korean, Arabic, Vietnamese, and most European languages.
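A minimal sketch of video question answering, assuming the `model`, `processor`, and `process_vision_info` are set up as in the **How to Use** section below; the video path is a placeholder, and the `fps` key controls frame sampling in `qwen_vl_utils`:

```python
# Placeholder local video path; qwen_vl_utils also accepts a list of frame images.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "video", "video": "file:///path/to/video.mp4", "fps": 1.0},
            {"type": "text", "text": "Summarize the key events in this video."},
        ],
    }
]

text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs, padding=True, return_tensors="pt"
).to(model.device)
generated_ids = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens, dropping the echoed prompt.
print(processor.batch_decode(
    generated_ids[:, inputs.input_ids.shape[1]:], skip_special_tokens=True
)[0])
```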
# **Sample Inference**

| Example | Image |
|---------|-------|
| **Example 1** |  |
| **Example 2** |  |
| **Example 3** |  |
| **Example 4** |  |
| **Example 5** |  |

**Demo:** https://huggingface.co/prithivMLmods/Open-R1-Mini-Experimental/blob/main/open-r1-reasoner-doc-py/open-r1-exp.ipynb
# **How to Use**

First, define the reasoning instruction used to prompt the model:

```python
instruction = "Analyze the provided image and the associated problem statement. Carefully consider the geometric relationships and mathematical principles involved. Provide a step-by-step solution to the problem, ensuring that each step is logically derived from the previous one. Conclude with the correct answer, clearly labeled."
```
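To apply this instruction, pass it as the text part of the user message in the example below; for instance (the image URL is a placeholder for your own problem diagram):

```python
# Hypothetical diagram URL; substitute the geometry problem you want solved.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "https://example.com/geometry-problem.png"},
            {"type": "text", "text": instruction},
        ],
    }
]
```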
```python
import torch
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info

# Load the model with automatic device placement
model = Qwen2VLForConditionalGeneration.from_pretrained(
    "prithivMLmods/Open-R1-Mini-Experimental", torch_dtype="auto", device_map="auto"
)

# Recommended: enable flash_attention_2 for better performance in multi-image and video tasks
# model = Qwen2VLForConditionalGeneration.from_pretrained(
#     "prithivMLmods/Open-R1-Mini-Experimental",
#     torch_dtype=torch.bfloat16,
#     attn_implementation="flash_attention_2",
#     device_map="auto",
# )

# Load processor
processor = AutoProcessor.from_pretrained("prithivMLmods/Open-R1-Mini-Experimental")

# Adjust the visual token range to trade accuracy against memory usage
# min_pixels = 256*28*28
# max_pixels = 1280*28*28
# processor = AutoProcessor.from_pretrained(
#     "prithivMLmods/Open-R1-Mini-Experimental", min_pixels=min_pixels, max_pixels=max_pixels
# )

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
            },
            {"type": "text", "text": "Analyze the context of this image."},
        ],
    }
]

# Prepare inputs
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")

# Inference: generate, then decode only the newly generated tokens
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
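The same code path handles several images per turn (the flash_attention_2 variant above is recommended for such workloads). A minimal sketch of the only part that changes; the file paths are placeholders, and the template, vision-processing, and generation steps above are reused unchanged:

```python
# Multi-image input: list several image blocks before the text prompt.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "file:///path/to/image1.jpg"},
            {"type": "image", "image": "file:///path/to/image2.jpg"},
            {"type": "text", "text": "What are the similarities between these images?"},
        ],
    }
]
# Then rerun apply_chat_template, process_vision_info, and model.generate exactly as above.
```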
# **Buffer Handling**

When streaming output, accumulate the partial text and strip the `<|im_end|>` end-of-turn marker before yielding, so downstream consumers never see the special token:

```python
buffer = ""
for new_text in streamer:
    buffer += new_text
    buffer = buffer.replace("<|im_end|>", "")
    yield buffer
```
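The loop above assumes a `streamer` is already attached to generation. A minimal sketch of the full wiring, using `transformers.TextIteratorStreamer` with the `model`, `processor`, and `inputs` from the previous section (the function name is illustrative):

```python
from threading import Thread
from transformers import TextIteratorStreamer

def stream_response(inputs):
    # Skip the echoed prompt; keep special tokens so the <|im_end|> filter above applies.
    streamer = TextIteratorStreamer(processor.tokenizer, skip_prompt=True)
    # model.generate() blocks, so run it on a background thread while we consume the stream.
    thread = Thread(
        target=model.generate,
        kwargs=dict(**inputs, streamer=streamer, max_new_tokens=128),
    )
    thread.start()
    buffer = ""
    for new_text in streamer:
        buffer += new_text
        buffer = buffer.replace("<|im_end|>", "")
        yield buffer
```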
# **Key Features**

1. **Advanced Contextual Reasoning:**
   - Optimized for **context-aware problem-solving** and **logical inference** based on R1 reasoning logits.

2. **Optical Character Recognition (OCR):**
   - Extracts and processes text from images with high accuracy.

3. **Mathematical and Logical Problem Solving:**
   - Supports complex reasoning and outputs equations in **LaTeX format** (see the example below).

4. **Conversational and Multi-Turn Interaction:**
   - Handles **multi-turn dialogue** with enhanced memory retention and response coherence.

5. **Multi-Modal Inputs & Outputs:**
   - Processes images, text, and combined inputs to generate insightful analyses.

6. **Secure and Efficient Model Loading:**
   - Uses **Safetensors** for faster and more secure handling of model weights.
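As a quick illustration of the OCR and LaTeX-output features, reusing the high-level `pipe` from the Transformers quick-start at the top (the image URL is a placeholder):

```python
# Placeholder URL; substitute a scanned page or a photo of handwritten math.
out = pipe(
    text=[
        {
            "role": "user",
            "content": [
                {"type": "image", "url": "https://example.com/handwritten-theorem.png"},
                {"type": "text", "text": "Transcribe this page and typeset every equation in LaTeX."},
            ],
        }
    ],
    max_new_tokens=512,
)
print(out[0]["generated_text"])
```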