- Avoid Re-encoding Reference Images in Vision-LLM When Comparison Criteria Are User-Defined (#18, opened 7 days ago by yaroslav332)
- What is the limit of images for each prompt? (#17, opened 3 months ago by rockyislearning)
- Add pipeline_tag [2] (#16, opened 4 months ago by multimodalart)
- waste of time [2] (#15, opened 5 months ago by kingriel)
- How much VRAM is needed to run this model? 8xRTX3090 = 192GB isn't enough to run the context. [1] (#12, opened 6 months ago by kq)
- Output is messy code with the demo code (#11, opened 6 months ago by kk3dmax)
- No output_router_logits / load_balancing_loss_func for Qwen3VLMoE? (#10, opened 6 months ago by plcedoz38)
- 🚀 Best Practices for Evaluating the Qwen3-VL Model [❤️ 1] (#9, opened 6 months ago by Yunxz)
- Adding Offline and Online Inference via vLLM Code (#8, opened 6 months ago by hrithiksagar-bgen)
- FP8/4-bit version please [➕ 4 · 5] (#7, opened 7 months ago by zhanghx0905)
- 32B version? [➕ 9 · 1] (#5, opened 7 months ago by sanak)
- Adding `transformers` library tag (#3, opened 7 months ago by ariG23498)
- Citation section lacks Qwen3-VL-specific citation [👍 1] (#1, opened 7 months ago by jaxchang)