How to use gogoduan/MatPlotCode with Transformers:
```python
# Use a pipeline as a high-level helper
# Warning: Pipeline type "image-to-text" is no longer supported in transformers v5.
# You must load the model directly (see below) or downgrade to v4.x with:
#   pip install "transformers<5.0.0"
from transformers import pipeline

pipe = pipeline("image-to-text", model="gogoduan/MatPlotCode")

# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("gogoduan/MatPlotCode")
model = AutoModelForImageTextToText.from_pretrained("gogoduan/MatPlotCode")
```
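As a concrete illustration, a minimal generation sketch is shown below. It is not taken from this repository: the chat message layout follows the standard Qwen2.5-VL convention, and the image path, prompt wording, and generation settings are placeholder assumptions.

```python
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForImageTextToText

model_id = "gogoduan/MatPlotCode"
processor = AutoProcessor.from_pretrained(model_id)
# device_map="auto" requires accelerate; drop it to load on a single device.
model = AutoModelForImageTextToText.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# A mathematical figure to convert into matplotlib code (placeholder path).
image = Image.open("figure.png")

# Standard Qwen2.5-VL-style chat message; the prompt text is an assumption.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Convert this figure into matplotlib code."},
        ],
    }
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=1024)

# Decode only the newly generated tokens (the model's matplotlib code).
generated_code = processor.batch_decode(
    output_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0]
print(generated_code)
```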
---
license: mit
library_name: transformers
pipeline_tag: image-to-text
datasets:
- gogoduan/Math-VR-train
- gogoduan/Math-VR-bench
language:
- en
- zh
tags:
- mathematical-reasoning
- visual-reasoning
- code-generation
- qwen2.5-vl
---
CodePlot-CoT: Mathematical Visual Reasoning by Thinking with Code-Driven Images
This repository contains the MatPlotCode model, a core component of the paper CodePlot-CoT: Mathematical Visual Reasoning by Thinking with Code-Driven Images. MatPlotCode is a state-of-the-art image-to-code converter capable of translating mathematical figures into `matplotlib` code.
The model is built upon the Qwen2.5-VL architecture and is compatible with the transformers library.
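Because the model's output is plain `matplotlib` code, one simple way to inspect a prediction is to execute it and compare the rendered figure with the input. The snippet below is an illustrative sketch of that idea, not the paper's evaluation pipeline; the string stands in for real model output.

```python
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt

# Placeholder standing in for code produced by MatPlotCode.
generated_code = "plt.plot([0, 1, 2, 3], [0, 1, 4, 9])\nplt.title('y = x^2 (sampled)')"

# Executing model-generated code is inherently unsafe; sandbox it in real use.
exec(generated_code, {"plt": plt})
plt.savefig("reconstructed_figure.png")
plt.close("all")
```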
For more details, please refer to the project homepage and the GitHub repository.
Citation
If you find this work helpful, please consider citing our paper:
@article{duan2025codeplot,
title={CodePlot-CoT: Mathematical Visual Reasoning by Thinking with Code-Driven Images},
author={Duan, Chengqi and Sun, Kaiyue and Fang, Rongyao and Zhang, Manyuan and Feng, Yan and Luo, Ying and Liu, Yufang and Wang, Ke and Pei, Peng and Cai, Xunliang and others},
journal={arXiv preprint arXiv:2510.11718},
year={2025}
}