---
license: mit
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: eval
    path: data/eval-*
  - split: test
    path: data/test-*
dataset_info:
  features:
  - name: repo
    dtype: string
  - name: fix_commit
    dtype: string
  - name: buggy_commit
    dtype: string
  - name: message
    dtype: string
  - name: files
    list:
    - name: path
      dtype: string
    - name: patch
      dtype: string
    - name: additions
      dtype: int64
    - name: deletions
      dtype: int64
  - name: language
    dtype: string
  - name: timestamp
    dtype: timestamp[s]
  splits:
  - name: train
    num_bytes: 1449699066
    num_examples: 103096
  - name: eval
    num_bytes: 25215784
    num_examples: 3000
  - name: test
    num_bytes: 25215784
    num_examples: 3000
  download_size: 507195718
  dataset_size: 1500130634
task_categories:
- text-generation
language:
- en
tags:
- code
pretty_name: GitHub issues dataset
size_categories:
- 100K<n<1M
---
# GitHub Pull Request Bug–Fix Dataset

A curated, high-signal dataset of real-world software bugs and fixes collected from 25 popular open-source GitHub repositories.
Each entry corresponds to a single pull request (PR) and pairs contextual metadata with the exact code changes (unified diffs) that fixed the bug.
This dataset is designed for:
- Automated program repair
- Bug-fix patch generation
- LLM-based code and debugging agents
- Empirical software engineering research
## Data collection methodology
Data was collected using the GitHub REST API and post-processed into a structured format.
To maintain quality and usefulness:
- Only merged pull requests were included
- Each PR must represent a bug fix or correctness change
- Fixes are linked to both the buggy commit and the fix commit
- Code changes are stored as unified diffs at the file level
- Low-signal PRs (refactors, formatting-only changes, discussions) were filtered out
- PRs without meaningful code changes were excluded
Each dataset row represents one bug–fix PR.
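The filtering criteria above can be sketched as a simple classifier. This is an illustrative assumption only: the card does not publish its exact filtering rules, so the keyword patterns and the `looks_like_bug_fix` helper below are hypothetical, not the pipeline actually used.

```python
import re

# Assumed keyword heuristics; the real pipeline's rules are not published.
BUG_KEYWORDS = re.compile(r"\b(fix(es|ed)?|bug|crash|regression|incorrect|error)\b", re.IGNORECASE)
NOISE_KEYWORDS = re.compile(r"\b(refactor|formatting|style only)\b", re.IGNORECASE)

def looks_like_bug_fix(pr: dict) -> bool:
    """Keyword heuristic mimicking the 'bug fix or correctness change' filter."""
    if not pr.get("merged", False):   # only merged PRs are included
        return False
    if not pr.get("files"):           # PRs without meaningful code changes are excluded
        return False
    text = f"{pr.get('title', '')} {pr.get('body', '')}"
    if NOISE_KEYWORDS.search(text):   # drop refactors and formatting-only changes
        return False
    return bool(BUG_KEYWORDS.search(text))

pr = {"merged": True, "title": "Fix crash when parsing empty config",
      "body": "", "files": [{"filename": "parser.c"}]}
print(looks_like_bug_fix(pr))  # True
```

A real collector would additionally inspect the diff itself (e.g. require at least one non-comment code change), but the keyword pass captures the spirit of the filters listed above.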
## Dataset schema
Each entry follows the schema below:
```json
{
  "repo": "owner/repository",
  "pr_number": 12345,
  "title": "Short description of the fix",
  "body": "Pull request description and context",
  "buggy_commit": "abcdef123456...",
  "fix_commit": "fedcba654321...",
  "buggy_distance": 12,
  "confidence": "medium",
  "files": [
    {
      "filename": "path/to/file.ext",
      "patch": "unified diff representing the fix",
      "additions": 10,
      "deletions": 2
    }
  ]
}
```
| Field | Description |
|---|---|
| `repo` | GitHub repository containing the pull request |
| `pr_number` | Pull request number |
| `title` | Pull request title |
| `body` | Pull request description and discussion context |
| `buggy_commit` | Commit introducing or containing the bug |
| `fix_commit` | Commit that fixes the bug |
| `buggy_distance` | Number of commits between the buggy and fix commits |
| `confidence` | Heuristic confidence level of bug–fix correctness |
| `files` | List of files modified by the fix |
| `files[].filename` | Path to the modified file |
| `files[].patch` | Unified diff containing the code changes |
| `files[].additions` | Number of lines added |
| `files[].deletions` | Number of lines removed |
## Supported languages
The dataset contains fixes across multiple programming languages, including (but not limited to):
- C / C++
- Python
- JavaScript / TypeScript
- Rust
- Go
- Java
- Assembly (very rare)
Language distribution varies by repository.
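Since each entry carries a `language` field, the per-language distribution can be computed directly. A small sketch with hypothetical rows (the repository names and language values below are placeholders, not actual dataset contents):

```python
from collections import Counter

# Hypothetical rows; real entries follow the schema above.
rows = [
    {"repo": "owner/repo-a", "language": "C"},
    {"repo": "owner/repo-b", "language": "C"},
    {"repo": "owner/repo-b", "language": "Python"},
    {"repo": "owner/repo-c", "language": "Rust"},
]

distribution = Counter(row["language"] for row in rows)
print(distribution.most_common())  # [('C', 2), ('Python', 1), ('Rust', 1)]
```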
## Intended use cases
This dataset is well-suited for:
- Training models to generate patches from real pull request context
- Studying bug-fix patterns across large codebases
- Building autonomous debugging or repair agents
- Research in program repair, code synthesis, and software maintenance
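For the patch-generation use case, one entry maps naturally onto a (prompt, target) pair. A minimal sketch, assuming the schema above; the `to_training_pair` helper and prompt template are illustrative choices, not a prescribed format:

```python
def to_training_pair(entry: dict) -> dict:
    """Turn one bug-fix PR entry into a (prompt, target) pair
    for training a patch-generation model."""
    prompt = (
        f"Repository: {entry['repo']}\n"
        f"Title: {entry['title']}\n"
        f"Description: {entry['body']}\n"
        "Produce a unified diff that fixes the bug."
    )
    # Concatenate per-file diffs into a single target sequence.
    target = "\n".join(f["patch"] for f in entry["files"])
    return {"prompt": prompt, "target": target}

entry = {
    "repo": "owner/repository",
    "title": "Fix off-by-one in pagination",
    "body": "The last page was skipped.",
    "files": [{"filename": "pager.py",
               "patch": "@@ -10 +10 @@\n-    n - 1\n+    n"}],
}
pair = to_training_pair(entry)
print(pair["target"].startswith("@@"))  # True
```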
It is not intended for:
- Pull request classification or triage
- Sentiment analysis
## Limitations

- The dataset reflects real-world noise from GitHub pull requests
- Buggy commit identification is heuristic and may be imperfect
- Some fixes involve refactoring or design changes rather than minimal patches
- No guarantee that fixes represent optimal or best-practice solutions
> ⚠️ **Warning:** This dataset currently includes pull requests from 10 of the planned 25 repositories, totaling approximately 24,000 entries. The final release is expected to contain ~50,000 entries and occupy ~2 GB of storage.