Dataset viewer preview. Columns: submission_id (int64), leaderboard_id (int64), problem_name (string), user_id (string), user_name (string), code_id (int64), file_name (string), submission_time (timestamp[us], UTC), status (string), score (float64), passed (bool), mode (string), runner (string), code (string). The sample rows are amd-mixed-mla (leaderboard 765) submissions executed on the MI355X runner.
KernelBot Competition Data
This dataset contains GPU kernel submissions from the KernelBot competition platform. Submissions are optimized GPU kernels written for specific hardware targets.
Data Files
AMD MI300 Submissions
| File | Description |
|---|---|
| submissions.parquet | All AMD competition submissions |
| successful_submissions.parquet | AMD submissions that passed correctness tests |
| deduplicated_submissions.parquet | AMD submissions deduplicated by (user, code) |
| deduplicated_successful_submissions.parquet | Deduplicated passing AMD submissions |
AMD Problems: fp8-gemm, moe (mixture of experts), mla-decode, all2all, gemm+reducescatter, allgather+gemm, mxfp4-mm, moe-mxfp4, mixed-mla
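The relationship between the four AMD files can be sketched with pandas. The snippet below uses a tiny synthetic frame in place of the real parquet (in practice you would call `pd.read_parquet("submissions.parquet")`), and assumes the column names shown in the dataset viewer preview (`user_id`, `code`, `passed`):

```python
import pandas as pd

# Synthetic stand-in for the real export; column names follow the
# dataset viewer preview and are assumptions, not guarantees.
df = pd.DataFrame({
    "user_id": ["u1", "u1", "u2"],
    "code": ["kernel_a", "kernel_a", "kernel_b"],
    "passed": [True, True, False],
    "score": [0.000024, 0.000022, 0.000030],
})

# successful_submissions.parquet ~ rows that passed correctness tests
successful = df[df["passed"]]

# deduplicated_submissions.parquet ~ one row per (user, code) pair
deduped = df.drop_duplicates(subset=["user_id", "code"])
```

The deduplicated-successful file combines both operations: filter on `passed`, then drop duplicate (user, code) pairs.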
AMD 1.1M Competition
| File | Size | Description |
|---|---|---|
| amd_1_1m_competition_submissions.parquet | ~699 MB | Deduplicated submissions with code for amd-mxfp4-mm (763), amd-moe-mxfp4 (764), and amd-mixed-mla (765) |
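Because this export mixes three leaderboards in one file, a typical first step is filtering on `leaderboard_id`. A minimal sketch, again with synthetic rows standing in for the real parquet:

```python
import pandas as pd

# Synthetic stand-in for
# pd.read_parquet("amd_1_1m_competition_submissions.parquet")
df = pd.DataFrame({
    "leaderboard_id": [763, 764, 765, 765],
    "problem_name": ["amd-mxfp4-mm", "amd-moe-mxfp4",
                     "amd-mixed-mla", "amd-mixed-mla"],
    "score": [0.00005, 0.00004, 0.000022, 0.000024],
})

# Keep only amd-mixed-mla (leaderboard 765) submissions
mla = df[df["leaderboard_id"] == 765]
```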
Trimul
| File | Size | Description |
|---|---|---|
| trimul_submissions.parquet | ~120 MB | Deduplicated submissions with code for trimul (leaderboard 496) |
trimul is a separate mixed-GPU problem and is not grouped with the AMD competition exports.
Helion B200_Nebius
| File | Size | Description |
|---|---|---|
| helion_b200_nebius_submissions.parquet | ~4 MB | Deduplicated submissions with code for causal_conv1d (766), fp8_quant (767), gated_deltanet_chunk_fwd_h (768), gated_deltanet_chunk_fwd_o (769), and gated_deltanet_recompute_w_u (770) |
Measurement note: these problems were run on B200_Nebius, and the measurements for this problem set are brittle. Treat leaderboard scores from this export with extra caution.
NVIDIA Blackwell NVFP4 Submissions
| File | Size | Description |
|---|---|---|
| nvidia_nvfp4_submissions.parquet | ~1.4 GB | NVFP4 submissions deduplicated by (user, code), with full code content |
NVFP4 Problems: gemv (leaderboard 595), gemm (597), dual_gemm (598), modal_dual_gemm (697), group_gemm (730)
Note on Dual GEMM: There are two variants of the dual_gemm problem. Midway through the competition, on-prem hardware measurements became unreliable, so a second leaderboard was created on Modal infrastructure. The Modal measurements (leaderboard 697, modal_nvfp4_dual_gemm) are more trustworthy.
Note: Scores are execution time in seconds. Lower is better.
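Since winning kernels run in the tens of microseconds, a microsecond view is often easier to read than raw seconds. A small sketch (synthetic data; column names assumed from the viewer preview):

```python
import pandas as pd

# Synthetic rows standing in for the real parquet files
df = pd.DataFrame({
    "user_name": ["a", "a", "b"],
    "score": [0.000024, 0.000022, 0.000027],  # seconds
})

# Convert seconds to microseconds for readability
df["score_us"] = df["score"] * 1e6

# Lower is better: best (minimum) time per user
best = df.groupby("user_name")["score"].min()
```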
Helper Scripts
- analyze_submissions.py - Python functions for analyzing submissions
- skills.md - Documentation for data processing workflows
Quick Start
```python
from analyze_submissions import load_submissions, top_contestants, author_progression

# Load NVIDIA NVFP4 data
df = load_submissions()

# Get the top 20 contestants for a problem
leaders = top_contestants(df, problem_name='nvfp4_gemm', n=20)

# See a user's progression over time
progression = author_progression(df, user_name='username', problem_name='nvfp4_gemm')
```
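If analyze_submissions.py is not at hand, a pandas-only stand-in for `top_contestants` might look like the sketch below. This is a hypothetical reimplementation, and the real helper's signature and behavior may differ:

```python
import pandas as pd

def top_contestants_sketch(df, problem_name, n=20):
    """Best (lowest) score per user for one problem.

    A guess at what top_contestants computes; the actual helper in
    analyze_submissions.py may differ.
    """
    sub = df[(df["problem_name"] == problem_name) & df["passed"]]
    return (sub.groupby("user_name")["score"].min()
               .sort_values()
               .head(n))

# Synthetic demo data in place of the real parquet
df = pd.DataFrame({
    "problem_name": ["nvfp4_gemm"] * 3,
    "user_name": ["a", "b", "a"],
    "passed": [True, True, True],
    "score": [0.0003, 0.0002, 0.00025],
})
leaders = top_contestants_sketch(df, "nvfp4_gemm", n=2)
```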
Learn More
- Competition platform: gpumode.com
- Reference kernels and problem specs: github.com/gpu-mode/reference-kernels
License
This dataset is licensed under the Creative Commons Attribution 4.0 International License (CC BY 4.0).
You are free to share and adapt the material for any purpose, even commercially, provided you give appropriate credit. If for whatever reason you cannot give appropriate credit, please reach out to marksaroufim@gmail.com to discuss other arrangements.
Attribution: Please cite GPU Mode and link to this dataset. For academic papers, use the citation below.
Citation
If you use this dataset in your work, please cite:
```bibtex
@inproceedings{
kernelbot2025,
title={KernelBot: A Competition Platform for Writing Heterogeneous {GPU} Code},
author={Alex L Zhang and Matej Sirovatka and Erik Schultheis and Benjamin Horowitz and Mark Saroufim},
booktitle={Championing Open-source DEvelopment in ML Workshop @ ICML25},
year={2025},
url={https://openreview.net/forum?id=bq9U4dmuyJ}
}
```