Language Decoded | Multilingual Code Dataset
Note (2026-04-21): Phase 2 configs have been renamed to
`phase-2-the-stack-v1-*` and moved under `data/phase-2-the-stack-v1/`. Existing `load_dataset(...)` calls using the short `condition-*` names will no longer resolve. The short `condition-*` namespace is reserved for Phase 3 builds (streamed from The Stack v2, coming soon). See the Usage section below for updated config names.
Multilingual Python code datasets for the Language Decoded project (part of Cohere's Tiny Aya Expedition), investigating whether code's reasoning benefit for language models is language-dependent or structure-dependent.
Research Question
Does fine-tuning on non-English code (Python with translated keywords) improve multilingual reasoning as much as English code does?
Prior work (Aryabumi et al., 2024 -- "To Code or Not to Code") demonstrated that including English code in pre-training data improves downstream reasoning performance by approximately 8%. However, that study only tested English code. This dataset enables the natural follow-up: does the reasoning benefit come from the structure of code, or from the language of its keywords?
Dataset Description
This dataset provides filtered, quality-controlled Python source code in multiple configurations: the original English, three keyword-swapped variants (Chinese, Spanish, Urdu), a blended native+transpiled mix, and strictly native Chinese code. The source data is drawn from bigcode/the-stack-dedup (Python subset), filtered for quality using the following criteria:
- AST-valid Python only (must parse without errors)
- Permissive licenses only (MIT, Apache-2.0, BSD, etc.)
- 10--1000 lines of code
- Minimum 21 GitHub stars
- No autogenerated files
- SHA-256 deduplication
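As a rough illustration, the AST-validity, line-count, and deduplication checks above can be sketched with the standard library (a minimal sketch, not the project's actual pipeline; the helper names are hypothetical, and the license, star-count, and autogeneration checks are omitted):

```python
import ast
import hashlib

def passes_filters(source: str, min_lines: int = 10, max_lines: int = 1000) -> bool:
    """Hypothetical quality gate: keep only AST-valid Python in the line band."""
    try:
        ast.parse(source)  # must parse without errors
    except SyntaxError:
        return False
    n_lines = len(source.splitlines())
    return min_lines <= n_lines <= max_lines

def dedup_key(source: str) -> str:
    """SHA-256 of the raw bytes, usable as a deduplication key."""
    return hashlib.sha256(source.encode("utf-8")).hexdigest()

# A syntactically broken file is rejected regardless of length.
print(passes_filters("def broken(:\n" * 20))  # False
```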
Keyword-swapped variants are produced using Legesher v0.7.3, which translates Python reserved words (37 keywords, 72 builtins, 66 exceptions) into the target language while preserving code structure and semantics.
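To give a flavor of what keyword swapping does, here is an illustrative sketch using Python's standard `tokenize` module; this is not Legesher's actual implementation, and the tiny Spanish mapping below is a hypothetical subset of its real tables:

```python
import io
import tokenize

# Hypothetical mini-mapping; Legesher's real tables cover 37 keywords,
# 72 builtins, and 66 exceptions per language.
ES_KEYWORDS = {"def": "definir", "return": "retornar", "if": "si", "else": "sino"}

def swap_keywords(source: str, mapping: dict) -> str:
    """Replace reserved words token-by-token; identifiers, strings,
    and comments pass through untouched."""
    out = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.NAME and tok.string in mapping:
            out.append((tok.type, mapping[tok.string]))
        else:
            out.append((tok.type, tok.string))
    return tokenize.untokenize(out)

print(swap_keywords("def add(a, b):\n    return a + b\n", ES_KEYWORDS))
```

Because only NAME tokens that match the mapping are rewritten, the function name `add` and the operator structure survive intact, mirroring the "preserving code structure and semantics" property described above.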
Available Configs
Each condition is available in two sizes: -32k (full filtered corpus, ~31.8k train + ~3.5k validation) and -5k (stratified subset, 4.5k train + 500 validation). The -5k subsets are used for QLoRA fine-tuning on consumer GPUs.
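Stratified subsetting of this kind can be sketched as follows; the actual stratification variable is not documented here, so the grouping key in the example is an assumption for illustration:

```python
import random

def stratified_subset(rows, key, n_total, seed=42):
    """Hypothetical helper: sample n_total rows, proportionally per stratum."""
    rng = random.Random(seed)
    strata = {}
    for row in rows:
        strata.setdefault(row[key], []).append(row)
    subset = []
    for _, items in sorted(strata.items()):
        # Each stratum contributes in proportion to its share of the corpus.
        k = round(n_total * len(items) / len(rows))
        subset.extend(rng.sample(items, min(k, len(items))))
    return subset

rows = [{"license": "mit"}] * 80 + [{"license": "apache-2.0"}] * 20
picked = stratified_subset(rows, "license", 10)
print(len(picked))  # 10
```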
| Config | Condition | Language | Description | Train | Val |
|---|---|---|---|---|---|
| `condition-1-en-32k` | 1 (control) | English | Unmodified filtered Python from The Stack Dedup | 31,818 | 3,536 |
| `condition-1-en-5k` | 1 (control) | English | Stratified 5k subset of condition-1 | 4,500 | 500 |
| `condition-2-zh-32k` | 2 | Chinese | Keyword-swapped Python via Legesher v0.7.3 | 31,818 | 3,536 |
| `condition-2-zh-5k` | 2 | Chinese | Stratified 5k subset of condition-2-zh | 4,500 | 500 |
| `condition-2-es-32k` | 2 | Spanish | Keyword-swapped Python via Legesher v0.7.3 | 31,818 | 3,536 |
| `condition-2-es-5k` | 2 | Spanish | Stratified 5k subset of condition-2-es | 4,500 | 500 |
| `condition-2-ur-32k` | 2 | Urdu | Keyword-swapped Python via Legesher v0.7.3 | 31,818 | 3,536 |
| `condition-2-ur-5k` | 2 | Urdu | Stratified 5k subset of condition-2-ur | 4,500 | 500 |
| `condition-3-zh-5k` | 3 | Chinese | Blended: 3,486 native Chinese code + 1,514 transpiled Python | 4,500 | 500 |
| `condition-4-zh-5k` | 4 | Chinese | Strictly native Chinese code (no transpiled code) | 6,553 | 729 |
Schema
Conditions 1--2
Used by: condition-1-en-*, condition-2-zh-*, condition-2-es-*, condition-2-ur-*
| Column | Type | Description |
|---|---|---|
| `code` | string | Python source code. For condition-2 configs, this is the transpiled (keyword-swapped) version. For condition-1, this is the original English source. |
| `code_en` | string | Original English Python source code. Identical to `code` for condition-1-en. |
| `language` | string | ISO 639-1 language code: `en`, `ur`, `zh`, or `es`. |
| `file_path` | string | Original file path in The Stack Dedup. |
| `license` | string | SPDX license identifier for the source file. |
| `token_count` | int64 | Token count computed using the CohereLabs/tiny-aya-base tokenizer. |
Condition 3
Used by: condition-3-zh-5k
Condition 3 blends native Chinese code with transpiled code and adds a source_type column to distinguish them. code_en is populated for transpiled rows (keeping them in sync with conditions 1--2) but null for native code rows, which have no English equivalent.
| Column | Type | Description |
|---|---|---|
| `file_path` | string | File identifier (native filename or transpiled file path) |
| `code` | string | The code content (native or transpiled) |
| `code_en` | string/null | English original: populated for transpiled rows, null for native code rows |
| `language` | string | ISO 639-1 language code (`zh`) |
| `license` | string | Source license (SPDX identifier, `UNKNOWN`, or `varies`) |
| `token_count` | int64 | Token count computed using the CohereLabs/tiny-aya-base tokenizer |
| `source_type` | string | `"native"` (natively Chinese-authored) or `"transpiled"` (keyword-swapped English) |
Condition 4
Used by: condition-4-zh-5k
Condition 4 contains strictly native Chinese code -- code written by developers who think and code in Chinese. This uses the same schema as the language-decoded-community dataset rather than the transpilation schema, since there is no English original to reference.
| Column | Type | Description |
|---|---|---|
| `filename` | string | Original filename |
| `content` | string | The code content |
| `extension` | string | File extension (e.g., `.py`, `.c`, `.wenyan`) |
| `source` | string | Data source (e.g., `thestack`, `wenyan`, `program_in_chinese`) |
| `quality_tier` | string | Quality rating: A (highest) through D (lowest) |
| `sha256` | string | SHA-256 hash for deduplication |
| `byte_size` | int64 | File size in bytes |
| `total_lines` | int64 | Total line count |
| `cjk_ratio` | float64 | Ratio of CJK characters in the file |
| `has_cjk` | bool | Whether the file contains CJK characters |
Experimental Conditions
The Language Decoded experiment uses a ladder of conditions to isolate the mechanism behind code's reasoning benefit:
| Condition | Name | Purpose |
|---|---|---|
| Baseline | No fine-tuning | Establishes the performance floor |
| Condition 1 | English code | Tests whether code fine-tuning helps at all (replicates Aryabumi et al.) |
| Condition 2 | Keyword-swapped code | Tests whether the language of keywords matters for the reasoning benefit |
| Condition 3 | Mixed native sources | Tests whether diverse native-language code adds value beyond keyword swapping |
| Condition 4 | Strictly native code | Tests whether code authored by native speakers carries unique signal beyond transpilation |
The Experimental Ladder
- Baseline --> 1: Does code help at all?
- 1 --> 2: Does the language of keywords matter?
- 2 --> 3: Does diversity of native-language sources add value beyond keyword swap?
- 3 --> 4: Does code written in the cultural context of a language carry something that transpiled+mixed can't?
Usage
```python
from datasets import load_dataset

# Load full-size English code (control)
ds = load_dataset("legesher/language-decoded-data", "condition-1-en-32k")

# Load 5k subset (for QLoRA fine-tuning)
ds = load_dataset("legesher/language-decoded-data", "condition-1-en-5k")

# Load keyword-swapped variants
ds = load_dataset("legesher/language-decoded-data", "condition-2-zh-5k")
ds = load_dataset("legesher/language-decoded-data", "condition-2-es-5k")
ds = load_dataset("legesher/language-decoded-data", "condition-2-ur-5k")

# Load blended native + transpiled (condition 3)
ds = load_dataset("legesher/language-decoded-data", "condition-3-zh-5k")

# Load strictly native code (condition 4)
ds = load_dataset("legesher/language-decoded-data", "condition-4-zh-5k")

# Access splits
train = ds["train"]
val = ds["validation"]

# Filter condition-3 by source type
native_only = train.filter(lambda x: x["source_type"] == "native")
```
Technical Details
| Parameter | Value |
|---|---|
| Source dataset | bigcode/the-stack-dedup (Python subset) |
| Transpilation tool | Legesher v0.7.3 (legesher-core, legesher-i18n) |
| Tokenizer | CohereLabs/tiny-aya-base |
| Base model | CohereLabs/tiny-aya-base (3.35B params) |
| Train/validation split | 90% / 10% (seed 42) |
| File format | Parquet (snappy compression) |
| Filtering criteria | AST-valid, permissive licenses, 10--1000 lines, min 21 GitHub stars, no autogenerated files, SHA-256 deduplication |
Limitations
- Source bias: The Stack Dedup skews toward popular, well-starred GitHub repositories, which may not represent the full diversity of Python code in the wild.
- Keyword-only transpilation: Legesher translates Python reserved words (keywords, builtins, exceptions) but leaves comments, docstrings, string literals, and variable/function names in their original language (typically English). This means condition-2 code is a hybrid of translated keywords and English identifiers.
- Token count variation: Transpiled code may have different token counts than the English original due to multi-byte characters (especially for Chinese and Urdu), even though the code structure is identical.
- Single programming language: Currently limited to Python. Results may not generalize to other programming languages.
- Condition 4 scope: Native Chinese code is limited to publicly available sources (The Stack, Wenyan, Program-in-Chinese, Qi, Mulan) and may not represent the full spectrum of Chinese-language programming.
Citation
```bibtex
@misc{language-decoded-2026,
  title={Language Decoded: Investigating Language-Dependent vs. Structure-Dependent Reasoning Benefits of Code},
  author={Madison Edgar and Saad Ahmed Bazaz and Tom Sherborne and Rashik Shahjahan and Khojasteh Mirza and Sarah Jawaid and Rafay Mustafa and Sohaib Ahmed Bazaz},
  year={2026},
  publisher={Hugging Face},
  url={https://huggingface.co/datasets/legesher/language-decoded-data}
}
```
Links
- Legesher on GitHub
- Tiny Aya Expedition
- bigcode/the-stack-dedup
- Language Decoded Community (native code)
- Language Decoded Experiments (tracking)
- Language Decoded LoRA (model hub)
License
Apache 2.0