Instructions for using tiny-random/step3 with libraries and local apps.
## Libraries

### Transformers

How to use tiny-random/step3 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="tiny-random/step3", trust_remote_code=True)
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load the model directly
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("tiny-random/step3", trust_remote_code=True, dtype="auto")
```
## Local Apps

### vLLM
How to use tiny-random/step3 with vLLM:
Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "tiny-random/step3"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "tiny-random/step3",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```
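The same endpoint can also be called from Python. Below is a minimal sketch using the `openai` client package (an assumption: it requires `pip install openai`; the API key is a placeholder, since the local server does not validate it):

```python
from openai import OpenAI

# Point the client at the local vLLM server (OpenAI-compatible API).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="tiny-random/step3",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```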
### SGLang
How to use tiny-random/step3 with SGLang:
Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "tiny-random/step3" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "tiny-random/step3",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```

Use Docker images:

```shell
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
    --model-path "tiny-random/step3" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "tiny-random/step3",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```

### Docker Model Runner
How to use tiny-random/step3 with Docker Model Runner:
```shell
docker model run hf.co/tiny-random/step3
```
---
library_name: transformers
pipeline_tag: text-generation
inference: true
widget:
- text: Hello!
  example_title: Hello world
  group: Python
base_model:
- stepfun-ai/step3
---
This tiny model is for debugging. It is randomly initialized, with its config adapted from [stepfun-ai/step3](https://huggingface.co/stepfun-ai/step3).

Note: for a vLLM-compatible version, see [tiny-random/step3-vllm](https://huggingface.co/tiny-random/step3-vllm).
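Since the weights are random, the model is only good for smoke-testing code paths. A quick way to confirm you loaded this tiny variant rather than the full stepfun-ai/step3 is to count parameters; a minimal sketch (the exact figure follows from the shrunken config in the creation script below):

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("tiny-random/step3", trust_remote_code=True)
n_params = sum(p.numel() for p in model.parameters())
# Expect a few million parameters, orders of magnitude below the source model.
print(f"{n_params / 1e6:.2f}M parameters")
```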
### Example usage:

```python
import torch
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "tiny-random/step3"
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda", torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg"},
            {"type": "text", "text": "What's in this picture?"}
        ]
    },
]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt"
).to(model.device)
generate_ids = model.generate(**inputs, max_new_tokens=32, do_sample=False)
decoded = processor.decode(generate_ids[0, inputs["input_ids"].shape[-1]:], skip_special_tokens=False)
print(decoded)
```
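Because the weights are random, the decoded text will be gibberish; what matters for debugging is that preprocessing, generation, and decoding all run. A shape check is therefore a more useful assertion than the text itself. A minimal sketch, reusing `model` and `inputs` from the block above and assuming the remote-code forward returns standard causal-LM logits:

```python
import torch

with torch.no_grad():
    out = model(**inputs)

# Logits should cover every input position and the full vocabulary
# (128815 tokens, per the printed model below).
assert out.logits.shape[:2] == inputs["input_ids"].shape
assert out.logits.shape[-1] == 128815
print(out.logits.shape)
```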
### Code used to create this repo:
```python
import json
from pathlib import Path

import accelerate
import torch
from huggingface_hub import file_exists, hf_hub_download
from transformers import (
    AutoConfig,
    AutoModelForCausalLM,
    AutoProcessor,
    AutoTokenizer,
    GenerationConfig,
    set_seed,
)

source_model_id = "stepfun-ai/step3"
save_folder = "/tmp/tiny-random/step3"

processor = AutoProcessor.from_pretrained(source_model_id, trust_remote_code=True)
processor.save_pretrained(save_folder)

def rewrite_automap(filepath: str, source_model_id: str, overrides: dict = None):
    # Prefix each auto_map entry with the source repo id so that
    # trust_remote_code fetches the modeling code from there.
    with open(filepath, 'r', encoding='utf-8') as f:
        config = json.load(f)
    for k, v in config['auto_map'].items():
        v = v.split('--')[-1]
        config['auto_map'][k] = f'{source_model_id}--{v}'
    if overrides is not None:
        config.update(overrides)
    with open(filepath, 'w', encoding='utf-8') as f:
        json.dump(config, f, indent=2)

rewrite_automap(f'{save_folder}/processor_config.json', source_model_id)
rewrite_automap(f'{save_folder}/tokenizer_config.json', source_model_id)

# Shrink the source config to a toy size while keeping its structure.
with open(hf_hub_download(source_model_id, filename='config.json', repo_type='model'), 'r', encoding='utf-8') as f:
    config_json = json.load(f)
for k, v in config_json['auto_map'].items():
    config_json['auto_map'][k] = f'{source_model_id}--{v}'
config_json['architectures'] = ["Step3VLForConditionalGeneration"]
config_json['text_config'].update({
    "hidden_size": 32,
    "intermediate_size": 64,
    "num_hidden_layers": 2,
    "num_attention_heads": 2,
    "num_attention_groups": 1,
    "head_dim": 256,
    "share_q_dim": 512,
    "moe_layers_enum": "1",
    "moe_num_experts": 8,
    "moe_top_k": 3,
    "moe_intermediate_size": 64,
    "share_expert_dim": 64,
    # "tie_word_embeddings": True,
})
config_json['vision_config'].update({
    "hidden_size": 64,
    "output_hidden_size": 64,
    "intermediate_size": 128,
    "num_hidden_layers": 2,
    "num_attention_heads": 2
})
with open(f"{save_folder}/config.json", "w", encoding='utf-8') as f:
    json.dump(config_json, f, indent=2)

config = AutoConfig.from_pretrained(
    save_folder,
    trust_remote_code=True,
)
print(config)
# key_mapping = {
#     "^vision_model": "model.vision_model",
#     r"^model(?!\.(language_model|vision_model))": "model.language_model",
#     "vit_downsampler": "model.vit_downsampler",
#     "vit_downsampler2": "model.vit_downsampler2",
#     "vit_large_projector": "model.vit_large_projector",
# }
automap = config_json['auto_map']
torch.set_default_dtype(torch.bfloat16)
model = AutoModelForCausalLM.from_config(config, trust_remote_code=True)
torch.set_default_dtype(torch.float32)
if file_exists(filename="generation_config.json", repo_id=source_model_id, repo_type='model'):
    model.generation_config = GenerationConfig.from_pretrained(
        source_model_id, trust_remote_code=True,
    )

# Randomly initialize all weights with a fixed seed.
set_seed(42)
model = model.cpu()  # CPU is more stable than GPU for reproducible random init across machines
with torch.no_grad():
    for name, p in sorted(model.named_parameters()):
        torch.nn.init.normal_(p, 0, 0.2)
        print(name, p.shape)
model.save_pretrained(save_folder)
print(model)
rewrite_automap(f'{save_folder}/config.json', source_model_id)

# Delete local copies of the remote-code files; auto_map now resolves them from the source repo.
for python_file in Path(save_folder).glob('*.py'):
    python_file.unlink()
```
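After saving, a quick round trip verifies that the repo loads through `trust_remote_code` (the modeling files are now fetched from the source repo via `auto_map`) and that a forward pass runs. A minimal sketch using the `save_folder` defined above, and assuming the remote-code forward accepts text-only inputs:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

save_folder = "/tmp/tiny-random/step3"
tokenizer = AutoTokenizer.from_pretrained(save_folder, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(save_folder, trust_remote_code=True)

# Text-only forward pass as a smoke test of the saved checkpoint.
inputs = tokenizer("Hello", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)
print(out.logits.shape)
```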
### Printing the model:

```text
Step3vForConditionalGeneration(
  (model): Step3vModel(
    (vision_model): StepCLIPVisionTransformer(
      (embeddings): StepCLIPVisionEmbeddings(
        (patch_embedding): Conv2d(3, 64, kernel_size=(14, 14), stride=(14, 14))
        (position_embedding): Embedding(2705, 64)
      )
      (transformer): StepCLIPEncoder(
        (layers): ModuleList(
          (0-1): 2 x StepCLIPEncoderLayer(
            (layer_norm1): LayerNorm((64,), eps=1e-06, elementwise_affine=True)
            (layer_norm2): LayerNorm((64,), eps=1e-06, elementwise_affine=True)
            (self_attn): StepCLIPAttention(
              (qkv_proj): Linear(in_features=64, out_features=192, bias=True)
              (out_proj): Linear(in_features=64, out_features=64, bias=True)
            )
            (mlp): StepCLIPMLP(
              (fc1): Linear(in_features=64, out_features=128, bias=True)
              (act): QuickGELUActivation()
              (fc2): Linear(in_features=128, out_features=64, bias=True)
            )
          )
        )
      )
    )
    (language_model): Step3Model(
      (embed_tokens): Embedding(128815, 32)
      (layers): ModuleList(
        (0): Step3vDecoderLayer(
          (self_attn): Step3vAttention(
            (q_proj): Linear(in_features=32, out_features=512, bias=False)
            (k_proj): Linear(in_features=32, out_features=256, bias=False)
            (v_proj): Linear(in_features=32, out_features=256, bias=False)
            (o_proj): Linear(in_features=512, out_features=32, bias=False)
            (inter_norm): Step3vRMSNorm((512,), eps=1e-05)
            (wq): Linear(in_features=512, out_features=512, bias=False)
          )
          (mlp): Step3vMLP(
            (gate_proj): Linear(in_features=32, out_features=64, bias=False)
            (up_proj): Linear(in_features=32, out_features=64, bias=False)
            (down_proj): Linear(in_features=64, out_features=32, bias=False)
            (act_fn): SiLU()
          )
          (input_layernorm): Step3vRMSNorm((32,), eps=1e-05)
          (post_attention_layernorm): Step3vRMSNorm((32,), eps=1e-05)
        )
        (1): Step3vDecoderLayer(
          (self_attn): Step3vAttention(
            (q_proj): Linear(in_features=32, out_features=512, bias=False)
            (k_proj): Linear(in_features=32, out_features=256, bias=False)
            (v_proj): Linear(in_features=32, out_features=256, bias=False)
            (o_proj): Linear(in_features=512, out_features=32, bias=False)
            (inter_norm): Step3vRMSNorm((512,), eps=1e-05)
            (wq): Linear(in_features=512, out_features=512, bias=False)
          )
          (moe): Step3vMoEMLP(
            (gate): Linear(in_features=32, out_features=8, bias=False)
            (up_proj): MoELinear()
            (gate_proj): MoELinear()
            (down_proj): MoELinear()
            (act_fn): SiLU()
          )
          (share_expert): Step3vMLP(
            (gate_proj): Linear(in_features=32, out_features=64, bias=False)
            (up_proj): Linear(in_features=32, out_features=64, bias=False)
            (down_proj): Linear(in_features=64, out_features=32, bias=False)
            (act_fn): SiLU()
          )
          (input_layernorm): Step3vRMSNorm((32,), eps=1e-05)
          (post_attention_layernorm): Step3vRMSNorm((32,), eps=1e-05)
        )
      )
      (norm): Step3vRMSNorm((32,), eps=1e-05)
      (rotary_emb): Step3vRotaryEmbedding()
    )
    (vit_downsampler): Conv2d(64, 64, kernel_size=(2, 2), stride=(2, 2))
    (vit_downsampler2): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
    (vit_large_projector): Linear(in_features=128, out_features=32, bias=False)
  )
  (lm_head): Linear(in_features=32, out_features=128815, bias=False)
)
```