---
license: apache-2.0
---
<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/6310d1226f21f539e52b9d77/7zw2xPWGRpFrwK62CkFqJ.mp4"></video>
_Prompt: "A young man walks alone by the seaside."_
__Text2Bricks__ is a fine-tuned [Open Sora](https://github.com/hpcaitech/Open-Sora) model that generates short, toy-brick-style stop-motion animations.

`text2bricks-360p-64f` is fine-tuned to generate outputs of up to 360p resolution and 64 frames.

__You can play with videos created by the model in this [game](https://albrick-hitchblock.s3.amazonaws.com/index.html).__

It was trained on Lambda's [1-Click Clusters](https://lambdalabs.com/service/gpu-cloud/1-click-clusters) in roughly 1,000 H100 GPU hours. See this [Weights & Biases report](https://api.wandb.ai/links/lambdalabs/2cbrtx45) for details.

Additional code and data processing steps can be found in this [tutorial](https://github.com/LambdaLabsML/Open-Sora/blob/lambda_bricks/README.md).
# Usage
Use [Lambda's fork](https://github.com/LambdaLabsML/Open-Sora/tree/lambda_bricks) of Open-Sora.
```
python scripts/inference.py \
    configs/opensora-v1-1/inference/text2bricks-360p-64f.py \
    --prompt "A young man walks alone by the seaside." \
    --num-frames 64
```
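To render several clips in one run, the same command can be looped over a list of prompts. This is a minimal sketch assuming the fork is checked out and its dependencies are installed; the `RUN` dry-run switch and the second prompt are illustrative additions, not part of the fork's CLI:

```shell
# Loop the documented inference command over multiple prompts.
# RUN defaults to "echo" (dry run: prints each command); set RUN= to execute.
RUN="${RUN:-echo}"

prompts="A young man walks alone by the seaside.
A brick spaceship lifts off from a gray launch pad."

printf '%s\n' "$prompts" | while IFS= read -r prompt; do
  $RUN python scripts/inference.py \
    configs/opensora-v1-1/inference/text2bricks-360p-64f.py \
    --prompt "$prompt" \
    --num-frames 64
done
```

Each 64-frame generation occupies the GPU for the full inference pass, so running prompts sequentially like this avoids contending for memory.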