Instructions to use NO8D/ExpressionControl with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use NO8D/ExpressionControl with Diffusers:

```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.2-klein-9B",
    dtype=torch.bfloat16,
    device_map="cuda",
)
pipe.load_lora_weights("NO8D/ExpressionControl")

prompt = "-"
image = pipe(prompt).images[0]
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
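The Diffusers snippet above hard-codes `device_map="cuda"` and notes in a comment that Apple devices should use `"mps"` instead. As a minimal sketch, a small helper (hypothetical, not part of Diffusers) can make that choice automatically:

```python
import torch

def pick_device() -> str:
    """Pick the best available torch device string.

    Hypothetical helper, not part of Diffusers: prefers CUDA,
    then Apple's Metal backend ("mps"), then falls back to CPU.
    """
    if torch.cuda.is_available():
        return "cuda"
    if torch.backends.mps.is_available():
        return "mps"
    return "cpu"

# e.g. DiffusionPipeline.from_pretrained(..., device_map=pick_device())
print(pick_device())
```

This keeps the same snippet working across CUDA, Apple Silicon, and CPU-only machines without editing the code by hand.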
Expression Control

Model description
Klein does not have LoRAs like the PixelSmile set for QIE2511, so I decided to train a set myself! Like the QIE2511 version, this collection allows fine‑grained, linear control over facial expressions while maintaining high character consistency.
- happy
- coming soon

Continuously updating……
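The linear control described above can be exercised through Diffusers' LoRA adapter scaling. Below is a minimal sketch, assuming the LoRA is loaded under the adapter name `"expression"` and that intensity is expressed on a 0–1 scale; both the name and the scale convention are assumptions for illustration, not part of the model card:

```python
def expression_weight(intensity: float, max_scale: float = 1.0) -> float:
    """Map a 0-1 expression intensity to a LoRA adapter weight.

    Hypothetical helper: clamps the input to [0, 1] and scales it
    linearly, mirroring the card's claim of linear expression control.
    """
    clamped = min(max(intensity, 0.0), 1.0)
    return clamped * max_scale

# Usage with a loaded pipeline (adapter name is illustrative):
# pipe.load_lora_weights("NO8D/ExpressionControl", adapter_name="expression")
# pipe.set_adapters(["expression"], adapter_weights=[expression_weight(0.5)])

print(expression_weight(0.5))  # half-strength expression -> 0.5
print(expression_weight(1.7))  # clamped to full strength -> 1.0
```

Lowering the adapter weight tones the expression down; raising it toward 1.0 strengthens it, while the clamp keeps the value in the range the LoRA was trained for.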
I'm an independent model and workflow developer. If you like my work and want to support independent development, please consider buying me a cup of coffee to keep the motivation going. Thank you very much!

Support 🫡 May the AI-power be with you, see you soon! 🫡
- Downloads last month
- 20
Model tree for NO8D/ExpressionControl
Base model
black-forest-labs/FLUX.2-klein-9B
