Instructions for using peteromallet/Flux-Kontext-InScene with the Diffusers library and other supported tools.
How to use peteromallet/Flux-Kontext-InScene with Diffusers:
```shell
pip install -U diffusers transformers accelerate
```
```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev",
    dtype=torch.bfloat16,
    device_map="cuda",
)
pipe.load_lora_weights("peteromallet/Flux-Kontext-InScene")

prompt = "Turn this cat into a dog"
input_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png")
image = pipe(image=input_image, prompt=prompt).images[0]
```
Curious For More Training Details
#2 · opened by burew
Thanks a lot for your research and dataset!
I'd like to know more about your training setup, if you're willing to share. From what I've gathered so far, the training details include:
- trainer software: ostris/ai-toolkit
- steps: 3250
- epochs: 4
- dataset examples: 394 image pairs
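As a quick cross-check on these figures, the generic relationship between dataset size, batch size, epochs, and steps (assuming no dataset repeats; ai-toolkit's own step accounting may differ) can be sketched as:

```python
import math

def total_steps(num_examples: int, batch_size: int, epochs: int) -> int:
    """Steps needed to see every example `epochs` times at a given batch size."""
    steps_per_epoch = math.ceil(num_examples / batch_size)
    return steps_per_epoch * epochs

# With the figures above: 394 image pairs, 4 epochs
print(total_steps(394, 1, 4))  # -> 1576 at batch size 1
```

At batch size 1 that comes to 1576 steps, well short of the reported 3250, so dataset repeats or a different epoch/step definition in ai-toolkit are presumably involved, which is part of why the remaining settings are worth confirming.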
I'm interested in figuring out the rest, like:
- LoRA rank and alpha
- Learning rate
- Batch size
- Learning rate scheduler
- Guidance scale
- Details of any failed runs you encountered and what you learned from them
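For reference, in ostris/ai-toolkit all of these settings live in a single YAML job config. A rough sketch of where each value would appear (key names follow ai-toolkit's published example configs and may differ between versions; every value marked as a placeholder below is a guess, not the author's actual setting):

```yaml
job: extension
config:
  name: "flux_kontext_inscene"        # hypothetical run name
  process:
    - type: "sd_trainer"
      model:
        name_or_path: "black-forest-labs/FLUX.1-Kontext-dev"
      network:
        type: "lora"
        linear: 16                    # LoRA rank -- unknown, placeholder
        linear_alpha: 16              # alpha -- unknown, placeholder
      train:
        steps: 3250                   # reported above
        batch_size: 1                 # unknown, placeholder
        lr: 1e-4                      # unknown, placeholder
        lr_scheduler: "constant"      # unknown, placeholder
        optimizer: "adamw8bit"        # common ai-toolkit default
      datasets:
        - folder_path: "/path/to/394/image/pairs"
```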
Thank you again for your contribution, I hope to learn more from your experiences.