Instructions for using TheMindExpansionNetwork/questro_LTX with libraries and notebooks.
How to use TheMindExpansionNetwork/questro_LTX with the Diffusers library:

```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "TheMindExpansionNetwork/questro_LTX",
    dtype=torch.bfloat16,
    device_map="cuda",
)
prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

Notebooks:

- Google Colab
- Kaggle
# questro_LTX
This is a fine-tuned version of `ltx-2-19b-dev.safetensors`, trained on custom data.
## Model Details

- Base Model: `ltx-2-19b-dev.safetensors`
- Training Type: LoRA fine-tuning
- Training Steps: 2500
- Learning Rate: 0.0001
- Batch Size: 1
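For intuition on the "LoRA fine-tuning" entry above: LoRA freezes the base weight matrix `W` and learns a low-rank update, so the effective weight is `W + (alpha / rank) * (B @ A)`, where `B` is `d x r` and `A` is `r x k` for a small rank `r`. A toy sketch in plain Python (the matrices and `alpha` here are illustrative numbers, not the model's actual dimensions):

```python
def matmul(X, Y):
    """Naive matrix multiply for small illustrative matrices."""
    return [[sum(X[i][t] * Y[t][j] for t in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def apply_lora(W, A, B, alpha, rank):
    """Effective weight after merging a LoRA: W + (alpha / rank) * (B @ A)."""
    delta = matmul(B, A)
    scale = alpha / rank
    return [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# Frozen 2x2 base weight and a rank-1 update (B: 2x1, A: 1x2).
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [2.0]]
A = [[0.5, 0.5]]
W_eff = apply_lora(W, A, B, alpha=1.0, rank=1)
print(W_eff)  # [[1.5, 0.5], [1.0, 2.0]]
```

Because only the small `A` and `B` matrices are trained, the LoRA checkpoint stays far smaller than the 19B-parameter base model.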
## Sample Outputs
## Usage

This model is designed to be used with the LTX-2 (Lightricks Audio-Video) pipeline.
### Using Trained LoRAs in ComfyUI
To use the trained LoRA in ComfyUI, follow these steps:

- Copy your trained LoRA checkpoint (`.safetensors` file) to the `models/loras` folder in your ComfyUI installation.
- In your ComfyUI workflow:
  - Add the "Load LoRA" node to choose your LoRA file
  - Connect it to the "Load Checkpoint" node to apply the LoRA to the base model
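Before copying the checkpoint, it can be handy to sanity-check it. A `.safetensors` file begins with an 8-byte little-endian header length followed by a JSON header mapping tensor names to dtype, shape, and data offsets, so the tensor names can be listed with the standard library alone. A minimal sketch (the path is a placeholder for your own checkpoint):

```python
import json
import struct

def list_safetensors_keys(path):
    """Read only the JSON header of a .safetensors file and return tensor names."""
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(header_len))
    # "__metadata__" is an optional non-tensor entry in the header.
    return [k for k in header if k != "__metadata__"]

# e.g. list_safetensors_keys("questro_LTX.safetensors")
```

If the key names look like LoRA adapter weights (e.g. containing `lora_A`/`lora_B` or `lora_up`/`lora_down`), the file is ready to drop into `models/loras`.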
You can find reference Text-to-Video (T2V) and Image-to-Video (I2V) workflows in the official LTX-2 repository.
## Example Prompts

Example prompts used during validation:

> Questro is walking in a dystopian universe
## License

This model inherits the license of the base model (`ltx-2-19b-dev.safetensors`).
## Acknowledgments

- Base model: Lightricks
- Trainer: LTX-2
