Instructions to use Lightricks/LTX-Video with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use Lightricks/LTX-Video with Diffusers:

```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import LTXImageToVideoPipeline
from diffusers.utils import load_image, export_to_video

# Image-to-video pipeline for this checkpoint;
# switch "cuda" to "mps" for Apple devices
pipe = LTXImageToVideoPipeline.from_pretrained(
    "Lightricks/LTX-Video", torch_dtype=torch.bfloat16
)
pipe.to("cuda")

prompt = "A man with short gray hair plays a red electric guitar."
image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/guitar-man.png"
)

# Generate the video frames conditioned on the image and prompt
output = pipe(image=image, prompt=prompt).frames[0]
export_to_video(output, "output.mp4")
```

- Inference
- Notebooks
- Google Colab
- Kaggle
SLOW like CogVideoX
The only reason I chose LTX over CogVideo is the speed; with the new 0.9.1 update it's slower than CogVideo now. Why does every update have to be trash?
LTXV 0.9.1 has a decoder that is somewhat larger than 0.9's, which means it needs more VRAM, but in all our measurements it runs with 10% added time or less on machines with sufficient memory.
What setup makes you say it's as slow as a model that was much slower in every setup we've seen in the community?
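For anyone who wants to put numbers behind a speed report like this, a minimal timing helper can make 0.9 vs 0.9.1 comparisons reproducible. This is a generic sketch in plain Python (the `benchmark` helper is hypothetical, not part of LTX-Video or Diffusers):

```python
import time
from statistics import median

def benchmark(fn, *, warmup=1, runs=3):
    """Time a zero-argument callable: run `warmup` untimed passes to
    absorb one-time costs (model compilation, caches), then `runs`
    timed passes, and return the median wall-clock seconds."""
    for _ in range(warmup):
        fn()
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        times.append(time.perf_counter() - start)
    return median(times)
```

Wrapping the pipeline call, e.g. `benchmark(lambda: pipe(image=image, prompt=prompt))`, and reporting the median of several runs (with the warmup excluded) filters out first-run overhead and makes numbers from different machines easier to compare.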
Hello,
Thank you for your feedback! I believe you are referring to the playground environment. The slowdown was caused by a temporary switch to different server hardware. Providing a top-notch user experience is our highest priority, so we've switched back to the original hardware, and the playground is now running fast again.
If you have any further comments or suggestions, please don’t hesitate to reach out!