Instructions to use Remade-AI/Dolly-Effect with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use Remade-AI/Dolly-Effect with Diffusers:
```shell
pip install -U diffusers transformers accelerate
```
```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image, export_to_video

# Load the Wan2.1 image-to-video base model; switch "cuda" to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "Wan-AI/Wan2.1-I2V-14B-480P-Diffusers",
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")

# Apply the Dolly-Effect LoRA weights on top of the base model
pipe.load_lora_weights("Remade-AI/Dolly-Effect")

prompt = "d011Ye33ect dolly effect. The video begins with a close-up of the man’s steely gaze as he stands in a dusty cemetery, a cigar clenched in his mouth. The camera slowly zooms out, keeping his face centered while the background stretches—revealing crosses, gravestones, and the wide open desert behind him. The dolly effect intensifies the tension of the western standoff."
input_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/guitar-man.png")

output = pipe(image=input_image, prompt=prompt).frames[0]
export_to_video(output, "output.mp4")
```
- Notebooks
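The example prompt above starts with the phrase `d011Ye33ect dolly effect.`, which acts as the LoRA's trigger. A small helper (a hypothetical convenience, not part of the model card) can prepend it to any scene description so prompts stay consistent:

```python
# Trigger phrase for the Dolly-Effect LoRA (taken from the example prompt above)
TRIGGER = "d011Ye33ect dolly effect."

def build_prompt(scene: str) -> str:
    """Prepend the LoRA trigger phrase to a scene description."""
    return f"{TRIGGER} {scene.strip()}"

print(build_prompt("A lone lighthouse on a cliff as the horizon warps behind it."))
# d011Ye33ect dolly effect. A lone lighthouse on a cliff as the horizon warps behind it.
```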
- Google Colab
- Kaggle
- Local Apps
- Draw Things
#2
by Rookienovice - opened
Hi, it is really cool. Is there a plan to release the collected dolly zoom dataset? I want to try training my own LoRA.