
In and Out of this World with You - Process Video 

Exploring AI video generation using 3D NeRF scans, currently focusing on starting from scratch using only inanimate objects. Solo traveling gig-to-gig for the last 24 months has made me intrigued by AI's potential to create emotive images with less production weight. I'm still early in this journey, balancing it with more traditional live-action processes, but I wanted to share some process work with AI:

1. Luma Scan: Captured a Gaussian splat of two oil bottles with my iPhone, removed the background, and orchestrated a basic camera movement in Luma (a conceptual sketch of the orbit follows this list).

2. Runway Gen-1 Video-to-Video: Merged the Luma NeRF render with a Midjourney reference to animate a couple on a bed. I tried more intricate camera moves from Luma (rolls, spins, looped motion) but found limitations in Runway's interpretation. Comparing with Kaiber and other video-to-video AI tools, I found Runway stood out for 3D scan conversion by setting the “structural consistency” slider to zero. (Other tools don’t have this control and retained the oil bottle structure.)

3. Gen-1 Slow Motion: To manage costs, initial inputs were rendered at high speed, with the desired result slowed back down using Runway's slow-motion tool to yield more footage (an at-home ffmpeg equivalent is sketched after this list).

4. Gen-1 Video-to-Video (Round 2): Used the slowed footage for enhanced texture. Past tests showed it works best to apply the content reference in the first video-to-video render and push the styling harder in the second.

5. Premiere Pro: Layered and retimed both compositions.

6. Kodak Point-and-Shoot Camera (2009): AI textures aren't always to my taste. While tools like Topaz can enhance footage, I still lean towards degradation. The decade-old CCD sensor adds digital noise and softness that AI struggles to mimic. Even if the result is 640p compared to the initial 4K, it resonates with me. (A rough software approximation of the look is sketched below.)
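For step 1, Luma's app handles all the keyframing, so this is purely conceptual: a basic camera movement around a scan is just a smooth path circling the object. A minimal sketch of that idea, with every name and parameter hypothetical rather than anything Luma actually exposes:

```python
import numpy as np

def orbit_path(center, radius=0.5, height=0.2, frames=120):
    """Camera positions circling a scanned object; each frame's
    camera would look back at `center`. Purely illustrative."""
    t = np.linspace(0.0, 2.0 * np.pi, frames, endpoint=False)
    x = center[0] + radius * np.cos(t)
    y = np.full_like(t, center[1] + height)  # slight elevation above the object
    z = center[2] + radius * np.sin(t)
    return np.stack([x, y, z], axis=1)  # shape: (frames, 3)

# e.g. a two-second orbit at 60 fps around the scan's origin
keyframes = orbit_path(center=(0.0, 0.0, 0.0), frames=120)
```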
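For step 3, Runway's slow-motion tool did the stretching for me. A rough at-home equivalent is ffmpeg's motion-compensated minterpolate filter, wrapped here in Python; the file names and 4x factor are placeholders, and the output won't match Runway's interpolation quality:

```python
import subprocess

def slow_down(src: str, dst: str, factor: float = 4.0, fps: int = 24) -> None:
    """Stretch frame timestamps by `factor`, then synthesize
    in-between frames with motion-compensated interpolation."""
    vf = f"setpts={factor}*PTS,minterpolate=fps={fps}:mi_mode=mci"
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-filter:v", vf, "-an", dst],
        check=True,
    )

slow_down("gen1_fast_render.mp4", "gen1_slowed.mp4")
```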
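And for step 6, the Kodak does this optically, but a crude software approximation of the downscale-plus-noise look is possible in ffmpeg too; the filter values here are guesses, and real CCD noise behaves differently:

```python
import subprocess

def degrade(src: str, dst: str, width: int = 640, strength: int = 12) -> None:
    """Downscale for softness, then add temporally varying uniform noise."""
    vf = f"scale={width}:-2,noise=alls={strength}:allf=t+u"
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-filter:v", vf, dst],
        check=True,
    )

degrade("final_4k.mp4", "final_degraded.mp4")
```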

Final result:
