Hi HN,
I built a dedicated landing page and tool to use the new Kling 2.6 Motion Control API.
The main problem with existing AI video tools is the "stochastic" nature of movement—you prompt "running," and the AI guesses the physics. Kling's new update allows you to upload a reference video (e.g., a person doing a backflip) and apply that exact motion vector to a target image (e.g., a robot).
I wanted a cleaner interface to experiment with this specific "acting transfer" workflow without the clutter of the main generation platforms.
Key features enabled in this tool:

- Full 30-second continuous generation (no stitching).
- Handling of complex limb occlusions (hands crossing bodies).
- Simplified parameter tuning for "Creativity Strength".
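For the curious, the request the tool assembles looks roughly like the sketch below. The endpoint schema, field names, model identifier, and the `creativity_strength` range are all my assumptions for illustration, not the actual Kling API contract — check the official docs for the real schema.

```python
import json

def build_motion_transfer_payload(reference_video_url, target_image_url,
                                  creativity_strength=0.5, duration_s=30):
    """Bundle a reference motion video and a target image into one job request.

    All field names below are hypothetical placeholders, not the real
    Kling 2.6 API schema.
    """
    if not 0.0 <= creativity_strength <= 1.0:
        raise ValueError("creativity_strength must be in [0, 1]")
    return {
        "model": "kling-2.6-motion-control",          # assumed model identifier
        "reference_video": reference_video_url,       # source of the motion to copy
        "target_image": target_image_url,             # subject that receives the motion
        "creativity_strength": creativity_strength,   # 0 = strict copy, 1 = loose interpretation
        "duration": duration_s,                       # one continuous clip, no stitching
    }

# Example: transfer a backflip onto a robot image.
payload = build_motion_transfer_payload(
    "https://example.com/backflip.mp4",
    "https://example.com/robot.png",
    creativity_strength=0.3,
)
print(json.dumps(payload, indent=2))
```

The tool just exposes the last two knobs in the UI and keeps everything else fixed, which is most of what "cleaner interface" means here.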
Feel free to break it and let me know what you think.