Kinetix is a start-up founded in 2020, specializing in artificial intelligence and 3D animation technologies. In collaboration with motion capture studios and highly skilled actors and performers, we have built an extensive proprietary dataset of human motions, rendered in various virtual scenes using an in-house synthetic data generation pipeline. Leveraging this dataset, we develop AI tools for character motion control in video diffusion models, as well as human pose estimation models to extract animations from videos.
Our R&D team is seeking an intern to work on motion controllability in video diffusion models. Recent advances in generative AI have brought major breakthroughs in video synthesis, but current models still face challenges in generating complex human motions. Additionally, controlling character and camera movements in generated videos remains a key research problem.
The objective of this internship is to contribute to our exploratory work on motion-controlled video synthesis, where a character’s movements are extracted from an input user video, while the background and appearance are guided by a text prompt. You will join a team of AI research engineers and may work on the following topics:
- Designing data processing and filtering pipelines for video generation training runs, e.g., with AI-based aesthetic score prediction and video captioning (first sketch below);
- Reviewing and evaluating state-of-the-art methods in video generation and motion control;
- Training or fine-tuning video models with additional motion control modules (second sketch below);
- Optimizing inference pipelines to reduce computation time (third sketch below).
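
To give a flavor of the first topic, here is a minimal sketch of a clip filtering and captioning step. The `score_fn` and `caption_fn` callables, the `ClipRecord` structure, and the threshold value are all hypothetical stand-ins for illustration; they do not describe the models or thresholds actually used at Kinetix.

```python
from dataclasses import dataclass
from pathlib import Path
from typing import Callable, Iterable, List


@dataclass
class ClipRecord:
    path: Path
    aesthetic_score: float
    caption: str


def filter_and_caption(
    clip_paths: Iterable[Path],
    score_fn: Callable[[Path], float],   # hypothetical aesthetic score predictor
    caption_fn: Callable[[Path], str],   # hypothetical video captioner
    min_score: float = 5.0,              # illustrative threshold, not a tuned value
) -> List[ClipRecord]:
    """Keep only clips whose predicted aesthetic score clears the threshold,
    and attach an automatically generated caption to each kept clip."""
    kept: List[ClipRecord] = []
    for path in clip_paths:
        score = score_fn(path)
        if score < min_score:
            continue  # drop low-quality clips before they reach training
        kept.append(ClipRecord(path=path, aesthetic_score=score, caption=caption_fn(path)))
    return kept
```

Filtering before training keeps low-quality footage from diluting the dataset, while the captions provide the text conditioning needed for text-guided video generation.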
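For the motion control topic, one common pattern in the literature is to freeze the video diffusion backbone and train only a small conditioning adapter that injects motion features into the latent input. The PyTorch sketch below assumes a placeholder backbone with signature `backbone(latents, timesteps, text_emb)` and a deliberately simplified noise schedule; it illustrates the general idea, not our actual architecture.

```python
import torch
import torch.nn as nn


class PoseAdapter(nn.Module):
    """Encodes a (B, C_pose, T, H, W) pose/motion video into latent-sized features."""

    def __init__(self, pose_channels: int, latent_channels: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(pose_channels, 64, kernel_size=3, padding=1),
            nn.SiLU(),
            nn.Conv3d(64, latent_channels, kernel_size=3, padding=1),
        )
        # Zero-init the last layer so training starts from the unmodified backbone.
        nn.init.zeros_(self.net[-1].weight)
        nn.init.zeros_(self.net[-1].bias)

    def forward(self, pose_video: torch.Tensor) -> torch.Tensor:
        return self.net(pose_video)


def training_step(backbone, adapter, optimizer, latents, pose_video, timesteps, text_emb):
    """One denoising training step; assumes the backbone is frozen and the
    optimizer only holds adapter.parameters()."""
    noise = torch.randn_like(latents)
    noisy = latents + noise             # stand-in for the real noise schedule
    control = adapter(pose_video)       # motion conditioning features
    pred = backbone(noisy + control, timesteps, text_emb)
    loss = torch.nn.functional.mse_loss(pred, noise)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Zero-initializing the adapter's last layer means the pretrained backbone's behavior is preserved at the start of fine-tuning, which tends to stabilize training of the added control module.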
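For inference optimization, two standard levers are running the denoiser in half precision and reducing the number of sampling steps. The sketch below assumes placeholder `denoiser` and `scheduler_step` callables rather than a specific library API, and is only meant to show where these levers plug into a sampling loop.

```python
import torch


@torch.no_grad()
def sample_video(denoiser, scheduler_step, latents, text_emb, num_steps: int = 25):
    """Run a simplified sampling loop; fewer steps trade some quality for a
    roughly proportional speedup, and fp16 autocast speeds up the denoiser."""
    timesteps = torch.linspace(1.0, 0.0, num_steps, device=latents.device)
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        for t in timesteps:
            noise_pred = denoiser(latents, t, text_emb)   # placeholder model call
            latents = scheduler_step(latents, noise_pred, t)  # placeholder sampler update
    return latents
```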