It uses AI-generated footage for what code can't draw, and programmatic animation for what pixel models can't control.
Check out the generated examples or try it yourself: https://gliadirector.com/agents?referral=hn1000
How it works:
The "Renderer" is Code: This is the core. A code-gen agent writes Canvas2D from scratch for overlays and motion design, and composites everything into the final video. It can even do FFT analysis on the music track for beat-reactive animations.
Runtime Controls Generation: The code-gen agent generates its own editing UI to expose controls, so you can tweak the result without touching the raw JS. You can ask it to "give me 3 options for the headline style," and it will generate those options and surface them in the UI for you to pick from.
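The internal format isn't shown here, but you can think of the generated code as declaring a small control schema that the editing UI renders; asking for three headline styles just adds a choice control with those variants. This is a hypothetical sketch and none of these type or field names are our real API:

```typescript
// Hypothetical control schema the generated code might expose to the UI.
type Control =
  | { kind: "color"; id: string; label: string; value: string }
  | { kind: "slider"; id: string; label: string; min: number; max: number; value: number }
  | { kind: "choice"; id: string; label: string; options: string[]; value: string };

const controls: Control[] = [
  { kind: "color",  id: "accent",   label: "Accent color",   value: "#ff4d4d" },
  { kind: "slider", id: "beatGain", label: "Beat intensity", min: 0, max: 1, value: 0.25 },
  // Added when you ask for "3 options for the headline style":
  {
    kind: "choice",
    id: "headlineStyle",
    label: "Headline style",
    options: ["Bold condensed", "Serif underline", "Outline neon"],
    value: "Bold condensed",
  },
];

// The renderer reads current values each frame instead of hard-coding them,
// so tweaking a control in the UI changes the output without editing the JS.
function controlValue(id: string): string | number {
  const c = controls.find((ctrl) => ctrl.id === id);
  if (!c) throw new Error(`unknown control: ${id}`);
  return c.value;
}
```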
Specialized vs. Open Agents: We offer specialized agents for specific formats (stable pipelines) that pre-generate the storyboard/assets. These pass their full state to the open-ended "Coder" agent for refinement, or you can start directly with the Coder for freeform exploration.
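As a rough illustration of that handoff (the actual state format is internal, so treat every field name here as an assumption), a specialized agent's output might look like a storyboard plus asset references that the Coder agent then refines:

```typescript
// Hypothetical shape of the state a specialized (format-specific) agent
// hands to the open-ended Coder agent. Field names are illustrative only.
interface Shot {
  id: string;
  durationSec: number;
  prompt: string;          // prompt used for the AI-generated footage
  assetUrl?: string;       // filled in once the clip has been generated
  overlayNotes: string;    // what the Canvas2D layer should add on top
}

interface PipelineState {
  format: "product-demo" | "music-promo" | "explainer";
  musicTrackUrl: string;
  storyboard: Shot[];
  generatedCode?: string;  // Canvas2D renderer source the Coder refines
}
```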
Why Canvas2D code? Writing raw Canvas code from scratch means the AI can produce any animation it can describe in code. We chose raw Canvas over a framework because it gives more creative freedom, though a framework would handle layout and 3D better; that's something we may add later.
Where we are honestly: the 'AI does everything' concept works, but the AI directors still feel a bit junior. Sometimes they don't take enough initiative on the creative planning, so you might find yourself guiding them more than you'd like. Making them smarter and more proactive is our main focus right now. It's early and rough around the edges.
Curious what you think of this hybrid approach?