I keep coming back to one question:
How does intention become execution?
Not metaphorically - literally. The gap between what someone means and what a system actually does is where most intelligence fails. That gap is where I spend most of my time thinking.
I usually approach problems from both directions until they meet.
Bottom-up: understand the mechanism. Top-down: understand the abstraction.
Some principles I tend to follow:
- I care more about the mechanism underneath a pattern than the pattern itself
- I think in translation layers: thought -> language -> structure -> action
- Abstraction is a tool, not a destination
- I would rather understand one thing deeply than have shallow opinions about many things
Most work in AI focuses on improving the translator. I'm often more interested in what is being translated, and whether the translation layer should exist at all.
RizzGen explores a simple idea:
Intent -> execution
The problem is that high-level description (what a human means) and low-level execution (what a machine performs) are separated by a translation problem.
Assembly languages, and compilers after them, once narrowed this gap in a rigid way. Natural language combined with agents may narrow it again - this time flexibly.
AI becomes the translator between human intent and machine action.
Video is simply the current medium.
The deeper question is: how far upstream can the interface move before the interface disappears entirely?
```
Human intent
    ↓
Chunked description
    ↓
AI translator
    ↓
Fine-grained execution
    ↓
Video
```
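The stages above can be sketched as a chain of representation transforms, where each arrow is a function from one level of description to the next. This is a minimal illustrative sketch, not RizzGen's actual API; the stage names and the naive chunking heuristic are assumptions.

```python
from typing import Callable, List

# Each pipeline stage maps one representation to the next.
# Stage names are illustrative placeholders, not a real API.
Stage = Callable[[str], str]

def chunk_description(intent: str) -> str:
    """Split a free-form intent into ordered scene-level chunks (toy heuristic)."""
    return " | ".join(part.strip() for part in intent.split(".") if part.strip())

def translate(chunks: str) -> str:
    """Stand-in for the AI translator: map each chunk to an execution directive."""
    return "; ".join(f"render({c!r})" for c in chunks.split(" | "))

def run_pipeline(intent: str, stages: List[Stage]) -> str:
    """Thread the intent through every stage in order."""
    out = intent
    for stage in stages:
        out = stage(out)
    return out

plan = run_pipeline("A drone rises over a city. The camera pans left.",
                    [chunk_description, translate])
print(plan)
```

The point of the shape, rather than the toy heuristics, is that the interface question becomes: which of these stages can collapse into the translator itself?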
Human cognition rarely solves the same problem from scratch twice.
When we repeat a task, the neural pathways used the first time get reinforced. Electrical signals travel faster along these routes. The brain learns which path to take, not just how to take it.
Most agents today do not work this way.
They recompute reasoning from zero even when they have solved structurally identical problems before: the same reasoning chain is regenerated, and the same tokens are spent again.
What is missing is procedural memory:
Agents recognizing that two tasks share a reasoning structure and reusing the pathway rather than regenerating it.
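One minimal sketch of this idea: key a cache not on the literal task but on its reasoning structure, so structurally identical tasks reuse a stored pathway instead of re-deriving it. Everything here is an assumption for illustration - the signature function is a crude placeholder for real structural matching, and `ProceduralMemory` is a hypothetical name, not an existing library.

```python
import re
from typing import Callable, Dict, List

def structural_signature(task: str) -> str:
    """Abstract away task-specific literals so that structurally
    identical tasks collapse to the same key (toy placeholder)."""
    sig = re.sub(r"\d+", "<num>", task.lower())
    sig = re.sub(r"'[^']*'", "<entity>", sig)
    return sig

class ProceduralMemory:
    """Cache of reasoning pathways keyed by task structure (illustrative)."""

    def __init__(self) -> None:
        self._pathways: Dict[str, List[str]] = {}
        self.hits = 0

    def solve(self, task: str, reasoner: Callable[[str], List[str]]) -> List[str]:
        key = structural_signature(task)
        if key in self._pathways:
            self.hits += 1              # reuse the stored reasoning pathway
            return self._pathways[key]
        plan = reasoner(task)           # pay the full reasoning cost once
        self._pathways[key] = plan
        return plan

memory = ProceduralMemory()
expensive_reasoner = lambda t: [f"step for: {t}"]  # stand-in for an LLM call
memory.solve("move file 'a.txt' to folder 3", expensive_reasoner)
memory.solve("move file 'b.txt' to folder 7", expensive_reasoner)
print(memory.hits)  # the second, structurally identical task hits the cache
```

The hard part, of course, is the signature: real tasks rarely differ only in literals, so the interesting research question is what "same reasoning structure" means precisely.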
This is one of the problems I'm currently exploring.
- agentic systems
- natural language -> structured execution
- translating cognition into formal representations
- controllable video generation from intent
- procedural memory for agents
Some works that shaped how I think about cognition and intelligence:
- Gödel, Escher, Bach - recursion, representation, self-reference
- Shadows of the Mind - computation and consciousness
- Thinking, Fast and Slow - cognitive architecture and reasoning
This profile is closer to a lab notebook than a portfolio.
If you're thinking about intent-execution gaps, multimodal agents, or the architecture of human-machine translation, feel free to reach out.



