About
I tend to get fixated on the problem underneath the problem.
Most problems have a surface version that everyone agrees on, and then a structural version underneath, the one that explains why the problem keeps happening. I always end up pulling on the structural one. Feedback loops, hidden constraints, misaligned incentives: that is usually where the actual leverage is, and where I get interested.
At Amazon, I worked on Product Targeting early, when everyone was fighting over search relevance. The question that interested me was different: how do you build a system where the commerce graph itself becomes the discovery mechanism? That reframe turned into a product that went from zero to over a billion dollars in revenue in about two years.

Later, at Navi, we built an AI companion that people genuinely used: 80-minute average sessions, real emotional engagement. But the more important discovery was in the failures. We had over 100,000 real conversations, and buried in them was a map of where language models actually break down with real people, behavioral edge cases that no benchmark would ever surface. Those failure patterns turned out to be far more valuable than the product metrics.
I have a habit of following questions further than is strictly practical. That is how I ended up teaching myself smart contract security and auditing real protocols at 0xMacro, and why I spend a lot of time reading about economics, monetary policy, and how complex systems fail: I keep finding the same patterns in different places. Chess probably trained that instinct early, but at this point it just shows up everywhere.
Now I am building Interlock Labs. The question is simple to state: how do you know an AI system is actually doing what you think it is doing? In healthcare, in finance, in any setting where the cost of being wrong is real, that question does not have a good answer yet.
If you are working on something related, reach me at proloknair@gmail.com.