“To know and not to do is not yet to know.”
I build AI products for the physical world.
I've deployed AI systems where failure isn't bad UX; it's a safety incident. From robotics at Amazon to satellite data platforms to autonomous hydrogen operations, I build products where reliability, explainability, and human trust are the requirements that matter most.
How I Think
Start with the constraint, not the model.
Physical-world AI is defined by what can go wrong, not what the model can do. I scope every product around the failure modes first, then work backward to the architecture.
Design for human trust, not model accuracy.
95% accuracy at 60,000 packages per day means 3,000 failures a day. The product question is never "how accurate is the model?" It's "does the human trust the system enough to act on it?"
Ship incrementally where you can't A/B test on humans.
When your deployment environment is a hydrogen plant or a government ministry, your evaluation framework IS your product strategy. I build for environments where the cost of iteration is real.