What Building Two AI Products Taught Me About Enterprise Readiness
14 Jul 2025 · Avtar Khaba · 6 min read
Lessons from shipping MyExpensePal and MyFinancePal.ai that directly inform how I advise enterprise teams on AI strategy, governance, and delivery.
There is a specific kind of confidence that comes from building something yourself. Not managing a team that builds it. Not reviewing an architecture document someone else wrote. Actually building — writing the code, making the trade-offs, dealing with the consequences of your own decisions at two in the morning when something breaks.
After shipping MyExpensePal and building MyFinancePal.ai through to beta, I have that confidence. And it has fundamentally changed how I advise enterprise teams on AI strategy, governance, and delivery.
The gap between advice and experience
I will be direct about something that bothers me about the AI consulting space: most advisors do not build. They read the research, attend the conferences, synthesise the frameworks — and then tell organisations what to do based on pattern-matching against other people's work.
I did that too, for a while. My enterprise background gave me strong instincts around governance, architecture, and delivery. But there was always a gap between what I recommended and what I had personally implemented.
Building MyExpensePal and MyFinancePal.ai closed that gap permanently. When I sit across from a CTO and talk about AI-first development, I am not reciting theory. I am describing my own workflow. When I advise a board on AI governance for conversational systems, I am drawing on guardrails I designed, tested, and iterated on in my own product.
The credibility difference between "I recommend this approach" and "I built this way, here is what happened" is enormous. Enterprise leaders can tell the difference instantly.
Five things I learned the hard way
1. AI-generated code needs the same governance as human code — maybe more.
Early on, I caught myself treating AI-generated code with less scrutiny than code I wrote by hand. That is backwards. AI can produce plausible code that passes a quick review but hides subtle issues — wrong assumptions baked into a database query, an edge case that looks handled but is not. I learned to review AI output more carefully, not less. This directly informs how I advise enterprise teams on code review processes for AI-assisted development.
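To make that concrete, here is a hedged illustration in Python (invented names, not MyExpensePal's actual code) of the kind of assumption that passes a quick review and quietly returns the wrong answer:

```python
# Hypothetical sketch of a subtle assumption baked into a query-like function.
# It runs, it looks defensive, and it silently produces the wrong total.

from datetime import date

def monthly_total(expenses, year, month):
    """Total spend for a given month from a list of expense dicts."""
    return sum(
        e["amount"]
        for e in expenses
        if e["date"].year == year
        and e["date"].month == month
        and e["amount"] > 0  # "defensive" filter that silently drops refunds,
                             # so totals no longer reconcile with statements
    )

expenses = [
    {"date": date(2025, 7, 3), "amount": 120.00},   # groceries
    {"date": date(2025, 7, 9), "amount": -40.00},   # refund
]
print(monthly_total(expenses, 2025, 7))  # prints 120.0, should be 80.0
```

The `amount > 0` clause reads like good hygiene, which is exactly why a quick skim misses it.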
2. The hardest part is not the AI feature — it is everything around it.
With MyFinancePal.ai, getting the conversational AI to produce useful financial guidance was maybe thirty percent of the work. The other seventy percent was context management, session handling, error states, disclaimers, audit logging, and the dozen mundane engineering tasks that turn a demo into a product. Enterprise teams consistently underestimate this ratio, and now I can show them exactly why.
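A minimal sketch of that "other seventy percent", in Python: the scaffolding around a single conversational turn. The names here (`handle_turn`, `generate_guidance`) are illustrative assumptions, not MyFinancePal.ai's actual internals.

```python
# Illustrative scaffolding around one model call: session context, an error
# state, an audit trail, and a disclaimer. The model itself is a one-liner.

import logging
import uuid
from datetime import datetime, timezone

audit = logging.getLogger("audit")

DISCLAIMER = "This is general information, not regulated financial advice."

def handle_turn(session, user_message, generate_guidance):
    """Wrap one conversational turn with the mundane but essential plumbing."""
    turn_id = str(uuid.uuid4())
    audit.info("turn=%s session=%s ts=%s in=%r",
               turn_id, session["id"],
               datetime.now(timezone.utc).isoformat(), user_message)
    try:
        # Pass bounded history so the model has context without unbounded growth.
        reply = generate_guidance(session["history"][-10:], user_message)
    except Exception:
        audit.exception("turn=%s failed", turn_id)
        return "Sorry, something went wrong. Please try again."  # safe error state
    session["history"].append((user_message, reply))
    audit.info("turn=%s out=%r", turn_id, reply)
    return f"{reply}\n\n{DISCLAIMER}"
```

Everything in that wrapper except the `generate_guidance` call is the seventy percent, and none of it is optional in a financial product.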
3. One person with AI tools can outpace a team without them — but only with the right judgement.
I shipped a complete expense tracking application as a solo developer using AI-paired development. That is genuinely remarkable. But it only worked because I had the experience to direct the AI effectively, to know when its suggestions were good and when they were leading me down the wrong path. The tool amplifies whatever judgement you bring to it. For enterprise teams, this means AI tools without skilled practitioners just produce more code to maintain.
4. Conversational AI in regulated domains needs governance from day one.
With MyFinancePal.ai, I built disclaimers, scope boundaries, and audit trails into the very first version. Not because a compliance team told me to, but because I understood the domain well enough to know they were non-negotiable. Every enterprise I have seen bolt governance onto a conversational AI system after launch has regretted it. The rework cost is brutal.
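As a sketch of what a day-one scope boundary can look like, here is a deliberately crude Python version: check the request against an allowed scope before the model is ever called, and make the refusal an auditable decision. Real boundaries would be richer than keyword matching; this is an assumption-laden illustration, not the product's logic.

```python
# Hedged sketch of a pre-model scope check. Refusals return a reason so they
# can be logged and reviewed, rather than being silent drops.

OUT_OF_SCOPE = ("tax advice", "legal advice", "specific stock", "guaranteed return")

def within_scope(user_message):
    """Return (allowed, reason) for a user request."""
    lowered = user_message.lower()
    for phrase in OUT_OF_SCOPE:
        if phrase in lowered:
            return False, f"blocked: {phrase}"
    return True, "ok"
```

The point is placement, not sophistication: the check sits in front of the model from version one, so governance never has to be retrofitted.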
5. The best AI products are the ones where users forget they are using AI.
The smart categorisation in MyExpensePal just works. Users do not think about the AI behind it — they think about how easy it is to track their spending. That is the standard enterprise AI features should aim for. Not "look at our AI" but "look how effortless this is."
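One way to make AI invisible is to let it act only when it is confident. This Python fragment is a hypothetical sketch of that design choice (the `classify` function and threshold are assumptions, not MyExpensePal's real logic):

```python
# Illustrative "invisible AI": auto-apply a category only above a confidence
# threshold; otherwise fall back quietly to the user's usual choice, so the
# user never sees the model hesitate.

def categorise(merchant, classify, usual_category, threshold=0.85):
    label, confidence = classify(merchant)
    return label if confidence >= threshold else usual_category
```

When the model is right, the feature feels effortless; when it is unsure, the fallback keeps it from feeling like "AI" at all.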
How this changes my enterprise advisory
These lessons are not abstract. They change what I do with clients in concrete ways.
I now prototype concepts before recommending full programmes. When a client asks whether conversational AI can work for their use case, I can build a working proof-of-concept in days rather than spending weeks writing a feasibility report.
My governance frameworks are tested against real implementation. Every recommendation I make about AI oversight, I have applied to my own products first. The frameworks that survived contact with reality are the ones I bring to enterprise engagements.
I can show boards what AI-first development actually looks like. Not slides. Working products. The conversation shifts from "is this possible" to "how do we do this responsibly at our scale."
The build-advise feedback loop makes both activities stronger. Building makes me a sharper advisor because I understand the real constraints. Advising makes me a better builder because I see patterns across industries and organisations.
The invitation
If you are evaluating AI for your organisation, consider this: would you rather work with someone who theorises about AI development, or someone who ships AI products and advises enterprises on the same principles?
Take a look at what I have built in the labs. Then, if you want to talk about what this approach could look like for your organisation, get in touch. The conversation is always better when both sides have built something.
