Build Story

From Idea to Beta: Building MyFinancePal.ai With AI

14 Jul 2025 · Avtar Khaba · 7 min read

AI Development · Build Story · MyFinancePal · Conversational AI · LangChain · Finance

How I built an AI-first financial planning assistant — from concept through to a working beta — and what it taught me about conversational AI in regulated domains.

If MyExpensePal was about proving one person could ship a product with AI, MyFinancePal.ai was about raising the stakes. I wanted to build conversational AI in a domain where getting things wrong actually matters — personal finance.

The result is a financial planning assistant that helps people think through their money decisions. It's currently in beta, and the process of building it taught me more about responsible AI than any governance framework document ever could.

The concept

Financial guidance is one of the most natural applications for conversational AI. People have questions about their money constantly — should I overpay my mortgage or invest? How much do I actually need in an emergency fund? What's the tax-efficient way to draw down my pension? These are questions that a well-structured AI can help with, not by giving definitive answers, but by helping people think through the variables.

But finance is also one of the most sensitive domains you can build in. There's a hard regulatory boundary between guidance and advice. There are real consequences when people act on bad information. And there's a trust threshold that's much higher than in most consumer products.

I wanted to build something that demonstrated what responsible AI looks like in a domain where it genuinely matters. Not as a theoretical exercise, but as a working product.

Choosing the tech stack

I kept the foundation consistent with MyExpensePal — Next.js, TypeScript, and Tailwind. There's a practical reason for this: portfolio coherence. When I show clients both products, they can see a consistent engineering approach rather than two disconnected experiments.

The interesting layer is the conversational AI stack. I chose LangChain for orchestrating conversations because it handles the complexity of context management, memory, and multi-step reasoning chains without requiring me to build all of that plumbing from scratch. It gave me a structured way to manage conversation state, inject relevant context at the right moments, and chain together multiple reasoning steps into coherent responses.
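
To make that concrete, here's a minimal sketch, in plain TypeScript with no LangChain dependency, of the kind of conversation-state plumbing the framework provides out of the box: a retained message history plus a step that assembles the full prompt before each model call. The class and field names are my own, not LangChain's API.

```typescript
type Role = "system" | "user" | "assistant";
interface Message { role: Role; content: string }

class ConversationState {
  private systemPrompt: string;
  private history: Message[] = [];

  constructor(systemPrompt: string) {
    this.systemPrompt = systemPrompt;
  }

  addUserMessage(content: string): void {
    this.history.push({ role: "user", content });
  }

  addAssistantMessage(content: string): void {
    this.history.push({ role: "assistant", content });
  }

  // Assemble the full prompt: system instructions first, then the retained
  // history, so each model call sees the conversation so far.
  buildPrompt(): Message[] {
    return [{ role: "system", content: this.systemPrompt }, ...this.history];
  }
}
```

LangChain layers memory strategies, context injection, and chain composition on top of this basic shape, which is exactly the plumbing I didn't want to build and maintain myself.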

I integrated multiple AI models — Claude and GPT-4 — for different tasks. Claude handles the longer-form reasoning and nuanced explanations well. GPT-4 is strong at structured data extraction and calculations. Using both lets me play to each model's strengths rather than forcing one model to do everything.
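
In practice that division of labour reduces to a small routing function. The sketch below is illustrative: the task categories and the exact split are my own labels, not the product's actual configuration.

```typescript
type Model = "claude" | "gpt-4";
type Task = "explanation" | "extraction" | "calculation";

// Route each task to the model that handles it best: longer-form reasoning
// and nuanced explanation to Claude; structured data extraction and
// calculation to GPT-4.
function routeModel(task: Task): Model {
  if (task === "explanation") return "claude";
  return "gpt-4"; // extraction and calculation
}
```

Keeping the routing in one small function also makes it cheap to re-test the split whenever either model is updated.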

The .ai domain was a deliberate choice. When your product's core capability is artificial intelligence, the domain should signal that immediately. It sets expectations before someone even lands on the page.

Building conversational AI that's actually useful

There's a vast gap between a chatbot and a genuinely helpful financial assistant. Most chatbots feel like slightly more natural search engines. I wanted MyFinancePal.ai to feel like talking to a knowledgeable friend who happens to understand compound interest and tax wrappers.

That meant careful work on prompt engineering, context windows, and guardrails. The system prompt establishes the assistant's role, knowledge boundaries, and communication style. It's surprisingly long — several hundred words — because the more precisely you define the AI's behaviour upfront, the more consistent and useful it is in practice.
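
The real prompt isn't published, but an abbreviated stand-in gives a sense of its three-part structure. Every line below is my own illustrative wording, compressed to a fraction of the actual prompt's length.

```typescript
// Illustrative only: a heavily abbreviated stand-in for the real system
// prompt, showing the three categories it covers.
const SYSTEM_PROMPT = [
  // 1. Role
  "You are a financial planning assistant. You help people think through",
  "money decisions; you do not make decisions for them.",
  // 2. Knowledge boundaries
  "You provide general financial guidance only. You never recommend",
  "specific investment products, give tax advice for situations you do not",
  "fully understand, or predict market performance.",
  // 3. Communication style
  "Be warm and plain-spoken. When a question needs a regulated financial",
  "adviser, say so explicitly and explain why.",
].join(" ");
```

The real version expands each of these sections considerably, which is why it runs to several hundred words.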

Context management was critical. A financial conversation builds on itself. If someone mentions they have a mortgage at 4.5% and then asks about overpayments three messages later, the AI needs to remember that rate. LangChain's memory abstractions made this manageable, but I still spent significant time tuning what context to retain and what to let fade.
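
The mortgage-rate example can be sketched as a fact-extraction pass over each incoming message: pull the rate out when it's mentioned so it can be re-injected into context later. The regex and the fact shape here are illustrative assumptions, not the product's implementation.

```typescript
// Facts worth retaining across the whole conversation.
interface RetainedFacts { mortgageRatePct?: number }

// Scan a user message for durable facts and merge them into the store.
function extractFacts(message: string, facts: RetainedFacts): RetainedFacts {
  // Match phrases like "a mortgage at 4.5%".
  const m = message.match(/mortgage\s+(?:at|of)\s+(\d+(?:\.\d+)?)\s*%/i);
  if (m) facts.mortgageRatePct = parseFloat(m[1]);
  return facts;
}
```

When the overpayment question arrives three messages later, the retained rate goes back into the prompt alongside the recent history, so the model answers against the right number.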

The most important design decision was teaching the AI what it doesn't know — and making it comfortable saying so. When someone asks a question that crosses into regulated financial advice territory, the assistant explicitly flags that boundary. It says something like: "That's a question where you'd benefit from speaking to a regulated financial adviser — here's why." This isn't just legal compliance. It's good product design. An AI that's honest about its limits is more trustworthy than one that confidently answers everything.
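
A crude first pass at that boundary check can be sketched as keyword matching before the model call. The product enforces this inside the prompt itself; the signal phrases below are my own guess at what counts as regulated-advice territory, not an authoritative list.

```typescript
// Hypothetical signals that a question is asking for regulated advice
// rather than general guidance.
const ADVICE_SIGNALS = [
  "which fund", "which stock", "should i buy", "which pension provider",
  "specific investment",
];

function crossesAdviceBoundary(question: string): boolean {
  const q = question.toLowerCase();
  return ADVICE_SIGNALS.some((signal) => q.includes(signal));
}

function respond(question: string): string {
  if (crossesAdviceBoundary(question)) {
    // Flag the boundary instead of answering.
    return "That's a question where you'd benefit from speaking to a " +
      "regulated financial adviser — here's why.";
  }
  return "GUIDANCE_OK"; // placeholder: continue to the normal model call
}
```

A keyword list like this is far too blunt on its own; in practice the prompt-level instruction does the nuanced classification, and a check like this serves only as a backstop.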

AI governance in practice

This is where my enterprise background directly applied to a personal product. Every governance principle I advise boards on, I implemented in MyFinancePal.ai.

Built-in disclaimers appear naturally within conversations, not just in a terms page nobody reads. The AI introduces its limitations as part of its personality rather than treating them as legal boilerplate.

Every AI-generated response is logged with the full context that produced it — the conversation history, the prompt chain, and the model used. This creates an audit trail. If someone questions a piece of guidance, I can reconstruct exactly how the AI arrived at that response.
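
A minimal shape for that audit trail might look like the following. The field names are assumptions; the point is that each entry carries everything needed to reconstruct the response.

```typescript
interface AuditEntry {
  timestamp: string;
  model: string;            // which model produced the response
  promptChain: string[];    // every prompt in the chain, in order
  conversationHistory: { role: string; content: string }[];
  response: string;
}

// Append-only logging: entries are stamped on write and never mutated.
function logResponse(
  log: AuditEntry[],
  entry: Omit<AuditEntry, "timestamp">,
): AuditEntry {
  const full: AuditEntry = { timestamp: new Date().toISOString(), ...entry };
  log.push(full);
  return full;
}
```

Storing the prompt chain rather than just the final response is the part that makes reconstruction possible: you can replay exactly what the model saw, step by step.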

I also built scope limitations directly into the prompt architecture. The AI won't discuss specific investment products, won't provide tax advice for situations it can't fully understand from a conversation, and won't make predictions about market performance. These aren't afterthoughts. They're foundational design decisions that went in before the first line of conversational code.

Current status and what's next

MyFinancePal.ai is in beta with core conversational features working. Users can discuss budgeting, savings goals, debt management strategies, and general financial planning concepts. The AI maintains context across sessions and builds a picture of someone's financial situation over time.

I'm iterating on personalisation and goal-tracking capabilities — letting users set financial targets and having the AI check in on progress and adjust suggestions based on changing circumstances.

I'm also exploring how user feedback can improve responses over time. When someone flags a response as unhelpful or unclear, that signal feeds back into prompt refinement. It's a manual process right now, but it's building the dataset for something more systematic later.
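
Even a manual feedback loop benefits from a little structure. A sketch of how flagged responses might be collected and tallied, with labels that are my own assumptions:

```typescript
type FeedbackLabel = "unhelpful" | "unclear";

interface Feedback { responseId: string; label: FeedbackLabel }

// Group flags by label so the most common failure mode surfaces first
// when reviewing which prompts need refinement.
function tallyFeedback(items: Feedback[]): Record<FeedbackLabel, number> {
  const tally: Record<FeedbackLabel, number> = { unhelpful: 0, unclear: 0 };
  for (const item of items) tally[item.label] += 1;
  return tally;
}
```

Pairing each flag with a response ID means the audit log can supply the full context for every flagged case, which is what makes the later, more systematic version feasible.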

The enterprise connection

Everything I learned building MyFinancePal.ai feeds directly back into my advisory work. When I tell a CTO that conversational AI requires governance from day one, I can show them my own implementation. When I advise on prompt engineering strategy, I'm drawing on hundreds of hours of real iteration, not just theory.

This is the build-advise loop in action. Building makes me a better advisor because I understand the implementation realities. Advising makes me a better builder because I bring structured thinking and governance discipline to my own products.

When a board asks me whether conversational AI is ready for their customer-facing use cases, I don't give them a framework and a slide deck. I show them MyFinancePal.ai and walk them through the decisions — technical, ethical, and commercial — that shaped it.

That's a different kind of credibility. And it's the kind that matters.