Case Study

AI-Driven Pricing Thresholds That Replaced Gut Feel

19 Mar 2026 · Avtar Khaba · 4 min read

AI Analytics · Pricing · Data Science · Revenue

A membership organisation needed to know exactly when to cut conference prices. We built an AI model that told them — grounded in data, not instinct.

The situation

A membership organisation running large-scale conferences had a recurring problem: pricing.

Every event cycle, the same debate would play out. When should early-bird pricing end? At what point should prices be reduced to fill remaining seats? How deep should the discount go without undermining the perceived value of the event?

These decisions were being made on gut feel, past experience, and internal politics. The commercial team would argue with the events team. The finance team would set conservative targets. And by the time anyone agreed, the window for action had often passed.

The real cost of guessing

The problem wasn't a lack of data — they had years of historical attendance, pricing, and booking patterns. The problem was that nobody had turned that data into decision triggers.

When pricing decisions are made on instinct, two things happen:

  1. You move too slowly — by the time the team agrees to reduce prices, the optimal window has closed
  2. You move by the wrong amount — discounts are either too shallow to change behaviour or too deep, leaving money on the table

Both of these were happening. The organisation was consistently either under-filling conferences or over-discounting. Neither outcome was acceptable for a body that relied on event revenue.

The AI approach

We worked with the commercial team to build an AI model that analysed historical booking data across multiple events, identifying:

  • When booking velocity typically changed — and at what point a price intervention made a measurable difference
  • By how much — using standard deviation analysis to determine the optimal discount depth at each threshold
  • What the expected effect would be — modelling the likely uptake based on historical price sensitivity curves
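The velocity-change detection described above can be sketched in a few lines. Everything here is illustrative — the booking numbers, the one-standard-deviation cutoff, and the function name are assumptions for the sake of example, not the client's actual model:

```python
from statistics import mean, stdev

# Hypothetical weekly booking counts across a sales cycle
# (illustrative numbers, not client data).
historical_weekly_bookings = [120, 118, 95, 60, 42, 38, 35, 33]

def velocity_changes(bookings, z=1.0):
    """Flag weeks where the week-on-week change in bookings drops
    more than z standard deviations below the mean change —
    candidate points for a price intervention."""
    deltas = [b - a for a, b in zip(bookings, bookings[1:])]
    mu, sigma = mean(deltas), stdev(deltas)
    return [i + 1 for i, d in enumerate(deltas) if d < mu - z * sigma]

print(velocity_changes(historical_weekly_bookings))  # → [3]
```

With these numbers, week 3 is flagged: bookings fall off far faster than the typical week, which is exactly the kind of inflection point where a discount has the most leverage.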

The model didn't replace the team's judgment. It gave them a decision framework with clear triggers:

"If bookings fall below X% of target by this date, reduce price by Y%. Expected uplift: Z additional registrations."

Instead of a meeting where everyone argued from anecdote, the team had evidence-backed options on the table — with projected outcomes for each.
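A trigger of that shape is straightforward to encode. This is a minimal sketch of the idea, assuming hypothetical thresholds and class names — the real model produced its X, Y, and Z values from the historical analysis above:

```python
from dataclasses import dataclass

@dataclass
class PricingTrigger:
    """One evidence-backed rule: if bookings fall below threshold_pct
    of target by check_date, cut the price by discount_pct.
    All figures here are illustrative."""
    check_date: str
    threshold_pct: float   # X — minimum % of booking target
    discount_pct: float    # Y — price reduction to apply
    expected_uplift: int   # Z — projected additional registrations

def evaluate(trigger, bookings, target):
    """Return a recommendation for the commercial team to review."""
    pct_of_target = 100 * bookings / target
    if pct_of_target < trigger.threshold_pct:
        return (f"Reduce price by {trigger.discount_pct:.0f}% "
                f"(expected uplift: {trigger.expected_uplift} registrations)")
    return "Hold price"

# Example: 330 bookings against a 600-seat target is 55% — below
# the 60% trigger, so the rule recommends the discount.
rule = PricingTrigger("2026-02-01", threshold_pct=60,
                      discount_pct=15, expected_uplift=40)
print(evaluate(rule, bookings=330, target=600))
```

The point of the structure is that the team debates the rule once, up front, with the evidence in view — not the individual pricing call under deadline pressure.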

What made it different

This wasn't a machine learning moonshot or a prediction engine that needed a data science team to maintain. It was a practical analytical tool built on the data they already had, designed to answer the specific question they were already asking.

The AI didn't decide the pricing. The commercial team decided the pricing. But for the first time, they had clear, data-grounded triggers to act on — and the confidence that comes from knowing the numbers support the call.

The result

Pricing decisions moved from instinct to data. The commercial team could act faster, with confidence, knowing exactly where the inflection points sat.

The model was reusable across events, and the team started applying the same framework to other revenue decisions — membership renewals, sponsorship tiers, and add-on pricing.

What this means for you

If your team is making revenue-critical decisions based on experience and debate rather than data, AI doesn't have to mean a massive platform investment. Sometimes it means taking the data you already have and building a model that answers the one question that keeps coming up in every meeting.

That's the kind of AI work that pays for itself quickly — not because it's flashy, but because it removes the guesswork from decisions that directly affect revenue.