Prove your ROI with data!
By Bryan Johnson, IT Director, Alliant National Title Insurance Company
Whenever you start a new initiative, few things feel worse than having to guess your return on investment (ROI). Planning, designing and launching a program can be costly, which makes it essential to have key performance indicators (KPIs) in place before you go live. That advice goes double for AI pilot programs: because AI is an emerging technology, it can be difficult to know what you should even be tracking. We’ll cover both topics in this blog, so that when you’re ready to launch your program, you can feel confident it has been well worth the effort.
Before you define your metrics, define your program
Before you define your metrics, make sure your AI pilot program itself is on solid ground. Begin by running it through the SMART goal test: is it Specific, Measurable, Achievable, Relevant and Time-bound? If not, assessing ROI will be difficult, if not impossible. A strong AI pilot must:
- Improve a specific workflow
- Address the needs of a specific business unit
- Align with your larger business objectives
- Improve upon a defined baseline
- Have a definite start and end date
- Stand up to measurement and scrutiny
If you can’t honestly check these boxes, head back to the drawing board. If you can, then you’re ready to measure what matters and decide if your AI pilot has paid off.
Quality over quantity
Once you are confident in your program’s fundamentals, the most important thing is to resist overmeasuring. If you try to track a million metrics, you can quickly lose the plot. A far superior approach is to adopt the old adage of quality over quantity—that is, track only what is essential to determine whether your program improved a workflow without adding new risks.
Score your program with a three-prong approach
The best way to do this is to put together a simple scorecard that looks at three things:
- What were the outcomes?
These are the top results tied to your program’s goal, for example whether the program:
- Improved turnaround times
- Increased closed files per person per week
- Reduced file reworks or changes
- What were the risks?
It is just as important to score for drawbacks; as any good title agent knows, errors carry real costs. Examine any of the following that are relevant to your overarching goal:
- Did the program increase error rates?
- How often did issues escalate?
- How many fixes were needed?
- What was the adoption rate?
Finally, look at how widely folks inside your organization adopted the program and its associated AI tools. You don’t want to skip this step: even the best AI program in the world will fail long-term if people resist using it. Review:
- Weekly active users
- Task completions
- Edit rates, that is, how often the AI tool’s output required human intervention
Once again, you don’t need to track every sub-bullet listed here. The point is simply to evaluate your AI program against these three main criteria. Once you do, you’ll have a clear view of whether it is a net positive or a net negative for your organization.
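To make the scorecard concrete, here is a minimal sketch of how the three prongs could be tallied against a pre-launch baseline. All metric names, sample numbers, and the percentage-change framing are illustrative assumptions, not a prescribed formula.

```python
# Illustrative AI-pilot scorecard sketch. Field names, sample values, and
# the percent-change framing are assumptions for demonstration only.

def score_pilot(outcomes, risks, adoption):
    """Score each prong vs. baseline; positive values mean improvement."""
    def pct_change(baseline, pilot):
        return (pilot - baseline) / baseline * 100

    return {
        # Outcomes: e.g., closed files per person per week (higher is better)
        "outcome_gain_pct": pct_change(outcomes["baseline"], outcomes["pilot"]),
        # Risks: e.g., errors per 100 files (lower is better, so invert the sign)
        "risk_reduction_pct": -pct_change(risks["baseline"], risks["pilot"]),
        # Adoption: weekly active users as a share of eligible staff
        "adoption_rate_pct": adoption["weekly_active"] / adoption["eligible"] * 100,
    }

# Sample numbers only
card = score_pilot(
    outcomes={"baseline": 10, "pilot": 12},     # closed files/person/week
    risks={"baseline": 4, "pilot": 3},          # errors per 100 files
    adoption={"weekly_active": 18, "eligible": 24},
)
print(card)
# → {'outcome_gain_pct': 20.0, 'risk_reduction_pct': 25.0, 'adoption_rate_pct': 75.0}
```

Swap in whichever outcome and risk metrics match your pilot’s goal; what matters is that every prong is scored against the same baseline you defined before launch.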
Onward to AI success!
In a recent article, ALTA’s CEO wrote that “more than 90% of title and escrow professionals have adopted generative AI in at least one form.”[i] This is a stunning statistic that suggests where the industry is headed. But adoption alone does not tell us anything particularly useful about the ROI of an AI pilot. Getting a clearer picture of AI’s value within your agency requires a more holistic view. Pair your AI tool with clear goals and a definite baseline, and deploy a simple scorecard to track outcomes, risks, and adoption rates. That’s the ticket for separating real business wins from white noise. And once you do that, AI becomes more than a shiny new tool. It turns into a driver of sustained success.
[i] The digital future still needs title insurance and the expertise of professionals

