

Smarter Renewals, Stronger Results: A Conversation with VGM on Using Machine Learning to Streamline Insurance Operations

When VGM & Associates set out to improve its insurance renewal process, the goal was clear: use data to simplify low-risk renewals and free up teams to focus on higher-impact work. SPR partnered with VGM to design and build a machine learning model that could identify risk levels and help automate the path to renewal.

Recently, we sat down with Tera O’Hare, SVP of Insurance Operations at VGM, to hear how the model is working, why human judgment still matters, how the organization achieved a 40% efficiency gain, and what’s next as the company scales this approach.


SPR: To start, can you give us a quick overview of what you were trying to solve with this initiative?

Tera O’Hare: We had a pretty consistent problem: we were spending the same amount of time processing renewals for very low-risk clients as we were for complex, high-risk policies. We knew there had to be a better way. Our goal was to use data to help us identify those low-risk policies more efficiently—so we could reduce unnecessary touchpoints and streamline the experience, both internally and for our insureds.

SPR: What was the first step in bringing this model to life?

O’Hare: The first step was validation. We built the model to generate a risk score, and then asked our underwriting team to review those scores to see if they aligned with their own judgment. We didn’t want to change anything until we were confident the model could be trusted.

We had a very high alignment rate: over 90% of the time, the underwriters either agreed with the model or were neutral. That gave us the confidence to start building it into our actual process.
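The validation gate Tera describes can be sketched in a few lines. This is an illustrative mock-up, not VGM's actual implementation: the review fields, the "agree/neutral/disagree" verdicts, and the 90% threshold are assumptions drawn from the conversation.

```python
# Hypothetical sketch of the validation step: compare model risk scores
# against underwriter judgments and measure the alignment rate.

def alignment_rate(reviews):
    """Share of reviews where the underwriter agreed with the model or was neutral."""
    aligned = sum(1 for r in reviews if r["verdict"] in ("agree", "neutral"))
    return aligned / len(reviews)

# Illustrative review data; policy IDs and scores are made up.
reviews = [
    {"policy": "P-001", "model_score": 0.12, "verdict": "agree"},
    {"policy": "P-002", "model_score": 0.35, "verdict": "neutral"},
    {"policy": "P-003", "model_score": 0.80, "verdict": "disagree"},
    {"policy": "P-004", "model_score": 0.09, "verdict": "agree"},
]

rate = alignment_rate(reviews)
# A gate like this would decide whether the model is trusted enough
# to enter the production workflow (VGM's observed rate was over 90%).
trusted = rate >= 0.90
```

The key design point is that the model is scored against human judgment before it touches the workflow, rather than being deployed and corrected afterward.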


SPR: How did you decide which policies to include in the new automated workflow?

O’Hare: We focused on the low and very low-risk population. These are insureds with clean records, minimal premiums, and few, if any, claims. The idea was simple: let’s remove unnecessary manual work and get their renewals out the door quickly.

We also ran A/B tests with different lead times (90, 60, and 45 days before renewal) to see how timing impacted response rates.
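An A/B test like the one described could be structured roughly as follows. This is a simplified sketch under stated assumptions: the deterministic assignment scheme, the outcome format, and all sample data are hypothetical, not VGM's actual setup.

```python
import random

# Illustrative A/B test over the three renewal lead times mentioned above.
LEAD_TIMES = (90, 60, 45)

def assign_arm(policy_id, seed=7):
    # Deterministic assignment: the same policy always lands in the same arm,
    # which keeps results reproducible across renewal cycles.
    rng = random.Random(f"{seed}:{policy_id}")
    return rng.choice(LEAD_TIMES)

def conversion_by_arm(outcomes):
    """outcomes: list of (lead_time_days, renewed: bool) pairs."""
    totals, renewals = {}, {}
    for lead, renewed in outcomes:
        totals[lead] = totals.get(lead, 0) + 1
        renewals[lead] = renewals.get(lead, 0) + int(renewed)
    return {lead: renewals[lead] / totals[lead] for lead in totals}

# Made-up outcome data for illustration only.
outcomes = [(90, True), (90, True), (60, True), (60, False), (45, True)]
rates = conversion_by_arm(outcomes)
```

Comparing per-arm conversion rates against the standard process is what lets a team conclude, as VGM did, that the streamlined path performs as well as the traditional one.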

SPR: And what were the results?

O’Hare: The streamlined process performed just as well as our traditional one. We saw about an 80% conversion rate, which is right in line with our standard renewal process. So from an insured’s perspective, nothing changed. But internally, we saw significant gains.

Underwriters no longer had to go through the rating process for those low-risk policies, and our account managers didn’t need to chase down missing information. That translated into about a 40% efficiency gain in touchpoints.

SPR: You mentioned account managers. How did this impact their day-to-day?

O’Hare: The new workflow removed a lot of manual prep work. For our direct clients, account managers were typically the ones gathering and entering missing application data before an underwriter could even get started. With the model in place, and once an account qualifies as low-risk, that back-and-forth largely disappears.

We’re not just automating for the sake of automation. The idea is to let people focus on higher-value tasks. If you’re spending 30 minutes on a very simple renewal, that’s not the best use of your time. Our teams can now spend more time analyzing complex risks, where their expertise really makes a difference.

SPR: From a change management perspective, how did the rollout go?

O’Hare: We were very intentional about how we introduced this. We didn’t just say, “Here’s the model, go use it.” We met with our teams every two weeks during the rollout to collect feedback, clarify questions, and talk through any concerns. Even if we didn’t make changes every time, it gave everyone a voice in the process.

I think that approach made a big difference. Within about three months, the tool became part of the day-to-day. At one point, when it went down due to a security update, people immediately asked when it would be back. That told us they saw its value.


SPR: What’s the scale so far, and what’s next?

O’Hare: So far, we’ve used the new process with about 450 of our 15,000 insureds. It’s a small slice, but we did that on purpose. This year has really been about getting our internal processes right and pressure-testing the experience, both for our teams and for our broker community.

But we’re excited about the potential. About 45% of our book falls into the low or very low-risk buckets. That’s a significant opportunity to scale this up.


SPR: You’re also launching a new policy administration system soon. How will that support the model?

O’Hare: A lot. The new system will include Optical Character Recognition (OCR) capabilities that convert our applications into structured, consistent data. That will make the model even more accurate because the data we feed into it will be cleaner and more standardized. This is the same type of technology we used to extract historical data when we built the model.

Right now, we’re working with an amalgamation of formats, some of which are great, some not so much. With the new system, the data quality improves dramatically, which opens the door for more detailed segmentation down the line.
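The OCR engine itself will come from the new policy administration system, but the payoff Tera describes, turning free-form applications into clean, consistent fields, can be sketched as a parsing step over raw extracted text. The field labels, patterns, and sample form below are entirely hypothetical.

```python
import re

# Illustrative post-OCR parsing: map raw extracted text to structured
# fields a risk model can consume. Labels and patterns are made up;
# a real system would follow the vendor's OCR output format.
FIELDS = {
    "insured_name": re.compile(r"Insured Name:\s*(.+)"),
    "annual_premium": re.compile(r"Annual Premium:\s*\$?([\d,]+)"),
    "claims_count": re.compile(r"Claims \(5 yr\):\s*(\d+)"),
}

def parse_application(raw_text):
    record = {}
    for field, pattern in FIELDS.items():
        match = pattern.search(raw_text)
        record[field] = match.group(1).strip() if match else None
    # Normalize numeric fields so every application yields the same types.
    if record["annual_premium"]:
        record["annual_premium"] = int(record["annual_premium"].replace(",", ""))
    if record["claims_count"]:
        record["claims_count"] = int(record["claims_count"])
    return record

raw = """Insured Name: Acme Home Health LLC
Annual Premium: $4,250
Claims (5 yr): 0"""
record = parse_application(raw)
```

Standardizing every application into the same typed fields is what makes the downstream model's inputs "cleaner and more standardized," regardless of which format the application arrived in.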


SPR: Can you expand on that segmentation? What are you hoping to learn?

O’Hare: I’d love to see the model eventually distinguish between different business types more precisely. For example, in the home health space, is a home health aide riskier than a certified nurse? My experience tells me yes, based on factors like training, education, and turnover. But we need the data to confirm that.

As we capture better data, we may even be able to revive some of the more targeted models we explored early on, which didn’t have enough inputs at the time to generate meaningful results. That’s the natural evolution of machine learning models: as new data comes in, you run tests to see how the features may or may not affect your outcomes.


SPR: From a cultural perspective, how are people responding now?

O’Hare: I think people appreciate that we’ve been transparent and measured. They’ve seen that this isn’t about replacing anyone. It’s about using technology to take low-complexity work off their plates so they can focus on the more challenging cases.

We’ve been clear from the start: human judgment still matters. The model helps us prioritize and accelerate, but it doesn’t make decisions in a vacuum. We’ve validated that it’s aligned with our standards, and we’ve set it up to support, not override, our team’s expertise.


SPR: How would you describe your experience working with SPR?

O’Hare: It’s been fantastic. This is our second major project together, and both were on time and under budget. What I’ve really appreciated is how quickly the SPR team gets up to speed on our business. They ask smart questions, they don’t make assumptions, and they’re true partners in the process.

The project wouldn’t have been successful without that level of collaboration.

SPR: What are you most excited about as you look ahead?

O’Hare: I think we’re just getting started. With the new system, better data, and a proven foundation in place, we can continue scaling this model in a way that improves both the internal workflow and the insured experience. Ultimately, it’s about making renewals smarter, faster, and more valuable for everyone involved.

Interested in the technical side of the solution?
Read the full case study on how SPR built and deployed the machine learning model for VGM.