AI is a tool, not a teammate

Meet the brain trust that is guiding our use of AI to rate known risks and reveal unknown risks—without introducing new ones.

In the invention and relentless improvement of our Irix and Curv underwriting solutions, we use AI as part of a disciplined stack: natural language processing for cleaner data, machine learning for robust prediction, rules for clarity—and humans for judgment. That mix delivers something generative AI alone can’t guarantee: accuracy, consistency, explainability, and accountability.

Risk and health are not purely mathematical constructs. They’re deeply human, shaped by behavior, context, and choices that are often beyond the reach of code alone. So we keep a firm grip on that reality as we develop ever-smarter insurtech tools, treating AI as one device among many: a means to an end, not an end in itself.

Three of our leading AI experts have thought a lot about this. In this Q&A with Dan Becker (Principal and Product Director, Curv), Mike Hoyer (Principal and Director, Irix Product Strategy), and Michael Niemerg (Principal and Director, Data Science and Analytics), we explore where AI shines, where it’s currently limited, and why people remain indispensable to risk assessment.

First things first: What’s the real difference between “AI” and “generative AI,” and why do people conflate them?

Niemerg: Generative AI is the most visible and exciting recent development, so it dominates public perception. But AI itself is a broad field that includes many techniques for reasoning, prediction, and automation—everything from classic machine learning (ML) to modern large language models (LLMs) and natural language processing (NLP). Generative AI is just one subset under the AI umbrella, focused on creating new text, images, or audio from unfathomably vast amounts of training data. The big shift with generative AI is that it lets ordinary people interact with it directly.

Traditional, tabular AI has mostly resided with data scientists and disciplined end users: it reliably predicts outcomes like mortality, morbidity, and cost, but the end user sees only the prediction. There’s no conversation happening, no prompt engineering. Generative AI is the first time the everyday user is asking questions and getting fluent responses.

So why not let generative AI make underwriting decisions if it’s so fluent?

Becker: Because that would be a backslide. Think of it this way: Underwriting has largely moved from subjective, inconsistent human decisioning to models that are repeatable and objective. If you feed the same data into the same model twice, you get the same output twice. Generative AI reintroduces that variability we’ve worked so hard to weed out. We’ll leverage generative AI to clean and structure data and accelerate workflows, but not to be the “AI underwriter.”

Niemerg: I agree. I don’t think generative AI will produce better mortality and morbidity predictions if it’s used as a mere replacement for traditional machine learning. But an LLM can take the complex output of a machine learning model and make it more dynamic, more conversational, and perhaps help explain it. There’s a lot of potential to tap. The challenge is that generative AI does this really well most of the time, but it also hallucinates and gets things wrong; in underwriting, the bar for accuracy is much higher than “most of the time.”

Hoyer: And the stakes are high. We’re talking about financial protection for people, and reputation and business results for carriers. You need tight controls, auditability, and explainability. Today, that tips the scales toward deterministic rules and conventional ML for the decisions, with generative AI playing a supporting role for speed and presentation.

Underwriting has largely moved from subjective, inconsistent human decisioning to models that are repeatable and objective. If you feed the same data into the same model twice, you get the same output twice. Generative AI reintroduces that variability we’ve worked so hard to weed out.

How will generative AI change underwriting—or has it already changed it?

Becker: Before predictive models—whether regression models, tree-based models, or any of the machine learning approaches that produce a health status factor—applicants listed all their health conditions and medications, and an underwriter sat down with a cheat sheet that said how many debit points to add for each one. Well, what happened when they processed 20 of those a day? Did the underwriter reliably assign the same rate increase to every person with a similar health profile? Not necessarily. Old-school medical underwriters might not like to hear it, but intuition often played a role.
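
The cheat-sheet tallying Becker describes can be sketched in a few lines. This is a purely illustrative example; the condition names, debit values, and rate-class thresholds below are invented for the sketch, not actual underwriting guidance.

```python
# Hypothetical sketch of cheat-sheet underwriting: each condition
# carries a fixed debit, debits sum, and the total maps to a rate
# class. All values are invented for illustration only.
DEBITS = {"hypertension": 25, "type 2 diabetes": 75, "elevated bmi": 50}

def rate_class(conditions: list[str]) -> str:
    """Sum debit points for listed conditions and map to a rate class."""
    total = sum(DEBITS.get(c, 0) for c in conditions)
    if total == 0:
        return "preferred"
    if total <= 50:
        return "standard"
    return f"table rating ({total} debits)"

print(rate_class(["hypertension"]))                     # standard
print(rate_class(["hypertension", "type 2 diabetes"]))  # table rating (100 debits)
```

The point of the sketch is Becker’s: the arithmetic itself is perfectly consistent; the inconsistency crept in when humans applied the cheat sheet dozens of times a day.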

Today, many people would resist making underwriting decisions with generative AI, but for argument’s sake, is it fair to hold a model to a standard that human underwriters weren’t held to? I think some of this comes down to, what leeway do we give generative AI? If we’re going to insist that our models are completely objective, then we’re going to be limited to deterministic models for the foreseeable future.

Hoyer: Even with those deterministic models, there’s a natural dissonance between clinical underwriting and an approach that’s driven by statistics. Sometimes, the statistically driven score doesn’t align with an underwriter’s learned expectations for any number of reasons: limitations in the range of clinical research on some condition, not accounting for specific comorbidities, or not taking the management of a condition into consideration. There can be some friction when the machine and the human don’t see eye to eye. So, while AI technologies have changed underwriting already, it’s a slow-moving transformation and still very much in progress. I suspect the advent of generative AI will hurry along the adoption of these more established AI techniques.

Talk a little more about Irix and Curv. Where—and how—does AI show up in those solutions?

Becker: There are a few places. On the data side, we use NLP to normalize messy text. For example, a doctor can describe diabetes 17 different ways; we need to map them all consistently to the same concept. That’s using AI to improve data quality and throughput. But the decision layer remains deterministic or statistically grounded rather than “let the chatbot underwrite.”

Hoyer: Both our Curv and Irix risk-scoring predictive models leverage advanced machine learning to find risk patterns that would be impossible for a human underwriter to spot consistently. The result is a risk score that’s both objective and repeatable—but sometimes not “intuitive” or easily explainable.

With the Irix Rules Engine, we can get that all-important consistency without AI—and that’s on purpose. It’s deliberately deterministic and explainable. It mirrors how an underwriter follows guidelines, but with perfect consistency and traceability: “These codes flowed to this rule, therefore this outcome.” That’s not the right place to introduce stochastic, non‑auditable behavior like you get from generative AI.
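
The traceability Hoyer describes—“these codes flowed to this rule, therefore this outcome”—can be sketched as a minimal deterministic rules engine. The rule IDs, codes, and outcomes below are hypothetical, not Irix internals; the sketch only shows the shape of the idea: plain predicates over input codes, with every firing recorded in an audit trail.

```python
# Minimal sketch of a deterministic, auditable rules engine.
# Rules and codes are invented for illustration, not Irix internals.
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Rule:
    rule_id: str
    applies: Callable[[set[str]], bool]  # predicate over input codes
    outcome: str

RULES = [
    Rule("R-101", lambda codes: "E11.9" in codes, "refer: type 2 diabetes"),
    Rule("R-207", lambda codes: {"I10", "E78.5"} <= codes, "refer: comorbid HTN + lipids"),
]

def evaluate(codes: set[str]) -> tuple[list[str], list[str]]:
    """Same codes in, same outcomes out -- plus a traceable audit trail."""
    outcomes, trail = [], []
    for rule in RULES:
        if rule.applies(codes):
            outcomes.append(rule.outcome)
            trail.append(f"{sorted(codes)} matched {rule.rule_id} -> {rule.outcome}")
    return outcomes, trail

outcomes, trail = evaluate({"E11.9", "I10"})
```

Because every rule is an explicit predicate and every firing is logged, the same input always produces the same output and the “why” is reconstructible after the fact—the opposite of stochastic generative behavior.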

You’re saying that consistency is a big deal.

Becker: Absolutely. Even if a model is imperfect, it delivers consistent decisions, which in turn deliver more consistent business results. That consistency is a core value proposition versus purely human adjudication—and now versus generative AI variability as well.

Everyone’s talking about AI, but we keep kind of quiet about it, even though we’ve been using it for years. Why aren’t we putting “AI” in every headline and product name, for instance?

Niemerg: We have deep expertise in AI, but we don’t treat it as a primary value point. A lot of vendors excessively trumpet the fact that they’re using AI, as if that is a value proposition unto itself. And, indeed, C-level executives are issuing directives to adopt generative AI, for no other reason than they think they should.

It’s not that we’re keeping quiet about AI, it’s just that we’re hyper-focused on solving our clients’ problems and delivering business value. AI isn’t always the answer. We’re genuinely fascinated with this technology, but we’re even more obsessed with our clients and the ways we can give them what they really need. Our approach to innovation has always been pragmatic.

AI isn’t always the answer. We’re genuinely fascinated with this technology, but we’re even more obsessed with our clients and the ways we can give them what they really need.

AI is only as good as the data it’s trained on. The Irix and Curv risk-scoring predictive models are trained on our proprietary data. Can this give underwriters more confidence in those products?

Niemerg: Definitely. You won’t be surprised to find out that as a group, life insurance applicants have different life expectancies than people with similar demographics who haven’t sought insurance. Irix Risk Score is trained on data drawn from life insurance applicants, so for the purpose of underwriting, its mortality predictions are more accurate than similar models that are trained on death data for the population as a whole.

Becker: In a similar way, the Curv score is essentially a proxy for expected costs, so the fact that the model is trained on actual claims data is, again, going to make it more useful than a model trained on overall healthcare costs. What matters to carriers is, what does such-and-such condition cost companies like ours, not what it might cost Medicare.

Hoyer: Of course, new treatments and new drugs come out all the time, and there are even new conditions emerging that aren’t present in the training data. Our products include an element of clinical expertise that makes a huge difference.

That’s a perfect segue into the next question! A lot of clinical and actuarial expertise is also baked into our products and ultimately helps to guide the way the products are used. Would it be fair to say that our models are hybrids of artificial intelligence and natural intelligence?

Niemerg: I think that’s totally fair. The challenge you have when using machine learning for predicting human health is the crazy variety of things that can go wrong with the human. ML finds patterns at scale, but it mostly finds correlations, not causes. Clinicians bring causal reasoning and judgment, especially in rare edge cases where data is too sparse for a model to learn credibly. We also invest heavily in engineering clinically meaningful product features so that our models see structured, interpretable signals and not just raw noise.

Hoyer: Agree, the hybrid is critical. Underwriting lacks a natural real‑time feedback loop, especially with mortality, where outcomes have long lags. Medicine and health are always in flux, so sometimes data is sparse and biased. In those cases, we often adjust via rules or expert overrides first, then incorporate credible experience when it exists.

Becker: That can’t be overstated. When the world changes—when there are new diagnoses or therapies—models that are trained on yesterday’s data won’t know what to do. Human expertise is imperative. It tells the system, “Treat new therapy A like older therapy B,” or “This is novel and expensive, so adjust scoring accordingly until we have that credible experience.” That’s the human‑in‑the‑loop that keeps models current.

Hoyer: Another reason why Irix Risk Score really works is the actuarial lens that we put on after the fact. Not only are clinicians ensuring that we have overrides for rare edge cases, but actuaries are the ones taking that score and saying, “This is what it means for your company’s business.”

Just at a personal level, are you guys optimistic about the long-term impacts of AI?

Niemerg: I have a generally positive view of AI’s impact on humanity. With any technological change, there can be pockets of disruption. But I think that if you play this forward a few decades, we’re going to be happy that we have AI in our lives.

Becker: When the first cell phones appeared, I don’t think anyone anticipated that someday we’d all have a computer in our pocket, all the time. I think AI will be even more transformative, and that it will accelerate the pace of technological change. Perhaps like cell phones, the development of hardware that leverages the software will have a big influence on the way it impacts us in our day-to-day lives.

Hoyer: I’m optimistic, too. The more you understand what’s going on under the hood, the more you can understand its limitations, which makes it less threatening.

If I have a serious reservation, it’s whether future generations will learn how to think or simply rely on these tools. How will we encourage curiosity and discourage wholesale acceptance of every idea when acceptance is the path of least resistance?

Niemerg: This is a recurring pattern with new technologies. Not all of the worries and fears are misplaced, but over time as a society, we figure it out.

So, bottom line, what’s the company’s philosophy on AI and underwriting?

Hoyer: AI is a tool, not a replacement. We use the right tool for the job: deterministic rules for transparency and auditability, machine learning for superhuman pattern recognition, and generative AI to speed up some supporting functions without ceding accountability.

Becker: And we hold machine decisioning to a higher consistency standard than humans ever have been. That’s not a knock on people. It’s a recognition that consistent underwriting tends to yield better, more reliable business outcomes.

Niemerg: The hybrid of artificial intelligence with clinical and actuarial intelligence is the winning formula. It gives clients the accuracy, speed, and explainability they need and also helps us to evolve our products responsibly as medicine and markets change.

We use the right tool for the job: deterministic rules for transparency and auditability, machine learning for superhuman pattern recognition, and generative AI to speed up some supporting functions without ceding accountability.
— Mike Hoyer

The hybrid of artificial intelligence with clinical and actuarial intelligence is the winning formula. It gives clients the accuracy, speed, and explainability they need and also helps us to evolve our products responsibly as medicine and markets change.
— Michael Niemerg
