AI Is Amazing—And That’s a Problem for Benefits Administrators

Apr 04, 2025 | Posted by Elevate

How Elevate is staying ahead of AI-generated receipt fraud

There’s no denying it: the pace of AI innovation is staggering. The latest update to OpenAI’s ChatGPT, as covered in a recent TechCrunch article, shows just how far generative image tools have come. With a simple prompt, ChatGPT can now generate remarkably convincing receipts—down to the logos, the formatting, and even the texture of crinkled paper.

It's impressive. It’s also a little unnerving.

For industries like consumer-directed benefits—where participants routinely submit receipts to verify expenses for FSAs, HRAs, and dependent care accounts—this innovation introduces a new challenge. It’s not just that participants might attempt to pass off fake documentation; fraudsters can also submit claims using accounts they’ve taken over. Administrators are responsible for ensuring each claim complies with IRS requirements, which often means reviewing receipts that vary widely in format and content. The rise of AI-generated fakes makes that task exponentially harder.

At Elevate, we’ve been preparing for this.

We know that validating documentation can’t be a manual, trust-based process anymore. That’s why we’ve built a fraud-detection engine into our platform—one that doesn’t just look at the receipt, but at the context surrounding it.

Here’s how it works:

  1. We score every submission.
    Each claim is evaluated based on a number of behavioral and account-related signals:
    • Has the user recently changed their banking information?
    • Was their password updated in the last few days?
    • Are they submitting higher-than-usual claims, or at unusual times?

  2. If the submission triggers a high enough risk score, we escalate.
    When something doesn’t feel quite right, the system takes a deeper look—this time, directly at the receipt.

  3. We prompt an AI-driven forensic review.
    Our model analyzes receipts using a detailed, carefully engineered prompt. It’s designed to account for common variations and to look for inconsistencies, missing or incomplete information, formatting that doesn’t match typical norms, and other subtle indicators that a receipt may be fraudulent.

  4. If the receipt fails the forensic sniff test, we bring in our experts.
    The claim is automatically flagged, and an alert is sent to the appropriate operations team for manual review and next steps.
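
To make the layered triage concrete, here is a minimal sketch of how behavioral scoring and escalation could fit together. Everything in it is illustrative: the `Claim` fields, signal weights, and escalation threshold are hypothetical examples, not Elevate’s actual model or rules.

```python
from dataclasses import dataclass


@dataclass
class Claim:
    """Hypothetical claim record with the behavioral signals described above."""
    amount: float
    typical_amount: float            # participant's historical average claim
    hour_submitted: int              # 0-23, local time
    bank_info_changed_days_ago: int
    password_changed_days_ago: int


def risk_score(claim: Claim) -> int:
    """Sum simple behavioral signals into a risk score (0 = nothing unusual).

    Weights are illustrative, not calibrated values.
    """
    score = 0
    if claim.bank_info_changed_days_ago <= 7:
        score += 3  # recent banking change is a strong account-takeover signal
    if claim.password_changed_days_ago <= 3:
        score += 2  # freshly changed password shortly before a claim
    if claim.typical_amount > 0 and claim.amount > 2 * claim.typical_amount:
        score += 2  # higher-than-usual claim amount
    if claim.hour_submitted < 6 or claim.hour_submitted >= 23:
        score += 1  # unusual submission time
    return score


ESCALATION_THRESHOLD = 4  # hypothetical cutoff for step 2


def triage(claim: Claim) -> str:
    """Route a claim: auto-process it, or escalate to the forensic receipt review."""
    if risk_score(claim) >= ESCALATION_THRESHOLD:
        return "forensic_review"  # step 3: the receipt itself gets a deeper look
    return "auto_process"


routine = Claim(amount=45.0, typical_amount=50.0, hour_submitted=14,
                bank_info_changed_days_ago=400, password_changed_days_ago=90)
suspicious = Claim(amount=600.0, typical_amount=50.0, hour_submitted=3,
                   bank_info_changed_days_ago=2, password_changed_days_ago=1)

print(triage(routine))     # no flags fire, so the claim processes normally
print(triage(suspicious))  # several flags fire, so the receipt is escalated
```

The point of the sketch is the shape of the flow: cheap behavioral checks run on every submission, and only claims that accumulate enough risk trigger the more expensive forensic review and, if that fails, human escalation.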

This layered approach means we’re not relying on a single source of truth. We’re looking at the whole picture—from user behavior to document forensics—and taking action when something feels off.

The rise of generative AI is pushing every industry to adapt. At Elevate, we see it as an opportunity—not just to build smarter tools, but to protect our clients and their employees with confidence and integrity.

Want to learn more about how we’re using AI to fight fraud? Let’s talk.
