Six Months Later: AI Exposed Us — and That's a Good Thing

The tools work. But they don't fix bad data, broken processes, or the temptation to trust something just because it sounds right.

By Ami Kassar

Here’s the hard truth about AI: the more convincingly it presents information, the more critical it becomes to question what’s beneath the surface.

In the past six months, we’ve had AI produce things that were fast, polished, and completely wrong. The problem wasn’t obvious. It looked right. It sounded right. We almost didn’t catch it. And that’s the real risk — not that AI gives you garbage, but that it gives you garbage wrapped in a bow. You think you’re saving time. In reality, you may just be making mistakes more efficiently. This realization led us to reassess how we approach adoption.

About six months ago, I wrote in this space that AI had already transformed our business, and that wasn’t hype. We were using tools like ChatGPT, Canva, and Zoom AI to move faster, communicate better, and get more done without adding headcount. I ended that column by saying, “Now it gets more complicated.” I was right. That complexity came from issues far beyond just technology.

Garbage in, garbage out — now with better graphics.

We’re building tools to analyze patterns in our business: where leads come from, what’s converting, what’s working. AI is very good at this. Point it at a field like “lead source,” and it will find patterns instantly and hand you answers that look smart.

But it doesn’t question the data. If the inputs are inconsistent — “referral,” “Referral,” “ref,” or just blank — it processes them anyway and gives you a clean answer. Often, that answer is garbage. Not because the AI is bad. Because the data is bad. AI just makes bad data look better than it really is.
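To make that concrete: before pointing any AI at a field like "lead source," it's worth normalizing the values first. A rough sketch of what that looks like (the sample values and the alias table here are hypothetical, not our actual CRM data):

```python
from collections import Counter

# Hypothetical, messy "lead source" entries from a CRM export
raw_sources = ["referral", "Referral", "ref", "", "Web", "web ", "REF", None]

def normalize(value):
    """Map messy free-text entries onto a small set of canonical labels."""
    if not value or not value.strip():
        return "unknown"  # blanks are data, too; don't silently drop them
    cleaned = value.strip().lower()
    aliases = {"ref": "referral"}  # hypothetical alias table; grows as you audit
    return aliases.get(cleaned, cleaned)

counts = Counter(normalize(s) for s in raw_sources)
print(counts)  # Counter({'referral': 4, 'unknown': 2, 'web': 2})
```

Eight "different" values collapse into three real categories, and the blanks become visible as their own bucket instead of quietly distorting the totals. The point isn't the code; it's that someone has to decide what the canonical labels are, which is a business question.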

Imagine a beautiful, professional-looking report showing exactly where your business is coming from. Compelling charts. Confident conclusions. Now imagine your reps haven't been entering the source field consistently for two years. That's the data behind the report you're using to make decisions.

After confronting these issues, one thing became clear: the real work isn’t technical. It’s operational.

This surprised me the most. Going in, I thought this would mostly be about tools. It’s not. It’s about how you operate.

We’ve spent more time than expected stepping back and asking basic questions. What do we actually mean by a “lead source”? How should data be entered? What should be standardized? Who owns making sure it happens? Those are business questions, not AI questions. If you don’t answer them well, AI just exposes the problem. It doesn’t solve it.

We also learned from mistakes we almost made. 

At one point, we nearly went down the custom-development path. A firm quoted us $35,000 and six months to build a proprietary AI system in what they called a secure environment. We were close to signing.

Then we realized that for a reasonable monthly subscription, we could get a secure environment inside Claude or ChatGPT and do much of what they were proposing ourselves — and pivot as the tools changed, which seems to happen every week.

Part of what stopped us was honest self-assessment: We were still learning. We didn’t know enough about what we actually needed to build the right thing. And the platforms have moved so fast that whatever we commissioned six months ago would already be dated. We would have spent real money building something obsolete. Instead, we’ve gotten a lot done using tools that keep improving faster than we can keep up with. Right now, learning matters more than building.

Stepping back, I don’t know where we stand. And that’s okay.

I’ll admit something else: I don’t really know how far along we are. I don’t know how we compare to my entrepreneur friends who are approaching these challenges in all kinds of ways. Some days it feels like we’re ahead. Some days it feels like we’re behind. I do take some comfort in watching regulated banks struggle to adopt AI under their compliance constraints — that’s a structural advantage for smaller, nimbler firms like ours. And I think it will matter more over time.

What I’ve realized is that the uncertainty itself is okay. Everyone is figuring this out in real time. So we’ve stopped trying to measure our position and just keep moving — which, in a market changing this fast, might be the only sensible strategy.

None of this has slowed us down. In fact, if anything, I’m more convinced than ever that AI will make us better. But I’m more realistic about what it takes. AI is not a golden ticket. It won’t fix bad data. It won’t clean up broken processes. If your foundation is weak, AI just builds faster on top of the cracks. If your foundation is solid, it becomes a powerful amplifier. This understanding shapes our current priorities.

All of which has convinced me of another key lesson. As we evolve, assigning ownership becomes crucial. To truly benefit from AI, you can't treat it as a side project. Someone has to take clear ownership. Right now, that someone is me, as CEO. Long term, this is not sustainable, and when it's clear how to find the right person for the seat, I'll grab them in a heartbeat.

Ami Kassar is CEO of MultiFunding.
