Tags: AI for small business finance, SME CFO, AI finance, finance automation, small business CFO

You Don't Need an ML Team to Run AI Finance

Jamie Saveall

Estimated reading time: 6 min

The typical "AI in finance" article published in the last two years was written for a reader with 400 people in finance, a dedicated data science function, and a six-figure internal tooling budget. If you run an €8M business with a financial controller and a part-time bookkeeper, that content is aspirational at best and actively misleading at worst.

The gap between what's being written about AI finance and what an SME CFO can actually deploy next Monday is now enormous. The enterprise narrative talks about fine-tuning LLMs on ten years of GL data, building internal copilots on top of Databricks, and governance committees with six workstreams. None of that applies below €50M revenue.

The good news is that none of it needs to. The AI capability that moves the needle for a small-company finance team doesn't require an ML team. It requires taste, a clear view of what to automate, and the discipline to not automate the wrong things.

Here's what that looks like in practice.

Why the enterprise narrative misleads SMEs

Read five "AI for CFOs" pieces in a row and you'll notice a pattern. The case studies are Unilever, DBS Bank, Maersk. The vendors quoted have £2M minimum deal sizes. The implementation timelines are six quarters. The assumed team structure includes a head of data, an MLOps lead, and a risk function with its own analytics team.

Then the article ends with a recommendation that "finance leaders should start by defining their AI strategy" — advice that costs nothing to give and a fortune to act on.

For an SME CFO, this content does two kinds of damage. It makes AI feel inaccessible, so you shelve the conversation. Or it makes you buy something oversized, so you burn €80K on a platform only three people in your team can use. Both outcomes end the same way: you sitting in a room with your FD in nine months, agreeing that "AI didn't really land for us".

The mistake is treating "AI in finance" as one thing. At Fortune 500 scale, it often does mean custom models, proprietary data pipelines, and ML engineers. At €1M–€50M revenue it means something entirely different. You use general-purpose AI tools that somebody else has already built, on top of data you already have, for problems that are already well-defined.

You don't need to build a plane.

You need a commercial flight.

What you can actually do today with zero ML engineering

The shift that matters for SMEs isn't that AI got better. It's that the plumbing got commoditised. Three years ago, making an LLM read a 90-page management accounts pack and produce commentary needed a team. Today it needs an API key and a prompt.

The five use cases below work at SME scale right now, with no custom model training and without anyone on staff who can spell "transformer".

**1. Commentary on management accounts.** A well-structured LLM prompt, fed the P&L and variance commentary from the prior month, produces a draft board narrative in under a minute. You rewrite 30% of it. You save four hours. Tools: ChatGPT Team, Claude for Business, or a vertical platform that does the plumbing. Cost: €25–€500 a month depending on route.
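The prompt-assembly step is the whole "build". A minimal sketch, assuming the P&L export is a CSV with illustrative columns (`line_item`, `actual`, `budget`) — the returned string goes to whichever business-tier LLM API you've chosen:

```python
# Sketch of prompt assembly for use case 1. Column names are illustrative;
# the output string is what you send to your chosen LLM's API.
import csv
import io

def build_commentary_prompt(pnl_csv: str, prior_commentary: str) -> str:
    rows = list(csv.DictReader(io.StringIO(pnl_csv)))
    lines = []
    for r in rows:
        actual = float(r["actual"])
        budget = float(r["budget"])
        variance = actual - budget
        lines.append(
            f"{r['line_item']}: actual {actual:,.0f}, "
            f"budget {budget:,.0f}, variance {variance:+,.0f}"
        )
    return (
        "You are drafting board commentary for a monthly management accounts pack.\n"
        "P&L vs budget:\n" + "\n".join(lines) + "\n\n"
        "Last month's commentary, for tone and continuity:\n" + prior_commentary + "\n\n"
        "Draft a one-page narrative. Flag any variance over 10% of budget. "
        "State assumptions explicitly; do not invent numbers."
    )

pnl = (
    "line_item,actual,budget\n"
    "Revenue,712000,680000\n"
    "Payroll,-310000,-295000\n"
)
prompt = build_commentary_prompt(pnl, "Revenue ahead of plan on two enterprise renewals.")
```

The deterministic part (variance arithmetic) stays in code; only the narration is delegated.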

**2. Cashflow anomaly detection.** You don't need ML for this. A weekly export of bank transactions, a prompt asking "what looks unusual compared to the last six months?", and you catch the duplicate supplier payment before it ages into a write-off. Nothing fancy. Just useful.
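The most common catch doesn't even need the prompt. A deterministic pre-screen, sketched here with illustrative field names, flags same-payee-same-amount pairs inside a short window before the export goes anywhere:

```python
# Deterministic pre-screen for use case 2: flag possible duplicate supplier
# payments (same payee, same amount, close together in time). Field names
# and the 7-day window are illustrative.
from datetime import date
from itertools import combinations

def flag_duplicates(transactions, window_days=7):
    """transactions: list of (date, payee, amount). Returns suspect pairs."""
    suspects = []
    for a, b in combinations(sorted(transactions), 2):
        same_payment = a[1] == b[1] and a[2] == b[2]
        close_in_time = abs((a[0] - b[0]).days) <= window_days
        if same_payment and close_in_time:
            suspects.append((a, b))
    return suspects

txns = [
    (date(2024, 3, 1), "Acme Ltd", 4200.00),
    (date(2024, 3, 4), "Acme Ltd", 4200.00),   # likely duplicate
    (date(2024, 3, 2), "Rent", 6500.00),
]
dupes = flag_duplicates(txns)
```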

**3. Document extraction.** Invoices, bank statements, purchase orders. The OCR-plus-LLM stack now handles around 95% of SME document volume at roughly 10 cents per page. The twenty hours a week your AP clerk spends keying data turns into a weekly review queue.
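The "review queue" is one conditional. Extraction tools typically return a confidence score per field; a sketch of the routing, with a hypothetical threshold and field names (not any vendor's actual API):

```python
# Sketch of review-queue routing for use case 3. Assumes your extraction
# tool returns (value, confidence) per field with confidence in [0, 1];
# the 0.9 threshold and field names are illustrative.
def route_invoice(extracted: dict, threshold: float = 0.9) -> str:
    """Auto-post only when every field clears the confidence threshold."""
    low = [field for field, (value, conf) in extracted.items() if conf < threshold]
    return "auto_post" if not low else f"review: {', '.join(low)}"

invoice = {
    "supplier": ("Acme Ltd", 0.99),
    "total": ("4200.00", 0.97),
    "iban": ("DE89...", 0.62),   # smudged scan: send to a human
}
decision = route_invoice(invoice)
```

The clerk's twenty hours become a queue of whatever fell below the line.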

**4. Customer credit triage.** Pull the last 12 months of AR aging, feed it through an LLM with a prompt that flags customers whose payment behaviour has drifted, and you get a collections list with context attached. Not a credit model. A better list.
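"Payment behaviour has drifted" can be computed before the LLM ever sees the data. A sketch assuming you can derive average days-to-pay per customer per month from the AR ledger; the numbers and the 10-day threshold are illustrative:

```python
# Sketch of the drift flag for use case 4. Input is one customer's monthly
# average days-to-pay, oldest first; threshold is illustrative.
def payment_drift(days_to_pay_by_month, recent=3, threshold=10):
    """Flag the customer if the recent average drifts past the baseline."""
    baseline = days_to_pay_by_month[:-recent]
    latest = days_to_pay_by_month[-recent:]
    drift = sum(latest) / recent - sum(baseline) / len(baseline)
    return drift > threshold, round(drift, 1)

# Steady around 30 days for nine months, then a sudden slide.
flagged, drift = payment_drift([28, 30, 29, 31, 30, 28, 29, 30, 31, 44, 47, 52])
```

The LLM's job is then the context: pairing the flagged names with notes, disputes, and a suggested chasing order.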

**5. Forecasting narrative.** The numbers still come from your model. The AI writes the explanation. Why the Q2 revenue forecast dropped, what the assumptions were, what's sensitive to change. The FD reviews it rather than writing it from scratch.

In every one of these cases the AI is doing the expensive human task — writing, pattern-spotting, extracting — on top of data the finance team already owns. No custom model. No data science hire.

Three things not to attempt yet

The enterprise content will push you toward things that sound ambitious and deliver nothing for an SME.

**Don't try to build a custom forecasting model.** If you have one finance person and an outsourced accountant, you do not have the data volume, the feature engineering capacity, or the validation discipline to do this better than a spreadsheet plus judgement. The ROI is negative. The maintenance cost is worse.

**Don't fine-tune an LLM on your data.** Fine-tuning is a specific technical process most SMEs conflate with "pointing an LLM at my files". The second is easy and works. The first is expensive, rarely outperforms a good prompt, and needs an engineer to maintain. Skip it.

**Don't let AI make decisions.** Commentary, drafting, extraction, flagging: fine. Approving payments, setting credit limits, signing off accruals: not yet, and possibly not ever. If the model is wrong, you need a person in the loop who can catch it. That's not a limitation of current AI. It's a governance principle.

The governance layer you can't skip

Size doesn't excuse you from this part. Whether you run a €3M business or €3B, three controls are non-negotiable.

First, don't send your general ledger to the consumer version of anything. Use the business or enterprise tier of whichever tool you choose, where your data isn't used for training and the vendor has signed an actual data processing agreement. That's an extra €20 per seat per month, and it buys you your job.

Second, log what the AI is doing. If your AP clerk uses an LLM to code invoices, you want an audit trail. If it writes variance commentary, the draft and the final version should both be recoverable. Small firms skip this because it feels bureaucratic. It isn't. It's the difference between explaining a mistake and hiding it.
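The audit trail can be as light as an append-only log. A sketch of the second control, keeping draft and final recoverable side by side; the file path and field names are illustrative:

```python
# Sketch of the audit trail: one JSON line per AI-assisted output, so the
# draft, the final version, and the reviewer are all recoverable later.
# Path and field names are illustrative.
import json
from datetime import datetime, timezone

def log_ai_output(path, tool, task, draft, final, reviewed_by):
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "task": task,
        "draft": draft,
        "final": final,
        "reviewed_by": reviewed_by,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

rec = log_ai_output(
    "ai_audit.jsonl", "llm-commentary", "march-variance",
    "Revenue up 5% on renewals...", "Revenue up 4.7% on two renewals...",
    "cfo",
)
```

A flat file like this costs nothing and answers "who approved this and what did the AI actually say" a year later.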

Third, name one person who owns AI use in finance. Not a committee. Usually the CFO. Every tool gets approved. Every output gets checked. Every prompt that touches customer data gets reviewed.

Those three controls cost almost nothing and protect everything.

The real question

The honest version of the question most SME CFOs are asking themselves isn't "can we do AI?" It's "can we afford not to, when the competitor down the road is shaving fifteen hours a week off the close and we're still printing packs on Thursday night?"

The answer is no. And the entry cost is lower than you think, because the hard part was never the AI. The hard part is deciding which problems are worth solving first, and then having the discipline to leave the rest alone.

That's a finance decision, not an engineering one. Which is why you don't need an ML team. You already have the team you need.

A note on how Stratavor thinks about this

We built Stratavor on what we call the final-mile principle. The heavy lifting (KPI computation, trend analysis, peer benchmarking) runs deterministically, in code, the way your auditor expects. The AI sits on top and narrates. No models for you to train, no engineers for you to hire, no custom integration on your side. Start a trial and your finance team gets the hours back inside the first close.