Fornida's internal purchasing review used to take one or two days. The buyers would pull inventory, check spreadsheets, deduce run rates, weigh which products actually made money, and then cut the order. Most of that was manual sorting work, not actual judgment.
After we rebuilt the workflow, the same review takes about 15 minutes. The interesting part is not the time saved. It is the design choice underneath it. The system does not place the order. It hands the buyer a sorted list with a color flag on every line, and the human still makes the call.
The red, orange, green pattern
Each purchase candidate gets one of three signals.
- Green means buy this now. High-runner, clear ROI, no ambiguity.
- Orange means maybe. The numbers are mixed and a person should look at it.
- Red means do not buy this. The signal does not justify the spend.
The point of the colors is not to tell the buyer what to do. It is to compress where their attention goes. Greens get fast-tracked. Reds get skipped. The buyer's judgment ends up concentrated on the orange rows, the ones that genuinely need a person.
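The triage described above can be sketched as a small rule over each purchase candidate. The threshold names and values below are hypothetical illustrations, not Fornida's actual rules:

```python
from dataclasses import dataclass

# Hypothetical thresholds -- the real ROI and run-rate rules are tuned per business.
ROI_BUY = 0.30        # projected ROI above which a line is an obvious buy
ROI_SKIP = 0.05       # ROI below which the spend is not justified
MIN_WEEKLY_RUN = 10   # units/week that qualifies as a "high runner"

@dataclass
class Candidate:
    sku: str
    roi: float          # projected return on the order, as a fraction
    weekly_run: float   # average units sold per week over the run-rate window

def flag(c: Candidate) -> str:
    """Compress each purchase candidate into one of three attention signals."""
    if c.roi >= ROI_BUY and c.weekly_run >= MIN_WEEKLY_RUN:
        return "green"   # buy now: high runner, clear ROI, no ambiguity
    if c.roi < ROI_SKIP:
        return "red"     # skip: the signal does not justify the spend
    return "orange"      # mixed numbers: a person should look at this row

# Sort so greens surface first, oranges next, reds sink to the bottom.
ORDER = {"green": 0, "orange": 1, "red": 2}

def flagged_list(candidates):
    return sorted(candidates, key=lambda c: (ORDER[flag(c)], -c.roi))
```

The sort order is the point: the buyer never has to hunt for the rows that matter, because the list itself encodes where attention should go.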
Now we've got that workflow down to 15 minutes. It spits out an answer. They still have to look at it and still make a judgment call. But it gives them red, you don't need to buy this. Orange, you might want to. Green, you have to buy these, this is a no-brainer.
— Farzad Vahid, Founder, Fornida
That is decision support, not decision automation. The two phrases look similar and behave very differently in production.
Why purchasing is not a good fit for full automation
Some workflows reward turning the human all the way off. Purchasing is not one of them. The consequences of a wrong call are real and asymmetric.
- A bad order ties up cash that the business needs for something else.
- A bad order strains a vendor relationship that took years to build.
- A bad order creates inventory the business has to sit on or sell at a loss.
- A bad order is also the easiest vehicle for fraud and policy abuse.
A purchasing system that fully replaces the human has to be right about all of that, every time, with no recourse. A purchasing system that gives the human a sorted list and a color signal only has to be right about the easy ones. The hard ones still get a person.
That is a deliberate trade-off, not a limitation of the model. SMB owners who push for full automation in this kind of workflow usually end up with something they do not trust, which means they end up reviewing every line anyway. The shorter path is to build a system the buyer trusts to handle the obvious stuff and trusts to flag the ambiguous stuff.
What changes for the human in the loop
When the workflow ships, the buyer's day reshapes around it.
Before:
- Pull inventory.
- Cross-check against historical run rates.
- Sort by margin and velocity.
- Decide what to order.
- Cut the order.
After:
- Open the flagged list.
- Skim greens, approve.
- Look at oranges, decide.
- Skip reds.
- Cut the order.
The grind goes away. The judgment stays. The buyer also gets time back, which is the part that compounds. Once the purchasing review stops eating two days a week, that person notices the next workflow that is bleeding hours, and that is usually where the next automation comes from.
Each automation gives the team capacity to find the next one. Purchasing freed up enough time that we started pulling at other workflows we had been ignoring.
— Farzad Vahid, Founder, Fornida
Where this fits in a broader automation program
Purchasing is one of three Fornida-internal workflows that followed the same iterative shape. Accounting reconciliation moved from four or five days at month-end to a few hours. Commission reporting moved from two or three days to minutes. Purchasing moved from one or two days to 15 minutes. The pattern across all three is the same: take a workflow that already hurts, automate the structured parts, leave humans in charge of the parts that actually require judgment.
A few things make this kind of automation work in practice.
- The data has to be in one place the system can read. If purchasing data lives in three spreadsheets and two laptops, the automation has nothing stable to build on. Background here: data cleanup before AI.
- The first version is not the final version. Color thresholds, ROI rules, and run-rate windows all need tuning. The first run will get some calls wrong. The curve from week one to week six is the actual product.
- The workflow gets governed like any other piece of company infrastructure. Approved tools, role-based access, and visibility into what the system can see.
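The tuning point can be made concrete. If the thresholds live in a reviewable config rather than buried in code, each week's buyer decisions become data for the next adjustment: every time a buyer overrides a firm green or red call, that is a signal the thresholds need work. A minimal sketch, with hypothetical names and values:

```python
# Hypothetical starting config -- every value here gets tuned after launch.
DEFAULT_RULES = {
    "roi_buy": 0.30,       # green above this
    "roi_skip": 0.05,      # red below this
    "run_rate_weeks": 12,  # window used to compute run rates
}

def override_rate(flags, buyer_decisions):
    """Fraction of firm calls (buy/skip) the buyer overrode.

    flags and buyer_decisions are parallel lists of 'buy', 'skip', or
    'review'. 'review' rows (the oranges) are excluded: disagreement
    there is the system working as designed, not an error.
    """
    firm = [(f, d) for f, d in zip(flags, buyer_decisions) if f != "review"]
    if not firm:
        return 0.0
    return sum(1 for f, d in firm if f != d) / len(firm)
```

Tracking this number week over week is one way to watch the curve from week one to week six: when the override rate on greens and reds stops falling, the thresholds have converged for the current business conditions.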
If you are picking a first project, purchasing is a strong candidate when the buying team already works from exports and run-rate logic. If you are still deciding, the broader frame is here: how to choose your first AI workflow and workflow automation for small business. The companion piece on a sister workflow is here: commission automation for small business. Pillar context: AI for small business automation.
A short note on what not to do
A few patterns that look reasonable on a slide and fail in practice.
- Skipping the data step. If the source data is messy, the color flags will be wrong, and the buyers will stop trusting the system within a week.
- Removing the human from the loop too early. Even when the system is accurate, vendor relationships and budget calls deserve a person.
- Treating the rollout as a one-shot deliverable. The reason this kind of automation works at all is that somebody is tuning it as the business changes.
Start with the workflow that already hurts
If your buyers are spending one or two days a week sorting through inventory, run rates, and Excel exports, that is a workflow worth automating. The goal is not a system that buys for you. It is a system that hands a person a clean, flagged list so the actual decision happens faster and with better information.
Talk to Fornida if you want help scoping a purchasing automation that leaves your buyers in charge.