Framework

The 3-Question AI Readiness Test

A simple filter to determine whether a workflow is ready for AI automation.

Stop Guessing, Start Filtering

Most businesses approach AI adoption backwards. They pick a tool first, then look for something to do with it. The result is a solution searching for a problem, which almost always ends in disappointment and wasted budget.

The better approach is to start with your workflows and ask whether they are suitable for automation. Not every process benefits from AI, and knowing which ones do before you spend money is the most valuable filter you can apply. We use three questions to make this determination. If a workflow passes all three, it is a strong candidate. If it fails even one, you should either redesign the workflow first or look elsewhere.

Question 1: Is the Workflow Repeatable?

AI excels at tasks that follow a consistent pattern. If your team performs the same sequence of steps every time (or nearly every time), that workflow is a candidate. If every instance requires unique judgment with no predictable structure, AI will struggle.

Good examples of repeatable workflows

Invoice processing: Receive document, extract vendor name, amount, date, line items, match to purchase order, route for approval. The steps are the same whether it is the 5th invoice or the 500th.

Monthly bank reconciliation: Download bank statement, match transactions to ledger entries, flag unmatched items, generate exception report. Same process every month, same logic, different data.

Employee onboarding documentation: Collect personal details, generate offer letter from template, create system accounts, schedule orientation sessions. Predictable and sequential.
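The invoice example above can be sketched as a fixed pipeline: the same steps, in the same order, every time. The helper functions below are hypothetical stand-ins for real extraction and matching logic, not an implementation.

```python
# A repeatable workflow can be written down as a fixed sequence of steps.
# Minimal sketch of the invoice-processing example; extract_fields and
# match_purchase_order are placeholders for real parsing and lookup code.

def extract_fields(document: str) -> dict:
    # Placeholder: a real system would parse vendor, amount, date,
    # and line items out of the document here.
    return {"vendor": "Acme Ltd", "amount": 120.0, "po_number": "PO-1001"}

def match_purchase_order(fields: dict) -> bool:
    # Placeholder: look up the purchase order and compare amounts.
    return fields["po_number"].startswith("PO-")

def process_invoice(document: str) -> dict:
    fields = extract_fields(document)
    fields["po_matched"] = match_purchase_order(fields)
    # Route for approval: the same rule applies to the 5th invoice
    # and the 500th, which is what makes the workflow automatable.
    fields["route"] = "auto-approve" if fields["po_matched"] else "manual review"
    return fields
```

Because every branch of the routing rule is explicit, the workflow can be tested against historical invoices before any automation goes live.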

Weak examples

Strategic negotiations: Every deal is different, the variables change, and success depends on reading the room and adapting in real time. AI can prepare briefing notes, but it cannot run the negotiation.

Creative brand development: While AI can generate options, the process of defining a brand identity is inherently exploratory and non-linear. There is no repeatable sequence to automate.

Question 2: Is the Data Accessible?

AI needs data to work with. If the information required for the workflow lives in structured, accessible systems, you are in good shape. If it is scattered across personal email inboxes, handwritten notes, or locked in software with no export capability, you have a data access problem that must be solved before AI can help.

What accessible data looks like

Structured and digital: Data lives in spreadsheets, databases, cloud drives, or APIs. It can be extracted programmatically without someone manually copying and pasting.

Consistent formatting: Invoices follow a predictable layout. Reports use the same template. Emails have a standard structure. The AI can learn the pattern because there is a pattern to learn.

Sufficient volume: There is enough historical data to validate the AI's outputs against known correct results. Even 50 to 100 examples are usually enough for a pilot.

Red flags for data accessibility

Knowledge that lives in one person's head: If the workflow depends on tribal knowledge that has never been documented, the AI has nothing to learn from. You need to capture that knowledge first.

Data spread across disconnected systems: If completing the workflow requires logging into five different platforms and manually combining information, you may need integration work before AI adds value.

Privacy or regulatory restrictions: Some data cannot be processed by external AI tools due to compliance requirements. This does not make AI impossible, but it constrains your tooling options to on-premises or private-cloud solutions.

Question 3: Is the Decision Low-Stakes Enough for Automation?

This is the question most teams forget to ask. Even if a workflow is repeatable and the data is accessible, you need to consider what happens when the AI gets it wrong. Because it will get things wrong sometimes.

Low-stakes decisions (good for automation)

Drafting email responses: A human reviews before sending. If the AI drafts something incorrect, the reviewer catches it. The cost of an error is a few minutes of editing time.

Categorizing support tickets: If a ticket is misrouted, someone reassigns it. The delay is minor and easily correctable. No permanent damage occurs.

Generating first draft reports: The output goes through human review. Errors are caught and corrected before the report reaches its audience.

High-stakes decisions (keep humans in the loop)

Medical diagnoses: An incorrect classification could lead to a wrong treatment. AI can assist by flagging anomalies, but the decision authority must remain with a qualified professional.

Legal contract execution: Signing a contract based on AI analysis without human legal review could create binding obligations with significant financial exposure.

Hiring and firing decisions: Employment decisions have legal, ethical, and human implications that require human judgment, empathy, and accountability.

The middle ground

Many workflows fall between low and high stakes. The solution is a human approval trigger: the AI does the work, but a person reviews and approves before the output is finalized. This hybrid approach captures most of the efficiency gains while maintaining the safety net of human oversight.
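The approval trigger described above can be sketched as a simple gate: the AI produces a draft, and nothing is finalized until a person signs off. The `generate_draft` and `request_human_review` functions are hypothetical stand-ins for your AI tool and your review step, not a real API.

```python
# Sketch of a human approval trigger, assuming a hypothetical AI drafting
# step and a hypothetical reviewer step. The point is the shape of the
# flow: AI does the work, a human approves before release.

from dataclasses import dataclass

@dataclass
class Draft:
    content: str
    approved: bool = False

def generate_draft(request: str) -> Draft:
    # Placeholder for a call to your AI tooling.
    return Draft(content=f"Draft response to: {request}")

def request_human_review(draft: Draft) -> Draft:
    # Placeholder: in practice this routes the draft to a reviewer's
    # queue and waits for an explicit approve/reject decision.
    draft.approved = True
    return draft

def handle(request: str) -> str:
    draft = generate_draft(request)          # AI does the work
    reviewed = request_human_review(draft)   # human reviews before release
    if not reviewed.approved:
        raise RuntimeError("Draft rejected; a human takes over the task")
    return reviewed.content
```

The design choice that matters is that `handle` cannot return output that skipped the review step, so the efficiency gain never bypasses the safety net.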

Scoring Your Workflows

Take your top five most time-consuming workflows and run them through these three questions. For each workflow, mark yes or no against each question. Any workflow with three yes answers is a strong pilot candidate. Two yes answers means the workflow may be viable after some preparation work. One or zero means you should look elsewhere first.
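The scoring rule above is simple enough to capture in a few lines. The workflow names and answers below are illustrative placeholders; substitute your own five workflows.

```python
# Minimal sketch of the three-question scoring exercise. Each workflow
# gets a yes/no answer per question; the count of yes answers decides
# the verdict, exactly as described in the text.

QUESTIONS = ("repeatable", "data_accessible", "low_stakes")

def score(answers: dict) -> str:
    """Classify a workflow by its number of 'yes' answers."""
    yes_count = sum(1 for q in QUESTIONS if answers.get(q, False))
    if yes_count == 3:
        return "strong pilot candidate"
    if yes_count == 2:
        return "viable after preparation work"
    return "look elsewhere first"

# Illustrative inputs only.
workflows = {
    "invoice processing":    {"repeatable": True,  "data_accessible": True,  "low_stakes": True},
    "strategic negotiation": {"repeatable": False, "data_accessible": True,  "low_stakes": False},
}

for name, answers in workflows.items():
    print(f"{name}: {score(answers)}")
```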

This exercise takes about 30 minutes and can save you months of misdirected effort. We run this assessment as part of every readiness engagement, but you can start it yourself today with nothing more than a whiteboard and honest answers.

Want a structured readiness assessment?
Our AI readiness sprint applies this framework across your entire operation and produces a prioritized shortlist of pilot candidates with ROI estimates.