Free Assessment

AI Readiness Checklist

Assess your organisation across 7 critical dimensions. Answer honestly — this is for your eyes only. At the end you will receive a scored maturity result with guidance on where to focus first.

7 dimensions · 35 questions · 15 minutes to complete

About this assessment

What the AI Readiness Checklist measures

The Lumii AI Readiness Checklist is designed to give business leaders an honest, evidence-based view of their organisation's readiness to adopt and benefit from artificial intelligence. It assesses seven dimensions that our advisory practice has identified as the critical determinants of AI programme success — spanning strategy, infrastructure, people, and governance.

Each dimension contains five questions rated on a four-point scale. Scores are totalled across all 35 questions to produce a maturity band: AI Unaware, AI Aware, AI Active, or AI Leader. Each band comes with a description of your current position and a set of prioritised actions to move forward.

01

Commercial Clarity

Leaders care about return, risk, and accountability — not technology. This dimension assesses whether AI is tied to specific commercial outcomes your leadership team can quantify and own.

If this is weak, you don’t have an AI strategy — you have experiments.

02

Use Case Discipline

AI programmes fail most often because teams pursue too many use cases with no sequencing. The businesses that succeed pick the few high-leverage opportunities, attach numbers to them, and execute one at a time.

If this is weak, you’ll stay in exploration. You won’t reach impact.

03

Execution Ownership

Programmes without a single accountable owner stall at pilot stage. Ownership is the single biggest predictor of whether AI moves from experiment to embedded capability.

If this is weak, you will stall at pilot stage.

04

Data & Systems

AI is only as good as the data that powers it. Mid-market businesses with siloed, inconsistent, or inaccessible data consistently underperform — regardless of the tools they invest in.

If this is weak, AI will produce output, not impact.

05

Technology & Tools

The right technology foundation is not the most advanced stack — it is one that is integrated, cloud-capable, and maintainable. Without it, AI tools either fail to deploy or fail to scale.

If this is weak, AI tools will get stuck in pilot, not move into production.

06

Capability & Skills

Technology is the easy part. People are where AI programmes succeed or fail. Capability is not the same as buying training — it is whether AI changes how work gets done, supported by ongoing learning and structured rollout.

If this is weak, you’re renting intelligence, not building it.

07

Risk & Governance

AI risk is not just a compliance question — it is a business-continuity question. The biggest governance issue in the mid-market today is shadow AI: employees using tools and inputting data without anyone knowing what is happening or where the data goes.

If this is weak, risk will slow your adoption more than regulation will.

01

Commercial Clarity


01. We have defined the specific business outcomes (revenue, cost, time saved, error reduction) we expect AI to deliver in the next 12–24 months.

02. We can quantify a conservative 12-month commercial impact for at least one AI use case.

03. AI is tied to existing strategic priorities, not run as a side project.

04. Our leadership team has agreed on the level of risk we are willing to take with AI deployment.

05. We have allocated a dedicated budget for AI initiatives in the next 12 months.

02

Use Case Discipline


01. We have mapped the business processes that are most repetitive, time-consuming, or error-prone.

02. We have a prioritised list of 3–5 high-impact AI use cases with estimated business value attached to each.

03. Each AI use case is tied to a specific team and measurable metric (time saved, revenue, error rate).

04. We have documented standard operating procedures that could be used to configure or train AI tools.

05. We have run at least one AI pilot with defined success criteria and measured results.

03

Execution Ownership


01. We have a single, named accountable owner for AI outcomes — not a committee.

02. We have active executive sponsorship for AI adoption at board or C-suite level.

03. Our teams know what will change in their workflows over the next 30–60 days.

04. AI is embedded in how work gets done, not just which tools sit on the desktop.

05. Our leadership actively communicates AI goals and progress to the wider team.

04

Data & Systems


01. Our core business data is centrally stored and accessible — not siloed across spreadsheets and legacy systems.

02. We have a single, unified view of our customer or operational data — not multiple disconnected versions across systems.

03. We have documented data governance policies covering quality, ownership, and privacy.

04. Our data is regularly cleaned, labelled, and structured in a consistent format.

05. We have a clear understanding of which data we can and cannot use to train or feed AI systems.

05

Technology & Tools


01. Our current tech stack includes cloud infrastructure (AWS, Azure, or Google Cloud).

02. We have evaluated and deployed at least one AI tool with measurable adoption beyond casual experimentation.

03. Our key systems are integrated via APIs rather than requiring manual data transfer between platforms.

04. We have a structured process for evaluating and onboarding new technology.

05. Our IT team or external partner is capable of supporting AI tool deployment and maintenance.

06

Capability & Skills


01. We have access to AI or data expertise — internal or external — that we can call on for advice and implementation.

02. AI literacy training is part of our regular learning programme, not a one-off event.

03. Our employees understand how AI can assist their specific roles and day-to-day work.

04. We have involved frontline staff in identifying AI use cases and testing solutions.

05. We have a structured approach to rolling out new tools and processes — training, communication, ongoing support — not just announcing and hoping for the best.

07

Risk & Governance


01. Our organisation has an AI ethics policy or guidelines for responsible use.

02. We have clear policies on which AI tools employees are permitted to use, and what company or customer data they can input into them.

03. We have processes to review AI outputs for bias, accuracy, and fairness before acting on them.

04. We have a risk register that includes AI-specific risks such as hallucinations, data misuse, or reputational harm.

05. We have a process to explain AI-driven decisions to customers or regulators if required.
