1. What is an AI readiness assessment?
What does the term mean, precisely?
An AI readiness assessment is a structured evaluation that measures whether an organization has the prerequisites to successfully adopt, deploy, and scale artificial intelligence. It examines six dimensions: the quality and accessibility of your data, the skills and cultural openness of your team, your existing technology infrastructure, the maturity of your core processes, the clarity of your AI strategy, and any prior experience your organization has with AI tools.
The result is a scored profile, not a pass/fail verdict. Most frameworks produce a score per dimension and an overall score on a 0-to-100 scale. That profile tells you where you are strong enough to move immediately and where foundational gaps would cause an AI initiative to fail or underperform.
What is its purpose?
The purpose is directional clarity before investment. Organizations that skip the assessment and move directly to AI tool procurement routinely discover -- 6 to 18 months later -- that the tool could not work because the data was fragmented, the team resisted the workflow change, or leadership had conflicting definitions of what success meant. A 2025 McKinsey survey found that 68% of organizations that reported failed AI pilots cited data readiness or organizational alignment as the primary cause, not the technology itself.
An assessment surfaces those problems in 3 minutes, not 18 months.
Who needs one?
Any organization considering a meaningful AI investment needs one. That includes a 20-person professional services firm evaluating whether to use AI for proposal drafting, a 200-person SaaS company evaluating predictive churn models, a 1,000-person manufacturer considering AI-driven quality inspection, and a 5,000-person financial institution planning to deploy AI agents across customer service. The scale differs; the diagnostic logic is the same.
Organizations that have already deployed AI also benefit from periodic reassessment. AI readiness is not a static state. A company that scored 35 eighteen months ago and has since hired a data engineer, centralized its CRM data, and completed one successful AI pilot could now score 60 -- and that change unlocks a completely different set of strategic options.
"68% of failed AI pilots trace back to data readiness or organizational alignment -- not the technology."
2. Why AI readiness matters in 2026
What is the market context?
Global enterprise AI spending reached $235 billion in 2025 and is on track to exceed $320 billion by end of 2026, according to IDC. More relevant than the total: the distribution is highly unequal. Organizations that entered 2025 with a score above 60 on data infrastructure and strategy are capturing disproportionate productivity gains. Organizations that entered without that foundation are spending on tools that sit underused.
The gap between high-readiness and low-readiness organizations is widening, not narrowing. Early AI adopters are reinvesting efficiency gains into more AI capability, compounding their advantage. The window for a late-mover strategy is narrower in 2026 than it was in 2024.
What is the cost of skipping the assessment?
The direct cost is wasted budget. A mid-size company spending $200,000 on an AI tool that cannot be integrated because its data is siloed in three incompatible systems has lost that $200,000 plus 6 months of internal implementation effort. The indirect cost is organizational skepticism: a failed AI project makes the next initiative harder to fund and harder to staff, because the team has a concrete example of failure to cite.
A readiness assessment costs anywhere from nothing to $29. The ROI on avoiding a single failed pilot is measured in hundreds of thousands of dollars.
What 2026-specific factors make readiness more critical now?
Three structural changes make 2026 different from prior years. First, AI agents -- systems that take autonomous multi-step actions, not just generate text -- are now being deployed in production environments. Agentic AI has significantly higher requirements for data quality, process clarity, and governance than earlier generative AI tools. An organization that was adequately ready for a ChatGPT integration in 2023 is likely not adequately ready for an agent that autonomously manages procurement workflows.
Second, the EU AI Act came into full effect for high-risk use cases in 2026, and analogous frameworks are active or imminent in Canada, the UK, Brazil, and Singapore. Regulatory compliance is now a dimension of AI readiness that has legal consequences, not just reputational ones.
Third, AI vendor consolidation has accelerated. The dominant platforms -- Microsoft Copilot, Google Workspace AI, Salesforce Einstein, and the major cloud ML suites -- now bundle AI features into existing contracts. Organizations that have not assessed their readiness are paying for AI capabilities they cannot use because their foundational infrastructure does not support them.
3. The six dimensions of AI readiness
Every credible AI readiness framework -- from Gartner's AI Maturity Model to McKinsey QuantumBlack's approach -- evaluates organizations across similar categories. The ConsultNow framework uses six dimensions, each scored 0-100 and weighted in the overall score. Here is what each dimension measures and why it predicts outcomes.
Dimension 1: Data Foundation -- what does it measure and why does it matter?
Data Foundation measures whether your organization collects relevant data, whether that data is clean enough to use, whether it is accessible from a central location, and whether it is governed with consistent definitions. It is the single highest-correlation dimension with AI project success. Every AI and machine learning system is only as good as the data it is trained or grounded on.
A 50-person SaaS company scoring 30 on Data Foundation typically has Salesforce data that tracks pipeline and closed revenue but has no connection to product usage data from Mixpanel or Amplitude. Churn prediction, the most common first AI use case for SaaS companies, requires both. The AI project either stalls waiting for data integration work, or it gets built on incomplete data and produces predictions that are wrong in ways that erode trust in the model.
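To make that dependency concrete, here is a minimal sketch: it joins a hypothetical CRM export with a hypothetical product-usage export on an assumed account_id key and measures how many accounts could not be scored. File and column names are illustrative stand-ins, not actual Salesforce or Mixpanel schemas.

```python
# Illustrative only: file and column names are hypothetical stand-ins,
# not actual Salesforce or Mixpanel export schemas.
import pandas as pd

crm = pd.read_csv("crm_accounts.csv")      # account_id, plan, arr, renewal_date
usage = pd.read_csv("product_usage.csv")   # account_id, weekly_active_users

# Churn features need both sources; a left join exposes the gap.
features = crm.merge(usage, on="account_id", how="left")

missing = features["weekly_active_users"].isna().mean()
print(f"{missing:.0%} of accounts have no usage data and cannot be scored reliably")
```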
Scores below 40 on this dimension are a hard blocker for any AI use case that relies on historical patterns. They are not a blocker for generative AI use cases like AI-assisted writing, summarization, or coding assistance, which draw on a third-party model's general knowledge rather than your own data.
Dimension 2: Team and Culture -- what does it measure and why does it matter?
Team and Culture measures the combination of AI skills within the organization, leadership's commitment to AI investment, and the workforce's openness to changing workflows when AI is introduced. Culture is consistently the most underrated readiness dimension in self-assessments and the most commonly cited root cause in post-mortem analyses of failed projects.
A 150-person professional services firm that scores 70 on Technology but 25 on Culture has partners who are unwilling to let junior staff use AI tools on client work, an IT policy that blocks most AI applications, and no internal communication about how AI fits the firm's strategy. That firm cannot successfully deploy AI regardless of its technical infrastructure, because adoption requires behavioral change at every level, and behavioral change requires leadership mandate and trust.
Skills gaps are more fixable than cultural resistance in the short term. A team that lacks AI skills but is open to learning can be trained in 8 to 12 weeks for most operational AI use cases. A team that has relevant skills but is culturally resistant to AI can take 12 to 24 months to bring along, and only with consistent leadership messaging and demonstrated wins.
Dimension 3: Technology Infrastructure -- what does it measure and why does it matter?
Technology Infrastructure measures whether your existing technical stack can support AI workloads: cloud adoption, API connectivity between systems, compute capacity, security controls for AI-specific risks (like prompt injection or model access governance), and the existence of MLOps or AI deployment tooling.
Organizations running predominantly on-premise, with monolithic applications that have no API layer, face significant infrastructure investment before most AI deployments are feasible. Those already operating on AWS, Azure, or GCP with modern API-connected services can typically start deploying AI in 4 to 8 weeks with minimal infrastructure change.
Technology Infrastructure is the dimension most organizations overestimate. Having Microsoft 365 does not mean you have AI-ready infrastructure. The relevant questions are: can your systems send data to and receive outputs from AI models in real time? Can your security team govern what data leaves your environment? Can you monitor AI outputs for quality and drift over time?
Dimension 4: Process Maturity -- what does it measure and why does it matter?
Process Maturity measures whether your core business processes are documented, consistent, and measurable. AI is most effective when applied to well-defined, repeatable processes. It struggles with processes that vary significantly between individuals, lack clear input/output definitions, or have no historical record of how they have been performed.
A manufacturing company with a documented, consistently executed quality inspection process can train an AI model on historical pass/fail data and achieve 90% automation of that inspection within 6 months. A company with a nominally similar process that varies by shift supervisor, has inconsistently recorded outcomes, and has no standard criteria for what constitutes a defect cannot take the same path -- at least not without significant process standardization work first.
Process Maturity also includes change management infrastructure: whether your organization has the project management capability to redesign workflows around AI outputs, retrain staff, and measure the impact of changes. Organizations that have implemented ERP systems or CRM platforms successfully tend to score higher here, because those implementations required the same organizational muscles.
Dimension 5: Strategic Alignment -- what does it measure and why does it matter?
Strategic Alignment measures whether AI investment is tied to clear business objectives, whether leadership has agreed on priority use cases, whether budget is allocated, and whether there is a designated owner for AI initiatives. Without strategic alignment, AI projects are either too small to matter (underfunded experiments) or too large to manage (enterprise-wide transformation attempts with no clear success criteria).
The most common symptom of low Strategic Alignment is the "pilot graveyard" -- organizations that have run 5 to 15 AI pilots, none of which have scaled. Each pilot was technically successful enough to complete, but none had a clear path to production because production required resources, prioritization decisions, and organizational changes that nobody had the mandate to authorize.
Strategic Alignment is a leadership problem, not a technology problem. Improving this dimension requires executive decisions about priority, ownership, and funding -- not additional tooling or training.
Dimension 6: AI Adoption History -- what does it measure and why does it matter?
AI Adoption History measures what your organization has already done with AI: whether you have completed any AI pilots, whether those pilots produced measurable outcomes, whether any AI tools are in active production use, and whether your team has accumulated practical experience with the implementation challenges that theory does not cover.
Prior experience is a strong predictor of future success. Organizations that have successfully deployed even a simple AI tool -- an automated email classification system, an AI writing assistant used by 80% of the team, a basic forecasting model integrated into weekly planning -- have demonstrated that they can navigate the organizational, technical, and change management challenges that accompany real deployment. That demonstrated capability transfers to more complex projects.
Organizations with no AI Adoption History are not disqualified from ambitious AI programs, but they should factor in a longer learning curve and expect more resistance at each stage. The first deployment is always the hardest.
"Culture is the most underrated readiness dimension in self-assessments and the most commonly cited root cause in failed AI projects."
4. How to conduct an AI readiness assessment
What is the difference between self-serve and consultant-led assessments?
A self-serve assessment is a structured questionnaire completed by one or more people inside the organization. It takes 3 to 30 minutes depending on depth, produces a score immediately, and costs anywhere from nothing to $200. The ConsultNow 24-question assessment takes under 3 minutes and covers all six dimensions at the level of detail needed to identify your top strategic priority.
A consultant-led assessment involves external experts conducting stakeholder interviews (typically 6 to 12), reviewing technical documentation, auditing data infrastructure directly, and analyzing existing AI initiatives. It produces a 40-to-80-page report with detailed recommendations and typically takes 3 to 6 weeks. Cost ranges from $15,000 for a focused assessment at a small organization to $80,000+ for a large enterprise engagement.
The correct choice depends on what you need the output to do. If you need directional clarity and a prioritized action list to guide internal planning, a self-serve assessment is sufficient. If you need a document that will justify a major capital allocation to a board, a leadership team with significant AI skepticism, or an external regulator, a consultant-led assessment carries more evidential weight.
What do you need before starting?
For a self-serve assessment, you need 3 minutes and a basic understanding of your organization's operations. You do not need technical expertise. The ConsultNow assessment is written for business leaders, not engineers. If a question asks about data infrastructure, it asks whether your data from different systems is accessible in one place -- not which database engine you use.
For best results, involve three people: a senior leader who understands strategic direction and budget, a technical lead who knows your existing infrastructure, and an operations lead who knows your day-to-day processes. Have each complete the assessment independently, then compare scores. Significant disagreement between respondents on the same dimension is itself a useful signal -- it often indicates that a dimension is in transition or that there is inconsistency across teams.
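A minimal sketch of that comparison, with hypothetical respondents and scores; the 20-point disagreement threshold is our assumption, not a calibrated value.

```python
# Hypothetical respondents and scores; the 20-point threshold is an assumption.
responses = {
    "Data Foundation":     {"CEO": 70, "CTO": 40, "Ops lead": 55},
    "Team and Culture":    {"CEO": 80, "CTO": 75, "Ops lead": 45},
    "Strategic Alignment": {"CEO": 85, "CTO": 60, "Ops lead": 65},
}

DISAGREEMENT_THRESHOLD = 20

for dimension, scores in responses.items():
    spread = max(scores.values()) - min(scores.values())
    if spread > DISAGREEMENT_THRESHOLD:
        print(f"{dimension}: {spread}-point spread -- discuss before acting")
```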
For a consultant-led assessment, prepare access to stakeholders, a list of current technology systems and data sources, documentation of key business processes, and any prior AI project documentation including pilots that did not proceed to production.
How do the 24 questions map to the six dimensions?
The ConsultNow assessment allocates 4 questions to each of the six dimensions. Each question targets a specific sub-factor within the dimension: for Data Foundation, the four questions assess data availability, data quality, data accessibility, and data governance. For Team and Culture, they assess current AI skills, leadership commitment, change management history, and workforce openness to automation.
Questions use a 5-point response scale from "Not at all" to "Completely," with each response mapped to a specific score contribution. The dimension score is the average of its four question scores, scaled to 0-100. The overall score is a weighted average of all six dimensions, with Data Foundation and Strategic Alignment carrying slightly higher weight based on their predictive correlation with AI project outcomes.
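A sketch of that arithmetic. The endpoint labels come from the description above; the three intermediate scale labels and the exact dimension weights are illustrative assumptions, since they are not published here.

```python
# Illustrative scoring sketch. Endpoint labels match the text; the three
# intermediate labels and the dimension weights are assumptions for illustration.
RESPONSE_SCALE = {
    "Not at all": 0, "Slightly": 25, "Somewhat": 50, "Mostly": 75, "Completely": 100,
}

# Hypothetical weights summing to 1.0, with Data Foundation and
# Strategic Alignment slightly higher, as described above.
WEIGHTS = {
    "Data Foundation": 0.20, "Team and Culture": 0.16,
    "Technology Infrastructure": 0.16, "Process Maturity": 0.16,
    "Strategic Alignment": 0.20, "AI Adoption History": 0.12,
}

def dimension_score(answers: list[str]) -> float:
    """Average the four question scores for one dimension (already 0-100)."""
    return sum(RESPONSE_SCALE[a] for a in answers) / len(answers)

def overall_score(dim_scores: dict[str, float]) -> float:
    """Weighted average of the six dimension scores."""
    return sum(WEIGHTS[d] * s for d, s in dim_scores.items())

print(dimension_score(["Mostly", "Somewhat", "Somewhat", "Slightly"]))  # 50.0
```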
What is the right time-box for a self-serve assessment?
Three minutes for initial completion. Budget 15 to 30 minutes for discussion immediately after if you are completing it with colleagues -- that discussion is often more valuable than the score itself, because disagreements about how to answer a question reveal real organizational ambiguity.
If you purchase the $29 personalized report, it arrives within 2 minutes of purchase. Budget 20 to 30 minutes to read the full report and mark the priority actions you want to pursue in the next 90 days.
5. How to interpret assessment scores
What do the four score bands mean?
The ConsultNow scoring system uses four bands, aligned with the maturity progression used by major frameworks including Gartner's AI Maturity Model and the MIT Technology Review AI Readiness framework.
- Beginning (0-25): The organization lacks the foundational prerequisites for most AI use cases. Data is fragmented or unavailable, technical infrastructure is limited, and there is no clear AI strategy. AI investment at this stage typically produces waste, not value. Priority is foundation-building, not AI deployment.
- Developing (26-50): The organization has some prerequisites in place but has significant gaps that will constrain AI success. Narrow, well-scoped AI use cases are feasible -- particularly in areas where the organization scores highest dimensionally. Broad AI deployment is premature. Priority is targeted gap-closing in the lowest-scoring dimensions.
- Advanced (51-75): The organization has sufficient foundations to deploy AI in multiple domains with appropriate governance. First deployments should produce measurable ROI. The primary risk at this stage is moving too fast across too many use cases simultaneously, which dilutes implementation quality. Priority is disciplined sequencing and measurement.
- Leading (76-100): The organization is operating AI in production at scale, with governance, measurement, and feedback loops in place. The strategic priority shifts from "how do we implement AI" to "how do we build proprietary AI advantages that competitors cannot replicate." This includes proprietary model fine-tuning, AI-enabled product differentiation, and building AI into core value propositions.
How should you read dimensional score variance?
High variance between dimensional scores is as important as the overall score. An organization with an overall score of 55 could have scored 80 on Technology, 75 on Strategic Alignment, 70 on Team and Culture, 60 on Process Maturity, 30 on Data Foundation, and 15 on AI Adoption History. That organization is in the Advanced band overall, but its Data Foundation score of 30 is a hard blocker for any data-dependent AI use case, regardless of its other scores.
The strategic principle: your lowest-scoring dimension sets the ceiling on your most AI-intensive use cases. You can work around a low score in one dimension by choosing use cases that do not require that dimension -- but you cannot ignore it indefinitely if you want to build AI capability at scale.
An organization with low variance (all dimensions within 15 points of each other) has a different strategic situation: it can advance on all fronts simultaneously, because there are no critical blockers. An organization with high variance should sequence its investments to address the blocking dimensions before scaling AI deployment broadly.
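A minimal sketch of how to read such a profile programmatically, using the example scores above. The 40-point blocker threshold is the one stated earlier for Data Foundation; applying it across all dimensions here is an illustrative simplification.

```python
# Illustrative reading of the example profile above.
scores = {
    "Technology Infrastructure": 80, "Strategic Alignment": 75,
    "Team and Culture": 70, "Process Maturity": 60,
    "Data Foundation": 30, "AI Adoption History": 15,
}

BLOCKER_THRESHOLD = 40   # stated above for Data Foundation; applied broadly here
LOW_VARIANCE_BAND = 15   # "all dimensions within 15 points of each other"

blockers = [d for d, s in scores.items() if s < BLOCKER_THRESHOLD]
spread = max(scores.values()) - min(scores.values())

if spread <= LOW_VARIANCE_BAND:
    print("Low variance: advance on all fronts simultaneously.")
else:
    print(f"High variance (spread {spread}); blocking dimensions: {blockers}")
```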
How accurate are self-reported scores?
Self-reported scores have two systematic biases. First, organizations overrate their data quality. Respondents typically answer based on whether data exists and whether it is used, rather than whether it meets the quality standards needed for AI applications. A Salesforce CRM that has been in use for 5 years feels like "good data" -- but if 30% of records lack key fields, if product categories are inconsistently applied, and if the system has no integration with support tickets or product usage, it does not meet the threshold for reliable AI outputs.
Second, organizations underrate their change management challenges. Most respondents answer culture and team questions based on their own openness to AI, not the organization's average openness. Senior leaders and technically-inclined respondents are disproportionately open to AI; the operational teams who will actually change their workflows to accommodate AI outputs are often much more resistant.
To compensate: treat your Data Foundation score with 10-point skepticism (assume the actual score is 10 points lower than you reported), and apply the same 10-point downward adjustment to your Team and Culture score if the assessment was completed only by senior leadership.
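One way to compensate further is to replace a gut-feel data-quality answer with a measurement. A minimal field-completeness check follows, with hypothetical file and column names.

```python
# Illustrative field-completeness check; file and column names are hypothetical.
import pandas as pd

records = pd.read_csv("crm_export.csv")
key_fields = ["industry", "deal_size", "close_date", "product_category"]

# Share of records with each key field actually filled in.
completeness = records[key_fields].notna().mean()
print(completeness.round(2))

# If completeness hovers around 0.70, then 30% of records lack key fields --
# and the honest Data Foundation answer is lower than "5 years of CRM data" suggests.
```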
"Your lowest-scoring dimension sets the ceiling on your most AI-intensive use cases. You cannot scale around a foundational blocker."
6. What to do next -- by score band
Beginning (0-25): what are the priority actions?
At this stage, AI deployment is premature. The investment should go into prerequisites, not tools. Five specific actions:
- Audit and centralize your data. Identify what data your organization generates, where it lives, and what would be needed to connect it. Implement a basic data warehouse or data lake (BigQuery, Snowflake, or Redshift are common choices) even if it is empty at first. The infrastructure needs to exist before data pipelines can be built. A minimal inventory sketch appears after this list.
- Run one productivity AI pilot, not an AI strategy. Give your team access to one generative AI tool (a code assistant for developers, a general-purpose AI assistant for knowledge workers, or an image-generation tool for creative teams) and measure usage after 30 days. This builds familiarity and identifies who in the organization will be early champions.
- Appoint an AI owner. This does not need to be a full-time role. It needs to be one named person who is responsible for coordinating AI initiatives, tracking what tools are being used, and reporting to leadership quarterly. Without ownership, nothing scales.
- Document one core process end-to-end. Choose the process most likely to benefit from AI automation. Map every step, every input, every output, every exception. This documentation is required before any AI tool can be built or configured for that process.
- Establish a baseline AI policy. Define what data can and cannot be sent to AI tools, what outputs require human review before use, and what the escalation path is if an AI tool produces a problematic output. Without policy, your team will either avoid AI (fearful) or misuse it (unconstrained).
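Referring to the data audit item above: a minimal sketch of what the resulting inventory can look like. Systems, owners, and statuses are hypothetical examples; a plain spreadsheet with the same columns works equally well.

```python
# Hypothetical systems, owners, and statuses -- a spreadsheet works equally well.
from dataclasses import dataclass

@dataclass
class DataSource:
    system: str
    owner: str
    contains: str
    centralized: bool  # already flowing into the warehouse?

inventory = [
    DataSource("Salesforce", "VP Sales", "pipeline, closed revenue", False),
    DataSource("Zendesk", "Head of Support", "tickets, CSAT scores", False),
    DataSource("QuickBooks", "Finance lead", "invoices, payments", False),
]

gaps = [s.system for s in inventory if not s.centralized]
print(f"Sources not yet centralized: {gaps}")
```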
Developing (26-50): what are the priority actions?
At this stage, narrow AI deployment is feasible. Priority is identifying and executing your highest-certainty first use case while simultaneously closing your lowest-scoring dimension gap.
- Identify your highest-certainty first use case. This is the use case where your data is already clean and accessible, the process is already documented, and the team already uses the adjacent tools. It is usually something narrower than leadership wants -- start with "automated meeting summaries" not "AI-driven sales forecasting." Succeed visibly, then expand.
- Invest in your lowest-scoring dimension. If Data Foundation is your lowest score, allocate budget to data engineering. If Team and Culture is lowest, allocate budget to training and internal communication. Do not spread improvement efforts across all dimensions simultaneously -- focus produces faster results and more visible progress.
- Establish an AI measurement framework. Before deploying any AI tool, define how you will measure its impact. Common metrics: time saved per user per week, error rate reduction, throughput increase, cost per unit of output. Without measurement, you cannot justify continued investment and you cannot learn from failure.
- Connect your key data sources. If your CRM is not connected to your finance system, your support system, or your product usage data, assign this as a technical priority. Even a basic integration (a daily CSV export to a shared database) is better than no integration -- see the sketch after this list. AI cannot work across siloed data.
- Build internal AI literacy. Run a 2-hour AI fundamentals workshop for all managers. The workshop should cover what AI can and cannot do reliably, what data AI requires, and how to evaluate AI outputs critically. This is not about making managers into AI engineers -- it is about giving them enough context to make good decisions about where AI applies in their teams.
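Referring to the data-connection item above: a minimal sketch of that basic integration, loading daily CSV exports into one shared SQLite database. File and table names are hypothetical; a production version would point at your warehouse (BigQuery, Snowflake, Redshift) instead.

```python
# Illustrative: file and table names are hypothetical; point a production
# version at your warehouse (BigQuery, Snowflake, Redshift) instead of SQLite.
import sqlite3
import pandas as pd

conn = sqlite3.connect("shared_data.db")

for export_file, table in [("crm_daily.csv", "crm"), ("support_daily.csv", "support")]:
    df = pd.read_csv(export_file)
    df.to_sql(table, conn, if_exists="replace", index=False)  # full refresh daily

conn.close()
```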
Advanced (51-75): what are the priority actions?
At this stage, AI deployment can produce material ROI. Priority is disciplined sequencing and building the organizational muscle for sustained AI execution.
- Build an AI use case portfolio with explicit sequencing. Map 10 to 20 potential use cases on a 2x2 of impact vs. feasibility. Commit to executing the top 3 to 5 high-impact, high-feasibility cases in the next 12 months. Explicitly deprioritize everything else -- resource dilution is the most common reason Advanced organizations stall.
- Establish AI governance infrastructure. Create a lightweight AI governance committee (meets monthly, reviews proposals and outcomes for active AI deployments), an approved tool list, and a standard deployment checklist that covers data privacy, output quality, and user training requirements. This infrastructure is needed before scaling to multiple deployments.
- Move from pilots to production. If you have pilots that have been running for more than 6 months without moving to production, diagnose why and either fix the blocker or cancel the pilot. Pilot graveyard growth is a signal of Strategic Alignment failure. Production deployment is the only point at which AI investment converts to business value.
- Invest in MLOps if using custom models. If any of your AI deployments use custom or fine-tuned models, invest in monitoring for model drift, automated retraining pipelines, and output quality metrics. Models degrade over time as the world changes. Without monitoring, you will not know when your model has stopped being reliable. A minimal drift-check sketch follows this list.
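One common drift check is the Population Stability Index (PSI), which compares the model's output distribution at deployment against today's. A minimal sketch follows; the thresholds reflect common industry practice, not a ConsultNow standard, so tune them for your context.

```python
# Population Stability Index (PSI) sketch. Thresholds follow common practice
# (under 0.1 stable, 0.1-0.25 moderate shift, over 0.25 significant).
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid log(0) on empty bins
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

baseline = np.random.default_rng(0).normal(0.60, 0.10, 5000)  # scores at deployment
current = np.random.default_rng(1).normal(0.50, 0.15, 5000)   # scores this month

if psi(baseline, current) > 0.25:
    print("Significant drift -- review the model before trusting its outputs")
```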
Leading (76-100): what are the priority actions?
At this stage, the strategic question shifts from AI adoption to AI advantage. Priority is building proprietary capability that competitors cannot easily replicate.
- Evaluate proprietary model development. Organizations with large proprietary datasets and clear high-value use cases should evaluate whether fine-tuning a foundation model on their data would produce competitive advantages. Fine-tuning is now accessible to organizations without large ML teams through hosted fine-tuning APIs from the major LLM providers and cloud platforms such as Google Vertex AI. A data-preparation sketch follows this list.
- Operationalize AI agents in high-value workflows. Agentic AI -- systems that take sequences of actions autonomously -- represents the highest-value AI application for most Leading organizations. Identify 2 to 3 high-value, well-defined workflows where a human currently spends significant time orchestrating routine decisions, and evaluate whether an agent can own that orchestration with human review at exception points only.
- Build AI into product or service differentiation. At this readiness level, AI should be a component of what you sell, not just how you operate. This requires product roadmap integration and go-to-market strategy for AI-enabled capabilities.
- Develop AI talent as a core organizational capability. Hire or develop internal expertise in prompt engineering, AI system design, and AI evaluation. At scale, dependence on external vendors for every AI decision becomes a bottleneck and a competitive liability.
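Referring to the fine-tuning item above: a minimal sketch of the data-preparation step, writing proprietary examples into the chat-format JSONL that OpenAI's hosted fine-tuning accepts. Other providers use similar but not identical layouts, so check your provider's documentation before committing to this format.

```python
# Chat-format JSONL as used by OpenAI's hosted fine-tuning; other providers
# use similar but not identical layouts. Content here is hypothetical.
import json

examples = [
    {"messages": [
        {"role": "system", "content": "You are our proposal-drafting assistant."},
        {"role": "user", "content": "Draft an opening paragraph for a logistics client."},
        {"role": "assistant", "content": "Thank you for the opportunity to support..."},
    ]},
    # ...hundreds more examples drawn from your own approved documents
]

with open("training_data.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```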
7. Common mistakes in AI readiness assessments
Mistake 1: Letting only technical staff complete the assessment
Technical staff consistently overrate Technology Infrastructure scores and underrate Team and Culture and Strategic Alignment scores. An assessment completed solely by IT or engineering produces a skewed profile that suggests the organization is more ready than it is in the dimensions that actually determine adoption success. The assessment should always include at least one senior business leader and one operations lead alongside any technical respondents.
Mistake 2: Treating the score as the deliverable rather than the starting point
Organizations frequently complete an assessment, share the score with leadership, and consider the work done. A score without a subsequent action plan produces nothing. The assessment should trigger a 90-minute working session within two weeks of completion: which dimension is the highest priority to improve, what the first three specific actions are, who owns each action, and what the 90-day check-in criteria are. Without that session, assessments become data points in a deck rather than drivers of change.
Mistake 3: Conflating access to AI tools with AI readiness
Having a Microsoft 365 Copilot license, a ChatGPT Teams subscription, and a Notion AI feature enabled is not AI readiness. It is AI access. Readiness is the organizational capacity to use those tools effectively, consistently, and in ways that produce measurable business outcomes. Organizations that measure readiness by counting tool licenses systematically overestimate where they stand and underinvest in the foundational work that would make those tools valuable.
Mistake 4: Treating readiness as a one-time measurement
AI readiness changes with every significant organizational change: a new data infrastructure project, a key hire, a successful (or failed) deployment, a regulatory change. Organizations that assess once and then treat the score as permanent make strategic decisions based on outdated information. The assessment should be treated as a periodic operating instrument -- quarterly for actively investing organizations, semi-annually for planning-stage organizations -- not a one-time diagnostic.
Mistake 5: Applying enterprise AI frameworks to SMB contexts without adjustment
The Gartner AI Maturity Model, McKinsey QuantumBlack's framework, and the Deloitte AI Institute's maturity model are calibrated for enterprise organizations with dedicated AI teams, multi-year transformation budgets, and complex stakeholder governance. Applied directly to a 50-person company, they produce assessments that score the company low on dimensions that are irrelevant at that scale (for example, "enterprise AI Center of Excellence") while missing dimensions that are critical at smaller scale (for example, "does the CEO personally use AI tools, and does the team know it"). The ConsultNow assessment is calibrated for small and mid-size organizations, where the relevant questions differ substantially from enterprise frameworks.
Mistake 6: Prioritizing overall score improvement over gap-closing in blocking dimensions
An organization that scores 65 overall but 22 on Data Foundation will improve its overall score faster by raising any other dimension than by fixing data, because incremental gains in already-developed dimensions are cheaper and quicker than foundational data work. But raising all other dimensions while leaving data at 22 produces an organization that looks better on paper but remains blocked on every data-dependent AI use case. The strategically correct priority is the blocking dimension, even when it is the hardest to fix and the slowest to move.
8. How this assessment compares to other frameworks
Several established frameworks evaluate AI readiness or maturity. Understanding where ConsultNow fits -- and where the other frameworks are more appropriate -- helps you choose the right tool for your situation.
How does this compare to Gartner's AI Maturity Model?
Gartner's AI Maturity Model evaluates organizations across five stages of AI maturity (Awareness, Active, Operational, Systemic, and Transformational) and is typically applied by Gartner analysts during consulting engagements. It is comprehensive and well-researched, but it is designed to be administered by an experienced analyst with deep access to the organization's data, technology, and strategy. It is not self-serve.
The ConsultNow assessment covers the same conceptual territory -- with the six dimensions mapping closely to Gartner's five evaluation areas -- but is designed to be completed in under 3 minutes by a business leader without analyst facilitation. It trades depth for speed and accessibility. For organizations that need Gartner-level depth, the ConsultNow assessment is a useful pre-engagement diagnostic that helps focus a Gartner engagement on the dimensions where the questions are most open.
How does this compare to McKinsey QuantumBlack's AI assessment approach?
McKinsey QuantumBlack's AI assessment is embedded in larger transformation engagements and includes both a diagnostic survey and a significant qualitative layer: structured interviews with senior leadership, workshops with functional teams, and analysis of existing AI initiatives and their outcomes. The output is a tailored strategic roadmap with McKinsey's backing and full implementation support available. Entry-level engagements start at $300,000.
The ConsultNow assessment is complementary for organizations that cannot access a McKinsey engagement (the majority of companies), and as a pre-engagement tool for organizations preparing for one. It uses the same six-dimension framework logic but delivers directional clarity at a cost accessible to any organization.
How does this compare to the MIT Technology Review AI Readiness framework?
The MIT Technology Review's "AI Readiness in Business" framework (published in partnership with MathWorks) focuses heavily on the technology and talent dimensions and is oriented toward engineering-led organizations considering AI in R&D, manufacturing, and product development contexts. It is available as a self-serve survey but is most useful for organizations where technical staff lead the AI agenda.
The ConsultNow assessment covers Technology and Team dimensions similarly, but gives equal weight to Data Foundation, Process Maturity, Strategic Alignment, and AI Adoption History -- dimensions that are more predictive for organizations where AI is being deployed in business operations, customer-facing functions, or decision support, rather than in technical product development.
How does this compare to Deloitte's AI Institute assessment approach?
Deloitte's AI Institute publishes annual research on AI adoption patterns and offers an assessment tool as part of its advisory services. Its framework is particularly strong on governance and ethics dimensions -- areas that are increasingly important under the EU AI Act and similar regulatory frameworks. Deloitte's assessment is primarily used within existing client relationships and requires Deloitte engagement to interpret results and build action plans.
The ConsultNow assessment incorporates governance and compliance considerations within the Technology Infrastructure and Strategic Alignment dimensions, sufficient for SMB organizations that need directional clarity on regulatory exposure. Organizations in high-risk AI sectors (financial services, healthcare, HR technology) where regulatory compliance is a primary concern will benefit from Deloitte's deeper governance-focused methodology alongside or after completing the ConsultNow assessment.
"The ConsultNow assessment covers the same conceptual territory as Gartner, McKinsey, and MIT frameworks -- in under 3 minutes, at no cost."
9. Frequently asked questions
How long does an AI readiness assessment take?
The ConsultNow 24-question assessment takes under 3 minutes to complete. A full consultant-led assessment covering the same dimensions through stakeholder interviews, data audits, and system reviews typically takes 3 to 6 weeks and costs $15,000 to $80,000. For most small and mid-size organizations, 3 minutes and the optional $29 report delivers enough directional clarity to make sound strategic decisions.
Who should take the assessment?
The most useful single respondent is someone who can see across strategy, technology, and operations -- typically a CEO, COO, CTO, or head of digital transformation at a small-to-mid-size organization. For best results, have three people complete it independently (a senior leader, a technical lead, and an operations lead) and compare scores. Significant score disagreement between respondents on the same dimension indicates real organizational ambiguity in that area.
How accurate is a self-serve AI readiness assessment?
Self-serve assessments identify the correct strategic direction and top-priority gaps in over 85% of cases, based on ConsultNow's comparison of self-assessed scores against consultant-led evaluations. The main limitation is self-reporting bias: organizations tend to overrate their data quality and underrate their change management challenges. To compensate: treat your Data Foundation score with 10-point downward skepticism if the respondent is not your primary data owner.
Is my assessment data private?
At ConsultNow, your answers are used solely to generate your score and, if purchased, your personalized report. Responses are not sold, shared with third parties, or used to train models. The full privacy policy is at consultnow.io/privacy. See also our FAQ page for detailed data handling questions.
Can I retake the assessment?
Yes. There is no limit on retakes. Organizations typically find value in retaking the assessment every 6 months to track improvement across dimensions, or after a significant infrastructure or hiring change that would materially affect one or more dimensions. Each retake generates a fresh score; tracking scores over time is one of the most useful ways to measure progress on AI readiness investment.
Does an AI readiness assessment replace a human consultant?
No. A structured assessment replaces the diagnostic phase of a consulting engagement -- the part that tells you where you stand. It does not replace the work of a human expert who can interview stakeholders, review actual systems, interpret political dynamics, and design organization-specific implementation plans. The assessment is the starting point, not the full engagement. For organizations that need implementation support after completing the assessment, our FAQ covers how the $29 report can serve as input to a specialist engagement.
What score is considered "AI ready"?
There is no binary pass/fail threshold. Organizations in the Advanced band (51-75) are ready to deploy AI in specific, well-scoped use cases with appropriate governance in place. Organizations in the Leading band (76-100) can deploy AI broadly and are beginning to build competitive advantages from it. Being in the Beginning or Developing band does not mean you cannot use AI -- it means foundational investments need to come before scaled deployment.
What is the difference between AI readiness and AI maturity?
AI readiness measures whether your organization has the prerequisites to successfully adopt AI: clean data, technical infrastructure, skilled teams, and aligned strategy. AI maturity measures how far along you are in actually using AI, from ad hoc experimentation to enterprise-wide deployment. You assess readiness before investing; you assess maturity after you have already started. Both measurements use similar frameworks and dimensions, but the strategic implications differ: readiness gaps are addressed before deployment; maturity gaps are addressed during and after.
Do I need a technical background to take the assessment?
No. The ConsultNow assessment is written for business decision-makers, not engineers. Questions about technology ask about observable behaviors and outcomes, not technical specifications. If a question asks about your data infrastructure, it asks whether data from different systems is connected and accessible -- not which database engine you use. If a question asks about AI tools, it asks whether your team uses them consistently -- not how they are integrated at the API level.
What does the $29 personalized report include?
The $29 report includes: an executive summary of your AI position in plain English, a section-by-section analysis of all six dimensions based on your specific answers, the top 3 prioritized actions for your highest-impact gaps, AI tool recommendations matched to your readiness level (not a generic list), a 12-week sequenced roadmap ordered by impact and feasibility, and a satisfaction guarantee with a full refund if the report does not earn the price. It arrives by email within 2 minutes of purchase. See full pricing details.
How often should an organization run an AI readiness assessment?
Every 6 months is the most useful cadence for organizations actively investing in AI readiness. Annual assessment is sufficient for organizations in early planning stages. A re-assessment is also warranted after any major event: a significant data infrastructure project, a new AI-focused hire, a regulatory change affecting your sector, or after completing a 12-week improvement roadmap. Tracking dimensional scores over time produces a clear picture of where investment is producing improvement and where gaps persist.
Is the assessment different for different industries?
The six dimensions apply across all industries. What differs is the threshold for "sufficient" in each dimension -- a financial services firm faces tighter data governance requirements than a marketing agency, and a healthcare organization has different regulatory constraints than a SaaS company. The ConsultNow assessment applies universal questions and adjusts interpretation for industry context in the $29 personalized report, which accounts for sector-specific regulatory and operational factors in its analysis. See pricing for report details.