AI Apocalypse: 80% of Projects Crash and Burn, Billions Wasted, Says RAND Report
A new RAND Corporation report reveals the sobering reality behind artificial intelligence (AI) projects: despite the hype, most of them fail. The study, based on interviews with 65 experienced data scientists and engineers, exposes the root causes of these failures and offers a roadmap for success.
“By some estimates, more than 80 percent of AI projects fail,” the report states. “This is twice the already-high rate of failure in corporate information technology (IT) projects that do not involve AI.” With private-sector investment in AI increasing 18-fold from 2013 to 2022, the stakes are higher than ever.
Table of contents
- Leadership Failures: The Blind Leading the Blind
- The Data Dilemma: Garbage In, Garbage Out
- Chasing Shiny Objects: When Engineers Lose Focus
- Infrastructure: The Unsexy Foundation of Success
- Recommendations: A Reality Check for AI Aspirations
- The Academic Perspective: Publish or Perish
- A Wake-Up Call for the AI Industry
Leadership Failures: The Blind Leading the Blind
The most common cause of AI project failure? It’s not the technology – it’s the people at the top. Business leaders often misunderstand or miscommunicate what problems need to be solved using AI. As one interviewee put it, “They think they have great data because they get weekly sales reports, but they don’t realize the data they have currently may not meet its new purpose.”
Many executives have inflated expectations of what AI can achieve, fueled by salespeople’s pitches and impressive demonstrations. They underestimate the time and resources required for successful AI implementation. One interviewee noted, “Often, models are delivered as 50 percent of what they could have been” due to shifting priorities and unrealistic timelines.
The report highlights a critical disconnect between business leaders and technical teams. Without clear communication and understanding of project goals, AI initiatives are doomed from the start. Leaders need to invest time in learning about AI capabilities and limitations, while technical teams must improve their ability to explain complex concepts in business terms.
Furthermore, the study found that many organizations lack the patience required for successful AI implementation. Projects are often abandoned prematurely or shifted to new priorities before they have a chance to demonstrate real value. This short-term thinking undermines the potential of AI and wastes significant resources.
The Data Dilemma: Garbage In, Garbage Out
Data quality emerged as the second most significant hurdle. “80 percent of AI is the dirty work of data engineering,” an interviewee stated. “You need good people doing the dirty work—otherwise their mistakes poison the algorithms.”
Many organizations lack sufficient high-quality data to train effective AI models. Legacy datasets, often collected for compliance or logging purposes, may not be suitable for AI training. Even when large quantities of data exist, they may be unbalanced or lack the necessary context for AI applications.
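To make the point about legacy, unbalanced data concrete, here is a minimal, hypothetical sketch of the kind of data-readiness check a data engineer might run before modeling begins. The column names, target label, and thresholds are illustrative assumptions, not anything prescribed by the RAND report.

```python
# A minimal data-readiness check, assuming a pandas DataFrame `df` with a
# hypothetical binary target column named "churned". Thresholds are
# illustrative assumptions, not values from the RAND report.
import pandas as pd

def data_readiness_report(df: pd.DataFrame, target: str) -> dict:
    """Summarize basic quality issues that often sink AI projects."""
    missing_share = df.isna().mean()              # fraction of nulls per column
    duplicate_rows = int(df.duplicated().sum())   # exact duplicate records
    class_shares = df[target].value_counts(normalize=True)
    majority_share = float(class_shares.max())    # share of the majority class
    return {
        "columns_over_20pct_missing": missing_share[missing_share > 0.20].index.tolist(),
        "duplicate_rows": duplicate_rows,
        "majority_class_share": majority_share,
        "severely_imbalanced": majority_share > 0.95,
    }

if __name__ == "__main__":
    df = pd.DataFrame({
        "weekly_sales": [120, 95, None, 130, 120],
        "region": ["N", "N", "S", "S", "N"],
        "churned": [0, 0, 0, 0, 1],
    })
    print(data_readiness_report(df, target="churned"))
```

A check like this does not fix bad data, but it surfaces the gaps early, before a legacy dataset collected for reporting is quietly repurposed for training.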
The report highlights a critical shortage of data engineers, describing them as “the plumbers of data science.” High turnover in data engineering roles leads to knowledge loss and increased project costs. Organizations often undervalue the crucial role of data engineers, leading to a brain drain in this essential field.
Another challenge is the lack of domain expertise within AI teams. Data scientists frequently lack deep understanding of the business contexts they’re working in, leading to misinterpretations of data and flawed model designs. The report emphasizes the need for closer collaboration between domain experts and AI practitioners to ensure that models are built on a foundation of accurate, relevant data.
Chasing Shiny Objects: When Engineers Lose Focus
The study found that engineers themselves sometimes contribute to project failures. Many data scientists and engineers are drawn to using the latest technological advancements, even when simpler solutions would suffice. “AI projects often fail when they focus on the technology being employed instead of focusing on solving real problems for their intended end users,” the report notes.
This tendency to chase “shiny objects” can lead to unnecessarily complex solutions that are difficult to maintain and explain to stakeholders. It also often results in wasted resources as teams invest time and effort into learning and implementing cutting-edge technologies that may not be the best fit for the problem at hand.
The report suggests that organizations need to strike a balance between innovation and practicality. While it’s important to stay current with technological advancements, the primary focus should always be on solving real business problems effectively. This may require a shift in mindset for many technical professionals, as well as changes in how organizations incentivize and evaluate their AI teams.
Infrastructure: The Unsexy Foundation of Success
Underinvestment in infrastructure emerged as another key factor in AI project failures. Organizations often lack adequate systems for data management and model deployment. The report emphasizes that “investing in data engineers and ML engineers can substantially shorten the time required to develop a new AI model and deploy it to a production environment.”
Many companies are eager to jump into AI projects without first laying the necessary groundwork. This can lead to a host of problems, including difficulty in scaling successful prototypes, inconsistent data quality, and challenges in maintaining and updating deployed models.
The report argues that organizations need to take a more holistic view of AI implementation. This means investing in robust data pipelines, automated testing and deployment systems, and tools for monitoring model performance in production. While these investments may not be as exciting as cutting-edge AI algorithms, they are crucial for long-term success.
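As one illustration of “tools for monitoring model performance in production,” the sketch below records per-feature statistics at training time and flags features whose live distribution has drifted. The z-score threshold and feature layout are assumptions made for this example, not part of the RAND report.

```python
# A minimal production-monitoring sketch: compare live feature statistics
# against a training-time baseline and flag drift. The z-score threshold
# is an illustrative assumption.
import numpy as np

def fit_baseline(train_features: np.ndarray) -> dict:
    """Record per-feature mean and std from the training data."""
    return {
        "mean": train_features.mean(axis=0),
        "std": train_features.std(axis=0) + 1e-9,  # avoid division by zero
    }

def drift_alerts(live_features: np.ndarray, baseline: dict,
                 z_threshold: float = 3.0) -> np.ndarray:
    """Return a boolean mask of features whose live mean drifted too far."""
    z = np.abs(live_features.mean(axis=0) - baseline["mean"]) / baseline["std"]
    return z > z_threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train = rng.normal(loc=0.0, scale=1.0, size=(1000, 3))
    live = train.copy()
    live[:, 2] += 5.0                      # simulate drift in one feature
    baseline = fit_baseline(train)
    print(drift_alerts(live, baseline))    # e.g. [False False  True]
```

Even a simple alert like this gives teams an early warning that a model is seeing data it was never trained on, rather than discovering the problem through degraded business results.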
Furthermore, the study found that many organizations struggle with the transition from successful AI prototypes to production-ready systems. This “last mile” problem often derails promising projects, as teams discover that their models don’t perform well in real-world conditions or can’t handle the scale of production data.
Recommendations: A Reality Check for AI Aspirations
The RAND report offers several recommendations for organizations looking to improve their AI project success rates:
- Ensure technical staff understand the project purpose and business context. “Misunderstandings and miscommunications about the intent and purpose of the project are the most common reasons for AI project failure,” the report states. This requires ongoing dialogue between business and technical teams, as well as efforts to build shared understanding and vocabulary.
- Choose enduring problems. “Before they begin any AI project, leaders should be prepared to commit each product team to solving a specific problem for at least a year.” This recommendation pushes back against the tendency to chase quick wins or constantly shift priorities. By focusing on long-term, high-impact problems, organizations can give their AI initiatives the time and resources they need to succeed.
- Focus on the problem, not the technology. “Chasing the latest and greatest advances in AI for their own sake is one of the most frequent pathways to failure.” The report emphasizes the importance of selecting the right tool for the job, even if it’s not the most cutting-edge solution. This may require changes in how organizations evaluate and reward their technical teams.
- Invest in infrastructure. “Up-front investments in infrastructure to support data governance and model deployment can substantially reduce the time required to complete AI projects.” While these investments may not be as glamorous as AI research, they are crucial for long-term success. This includes building robust data pipelines, implementing version control for models and data (see the sketch after this list), and developing systems for monitoring and maintaining deployed AI solutions.
- Understand AI’s limitations. “AI is not a magic wand that can make any challenging problem disappear; in some cases, even the most advanced AI models cannot automate away a difficult task.” The report calls for a more realistic assessment of what AI can and cannot do, urging organizations to temper their expectations and focus on areas where AI can truly add value.
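To illustrate the “version control for models and data” recommendation above, here is a minimal, hypothetical sketch: every dataset and model artifact gets a content hash and a registry entry, so a deployed prediction can be traced back to its exact inputs. The file names and the JSON-lines registry format are assumptions for illustration; the report does not prescribe a particular tool.

```python
# A minimal, hypothetical sketch of versioning data and model artifacts:
# record a content hash plus metadata for each artifact in a simple
# JSON-lines registry. File names and registry layout are illustrative.
import hashlib
import json
import time
from pathlib import Path

def file_fingerprint(path: Path) -> str:
    """Content hash of an artifact (dataset file or serialized model)."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def register_artifact(registry_path: Path, artifact: Path, kind: str) -> dict:
    """Append a version record to the registry and return it."""
    record = {
        "kind": kind,                       # e.g. "dataset" or "model"
        "path": str(artifact),
        "sha256": file_fingerprint(artifact),
        "registered_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }
    with registry_path.open("a") as fh:
        fh.write(json.dumps(record) + "\n")
    return record

if __name__ == "__main__":
    data_file = Path("sales_2022.csv")      # hypothetical training data
    data_file.write_text("region,weekly_sales\nN,120\nS,95\n")
    print(register_artifact(Path("registry.jsonl"), data_file, kind="dataset"))
```

In practice teams would reach for a dedicated tool rather than a hand-rolled registry, but the underlying discipline is the same: no model goes to production without a recorded, reproducible link to the data it was trained on.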
The Academic Perspective: Publish or Perish
The study also examined AI research in academia, finding that publication pressure and the pursuit of prestige often overshadow practical applications. “If an AI project did not result in publication, then the project was not perceived as a success,” the report notes, highlighting a misalignment between academic incentives and real-world impact.
This focus on publications can lead researchers to prioritize novel but impractical approaches over incremental improvements that could have significant real-world impact. The report suggests that academic institutions should consider broadening their criteria for success in AI research, potentially including metrics related to practical applications or industry collaborations.
Furthermore, the study found that many academic researchers struggle with access to high-quality, real-world datasets. This can lead to a disconnect between academic research and practical applications. The report recommends increased collaboration between academia, industry, and government agencies to provide researchers with access to more relevant data while maintaining necessary privacy and security measures.
A Wake-Up Call for the AI Industry
This RAND report serves as a much-needed reality check for the AI industry. While the potential of AI remains immense, the path to successful implementation is fraught with challenges. Organizations must bridge the gap between hype and reality, focusing on solid fundamentals like data quality, infrastructure, and clear communication between technical and business teams.
As one interviewee wisely noted, “Stakeholders want to be a part of the process. They don’t like it when you say, ‘it’s taking longer than expected; I’ll get back to you in two weeks.’ They are curious.” This highlights the need for ongoing, transparent communication throughout AI projects, keeping all stakeholders informed and engaged.
The report also emphasizes the importance of patience and persistence in AI development. Quick wins are rare, and organizations need to be prepared for a long-term commitment to see real benefits from their AI initiatives. This may require a shift in organizational culture and expectations, moving away from short-term thinking towards a more strategic, long-term view of AI implementation.
By heeding these lessons and adopting a more realistic, patient approach to AI development, organizations can increase their chances of success in this transformative field.
The future of AI is bright – but only for those who can navigate the very human challenges that stand in its way. As the industry matures, those who can balance innovation with practicality, and technical excellence with business acumen, will be best positioned to harness the true potential of AI.