Imagining failure to attain success: The art and science of pre-mortems

How often have we thought, “If I knew then what I know now, I’d have done things differently”? Such a statement reflects the wisdom of hindsight, highlighting how decisions once seen as beneficial can lead to unanticipated negative consequences. It underscores missed opportunities to assess and mitigate risks, opportunities that, had they been taken, would have led to much better outcomes.

This reflection is at the center of our approach to the Brookings Global Task Force on AI and Education. As the world grapples with the negative impacts of social media on young people, it would be irresponsible not to ask what could go wrong with artificial intelligence (AI) and how it could negatively impact the learning and development of children and adolescents.

To be clear, we strongly believe in the potential of technology to augment children’s educational experience and to redefine its possibilities. Across the globe, we have observed the power of interactive audio instruction and mobile learning in providing learning opportunities to students—and their teachers—in communities affected by conflict, challenging geography, and a lack of qualified educators. We have seen teachers skillfully harness tablets, assistive technologies, and multimedia applications to help children overcome learning challenges, build understanding, and expand their creativity.

But we have also witnessed the failure of significant public investments in technologies, often implemented with little consideration for their pedagogical use, potential negative effects, or long-term sustainability. And we know that, historically, we in education have approached new technology with a kind of innocence, often embracing new tools without fully considering their impact on the educational ecosystem and acting only when a technology provokes a crisis (as with social media or phones in the classroom).

Given these issues, perhaps the most important step we can take today to harness the incredible power of AI for advancing young people’s learning and development is to confront the difficult questions about what could go wrong. By identifying potential risks now, we can better anticipate and begin to address the challenges that today’s technologies pose to the education systems of tomorrow. While the exercise may seem like a hypothetical abstraction, proactively examining risks and potential failures now is the only way to begin to prevent or mitigate negative outcomes in the future.

A pre-mortem on generative artificial intelligence in education

Enter the concept of a pre-mortem. Unlike a post-mortem, which examines the causes of failure after it has occurred, a pre-mortem moves the autopsy forward. It is a proactive, strategic planning tool designed to anticipate potential failures before they occur. Commonly employed in fields such as software and product development, a pre-mortem assumes that failure has already occurred and works backward to uncover the reasons behind it. This “prospective hindsight” helps teams identify vulnerabilities and mitigate risks before launching a project or implementing an innovation. The pre-mortem is the foundation of our work on the Brookings Global Task Force on AI and Education. It shapes the questions that task force members are collectively exploring and guides the research we are pursuing regarding the potential impact of generative AI on the education of children and adolescents.

Pre-mortem explained

A pre-mortem is a forward-looking thought experiment conducted with a team. It starts with a simple future-oriented premise: “It is 2035. Our innovation, product, or project has failed. Why?” Team members brainstorm potential causes of failure, discuss how and why these issues might arise, and group the causes into categories. From there, they work backward to the present, prioritizing the most critical risks and identifying actions to begin to address these challenges now.

For our task force research and discussions, the pre-mortem is especially valuable in addressing the evolution of generative AI in education: it helps us systematically identify how AI might fail to meet its promises and the strategies that could prevent those failures and enable success.

Benefits of a pre-mortem

Starting from failure is the superpower of a pre-mortem.

First, focusing on failure reduces overconfidence. Human beings often overestimate their abilities and the likelihood of success, particularly when they share common goals and experiences. Such overconfidence can blind us to potential risks and lead us to dismiss contradictory information, perpetuating a cycle of self-reinforcing optimism. Pre-mortems break this cycle by requiring participants to confront the possibility of failure head-on, fostering a more realistic and critical approach to planning.

Second, pre-mortems help mitigate social desirability bias. This occurs when individuals respond in ways they believe are socially acceptable or aligned with group norms, rather than being entirely truthful. In the context of innovation, social desirability bias can make it difficult for team members to voice concerns or critique ideas about which others are enthusiastic. This challenge is particularly pronounced in technology-related projects, where “techno-optimism” about potential benefits often overshadows critical evaluation. By starting with the assumption of failure, pre-mortems create safety for honest dialogue, enabling participants to express genuine concerns without fear of judgment.

Third, the reduction of overconfidence and social desirability bias encourages participants to share candid insights and raise “out-of-the-box” concerns. This openness results in a richer, more accurate understanding of potential risks, which can significantly improve an innovation’s outcomes. When people feel psychologically safe voicing their doubts and fears, the quality of information increases, leading to more robust decision-making, more innovative problem-solving, and the chance to address vulnerabilities and mitigate potential issues before they arise.

Finally, by imagining failure, we can pinpoint potential problems at the start of a project, product, or innovation, allowing for proactive solutions rather than reactive measures. The value of learning from failure is well-documented. When seen as an opportunity for learning, failure becomes a resource for awareness, adaptation, and prevention of future failures.

Thus, the pre-mortem process offers a shift in perspective. No one can easily dismiss a concern with “that could never happen” because failure has already been assumed. The assumption of failure forces us to uncover explanations for that failure, and in doing so, we surface risks and concerns that might otherwise go overlooked.

The pitfalls and challenges of a pre-mortem

Not surprisingly, any prospective risk analysis like a pre-mortem presents numerous challenges, especially when it concerns a constantly evolving technology like generative AI. First, AI is still a young and evolving technology. Its speed and scale are disorienting, making its trajectory hard to predict. This uncertainty fosters a mindset in which we assume that what has not happened yet cannot happen. Second, our imaginations are limited, and we often fill gaps in understanding with past experiences. Thus, we tend to assume that tomorrow’s problems will resemble yesterday’s, leaving us unable to anticipate novel outcomes. Third, human beings are naturally drawn to new technologies, often overestimating their benefits while underestimating challenges and long-term risks. Fourth, engaging in a pre-mortem does not guarantee progress. Even when armed with the information generated by a pre-mortem process, governments, education agencies, educators, families, young people, and citizens may choose to do nothing.

But perhaps the greatest difficulty of the pre-mortem lies in its challenge to the inherent techno-optimism of educationalists, who are naturally inclined to embrace innovations that inspire and nurture the growth and potential of young people. Educators and educationalists often view educational innovations through a lens of possibility and progress, which makes envisioning a future where these innovations fail feel profoundly counterintuitive. Asking educationalists to start from a presumption of failure not only conflicts with this “optimism bias” but also represents a significant psychological hurdle, as it requires a shift from focusing on hoped-for benefits to grappling with potential risks, most of which they have never experienced and cannot yet imagine.

The ultimate goal of a pre-mortem is awareness and improvement

More than three decades ago, the American cultural critic Neil Postman warned that all technological change “is a Faustian bargain. For every advantage a new technology offers, there is always a corresponding disadvantage.” We see this tradeoff in our modern technological landscape: social media platforms have dissolved geographic boundaries, enabling global interactions while simultaneously becoming vectors for misinformation and social polarization and contributing to an epidemic of loneliness. The automobile has provided unparalleled mobility and independence, revolutionizing transportation, tourism, and commerce while also damaging public health and the environment, scarring landscapes, and contributing to the decline of small towns and independent businesses.

A pre-mortem analysis allows us to focus on potential first- and second-order adverse effects so we can better anticipate the challenges AI poses to education. This foresight can enable us to consider how our adoption, integration, and use of AI may inadvertently compromise what is most essential in education—critical cognitive skills, student-teacher relationships, learning as a goal in and of itself, and education as a human endeavor. And as regret teaches us, it is far better to establish healthy policies and practices for students, teachers, administrators, and families now than to ask them to unlearn unhealthy technology habits later on.

Pre-mortems are not a silver bullet. Future events are difficult to predict, even for experts, and though this pre-mortem may allow educators to speak in a more informed way about generative AI in education, it will by no means be the definitive statement.

However, we believe that engaging in pre-mortem analyses around generative AI in education will help education stakeholders thoughtfully anticipate and address potential negative impacts of AI while maximizing its benefits. By carefully imagining and analyzing possible future outcomes, we can adopt a more informed and balanced approach to integrating AI into educational systems, avoiding the missed opportunities for thoughtful planning that accompanied the adoption of technologies like social media. If the past is prologue, then we must prioritize strategies that mitigate AI’s possible harms as energetically and rigorously as we promote its potential benefits.
