In the modern business arena, the pursuit of certainty is a valiant, if often foolhardy, quest. We crave clear paths, predictable outcomes, and the reassuring hum of data telling us precisely what to do next. And why not? With the dazzling rise of Artificial Intelligence, it feels like we’re finally on the cusp of truly knowing. But before we hand over the keys to the kingdom entirely, it’s crucial to remember a distinction as vital as it is frequently ignored: the difference between risk and uncertainty. Neglecting this subtle yet profound chasm doesn’t just lead to suboptimal decisions; it can lead to delightful corporate chaos, the kind that makes excellent post-mortem case studies.
Risk lives in the comfortable confines of statistics. It’s when you know the possible outcomes and can assign probabilities to them, often with a confidence level that borders on smug. Think of it as a very sophisticated game of cards where the deck is fixed, and you’ve memorized every permutation. This is where AI truly shines, a beacon of computational brilliance. AI thrives on data, patterns, and the satisfying crunch of numbers. Its ability to process gargantuan datasets, identify correlations even a seasoned detective would miss, and predict outcomes based on quantifiable risk is remarkable.
Take sales forecasting, for instance. Before AI, it was a craft practiced by a few experts armed with spreadsheets and a good sense of intuition (read: educated guessing). Now, AI can analyse historical sales data, seasonality, promotional impacts, and even external variables to predict next quarter’s sales with impressive accuracy. Or consider fraud detection. AI models can sift through millions of transactions in milliseconds, identifying anomalous patterns indicative of fraud more efficiently and accurately than many human teams could. It’s a logical process grounded in data points, allowing us to manage the probabilities of what we already understand. A well-designed algorithm will likely even send you a message if you manually override its decision to flag a “suspicious” transaction.
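To make the “small world” point concrete, here is a deliberately toy sketch of anomaly-based flagging: learn a baseline from historical transaction amounts, then score a new transaction by how far it deviates. Real fraud systems use far richer features and models; the function name, data, and threshold below are purely illustrative.

```python
# Toy anomaly detection via z-score: a transaction is flagged when its
# amount deviates more than `threshold` standard deviations from the
# historical mean. This works only inside the "small world" the history
# defines; a genuinely novel fraud pattern would sail right past it.
from statistics import mean, stdev

def is_suspicious(amount: float, history: list[float], threshold: float = 3.0) -> bool:
    """Return True if `amount` is an outlier relative to `history`."""
    mu = mean(history)
    sigma = stdev(history)  # sample standard deviation
    return sigma > 0 and abs(amount - mu) / sigma > threshold

# Illustrative baseline of everyday card transactions (hypothetical data):
history = [42.0, 39.5, 41.2, 40.8, 43.1, 38.9, 41.7, 40.2, 39.8]

print(is_suspicious(1200.0, history))  # True: far outside the baseline
print(is_suspicious(41.0, history))    # False: perfectly ordinary
```

The sketch also shows the limit the essay is driving at: the model only knows the distribution it was handed. Change the world and the baseline quietly stops meaning anything.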
Uncertainty, however, is the unruly guest at the party. It’s when you don’t know all possible outcomes, or when assigning probabilities feels like trying to predict the flight path of a drunk butterfly. As the brilliant statistician Leonard J. Savage explored in his work on “small worlds”, uncertainty lurks outside the perfectly defined “small worlds” where probabilities can be rigorously applied. It’s the truly novel, the utterly unprecedented, the sudden plot twist the universe throws at you just when you thought you had it all figured out.
Imagine a global pandemic grounding logistics (who saw that coming?), a competitor launching a product so disruptive it renders your entire market analysis moot, or a sudden, inexplicable shift in consumer behaviour that makes all your meticulously crafted sales forecasts look like the ramblings of a particularly creative toddler. These are not scenarios where more data will magically give you probabilities.
Consider the challenge of expert knowledge generation in regulated contexts. While AI can digest vast amounts of legal text and regulatory documents, it struggles with the nuanced interpretation, the ethical judgment, and the foresight required to develop truly robust, defensible, and forward-looking regulatory strategies. Regulations evolve, precedents are set, and the “spirit” of the law often defies simple algorithmic categorization. This isn’t about calculating probabilities; it’s about navigating ambiguous language, anticipating unforeseen consequences, and applying human discretion where an algorithm would merely see conflicting data points. These are the “unknown unknowns”. Asking an algorithm about these is like asking your toaster for relationship advice – it’ll give you an answer, but it won’t be particularly helpful.
Herein lies the critical lesson for decision-makers: an algorithmic solution, however sophisticated, will never be able to account for the intangible, the truly novel, or the profound uncertainty that permeates reality.
AI is an unbelievably powerful engine for navigating risk. It can slice and dice market trends, optimize every conceivable variable, and even identify potential security threats that would otherwise be the stuff of late-night IT nightmares. The sheer value AI brings in terms of efficiency, accuracy, and actionable insights into quantifiable risks is undeniable. It’s the ultimate spreadsheet guru, the tireless number-cruncher, the digital analyst who never complains about overtime.
However, AI operates strictly within the “small world” of its training data and algorithms. It excels at tasks where the rules are clear, the data is defined, and the outcomes, while probabilistic, are definable. It cannot, by its very nature, grasp the nuances of human irrationality, the complexities of a volatile geopolitical landscape, the emergent properties of social movements, or the profound impact of a truly disruptive, utterly unprecedented idea. These are the realms of uncertainty, where the best AI can do is wave its digital arms wildly and scream, “Insufficient Data!” It certainly won’t be able to intuit your company culture or your existential dread.
Confusing risk with uncertainty leads to a dangerous overreliance on algorithmic solutions for problems they were never designed to solve. It fosters a false sense of security, leading to the kind of spectacular corporate missteps that make headlines (and then quickly vanish from our collective memory, only to be repeated later).
Therefore, the human decision-maker remains, rather inconveniently, indispensable. Our responsibility is to distinguish risk from uncertainty, delegate the quantifiable to the algorithms, and reserve human judgment for the truly novel.
In essence, AI is an extraordinary amplifier of our decision-making capabilities, particularly when it comes to understanding and mitigating risk. But let us not be lulled into the illusion that it can eliminate uncertainty. The most successful businesses will be those that master the nuanced art of distinguishing between the two, harnessing AI’s magnificent power for the quantifiable, and reserving human ingenuity (and the occasional dark chuckle) for the truly unpredictable. Because, let’s face it, reality has a particularly wicked sense of humour.
PS: Agentic AI was intentionally left out of this conversation; beyond the larger debate on its value, it introduces complexity reserved for another post.
Copyright 2025 - Mikael Koutero. All rights reserved.