00:00:00 Discussing the importance of forecasting and skepticism.
00:00:40 Introducing guests Jonathon Karelse and Joannes Vermorel.
00:01:37 Inspiration behind writing “Histories of the Future” and the importance of questioning forecasting methods.
00:05:49 Defining forecasting and its roots in the early 20th century.
00:08:53 Debating the sensibility of extending series of measurements to predict the future.
00:09:40 Classic 20th century forecasting perspective and its attachment to time series.
00:10:33 Recalibration of success measurement in forecasting and focusing on making money in business.
00:13:25 Newtonian principles and deterministic approach in forecasting and their influence on economic thought.
00:16:55 European advances in math and statistics, and their impact on North American forecasting methods.
00:18:25 Adapting to the inherent inaccuracies of forecasting and accepting the idea that it will never be perfectly accurate.
00:19:29 The problem with deterministic forecasting and embracing probabilistic approaches.
00:20:36 Early thinkers on AI and their predictions for its capabilities.
00:21:55 The influence of behavioral economics on forecasting and the classical approach.
00:23:00 The irrationality of humans and the emergence of behavioral economics.
00:26:34 Heuristics, their evolutionary benefits, and the downside in interpreting data.
00:28:55 Examining human behavior when making decisions based on data.
00:29:37 How framing the data with a story affects decision-making.
00:31:13 The impact of organizational biases on forecasting.
00:33:00 The issue of over-optimism in promotion forecasting.
00:36:23 Applying reason on top of irrationality and the potential of human ingenuity.
00:38:53 Importance of not over-relying on complex models for operational strategies.
00:39:48 The dangers of “naked forecasts” and the need for tangible connections to the business.
00:42:34 How bureaucratic processes and supply chains are vulnerable to problems in forecasting.
00:45:31 Behavioral economics and human bias in the forecasting process.
00:47:53 Maximizing the value of human judgment in forecasting by understanding biases.
00:48:39 Importance of acknowledging biases and their role in forecasting.
00:50:40 Limitations of time series perspective in forecasting.
00:52:00 Human problems in forecasting that go beyond biases.
00:54:53 The future of AI development and its role in aiding or replacing human forecasters.
00:57:01 The importance of human ingenuity and asking the right questions.
00:58:47 Discussing numerical recipes and human role in automation.
01:01:58 Future automation in supply chain management.
01:04:11 Potential topics for a second edition book.
01:05:22 Leveraging behavioral economics in C-level meetings.
01:08:46 Limitations of forecasting in aviation and retail.
01:09:30 Focusing on decisions and the strange nature of predictive modeling.
01:10:27 Comparing the strangeness of future predictions to quantum mechanics.
01:11:12 Jonathon’s advice to supply chain practitioners.
01:11:56 Conclusion and thanks to guests.

Summary

In an interview, Joannes Vermorel, founder of Lokad, and Jonathon Karelse, CEO of NorthFind Management, discuss the importance of understanding and questioning the purpose of forecasting in businesses. They advocate for a skeptical approach, emphasizing that accuracy should not be the sole measure of success. Forecasting should be viewed as a diagnostic metric to identify and address errors for continuous improvement. Both experts agree that biases can influence forecasting, and businesses should focus on techniques that have tangible impacts. They also discuss the role of AI in supply chain optimization, noting that while AI can assist, human ingenuity remains essential.

Extended Summary

In this interview, host Conor Doherty discusses forecasting with Joannes Vermorel, the founder of Lokad, and Jonathon Karelse, the CEO of NorthFind Management. Karelse explains that his approach to forecasting is centered around understanding its impact on businesses. Many organizations forecast because it’s “supposed” to be done, but often don’t ask why they’re forecasting, or whether there are ways to improve the process. He emphasizes the importance of having healthy skepticism and continually questioning practices to improve forecasting.

Karelse shares the inspiration for his book, “Histories of the Future,” which stemmed from his desire to examine the historical context of forecasting and the validity of certain forecasting principles. He refers to the work of Bruno Latour, who questioned the certainty of scientific principles and advocated for understanding the historical context in which they were born. This approach inspired Karelse to apply a similar lens to the field of forecasting.

When asked to define forecasting, Karelse says it is essentially taking a guess at what the future looks like. While the guess can become more scientific and guided, it’s important not to lose sight of the fact that forecasting is ultimately based on uncertainty. Vermorel adds that the classic forecasting perspective, which dates back to the early 20th century, is centered around time series and extending measurements over time. However, he believes that new ways of looking at the future will continue to emerge in the 21st century.

Karelse emphasizes that forecasting accuracy should not be the sole measure of success. Instead, forecast accuracy should be viewed as a diagnostic metric that can help identify root causes of errors and sub-optimalities, which can then be used to recalibrate and optimize for continuous improvement. The goal of forecasting is to make money, and understanding a business’s specific needs and expectations is key to utilizing forecasts effectively.

Vermorel agrees that forecasting has not always been approached with skepticism. Early proponents like Roger Babson believed in the absolute power of science to predict and model the future. However, both Karelse and Vermorel advocate for a more skeptical approach that questions conventional wisdom and seeks to improve forecasting methods in a way that benefits businesses.

The discussion begins with a brief history of forecasting, specifically the cultural and geographic aspects that played a role in its development. The conversation then turns to the classical approach to forecasting, which was based on a deterministic philosophy that relied on mathematical and scientific principles to arrive at accurate conclusions. The limitations of this approach are discussed, including the fact that humans are not always rational actors and that unconscious biases can influence decision-making. The concept of heuristics is introduced, and the benefits and drawbacks of relying on them are discussed. The idea of overconfidence, which is a precursor to a discussion on behavioral economics, is also explored. The conversation then shifts to the importance of probabilistic forecasting and how it can help organizations better understand the limitations of their predictions. The discussion concludes with a brief mention of artificial intelligence and its potential to help with forecasting, but also the need to accept that there are ultimate limitations to our capacity for understanding everything.

The conversation then turns to the issue of positive bias in forecasting, particularly in organizations with cultural and business-based biases towards growth and positive outcomes. Even without overt biases, research shows that people are about four times more likely to make positive adjustments to a forecast than negative adjustments. This imbalance is attributed to our evolutionary tendencies: we are far more eager to materialize upside possibilities than to confront downside risk.

Joannes Vermorel shared his experience with clients in retail, where the bias towards positive uplift for promotions was prevalent, leading to nonsensical forecasts. His solution was to approach forecasting as one technique among many others, rather than as the core approach. This entails only using numerical techniques that have a tangible impact on the business, such as producing something, moving something from place A to place B, or using data that is directly connected to something tangible. Vermorel insisted on the need to treat forecasting as one technique among many, and not to have naked forecasts that are not connected to anything tangible.

Jonathon Karelse contributed to the discussion by adding that all models are wrong, but some models are useful, and that the goal should therefore be parsimony in model selection. He also warned against micromanaging forecasts, since it wastes time when forecast accuracy at a horizon of seven or eight months is already abysmal. He suggested that the way forward is to apply our capacity for ingenuity in the specific applications where the probability of upside is greatest.

They concluded by stating that forecasting is just one technique among many and not the only way to approach the future. They agreed that a greater understanding of behavioral economics within an organization can improve forecasting. By recognizing the biases that can influence forecasting, organizations can avoid making nonsensical forecasts and focus on techniques that enable a tangible impact on the business.

The discussion revolves around the use of AI and forecasting in supply chain optimization. They explore the sources and degrees of bias in human judgment and how it affects the process. Vermorel argues that the focus should be on engineering numerical recipes that operate at scale and generate reasonable decisions. He states that such recipes should be entirely automated in daily execution, whereas humans should focus on longer-term decisions that require more mental bandwidth. Karelse agrees that AI can assist people in forecasting but not replace them, and that human ingenuity remains essential for asking the interesting and important questions that AI can then help answer. The discussion ends with Karelse’s hope that organizations can balance the potential upside of human insights against the frailty that afflicts everyone due to the imperfection of how our minds work.

The conversation closes with the future of supply chain optimization. Vermorel expressed his belief that, with better tooling and techniques, large teams of people in supply chain management might become unnecessary, and he described his experience of seeing people persist in doing things that are clearly irrational, despite evidence that proves otherwise. Karelse agreed with Vermorel and added that he leverages behavioral economics to help C-level executives understand why their processes are flawed and how to measure their business value. Vermorel believes that focusing on predictive modeling in supply chain management will become increasingly strange, and Karelse recommended that practitioners never be satisfied with simply knowing, but always ask why. The interview ended with Karelse recommending Vermorel’s book, and both guests thanking Doherty for his time.

Full Transcript

Conor Doherty: Welcome to the show. I’m your host, Conor. Today, I’m joined by Joannes Vermorel, co-founder of Lokad, and we have a special guest today, Jonathon Karelse, CEO and co-founder of NorthFind Management. He’s a published researcher in the field of unconscious bias and he’s written this lovely book, “Histories of the Future.” Jonathon, thank you very much for joining us.

Jonathon Karelse: Thanks for having me.

Conor Doherty: Right, Jonathon, I hope you’re ready for a sea of flattery because I actually read the book. I really enjoyed it. I think I might, in fact, be your target audience because I’m literate, but also I have an interest in these topics, you know, economics, business, behavioral economics. But I actually don’t have that formal training; my background, as we discussed before, is music and philosophy. So, I actually learned quite a bit going through the history of forecasting. You have a very nice tone, very accessible and readable, so thank you very much. So, we’ll start at the beginning, I guess. What exactly was the inspiration for writing a book on the last 100 years of forecasting?

Jonathon Karelse: Well, my approach to forecasting and practice has always been understanding what’s going to make a business impact, and that might seem obvious. But in a lot of organizations, forecasting is done because it’s “supposed to,” and not necessarily a lot of thought is given to why we’re forecasting. As a result, a lot of received wisdoms get passed down from generation to generation in the business, and people just go through the rote process of forecasting without really understanding what the elements of the process are that are impacting the business positively. Are there things we could do to improve it, and most importantly, why?

The “why” question is something that I guess maybe I wouldn’t call myself a contrarian per se, but I think it’s always good to have a little bit of healthy cynicism or skepticism. I’ve often asked why, and a book that really resonated with me when I was an economics student was written by Bruno Latour. He’s from that Latour family. He’s essentially the black sheep of the family because he’s the one not making wine, but Bruno Latour has a Ph.D. in epistemology from the École des Mines, which, for those of you familiar with it, you’ll know it’s not a half-bad university. He spent a lot of time researching the modes of learning and the modes of knowledge, and he wrote a book called “Science in Action.”

This book “Science in Action” looks at some of the black box foundations of science, things like the double helix structure of DNA, and brings them back to before they were facts, before they were black boxed, and helps us understand the historic context in which they were born. In doing so, he really illustrates that a lot of these scientific certainties are a lot less certain than we think they are. It’s convenient…

Conor Doherty: So, when you use the term forecasting, what exactly do you mean?

Jonathon Karelse: That’s a great question. At its essence, forecasting is taking a guess at what the future looks like. This guess can become more scientific and guided by principles of uncertainty, but ultimately, we’re guessing. It’s important to not lose sight of this fact, as it’s based on uncertainty.

Conor Doherty: That’s an interesting point. And Joannes, a foundational principle of Lokad is embracing uncertainty, right?

Joannes Vermorel: Yes, but to answer the question about forecasting, I believe there is a classic forecasting perspective that dates back to the beginning of the 20th century, popularized by people like Roger Babson and Irving Fisher. This perspective approaches forecasting through time series. You have measurements performed over time, such as the amount of steel produced or the number of potatoes harvested. You end up with a sequence of measurements that you can represent as a time series. The obvious thing to do is to extend the curve and see where it goes next. This is the essence of the classic forecasting perspective that emerged at the beginning of the 20th century. However, it’s only one way to look at it.

The real question is whether it makes sense to approach the future by just extending a series of measurements. It’s not necessarily wrong, but it’s an opinionated way to approach the future. This approach has been very much a thing of the 20th century, progressing and refining methods throughout the century. However, there are likely to be new ways to look at the future that will emerge in the 21st century, some of which may be much stranger than the classic approach.
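
To make the “extend the curve” idea concrete, here is a minimal sketch, not from the interview, of the classic time-series perspective: a sequence of past measurements mechanically extended into the future, here with simple exponential smoothing and invented data.

```python
# A minimal sketch of the classic time-series perspective: take a
# sequence of past measurements and mechanically "extend the curve".
# Simple exponential smoothing is used purely as an illustration;
# the data is invented.

def exponential_smoothing(series, alpha=0.3):
    """Return the smoothed level after seeing the whole series."""
    level = series[0]
    for value in series[1:]:
        level = alpha * value + (1 - alpha) * level
    return level

# Monthly steel production, in arbitrary units (fabricated data).
history = [120, 125, 123, 130, 128, 135, 140, 138]

level = exponential_smoothing(history)
forecast = [round(level, 1)] * 3  # flat extension over the next 3 periods

print(forecast)  # [133.7, 133.7, 133.7]
```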

Conor Doherty: Joannes, on that point about the classic approach to forecasting, I want to throw it back to Jonathon. Something that permeates all of your work is a recalibration of how people measure the success of a forecast. Your thesis seems to be that it’s not about forecast accuracy per se. Could you expand on that, please?

Jonathon Karelse: I hope we follow up on the idea of the classical approach versus different philosophies of forecasting moving forward. But in the meantime, one of the things that puzzles me is how people often talk about knowing the forecast will always be wrong as if it’s a get-out-of-jail-free card. They’ll say, “You’re asking me to forecast. I’ll do my best, but the forecast is always wrong, so don’t blame me when it is.”

…but then they still calibrate operational strategies and indeed financial strategies on the hope for a highly accurate forecast. So I want to be very clear, because I talked about this at a couple of conferences in Amsterdam last week, and I had some very angry people, particularly software vendors, in those sessions, saying, “Well, what are you talking about? Forecasting doesn’t matter.” And I want to be clear: forecasting absolutely does matter in particular applications, because there are some places where it doesn’t matter from an ROI standpoint.

Jonathon Karelse: If you’re a bespoke clothier and you can make three suits a year and your customers are willing to wait for 10 years, you don’t need to spend a ton of time forecasting demand. You’re at maximum capacity. The ROI is going to be minimal. For everybody else, there probably is an ROI, but the point is forecast accuracy for me is not the scorecard metric. Forecast accuracy is not the goal. Forecast accuracy is a diagnostic metric that we can use to identify root causes of errors and root causes of sub-optimalities that we can then use to recalibrate and optimize for continuous improvement. The goal of forecasting is to make money because the goal of business is to make money, unless you’re in a business that doesn’t do that. And forecasting is one of a number of tools that we have at our disposal to do that. In some cases, wielded properly, it’s the best tool we have. In other cases, it’s a supportive tool, and in other cases, it probably won’t yield a lot of benefit. But it’s understanding your business to understand what you should expect of a forecast that I think matters a lot.

Joannes Vermorel: The forecast is always wrong, and now people use that as a “get out of jail free” card. I really love this expression. The interesting thing is that it wasn’t always the prevalent perspective. You know, Roger Babson was an immense fan of the work of Sir Isaac Newton, and even then, there was this incredible belief in the absolute power of science, that you would be able to capture things and have some kind of numerical modeling, just like you can predict down to the last second of arc the position of Mars three centuries from now.

Jonathon Karelse: They both believed, as I ultimately do, that math underpins everything and if we had the capacity and enough data, math could explain everything. But in practice, we’re not there yet. And I would say that’s something that was not very well understood at the beginning of the 20th century. There are orders of magnitude of difficulty that are just not there, and so it’s not just like the ultimate formula is around the corner.

Joannes Vermorel: I believe that one of the key discoveries of the 21st century is to realize how much, for all the things that are knowledge-related, there are entire fields of knowledge that just escape our grasp. It’s not just about finding something equivalent to the law of gravity, where in just one equation you can explain an enormous amount of things. That was the sort of thinking that existed at the time.

Joannes Vermorel: For the audience, we are talking about the North American statisticians described in the book. Forecasting emerged in the USA due to the emergence of a middle class who owned stocks and wanted a projection about what would give them the best returns. They were very interested in all those sorts of forecasts, and thus it was something that really emerged in the USA and in North America. The cultural component, or the geocultural component, is key.

Jonathon Karelse: That’s very important because it wasn’t particularly statistically driven in North America. Like you pointed out, Babson loved Newton and everything Newtonian. He took what was a fairly superficial understanding of Newtonian principles and tried to apply it, without the benefit of a statistical understanding, to forecasting. Essentially, if something goes up for a while, it’s going to come down for a while, because that’s what happens with gravity.

Jonathon Karelse: Irving Fisher, who earned North America’s first PhD in economics, tried to apply his mathematical background to what had been a social science up to that point. He started to marry some of the statistics, which you have to say was absolutely being driven in Europe rather than North America, to the North American field of economics. But really, it’s in Europe during that time where we see all of the advances happening in the math that ultimately forecasting would use.

Joannes Vermorel: There was this deterministic approach where people believed that you could model the future in a mechanistic way. This thinking lasted for a long time. Even science fiction works in the 60s, like Isaac Asimov’s Foundation series, embraced the idea of psychohistory, a science that can predict the future very mechanistically.

Jonathon Karelse: That’s very interesting because that’s the classic perspective. But due to the fact that people have been operating for decades with fairly inaccurate forecasts, they have come to realize that the forecast is always wrong. However, they have not come to terms with the consequence that it will never be completely right.

Joannes Vermorel: That’s an interesting point. People have morally accepted that the forecast is always wrong, and they don’t fire people because of that, which is good. But have they re-challenged their practices in depth to embrace this aspect of the forecast? Not really.

Jonathon Karelse: What’s very interesting is that you’ve mentioned determinism a couple of times, and I think this is key. A lot of the science that was emerging in the 19th and early 20th centuries globally, not just in North America, was basically born of the momentum that we started gaining in the Renaissance. We came out of the Dark Ages and started figuring out that by applying scientific principles, we can begin to shine a light into these dark areas of knowledge and really elevate ourselves. And we started to get, I think, a bit arrogant about the extent to which we could do that. We started to believe in the 19th and early 20th centuries that, with enough effort, there’s really nothing that we can’t learn. And that informs two really important themes in forecasting. The first is that a deterministic approach makes sense with that philosophy, because it means if I work hard enough and I’m smart enough, I will get to that accurate conclusion, rather than accepting that it’s a fool’s errand, that I am always going to be wrong, and embracing probabilistic approaches, which incidentally…

…Kolmogorov was doing all of his work in statistics around the same time that these early deterministic approaches were being born, so it’s not like we had to wait another hundred years for the possibility of probabilistic approaches. The math was there. The second piece is that believing that with enough effort, with enough focus, we could figure out anything brings us to what today is a very hot topic: AI. Now, the idea that AI can solve non-value-added activities or routine activities for humans is not new. In fact, there was a conference in the 1950s at Dartmouth College where a bunch of the early thinkers on AI set out ten things that they thought AI could accomplish in the next ten years, and we haven’t done any of them 70 years later. That doesn’t stop us from trying, and I think the trying is important. But I think ultimately, the lesson is we need to accept that there are ultimate limitations on our capacity for understanding everything. And once we understand that, then we become more open to other approaches like, for instance, probabilistic forecasting, which then sets us up for the reality that we say we know we’re always going to be wrong. But now, accepting that, let’s understand what that looks like in terms of real business outcomes and calibrate our strategies on the knowledge that we will be wrong rather than the hope that we’re somehow going to be right.

Conor Doherty: It seems like you’ve dropped two very interesting pins there, one being essentially a precursor to a discussion on behavioral economics, I think you’re referencing overconfidence, and the second on AI, which featured in Chapter 6, I think, five or six. So we’ll take those in turn, if you don’t mind. First, to the behavioral economics, I know that’s very much your métier. Could you expand a bit on how behavioral economics actually influences or interacts with forecasting?

Jonathon Karelse: Sure. So Joannes, early on in the conversation, mentioned several times the classical approach to forecasting. And I would say the classical approach to forecasting is itself sort of a byproduct of the classical or, more accurately, neoclassical economic way of business in general. And that is again from a very 19th and 18th-century viewpoint that if we work hard and apply mathematical and scientific principles to this, we can understand. And Adam Smith in 1776 wrote the seminal work, The Wealth of Nations, and one of his key points is that basically all of commerce can be understood by the basic principle that humans are rational actors who, when given clear value-based choices, will gravitate naturally to the one with the greatest utility. And that doesn’t necessarily mean the greatest money, but the one that they have the greatest benefit of some sort from. And intuitively, that feels correct. The problem is, for any of the listeners who have studied economics, you’ll know, especially econometrics…

…although certainly, in application, there are principles of neoclassical economics that hold, we need to understand, in a broader sense, how these systems of demand and supply, price setting, and ultimately decision-making are influenced by unconscious psychological drivers, which in some cases are environmental, in some cases are hardwired evolutionarily, but in all cases exist. No matter how free from bias one thinks one is, no matter how objective one thinks one is, you are still subject to these unconscious biases that create a lens through which you are interpreting data.

Conor Doherty: Actually, sorry, you said in the book that the average person makes about 30,000 decisions a day, and we’re obviously not conscious of all of those. We couldn’t possibly be.

Jonathon Karelse: No, and this is the benefit of these heuristic processes that we have. I mean, a lot of times we view heuristics as a pejorative, like it’s a shortcut. As Joannes mentioned, in the 70s and 80s, when some of these more complicated scientific or statistical approaches to forecasting began arising, their proponents, like George Box and Gwilym Jenkins, who many of your listeners will know as the co-authors of the ARIMA method, sort of scorned the simpler methods, like simple exponential smoothing or Holt-Winters triple exponential smoothing, as being too simple, just a heuristic, a shortcut.

Jonathon Karelse: But what the first four M competitions showed was that, in many cases in practice, being a heuristic isn’t necessarily bad. And psychologically, there’s a huge benefit to being able to make decisions very quickly from an evolutionary standpoint. If I am aware of a tiger in my peripheral vision stalking me in the woods, and I stop and consider all my possibilities, thinking about all of the various things the tiger can do and all the various options I have, and then try to weigh the most appropriate one for me, I’m probably eaten by a tiger. And that means I don’t procreate, and that means my DNA ceases to exist. So over time, we learned that there are a number of heuristic processes that benefit us evolutionarily.

Jonathon Karelse: One of them is the representativeness heuristic, which is “this looks like something I’ve seen before, the last time I encountered this, and I had a successful outcome. This is the thing I did. I’m going to do that again.” So we don’t have to teach babies to recoil from things that look like snakes; it’s baked in. We don’t have to stop and think about what to do when we see a bus coming towards us; we jump back. And the 30,000 decisions we have to make a day, most of them are navigated by some sort of heuristic. If we had to think about all of them objectively, we would be crippled.

Jonathon Karelse: The downside of heuristics is that the thing that we think looks like something we’ve seen before doesn’t always actually represent that thing. And especially when it comes to interpreting data, we’re often subject to something called the clustering illusion. So when we’re paying people to interpret data and make a forecast, they feel a need to add value. We pay them to find patterns, and they find patterns even when the patterns don’t actually exist. It’s natural that this happens; one can’t blame them. But there’s a host of biases that impact our ability to rationally and objectively interpret data well.

Conor Doherty: Jonathon, on that point, you actually have an example in the book from research that you published elsewhere. You presented completely sanitized random data to a group of people and asked them to guess if the line would go up, down, be static, or if they didn’t know. Can you explain that and the significance of that finding?

Jonathon Karelse: Sure. The choice framework we presented is a bit of a spoiler for anybody who’s ultimately going to take our bias assay. A lot of the data presented is stochastic. We used a number of different stochastic data sets, and we wanted to make sure that we didn’t inadvertently have trend or seasonality in any of them. These are as stochastic as it gets; there’s no possibility any stat package will find trend, seasonality, or any other pattern in these data sets.

When we presented the unedited, unframed data set and asked people where they thought the demand would go, we had a pretty much even split between up, down, and unchanged. We didn’t have many people saying they didn’t know, which would be a completely appropriate answer because that would represent the fact that they don’t know anything about what the data means. They don’t even have the benefit of being able to run a stats tool on it to see if there’s any trend or seasonality, and by the way, you can’t predict the future anyway. That would be the correct answer, but very few people actually say that.

We then presented the same data set later in the assay with a bunch of other questions in between, but this time it came accompanied with a little story. The data is the same, and the story contains information that might seem useful but in reality has no bearing on the data. What we found is that about 70 percent of people become more certain of the decision they’re going to take. Anyone who was a “don’t know” typically moves out of that camp, and most people that were “unchanged” move into either the “above” or “below” category.

It depends on how we frame it. If we have a positive frame, we see a lot of people gravitate that way. That’s a really important insight from a practical forecasting standpoint because the data hasn’t changed. In the first example, the outcome is probably closest to the most appropriate you could expect from a human. A computer would have done it immediately. But once we accompany that with a story, suddenly all logic and rationality go out the window, and we end up with an extremely biased view of the data.

The problem is, in practice, it isn’t that different. We ask people to create demand plans, but they’re doing this within the broader context of an organization which has its own cultural biases and business-based biases towards growth and positive outcomes. It’s not really that surprising, then, that when we measure the effect of human intervention on computer-based forecasts, we most often see positive bias being driven. In some cases, there’s even overt pressure to have a positive bias in organizations, pressure to forecast, to plan, and to hit certain goals. People are basically told to change the forecast. But even minus those overt biases, the long-term research by Len Tashman, Spyros Makridakis, and Paul Goodwin shows that we’re probably about four times more likely to make positive adjustments to a forecast than negative adjustments, which makes no sense if we start with a statistically driven forecast. The residuals should fall, normally distributed, on either side of that forecast, so if it required human adjustment, over time we should balance out. But because of this unconscious bias, where we’re much more risk-averse than reward-seeking, and again there are evolutionary reasons for that, we like to materialize upside possibilities much more than we like to materialize downside risk, and we end up with people’s fingerprints all over positive bias in forecasting. Do you find that to be the case when you’re doing forecasting?
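
As an illustration of why a 4:1 ratio of positive to negative adjustments is a problem, here is a small simulation sketch, with invented numbers rather than the published research: an unbiased statistical forecast, whose residuals are centered on zero, acquires a clear positive bias once adjustments are four times more likely to go up than down.

```python
# Sketch: if a statistical forecast is unbiased, residuals are centered
# on zero, so human adjustments should be roughly balanced. Simulating
# a ~4:1 positive-to-negative adjustment ratio shows how a systematic
# positive bias creeps in. All numbers are invented.
import random

random.seed(42)
true_demand = [random.gauss(100, 10) for _ in range(10_000)]

# Unbiased statistical forecast: right on average, noisy per item.
stat_forecast = [d + random.gauss(0, 8) for d in true_demand]

def adjust(forecast):
    """Human touch: adjustments are 4x more likely to be upward."""
    if random.random() < 0.8:  # 4 out of 5 adjustments are positive
        return forecast + abs(random.gauss(0, 5))
    return forecast - abs(random.gauss(0, 5))

human_forecast = [adjust(f) for f in stat_forecast]

def bias(forecasts):
    return sum(f - d for f, d in zip(forecasts, true_demand)) / len(forecasts)

print(f"statistical bias: {bias(stat_forecast):+.2f}")   # close to zero
print(f"adjusted bias:    {bias(human_forecast):+.2f}")  # clearly positive
```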

Joannes Vermorel: Yeah. A decade ago, Lokad was still doing, I would say, classical forecasting; we started as a software vendor by doing classical forecasting. Right now, I would say we have an element of predictive modeling in the toolkit, but the way we operate, and we can discuss that, is very, very strange, and outside the context of what would be deemed relevant in “Histories of the Future,” unless you start talking about the history of the future for the 21st century. But back to those experiences, it’s very interesting, because we had very similar experiences, notably with our clients. We had a series of clients, and still have, in retail, and when it came to forecasting promotions, one of the things that we were frequently finding is that the uplift of the promotion is limited. Yes, in a hypermarket, to give an order of magnitude, a promotion is maybe going to increase sales by 30-50%. That’s a lot, but it’s very much below the sort of “we are going to do 10x for this product” that people were expecting.

And the interesting thing was, for those promotions, we did a whole series of benchmarks, with teams actually just modeling a super simplistic uplift for the promotion versus people who were micro-optimizing, saying, “Ah, I know exactly this brand of chocolate,” etc. And look at what came out on top in terms of accuracy: ridiculously simple models, the sort of things that were of the order of complexity of exponential smoothing, but just for the uplift of a promotion, which is just a constant factor, plus 50 percent, and you’re done. That was actually better, much better, than the people who were micro-optimizing. And indeed, the bias was very much on the positive side, where they would say, “But you realize that this brand, it’s the first time in the last 10 years that they are promoted; they are going to do 10x!” And we are thinking, “Yeah, probably not. It’s probably going to be like plus 50 percent. I know that you’re going to be disappointed.”
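
A sketch of the “ridiculously simple” promotion model being described, with invented figures: a constant multiplicative uplift applied to the baseline, rather than a hand-tuned 10x.

```python
# Sketch of the "ridiculously simple" promotion model described above:
# baseline demand times a constant uplift factor. The +50% factor and
# the baseline figures are invented for illustration.

PROMO_UPLIFT = 1.5  # a flat +50%, not the 10x people hope for

def promo_forecast(baseline: float, on_promotion: bool) -> float:
    """Forecast demand; apply a constant uplift during promotions."""
    return baseline * PROMO_UPLIFT if on_promotion else baseline

weekly_baseline = 200.0  # units/week for some chocolate brand (made up)
print(promo_forecast(weekly_baseline, on_promotion=False))  # 200.0
print(promo_forecast(weekly_baseline, on_promotion=True))   # 300.0
```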

But then you end up with super weird things where, for example, you have a forecast that is completely nonsensical, like, you say you’ll do 10x, and you don’t do 10x, but purchasing 10x is actually the good move, because the supplier actually gives the retailer a massive discount. So basically, it’s kind of a speculation on the value of the inventory. And if your supplier gives you a 25% discount on stock that you will sell over time, it could turn out to be a smart decision. But you see that there was something very bizarre in terms of thinking. It was: I’m going to start by making a very nonsensical forecast, like I used to, and then, due to the fact that usually with promotions I’m buying the stock at a vast discount from the supplier, as a way for me to be able to put a big discount on the tag price of the stuff, I end up doing a good operation over time.

Joannes Vermorel: But you see, if you deconstruct it, there is an element of rationality: you end up being right for the wrong reasons.

Jonathon Karelse: Exactly, and that’s very interesting. The fact that people may be irrational doesn’t mean that you can’t apply reason on top of that to model this irrationality. And that’s why I would say my own perspective is that there is no limit to human ingenuity. That’s my belief; it’s not an element of science. My core belief is that there is no limit to the amount of human ingenuity, but make no mistake, some of the things to be addressed require an absolutely immense amount of human ingenuity, and we’re talking of centuries of work. So we have to be very modest in this grand journey of science that started a few centuries back. This is only the beginning, and there are probably entire classes of knowledge whose existence we don’t even suspect yet.

Joannes Vermorel: So yeah, and I fully agree with you, Jonathon. It’s a core belief of mine too.

Jonathon Karelse: I believe it was Pascal who said, “If it exists, it can be quantified.” And of course, there are limitations on our ability to do that, but I believe ultimately, with sufficient capacity, everything can be quantified and understood. But obviously, the issue is we are so far away from that capacity that, in practice, beginning any kind of business-based journey with that philosophy is insane, because we’re too far away from the goal. But it’s an important follow-on from the idea of the forecast always being wrong, and the point that Joannes made about micromanaging forecasts. When George Box said, “All models are wrong, but some models are useful,” that’s sort of where “the forecast is always wrong” has come from. There are two other things he said that most people ignore. The first was, “Since all models are wrong, but some are useful, aim for parsimony in model selection.” In other words, you’re going to be wrong no matter what; even a huge, complicated model will still end up with some degree of wrong. So don’t predicate your accuracy on the need for a huge, complex model, because you’ll still be wrong. The second, and to me the more important one in practice, is “Don’t sweat the mice when there are tigers around.” The number of times we work with organizations where they say they know the forecast is always wrong, their forecast accuracy in practice is abysmal, and yet they spend hours debating one or two percent at a horizon of seven or eight months on a SKU, is crazy. Your forecast accuracy at that horizon at the SKU level is, for instance, 30 percent.

Adjusting it one or two percent is immaterial. You’re going to be wrong, and you’re going to be so wrong that the time you took to make that one or two percent adjustment is a complete waste of time. You should only be applying that ultimately infinite capacity for ingenuity, which I also believe humans have, in specific applications where the probability of upside is greatest. And that’s when A) you understand something with certainty about the future that isn’t reflected in history, B) the thing you’re touching is sufficiently valuable to warrant the intervention, and ultimately C) the scale of that intervention is sufficiently large to warrant taking it, because otherwise you still end up inside the error margins, and you’ve got safety stock or some other mechanism taking care of it anyway.
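
Those three criteria can be read as a guardrail. Here is a minimal sketch of such a rule; the thresholds are placeholders of ours, not Karelse’s:

```python
# Sketch of the three override criteria described above, as a guardrail:
# only touch a forecast when (A) you know something the history does not
# reflect, (B) the item is valuable enough, and (C) the adjustment is
# larger than the error margin. All thresholds are invented placeholders.

def should_override(new_information: bool,
                    annual_value: float,
                    adjustment: float,
                    error_margin: float,
                    min_value: float = 50_000.0) -> bool:
    return (new_information                      # (A) genuine new knowledge
            and annual_value >= min_value        # (B) worth the effort
            and abs(adjustment) > error_margin)  # (C) outside the noise

# A 2% tweak on a SKU whose forecast error is ~30% at that horizon:
print(should_override(True, 80_000.0, adjustment=0.02, error_margin=0.30))
# False: the tweak drowns inside the error margin.
```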

Joannes Vermorel: That’s very intriguing because that reflects very much the sort of journey that Lokad went through. Nowadays, the way to approach that is first to only approach the anticipation of the future for its consequences. That’s why there is now almost a dogmatic part of the Lokad dogma that says naked forecasts are not allowed. So, you’re not allowed to do it, and that’s reinforced. I’m able to enforce it at Lokad, obviously, being the CEO. The idea is that when you do a naked forecast, you are, by definition, insulated from the real-world consequences. The forecast in and of itself is an abstraction of a measurement for the future. It says nothing about whether your business is good or bad. Yes, you can tweak the numbers, but ultimately it’s not even really connected to reality. It’s a very abstract thing.

And again, people are only willing to go into this sort of exercise due to the fact that classic forecasting has been pretty much reified. There are people that have forecasting in their resume, like, “I’m certified to do forecasting.” Forecasting is a thing, and demand planner is a position. There are positions and processes. So, you see, these things that are very abstract, that were one way to approach the future, have been reified through job positions and software components. You pay money for licenses to get them, so you see, that’s a way to make it real. If you pay for something, certainly it exists.

And so, the approach, if I go back to this idea of naked forecast, the sort of answer that Lokad had was that no, we have to treat forecasting as one technique among many others, numerical techniques that just let us make decisions. There are tons of things that have a tangible impact on the business. The idea is that if you don’t have a direct connection to something that is very tangible, such as producing something, moving something from place A to place B, or producing something so you get rid of some materials and you have some outcome, then you’re not allowed to have predictive modeling. That’s the thing that is very tempting; as soon as you have a time series or any kind of data, you can always engineer a model.

Conor Doherty: Joannes, can you provide some insight on the challenges of using projections in supply chain optimization?

Joannes Vermorel: The beauty of projections is that they are feasible, whether they are relevant or wise. However, the problem is that when you have a hammer in your hand, everything looks like nails. If you have a certification in forecasting techniques, you can take any data set and start applying your models. Our policy at Lokad is “no naked forecasts” because they are too dangerous. If you don’t connect the forecast with something very real, you will be subject to intense bias or even bureaucratic problems. When you come up with a metric, you can have all sorts of things within the organization to optimize against this made-up metric. Considering that supply chains are bureaucratic by nature, aligning supply and demand is a very bureaucratic exercise. It’s about synchronizing a lot of people, processes, and software. If you add fuel to the fire, you can end up with something that rapidly takes on large proportions. Supply chains are human constructs made up of many people, software, and processes, and this creates a fertile ground for problems, especially with forecasting.

Conor Doherty: Jonathon, how does a greater understanding of behavioral economics within the organization improve the forecasting process in concrete terms?

Jonathon Karelse: I would say there are two broad ways in which it improves the process. First, a lot of organizations believe that humans should not be impacting the forecast process, so they try to keep human judgment as far away from the forecasting process as possible. They believe that, as a result, they’re more immune to the biases and gamesmanship that take place in what Joannes aptly called a very bureaucratic process. However, I would argue that even in situations where we think we’ve kept humans away from the process, there are still human fingerprints all over it. There’s human influence in the selection of data, the selection of software, and most importantly, the actions we take as a result of the forecasting process.

The forecast itself is just an idea, a potential set of instructions or a map. We still have to decide what to do with it afterward, and that requires humans in the supply chain taking action. Understanding the extent to which, and the ways in which, we are biased helps us understand the potential pitfalls in our process. Working backward from the potential outcomes to the process, rather than assuming the process will bring us to a specific outcome, allows for a better understanding of the sources and degrees of bias in the people involved.

…in supply chain and in planning, which helps us understand with even greater insight what those outcomes could be. More likely, an organization has a forecasting or demand planning process that has some degree of automation and computer-driven elements, but also, by design, the integration of human judgment. I believe, subject to specific guidelines and criteria, there is value over time in that integration of judgment. But you maximize the potential for that human judgment to add value if, again, you understand the extent to which the people providing it have biases. It’s in organizations that either actively do not want to believe they have bias, or are oblivious to the fact that they do, that you’re most likely to be transmitting bias into the demand planning process, either through the active integration of judgments or through these human fingerprints that exist everywhere else. When you begin to look at the biases that are in your organization, you can start to provide guardrails that mitigate their impact. It’s always going to be there; human judgment is always going to be faulty. But it’s a matter of balancing the potential upside of human insights in particular instances against the certainty that with those insights will come the frailty that afflicts all of us because of the imperfection of how our minds work.
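
One way to put a number on whether judgment adds value, in the spirit of the guardrails described here, is a forecast-value-added comparison: measure the error before and after the human touch. A minimal sketch with invented data:

```python
# Sketch of one way to quantify human judgment in the process:
# forecast value added (FVA), i.e. the change in error after adjustment.
# Data and figures are invented for illustration.

def mape(forecasts, actuals):
    """Mean absolute percentage error."""
    return sum(abs(f - a) / a for f, a in zip(forecasts, actuals)) / len(actuals)

actuals       = [100, 110,  95, 120, 105]
statistical   = [ 98, 112,  97, 115, 108]
human_touched = [110, 125, 105, 130, 115]  # the usual upward nudges

fva = mape(statistical, actuals) - mape(human_touched, actuals)
print(f"FVA: {fva:+.1%}")  # negative here: the adjustments destroyed value
```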

Joannes Vermorel: I agree with that idea; it is also my experience that if you’re not even acknowledging the fact that you might have biases, that is a time-tested recipe for maximizing the amount of bias that you have in your organization. And if I have to deconstruct further this idea of approaching the future: when people think about those biases, they still have this time series perspective in mind. It’s very difficult to think about what you are doing wrong in your forecasting activity without having in mind the solution or mechanism by which you’re doing it. “Bias” refers to the fact that you have things that are too high or too low, and this is a very one-dimensional perspective, tied to the idea that you’re operating with a time series.

The sort of problems that I’ve seen, and that has been the sort of technological evolution of Lokad, is that if you want to convey information about the future, there are entire classes of things that cannot be expressed with time series. It doesn’t mean that it cannot be expressed with numbers; it just cannot be expressed with time series. Time series are a very simplistic way, it’s literally a sequence of measurements that extend into the future. Just to give an example, if I am looking at my sales of one product, I could forecast my sales volumes, but my sales volumes are conditional on the price I practice, and the price is not something that is a given, it’s a decision for me. So even if I was able to have a very accurate forecast, it would still be incomplete.

Something that would be kind of strange: the forecast would have to be, mathematically, a function that says, if I do this with the price, then this would be the outcome. So here we are suddenly touching the fact that, even if we are looking from this very deterministic perspective of just having bias and whatnot, there are elements where this time series perspective is very weak at taking into account things that are very big. It’s not just about having something that is too high or too low; it is almost a different dimension that is just not accounted for. Here I’m giving the idea of being able to literally shape the outcome by my own actions: it’s not just observing the movement of planets; I can act and modify the outcome. But also, even if we stay with a purely passive observer, there are situations where time series are still insufficient.
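
A sketch of that idea: the “forecast” stops being one time-series number and becomes a function of the pricing decision. The constant-elasticity demand curve and every parameter below are invented for illustration.

```python
# Sketch: the "forecast" as a function of a decision (the price),
# rather than a single time-series number. A constant-elasticity
# demand curve is used purely as an illustration; parameters invented.

def demand_forecast(price: float,
                    base_demand: float = 1000.0,
                    base_price: float = 10.0,
                    elasticity: float = -1.8) -> float:
    """Expected weekly demand conditional on the price we decide to set."""
    return base_demand * (price / base_price) ** elasticity

for price in (8.0, 10.0, 12.0):
    print(f"price {price:5.2f} -> demand {demand_forecast(price):7.1f}")
# The output is not one number but a curve: each pricing decision
# yields a different future.
```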

Joannes Vermorel: For example, if I go for aviation maintenance, I want to keep my aircraft flying. I can forecast the demand for parts, but the thing is that when I repair an aircraft, there is a list of parts that I need for the repair. So, simplifying the schema: an aircraft comes into the hangar for maintenance, people do a diagnosis, there is a list of parts that they need to change, and until every single one of those parts has been changed, the aircraft cannot fly again. It’s grounded. The fact that I can forecast all the parts independently does not tell me anything about the joint availability of all the parts. In theory, if all my forecasts were absolutely perfect for all the parts, then yes, the joint knowledge would be perfect as well. But as soon as you have even very minute uncertainty on every part, knowing that, for the audience, an aircraft is about 300,000 distinct parts, the uncertainty that you have on the joint availability of all the parts that you need to repair the aircraft is absolutely gigantic.
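
The arithmetic behind this point is stark. A minimal sketch, assuming (as a simplification) independent part availabilities, an invented 99.99% per-part service level, and treating all 300,000 parts as needed, which overstates a real repair but shows the compounding:

```python
# Sketch of the joint-availability arithmetic: even a tiny per-part
# uncertainty compounds across ~300,000 distinct parts. Independence
# and the 99.99% per-part availability are simplifying assumptions.

n_parts = 300_000
per_part_availability = 0.9999  # each part available 99.99% of the time

joint = per_part_availability ** n_parts
print(f"joint availability: {joint:.2e}")  # ~9.4e-14, effectively zero
```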

Joannes Vermorel: And that’s an example where the classic time series perspective is just mathematically not expressive enough. So that’s another class of problems. If we go back to biases: you have biases like forecasting too high or too low, but you also have other classes of very human problems, such as not even looking in the right direction, or not looking with the sort of structure that would give you a relevant answer. And those are, I would say, very much the 21st-century way of looking at it, and they are much more puzzling.

Jonathon Karelse: I absolutely agree.

Conor Doherty: Well, that then leads us, I think, tidily to discussing the future, or the next hundred years: the future of the future, the future of the futures. So, Jonathon, I’ll go to you first. In terms of AI development and technology, do you see it aiding people in forecasting or ultimately replacing them?

Jonathon Karelse: When Daniel Kahneman gets asked about AI replacing people, he’s on the one hand hopeful that it will, because we’re so bad at making objective judgments, but on the other hand, certain that it will never happen. And again, this is, to me, the importance of dividing the theoretical or the philosophical from the practical. On the theoretical side, it should occur at some point in the future, at some point where our capacity for processing data, our capacity for understanding at a much more nuanced and granular level how human thought works and what intelligence itself actually is, will allow us to give rise to complex systems like the guys at the Dartmouth conference in the 50s were aiming for when they thought they could replicate the human brain in a matter of a couple of decades. That’s the theoretical side.

In real life, in my lifetime, in your lifetime, I don’t believe that will happen. And I can say that with some degree of certainty just by looking at the trajectory of what we’ve seen over the last 70 years of AI. Certainly, we’re learning a lot today. Computing processing power is expanding exponentially, as is the amount of available data, but that has still not yielded anything close to the kind of breakthrough in AI in practice that will replace humans. Can it assist us? Certainly. There are all kinds of examples today of where the nascent application of AI is having a benefit in a lot of different areas, but the gap between replacing people and assisting people today remains a gaping chasm.

Going back to something that Joannes said early on that I agree with very much is the human capacity for ingenuity is that piece that I think we’re in no danger of having replaced by computers or by AI. I think the value of humanity is not in being able to answer complex questions because we can correctly put computers to use to solve complex questions. I think where we are most valuable is asking interesting and important questions in the first place. It’s only by posing those questions that we can bring to bear the sum total of technology today to come up with the answer, but it’s asking those blue-sky questions that makes humans still a critical part of the process.

Do you want to add something, Joannes? I’ll throw it over to you.

Joannes Vermorel: My take is that what people see as forecasting as a human activity, in the classical sense, like having an army of clerks or companies having their S&OP processes supported by hundreds of people processing spreadsheets and generating numbers, I’m very hopeful that within my lifetime, I will be seeing it disappear. The sort of practice that we have at Lokad makes me very hopeful because, for the clients that we serve, we have pretty much eliminated that.

But the way we’ve done it, and that’s the sort of product we ended up with, is not by eliminating people or having some kind of artificial intelligence. The way we did it was by focusing on those decisions and having smart engineers engineer numerical recipes. That’s the typical term that I use, because some might be heuristics, and some might be even more mundane, just filters and whatnot; that’s not even a heuristic, that’s something even more basic.

Conor Doherty: So engineering numerical recipes that just operate at scale for those companies, the mundane daily stuff, and that can be entirely automated now. Does that mean that we have removed humans from the picture?

Joannes Vermorel: Not really, because first, the numerical recipes are very much a human product. It takes a really smart human engineer to craft them, and the maintenance is also entirely human-driven. The numerical recipes are just a sort of know-how of what sort of numerical processes work at scale to generate reasonable decisions. Is there any intelligence in the numerical recipes? Absolutely not. The numerical recipe is a very mechanical affair. Yes, there might be bits of machine learning, but that’s just statistical techniques. They are still incredibly mechanical in nature.

Conor Doherty: So where does it get very interesting?

Joannes Vermorel: If you start going from this perspective, what you end up with is still a process that automates something that keeps hundreds of people busy in large companies. Yet, at the end of the day, you still have a team of people who are very much in charge of those numerical recipes that don’t operate by themselves. The key is for humans to have the mental bandwidth to think, and if they are completely buried under the minute details of super complex things in supply chain, it becomes difficult.

An example of a super complex thing in supply chain would be having 50 million SKUs that need some kind of micromanagement, where I need to choose whether I’m going to have one unit in stock, two, three, five, etc. And I have 50 million of those stock levels to micromanage on a daily basis. My hope is that the minute forecasts needed to power this sort of decision will be entirely automated in the sense of daily execution. But for a longer horizon, like from one year to the next, where the company itself evolves, where its market evolves, where the right questions to answer evolve, I don’t think we will see it answered through machines in my lifetime.
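
As an illustration of what such a mundane, fully automated “numerical recipe” might look like, here is a toy reorder rule; it is not Lokad’s actual recipe, and every threshold is invented:

```python
# Toy example of a mundane "numerical recipe": a daily, fully automated
# reorder decision per SKU. This is not Lokad's actual recipe; the rule
# and thresholds are invented to illustrate the idea of automating the
# daily micromanagement of millions of stock levels.

from dataclasses import dataclass

@dataclass
class Sku:
    on_hand: int
    daily_demand: float   # estimated mean daily demand
    lead_time_days: int
    safety_days: int = 3  # crude buffer, a placeholder heuristic

def reorder_quantity(sku: Sku) -> int:
    """Order up to expected demand over lead time plus a safety buffer."""
    target = sku.daily_demand * (sku.lead_time_days + sku.safety_days)
    return max(0, round(target - sku.on_hand))

# Run daily over millions of SKUs, decisions like this need no clerk:
print(reorder_quantity(Sku(on_hand=4, daily_demand=1.2, lead_time_days=7)))  # 8
```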

Conor Doherty: So what does that mean for companies in practice?

Joannes Vermorel: I believe this automation will replace layers of the ecosystem where people are doing things that have very little added value, especially under the umbrella of S&OP. Some people would argue that it’s maybe not the real S&OP or the good S&OP, but that’s not my debate. My point is that I’ve observed, in the supply chain industry, there are a lot of large companies with staggeringly large teams of people who are just pushing numbers up and down, and I suspect that might disappear. Not because we have some kind of fantastical tooling that would eliminate the need for humans, but because with better tooling and better techniques, we can have a few smart people who can engineer things that operate at a very large scale.

Conor Doherty: Well, if I just throw it back, Jonathon, do you have anything in response to that? Because I want to give you the last word on that.

Jonathon Karelse: I mean, I can have the last word, but I agree broadly with everything he’s saying for sure. And I won’t be drawn into the “S&OP” debate either.

Conor Doherty: We’ll push on a little bit then. Looking forward, if you were to write a second edition of “Histories of the Future”, a part two for the 21st century, are there any specific ideas you would focus on?

Jonathon Karelse: No, my second book will not be a sequel to this one. My second book will have to wait until after I’m retired, because it would be the story of all the insane things I’ve seen people do in supply chain over the course of my career: people who, despite mountains of evidence of how crazy it would be to do something, persist in doing it anyway. But obviously, all you current clients out there, don’t worry, you won’t be in it. But no, it’s only been a few months since I published this book, so I don’t think there are any of those new, as Joannes said, as-yet-undiscovered systems of knowledge or types of science that I need to start thinking about.

Conor Doherty: Well, on that note, and it’s something I didn’t get a chance to ask earlier (Joannes, I’ll ask you as well): Jonathon, in your experience at NorthFind, when you’re in a room with C-level executives, trying to sell them the ideas we’ve been talking about, and you encounter the level of resistance we discussed earlier, driven by unconscious biases, how do you leverage behavioral economics to push through that and avoid the kinds of insane examples you were just alluding to?

Jonathon Karelse: I’m going to partially reject the premise of your question. I don’t think I particularly try to use behavioral economics as a way to steer these discussions to a desired conclusion. I think I’m in an easier position to navigate that ground than, for instance, a software vendor, because for me, business success doesn’t look like selling a piece of software. And to be clear, I’m not saying software is not important; it absolutely is, it’s a critical enabler. But because we’re in the business of assessing processes and issues, and ultimately architecting solutions, I’m not often in the position of trying to push C-suite folks in a certain direction. It’s more a matter of understanding, given the culture of their business and their available resources, be they data, tools, or people, where the most likely or most optimal first step on the journey towards process transformation lies. And if they’re strongly against relinquishing their grip on a forecast, and they really want to have 300 salespeople spend time every month adjusting it, that’s not necessarily a hill for me to die on. It’s then: okay, if this continues to be our reality, let’s keep it as part of the process, but importantly, let’s measure the business value of that activity.

The reason a lot of these crazy activities exist is that the legacy in these organizations is some sort of measurement that allows them to persist, a measurement that doesn’t make obvious how crazy the activity is. The measures themselves are often crazy, because it takes a crazy measure to substantiate a crazy process. When you go into an organization and see accuracy being measured at the top of the hierarchy, in dollar value, and averaged across three months, you know it is the product of them not wanting to know how bad the forecast process is. If they were using accuracy for its intended purpose, which is diagnostic rather than a scorecard, they would never aggregate across multiple months, and they would never aggregate that high in the hierarchy. I’m rambling a bit, but the bottom line is, I’m not trying to push them to a conclusion if they’re really hung up on a crazy process. We just help them understand, by measuring the business benefit of that crazy process, whether or not they want to continue doing it, and often they’ll come to the conclusion themselves.
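
A small numerical illustration of that point, with made-up figures: two SKUs, each forecast badly every month, whose errors cancel out once accuracy is aggregated across products and months:

```python
# Made-up figures: two SKUs over three months, both badly forecast,
# with errors running in opposite directions.
forecast = {"SKU-A": [100, 100, 100], "SKU-B": [100, 100, 100]}
actual   = {"SKU-A": [150, 160, 140], "SKU-B": [ 50,  40,  60]}

# Diagnostic view: error per SKU per month (here, 29% to 150%).
for sku in forecast:
    for month in range(3):
        err = abs(forecast[sku][month] - actual[sku][month]) / actual[sku][month]
        print(f"{sku}, month {month + 1}: {err:.0%} error")

# Scorecard view: aggregate across SKUs and months before measuring.
total_forecast = sum(sum(v) for v in forecast.values())  # 600
total_actual = sum(sum(v) for v in actual.values())      # 600
print(f"aggregated error: {abs(total_forecast - total_actual) / total_actual:.0%}")  # 0%
```

The aggregated scorecard reports a perfect forecast while every individual number is badly wrong, which is exactly why the diagnostic view has to stay disaggregated.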

Joannes Vermorel: Obviously, being in the shoes of a software vendor, my approach is usually quite different. It is usually to outline examples, as simple as I can make them, where that sort of forecast just cannot deliver what they are looking for. Sometimes there are very simple situations. In aviation, if you forecast at the part level, that still gives you zero information about whether you’re going to be able to repair the aircraft. If you go into retail, where a store has tons of products that are very good substitutes for one another, you have another class of problems: per-product accuracy is not going to give you a good indicator at all. Am I very successful with this sort of organization? I don’t know. Maybe your own approach, which is to let them make their own journey, is more efficient. It is a difficult journey. One of the things that makes the Lokad experience interesting, not necessarily easier, but interesting, is that by focusing on the decisions, the sort of things we are doing in terms of predictive modeling become very strange, positively so. There is a journey I see where the forecasts that are most useful are getting stranger and stranger. I suspect the 21st-century histories of the future will be very strange, a bit like the strangeness that arises with quantum mechanics: a whole set of ideas that are absolutely not intuitive, which come with math that is just bizarre, and when you apply it, you end up with even more bizarre things than expected.

Conor Doherty: Well, gentlemen, I think I might bring this to a close. But before we go, Jonathon, if you had one piece of advice to give everyone in supply chain management, or any supply chain practitioners out there, what would it be?

Jonathon Karelse: Buy the book, available in stores. That’s the advice my accountant would maybe give. If it’s a single piece of advice, it’s to ask why. Never be satisfied with just knowing; try to understand why. There’s actually a very nice quote, and I don’t know who wrote it, but it goes: “a bad forecaster with data is like a drunk with a lamppost: he uses it for support rather than illumination.” So always look for the light.

Conor Doherty: Thank you very much. Well, Jonathon, thank you for your time, and Joannes, thank you for yours. And thank you all for watching. We’ll see you all next time.