A/B testing is an approach - part of the broader field of the design of experiments - that pits two variants against each other in order to determine which is more effective. A good example is a clinical trial, where a new drug is compared against a placebo or against the status quo treatment in order to determine how well it works.
In this episode of LokadTV, we learn why this type of testing is currently used in a number of industries and why it has historically been so popular. We also learn how this approach is profoundly weakened when applied to a supply chain, and which complexities make alternative approaches work much better.
A/B testing is usually used to test two hypotheses against one another; it is very popular in sales and marketing, where it is far more effective than in supply chain. For example, in the early 2000s, Google famously ran a series of A/B tests on its search engine results. Yet supply chains are highly interconnected by nature and by design; as a result, an A/B testing approach fails to take the bigger picture into account.
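To make the A/B testing idea concrete, here is a minimal sketch of how such a test is typically evaluated in marketing settings: a two-proportion z-test comparing the conversion rates of two variants. The figures and the helper function `ab_test` are hypothetical illustrations, not data from the episode.

```python
import math

def ab_test(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-proportion z-test: does variant B convert at a
    significantly different rate from variant A?"""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled conversion rate under the null hypothesis (no difference).
    p = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(p * (1 - p) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    return p_a, p_b, z

# Hypothetical traffic split: 10,000 visitors per variant.
p_a, p_b, z = ab_test(200, 10_000, 260, 10_000)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}")  # z ≈ 2.83
# |z| > 1.96 corresponds to significance at the 5% level.
```

This works well when the two variants are independent - each visitor sees one version and the outcomes do not interact. In a supply chain, the "variants" (e.g. two reordering policies in two warehouses) affect shared suppliers, shared stock, and shared demand, which is precisely why this style of test breaks down there.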
In business, the rules are constantly changing. Framing decisions as a trade-off between exploration and exploitation offers a superior perspective to A/B testing. We explain why it can be beneficial to introduce some intentional randomness into algorithms in order to learn more and operate more efficiently afterward. We debate whether this will be of interest to companies that normally prioritize profitability, and learn whether it is possible to actually quantify knowledge successfully.
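The idea of introducing intentional randomness in order to keep learning can be sketched with the classic epsilon-greedy strategy for multi-armed bandits: most of the time the algorithm exploits the best-known option, but with a small probability epsilon it explores an option at random. The arm rewards and the function `epsilon_greedy` below are illustrative assumptions, not Lokad's actual method.

```python
import random

def epsilon_greedy(true_rates, epsilon=0.1, rounds=5000, seed=7):
    """Epsilon-greedy bandit: exploit the best-known arm, but with
    probability epsilon pick an arm at random to keep learning."""
    rng = random.Random(seed)
    n_arms = len(true_rates)
    counts = [0] * n_arms    # pulls per arm
    values = [0.0] * n_arms  # running mean reward per arm
    for _ in range(rounds):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)  # explore: deliberate randomness
        else:
            arm = max(range(n_arms), key=lambda a: values[a])  # exploit
        reward = 1.0 if rng.random() < true_rates[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # update mean
    return counts, values

# Two hypothetical pricing policies with unknown success rates.
counts, values = epsilon_greedy([0.4, 0.6])
print(counts, [round(v, 3) for v in values])
```

The short-term cost of exploration (sometimes picking a worse option) buys information about the alternatives, which is exactly the trade-off discussed in the episode: paying a small, controlled price today to make better decisions tomorrow.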
To conclude, we investigate in more detail the impact that introducing noise can have on pricing optimization and we try to understand how a supply chain professional can approach an exploration vs exploitation situation.
00:34 What is A/B Testing?
01:50 What types of testing are we talking about here?
02:37 So the idea is to see which of the two possibilities performs the best?
03:31 Why is this something which is of interest to us here at Lokad?
04:37 How well does this technique actually work in the real world?
08:15 What would be a better approach?
11:59 How can we generate information on all of the possible scenarios within a Supply Chain?
14:49 So we are talking about intentionally introducing a percentage error to find out more about what could possibly happen?
19:05 Companies are normally most interested in maximising their profitability. As such, is it difficult to get a company to incorporate these intentional errors?
21:19 Is there any way of quantifying this knowledge and working out what it is worth to a company?
22:58 How does this approach fit in with what we do at Lokad? Surely introducing noise fundamentally goes against the goal of maximising every possible purchase decision?