The ability to process vast amounts of data has been a long-standing requirement for the supply chain industry. However, doing so can be very costly in both time and processing power, especially when one considers that optimizing an entire supply chain can require hundreds of incremental adjustments.
In this episode of LokadTV, we discuss the sheer amount of data that is managed within a supply chain and try to understand why there is a real need for an iterative approach to supply chain optimization and how this affects the scale of information being managed.
We discuss the research and development Lokad has been conducting into terabyte scalability in order to deal with these data requirements. We also evaluate why Envision, our in-house programming language, is such a good tool for tackling these challenges when compared to other programming languages such as Python, Java, C#, C++, etc. - namely, because Envision is domain-specific and built specifically for supply chain problems. The others, meanwhile, are general-purpose programming languages, which are highly capable but come with algorithms and "plumbing requirements" that are more difficult to tailor to supply chains.
Hardware improvements are becoming increasingly diversified. We try to understand the costs associated with these improvements and discuss why pure scalability is no longer that interesting now that we are able to process the data from the largest retail networks and supply chains in the world.
We wrap things up by explaining what this breakthrough means in real-world terms and finally learn what is next for Lokad in 2019, when we plan to put in place processes to address some of the more bizarre situations that can occur in supply chains.
00:34 What exactly have you been working on in 2018?
01:24 What steps did you have to take to reach this improvement?
03:55 How does working with Envision compare with working with other programming languages?
06:27 Using Envision seems simpler than other programming languages, is that right?
07:26 What were the key insights that led to these improvements?
10:47 How did implementing terabyte scalability affect the costs of working with data?
14:18 What does this breakthrough mean for the real world?
17:48 Why is there such an iterative approach?
21:04 What does this mean for the future of Lokad? How do you see this changing the outlook for 2019?