Terabyte Scalability for Supply Chains

The ability to process vast amounts of data has been a long-standing requirement for the supply chain industry. However, this can be very costly, both in time and in processing power, when one considers that optimizing a whole supply chain can require hundreds of incremental adjustments.

In this episode of LokadTV, we discuss the sheer amount of data that is managed within a supply chain and try to understand why there is a real need for an iterative approach to supply chain optimization and how this affects the scale of information being managed.

We discuss the research and development Lokad has been conducting into terabyte scalability in order to meet these data requirements. We also evaluate why Envision, our homemade programming language, is such a good tool for tackling these challenges compared to other programming languages such as Python, Java, C#, or C++: Envision is domain-specific, built expressly for supply chain problems, whereas the others are generic programming languages which, while highly capable, come with algorithms and “plumbing requirements” that are more difficult to tailor to supply chains.
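To give a flavor of the "plumbing" that a generic language demands, here is a minimal Python sketch (with hypothetical file columns and function names) of one routine chore at scale: streaming a sales file in bounded chunks so that memory stays flat while aggregating quantities per SKU. A domain-specific language can abstract this kind of machinery away entirely.

```python
# A minimal sketch (hypothetical column names "sku" and "quantity") of
# chunked, out-of-core aggregation: the sort of plumbing that generic
# languages require when files no longer fit in memory.
import csv
import io

def aggregate_sales(lines, chunk_size=1000):
    """Stream CSV rows in fixed-size chunks and sum quantity per SKU."""
    totals = {}
    chunk = []
    for row in csv.DictReader(lines):
        chunk.append(row)
        if len(chunk) >= chunk_size:
            _merge(totals, chunk)   # flush the chunk, keeping memory bounded
            chunk = []
    _merge(totals, chunk)           # flush any trailing partial chunk
    return totals

def _merge(totals, chunk):
    for row in chunk:
        sku = row["sku"]
        totals[sku] = totals.get(sku, 0) + int(row["quantity"])

# Tiny in-memory stand-in for what would be a very large file on disk.
sample = io.StringIO("sku,quantity\nA,3\nB,5\nA,2\n")
print(aggregate_sales(sample, chunk_size=2))  # {'A': 5, 'B': 5}
```

In production this streaming, flushing, and merging logic would itself need partitioning across machines, retries, and spill-to-disk handling; that accumulated machinery is precisely what the episode refers to as plumbing.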

Hardware improvements are becoming increasingly diversified. We try to understand the costs associated with these improvements and discuss why pure scalability is no longer that interesting now that we are able to process the data from the largest retail networks and supply chains in the world.

We wrap things up by explaining what this breakthrough means in real-world terms and finally learn what is next for Lokad in 2019, where we plan to put in place processes to address some of the more bizarre situations that can occur in supply chains.