Artificial intelligence is one of the hot topics of the day, and new applications appear daily, often in the area of predictive modeling. In the case of this wildfire classifier model, the headline could be read far more generously than the actual results warrant: the model correctly predicted the occurrence of a large fire only around 50% of the time.
The fire prediction model illustrates a typical problem with artificial intelligence and machine learning: the methods may work up to a point, but the precision may be inadequate for the specific use case in question. A good example is natural language processing, in which a machine tries to make sense of text written by humans. An algorithm may be accurate enough for document classification, i.e. categorising a text according to a predefined typology such as news, sports, or fiction. However, natural language processing technology will often not be reliable enough for document extraction, which involves pulling precise pieces of information out of a text. This is in part because understanding text involves common-sense reasoning, which machines cannot easily replicate. The relevance of machine learning in a business setting, and its practicality as a solution, therefore needs to be evaluated against the demands placed on the technology. Will approximate methods still make a meaningful contribution, and if so, to what degree?
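To make the distinction concrete, document classification of the kind described above can be sketched in a few lines. The following is a toy naive Bayes classifier over a bag-of-words model; the training snippets and category names are invented for illustration, and a production system would use far more data and a proper library.

```python
import math
from collections import Counter

# Hypothetical training snippets for three categories (illustrative only).
TRAIN = {
    "news": ["the government announced a new policy today",
             "elections will be held next month"],
    "sports": ["the team won the match in overtime",
               "the striker scored two goals"],
    "fiction": ["the dragon flew over the ancient castle",
                "she whispered a secret to the wizard"],
}

def train(data):
    """Count word frequencies per category for a naive Bayes model."""
    counts = {cat: Counter(w for doc in docs for w in doc.split())
              for cat, docs in data.items()}
    totals = {cat: sum(c.values()) for cat, c in counts.items()}
    vocab = {w for c in counts.values() for w in c}
    return counts, totals, vocab

def classify(text, counts, totals, vocab):
    """Pick the category with the highest log-probability (Laplace smoothing)."""
    scores = {}
    for cat in counts:
        score = 0.0
        for w in text.split():
            score += math.log((counts[cat][w] + 1) / (totals[cat] + len(vocab)))
        scores[cat] = score
    return max(scores, key=scores.get)

counts, totals, vocab = train(TRAIN)
print(classify("the team scored in the match", counts, totals, vocab))  # sports
```

Classification like this tolerates noise because only the overall category must be right; extraction, by contrast, has to get each individual fact right, which is why its reliability bar is so much higher.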
The organisational foundation of the modern warehouse is the WMS, or warehouse management system. This tool manages basic inbound and outbound logistics such as receiving and shipping, and also keeps inventories stocked. A more advanced functionality typically included in a WMS is the ability to distribute tasks so as to reduce overlap and wasted worker time. Examples include assembling order shipments (picking) in a way that optimises the trips employees make around the warehouse, and the efficient put-away of deliveries. However, a WMS typically does not include quantitative methods like demand forecasting.
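The picking optimisation mentioned above can be illustrated with a simple heuristic. Real WMS routing engines are more sophisticated; this sketch uses a greedy nearest-neighbour tour over hypothetical shelf coordinates, with Manhattan distance standing in for aisle-constrained travel.

```python
def route(start, picks):
    """Greedy nearest-neighbour tour: from the current position, always walk
    to the closest remaining pick location. Manhattan distance is used because
    warehouse travel follows aisles rather than straight lines."""
    pos, remaining, order = start, list(picks), []
    while remaining:
        nxt = min(remaining, key=lambda p: abs(p[0] - pos[0]) + abs(p[1] - pos[1]))
        remaining.remove(nxt)
        order.append(nxt)
        pos = nxt
    return order

# Hypothetical shelf coordinates (aisle, bay) for one order.
picks = [(5, 2), (1, 1), (5, 8), (2, 6)]
print(route((0, 0), picks))
```

Even this crude heuristic typically shortens the walk considerably compared with visiting shelves in the order items appear on the order sheet.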
In the past, there have been attempts to use classical statistical methods like ARIMA to forecast demand in a warehouse. These attempts displayed fairly high error rates, to the point where it was unclear whether they would be of any use in a business deployment. Judging by the fact that most WMS packages do not include such functionality, the methods were evidently not reliable enough to deploy. A recent application of machine learning techniques, however, has led to a dramatic decrease in forecasting error rates. The result was achieved by layering several different AI techniques to increase the precision of the model. Furthermore, the authors state that the method works well with imperfect data, which also matters in a business setting.
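The cited work does not specify its layering scheme, but the general idea of combining forecasters can be sketched as a minimal ensemble: two simple base methods (a moving average and exponential smoothing) whose predictions are averaged. The demand figures are invented for illustration.

```python
def moving_average(history, window=3):
    """Forecast the next value as the mean of the last `window` observations."""
    return sum(history[-window:]) / window

def exp_smoothing(history, alpha=0.5):
    """Simple exponential smoothing: recent observations weigh more heavily."""
    level = history[0]
    for x in history[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

def ensemble(history):
    """Layer the two base forecasters by averaging their predictions."""
    return (moving_average(history) + exp_smoothing(history)) / 2

demand = [120, 132, 101, 134, 90, 110, 128]  # hypothetical weekly demand
print(round(ensemble(demand), 1))
```

Combining forecasters in this way tends to dampen the individual models' errors, which is one reason layered approaches can outperform any single method.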
With the cited error levels, the methods could well be implemented within a WMS. Even if the forecast is (as it always is) imperfect, that does not negate its usefulness in running the warehouse. To return to the earlier example of text extraction, the result there is binary: the extracted information is either correct or it isn't. Even something like a 15% error rate can easily be unacceptable in a business setting: clients who pay a data provider will not accept data that is wrong 15% of the time. With warehouse inventory, however, a 15% error rate is not a dealbreaker. If the inventory ends up a little overstocked, that cost needs to be balanced against the increased efficiency of getting the forecast right 85% of the time. The gradual nature of this problem means that the precision of the forecast simply needs to cross a threshold above which there is a solid economic rationale to implement AI methods in the warehouse: the costs of implementation fall below the resulting savings.
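The break-even reasoning above amounts to a simple expected-value calculation. All the figures below (savings per correct forecast, overstock cost per miss, implementation cost, forecast volume) are hypothetical placeholders, not values from the source.

```python
def worth_implementing(accuracy, saving_per_correct, cost_per_error,
                       implementation_cost, n_forecasts):
    """Does the expected net benefit of deploying the forecaster over
    n_forecasts decisions exceed the one-off implementation cost?"""
    expected_gain = n_forecasts * (accuracy * saving_per_correct
                                   - (1 - accuracy) * cost_per_error)
    return expected_gain > implementation_cost

# Hypothetical figures: 85% accuracy, 40 saved per correct forecast,
# 60 of overstock cost per miss, 50,000 to implement, 10,000 forecasts a year.
print(worth_implementing(0.85, 40, 60, 50_000, 10_000))  # True
```

With these illustrative numbers, the expected annual gain (250,000) comfortably exceeds the implementation cost, which is exactly the threshold argument made above: the forecast does not need to be perfect, only good enough that the savings outweigh the costs.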