The Data-Centric Architecture of a Factory Digital Twin
There are substantial advantages to using equations to describe the behavior of processes. For example, the role of data describing process behavior is well defined, as with the minimum and maximum extent limits in relationship (1). The list of constraints plus the objective allows a precise statement of the problem to be solved. Uncertainty can also be addressed by allowing parameters, such as the cycle time or the amount produced by an activity, to be random variables with experimentally observed statistical distributions.
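As a concrete illustration, the following is a minimal Python sketch of this idea. The names, numeric limits, and the normal distribution are assumptions for illustration only, since relationship (1) itself is not reproduced here; the point is that both the constraint data and the stochastic parameter are explicit, well-defined inputs.

```python
import random

# Hypothetical data for one process activity (illustrative values only).
MIN_EXTENT = 10.0   # minimum extent limit, as in a relationship like (1)
MAX_EXTENT = 50.0   # maximum extent limit

def extent_feasible(extent: float) -> bool:
    """A constraint in the spirit of (1): the activity extent must
    lie within its data-defined minimum and maximum limits."""
    return MIN_EXTENT <= extent <= MAX_EXTENT

def sample_cycle_time(mean: float = 4.0, stddev: float = 0.5) -> float:
    """Treat cycle time as a random variable; a normal distribution is
    assumed here purely to stand in for an observed distribution."""
    return max(0.0, random.gauss(mean, stddev))

print(extent_feasible(42.0))   # True: 42.0 lies inside the extent limits
print(sample_cycle_time())     # one stochastic cycle-time draw
```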
A key complexity-management aspect of using equations is that reasoning about the problem description is decoupled from reasoning about how to obtain a good or best solution. This promotes rapid evolutionary improvement of the technology. One of the most powerful formulation families discovered for FDTs (Elkamel 1993 [4], Pantelides 1994 [5]), one with very precise and extensible descriptive power, is known as the Uniform Discretization Model (UDM).
This type of formulation divides the timeline into uniform pieces (buckets), and mathematical relationships such as (1) can be written over each bucket. A solution, which can be shown on a Gantt chart (see below) and in other plots, is an assignment of variable values that satisfies the constraints over every bucket and yields a good or best possible objective function value. Unfortunately, UDM formulations face two seemingly intractable problems.
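To make the bucketed structure concrete, the following Python sketch (horizon length, bucket size, and extent limits are hypothetical) divides a time horizon into uniform buckets and writes one extent constraint per bucket, in the spirit of applying relationship (1) over every bucket:

```python
HORIZON_HOURS = 168.0   # a one-week horizon, chosen for illustration
BUCKET_HOURS = 0.25     # uniform bucket size (15 minutes)

n_buckets = int(HORIZON_HOURS / BUCKET_HOURS)

# One (min, max) extent constraint per bucket; a full UDM would also
# carry assignment (yes-no) variables and inventory balances per bucket.
constraints = [
    {"bucket": t, "min_extent": 10.0, "max_extent": 50.0}
    for t in range(n_buckets)
]

print(n_buckets)        # 672 buckets even for this toy one-week horizon
print(constraints[0])   # {'bucket': 0, 'min_extent': 10.0, 'max_extent': 50.0}
```

Even this toy horizon produces 672 buckets; shrinking the bucket size or extending the horizon multiplies the constraint count directly, which foreshadows the scale problem discussed next.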
First, the number of mathematical relationships that results from realistically sized buckets and an industrial-scale time horizon is enormous. The traditional approach to using mathematics has been to generate the equations that are needed and then pass them to a solution algorithm to get an answer. Because practical problems and real data require small bucket sizes for appropriate realism, there can be hundreds of millions or billions of relationships. Even contemporary and future computers are not fast enough, nor do they have enough memory, to represent most real-world problems this way. In addition, the number of yes-no variables implies that the potential number of solutions is incredibly large. Thus, the second major difficulty with process management problems is that studies have shown the number of solutions to often be 10^2500 to 10^25000 or more, due to the combinatorial nature of the yes-no decision variables. Any algorithm that explicitly attempts to look through this vast number of solutions will have a prohibitive execution time on any computer now or in the future.
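The scale of these counts follows from simple arithmetic: n yes-no variables admit up to 2^n assignments, so on the order of 8,300 binary variables already yield roughly 10^2500 candidate solutions. The check below is a small Python computation (the variable counts are illustrative, not taken from a specific study):

```python
import math

def solution_count_digits(n_binary_vars: int) -> float:
    """Number of decimal digits in 2**n, i.e. n * log10(2)."""
    return n_binary_vars * math.log10(2)

# Roughly 8,300 yes-no variables give about 10^2500 candidate solutions,
# and roughly 83,000 give about 10^25000; industrial UDM instances with
# small buckets and long horizons can reach such variable counts.
print(solution_count_digits(8_300))    # ≈ 2498.5
print(solution_count_digits(83_000))   # ≈ 24985.5
```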
The work of Miller and Pekny (Miller et al. 1991, Miller et al. 1995 [6]) demonstrated that highly engineered, custom algorithms could solve even very large combinatorial (yes-no) decision problems. The essence of this algorithm engineering is to develop data structures and mathematical theory highly specific to the class of problem and the specific instances of interest. The goal of the mathematical theory is to develop properties that allow implicit search of the solution space, identify regions of that solution space where good or best solutions lie, and focus computational power on exploring those regions.
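The flavor of implicit search can be conveyed with a toy branch-and-bound sketch in Python. This is a generic illustration of pruning, not the Miller and Pekny algorithm itself: a problem-specific bound lets whole regions of the yes-no solution space be discarded without ever enumerating them, so computation concentrates on the promising regions.

```python
def branch_and_bound(values, weights, capacity):
    """Toy 0/1 knapsack solved by implicit enumeration: a bound prunes
    entire subtrees, so most of the 2**n assignments are never visited."""
    n = len(values)
    best = [0]

    # suffix_value[i] = total value of items i..n-1, used as an
    # optimistic bound on what any completion of a partial solution adds.
    suffix_value = [0] * (n + 1)
    for i in range(n - 1, -1, -1):
        suffix_value[i] = suffix_value[i + 1] + values[i]

    def search(i, value, room):
        best[0] = max(best[0], value)
        if i == n:
            return
        # Implicit search: if even taking every remaining item cannot
        # beat the incumbent, this entire region is discarded unexplored.
        if value + suffix_value[i] <= best[0]:
            return
        if weights[i] <= room:                     # branch: take item i
            search(i + 1, value + values[i], room - weights[i])
        search(i + 1, value, room)                 # branch: skip item i

    search(0, 0, capacity)
    return best[0]

print(branch_and_bound([60, 100, 120], [10, 20, 30], 50))   # 220
```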
The other key aspect of the work of Miller and Pekny is to never explicitly generate all the mathematical relationships needed to describe the problem, because most of the relationships are never needed to identify a good or best solution.