Industrial Internet: Towards Interoperability and Composability
for instance, wear indicators, deviations from normal within a tolerance range, etc. This requires
enough data from similar parts to build a model against which a given part can be compared, but
it still does not track variation in the individual part. By leveraging data at massive scale, we can
instead create ‘digital twins’37 that customize the model to the part at hand, rather than trying
to fit the part to a generic model. However, the computation involved may exceed what is
available locally, thus requiring non-local modeling.
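The contrast between a fleet-generic model and a per-part digital twin can be sketched as follows. The data, thresholds, and function names here are purely illustrative assumptions, not from the source: a generic model flags a reading against the population of similar parts, while a twin flags it against that individual part's own history, so it can catch drift that still looks "normal" for the fleet.

```python
import statistics

# Hypothetical wear readings (assumed units and values, for illustration only).
fleet_wear = [0.10, 0.12, 0.11, 0.13, 0.12]    # same-age readings from similar parts
part_history = [0.08, 0.09, 0.10, 0.10, 0.11]  # this individual part's own trajectory

def generic_flag(reading, fleet, k=2.0):
    """Fleet-generic model: flag if the reading deviates more than
    k sample standard deviations from the population norm."""
    return abs(reading - statistics.mean(fleet)) > k * statistics.stdev(fleet)

def twin_flag(reading, history, k=2.0):
    """Per-part 'digital twin' baseline: flag against the part's own
    history, tracking its individual variation."""
    return abs(reading - statistics.mean(history)) > k * statistics.stdev(history)

reading = 0.13
print(generic_flag(reading, fleet_wear))    # False: within fleet tolerance
print(twin_flag(reading, part_history))     # True: abnormal for this part
```

With these illustrative numbers the twin flags a deviation that the fleet model misses, which is exactly the kind of per-part sensitivity that may demand more computation (and history) than is available locally.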
Such a process effectively moves the control authority into the cloud as well, because the cloud
becomes the source of the control policy. This raises two challenges. The first is security: data
about my process will no longer be under my company’s immediate control, and policies may be
interfered with maliciously. Because my company remains responsible for the effects of the
control policy, we must ensure that any leakage of information (or of control) does not lead to
disastrous consequences, ranging from an existential threat to the company to safety failures
that result in substantial casualties.
The second challenge is the introduction of a ‘single point of failure’: the connection between the
machine and the cloud model. Traditionally this might be addressed with multiple models, each
built on different software (to avoid common failures caused by the same software or operating-
system faults), as well as multiple connections to different cloud services (which may or may not
be possible without local intermediation that adds further latency). However, even when some
part of the model has failed, we would like the alternatives to be composable: we can smoothly
substitute other (sub)models and resources to recover quickly from any issue, and we can cross-
check critical results between systems that are diverse in both geography and provenance.
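One way to read this composability requirement is that every model provider exposes the same interface, so failed providers can be substituted transparently and surviving ones cross-checked before a critical result is trusted. The sketch below is an assumed minimal illustration (the class names, `predict` interface, and tolerance are invented for this example, not from the source):

```python
class ModelA:
    """One cloud model provider (hypothetical)."""
    def predict(self, x):
        return 2 * x + 1

class ModelB:
    """An independently implemented provider of different provenance."""
    def predict(self, x):
        return 2 * x + 1.05

def robust_predict(x, providers, tolerance=0.1):
    """Compose diverse providers: skip failed ones (substitution),
    then cross-check that survivors agree before trusting the result."""
    results = []
    for p in providers:
        try:
            results.append(p.predict(x))
        except Exception:
            continue  # a failed provider is simply dropped, not fatal
    if not results:
        raise RuntimeError("all model providers failed")
    if max(results) - min(results) > tolerance:
        raise RuntimeError("providers disagree; result not trusted")
    return sum(results) / len(results)

print(robust_predict(3, [ModelA(), ModelB()]))  # 7.025
```

The design choice here is that failure handling and cross-checking live in the composition layer, not in any single model, so a faulty provider is never itself a single point of failure.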
Engineering processes today rely on whole-life practices that perform offline checks, limiting
what can be changed at runtime. Every combination must be tested in advance, and because
assets only ever do what they are told, the remote operations team itself may become a ‘single
point of failure’ (an insider threat). Another challenge is the large volume of data that must be
sent back and forth to cloud-based resources; the latencies introduced force non-real-time
approaches, limiting the response time to events not already handled by prior local policies. To
take real advantage of having multiple variants of model components, the variants must all make
similar assumptions, use similar data formats, and generate a consistent, single view of the
system. This tends to lead either to large costs in bringing up new systems or to a tendency to
keep new systems as mere variations of old ones, to avoid changing this shared language: how
the system is described and understood.
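The shared language the text describes can be made concrete as a common system-description schema that every model variant agrees on. The schema below is a hypothetical sketch (the field names, units, and wire format are assumptions for illustration): agreeing once on such a description is what allows new model components to be substituted without rebuilding how the system is understood.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PartState:
    """Hypothetical shared description of a part that all model
    variants consume: one set of names, units, and assumptions."""
    part_id: str
    hours_in_service: float  # operating hours (assumed unit)
    wear_mm: float           # measured wear in millimetres (assumed unit)

def to_wire(state: PartState) -> dict:
    """Serialize to the common exchange format any variant can parse."""
    return {
        "part_id": state.part_id,
        "hours_in_service": state.hours_in_service,
        "wear_mm": state.wear_mm,
    }

print(to_wire(PartState("p-001", 1200.0, 0.12)))
```

Because the schema is fixed and explicit, swapping in a new model variant means implementing against `PartState` rather than renegotiating the description of the system itself.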
37 http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20120008178.pdf
IIC Journal of Innovation