Industrial IoT Edge Architecture for Machine and Deep Learning
In a typical Industrial IoT implementation we have the following components:

1. Edge Data Aggregation and Streaming Framework
2. Edge Cache
3. Platform Streaming Framework
4. Platform Cache

In addition, for Machine Learning (ML) and Deep Learning (DL) we have the following items:

1. Edge ML/DL Inference Framework, using a CPU and/or a GPU if low latency is required
2. Platform DL Inference Framework, using CPU and/or GPU
3. Platform DL Training Framework, using CPU and/or GPU
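The first item, an edge inference framework that falls back to a GPU only when low latency is required, can be sketched as follows. This is a minimal illustration, not any specific framework; the class name, the latency budget, and the stand-in model call are all assumptions.

```python
import time

class EdgeInferenceEngine:
    """Illustrative sketch of an edge ML/DL inference engine.

    The latency budget, device names, and scoring function are
    assumed for the example and are not part of any real product.
    """

    def __init__(self, latency_budget_ms=10.0, gpu_available=False):
        self.latency_budget_ms = latency_budget_ms
        self.gpu_available = gpu_available

    def select_device(self):
        # Prefer the GPU only when the latency budget is tight
        # and a GPU is actually present on the edge node.
        if self.latency_budget_ms < 20.0 and self.gpu_available:
            return "gpu"
        return "cpu"

    def infer(self, sample):
        device = self.select_device()
        start = time.perf_counter()
        # Stand-in for a real model call (e.g. a compiled DL graph).
        score = sum(sample) / len(sample)
        elapsed_ms = (time.perf_counter() - start) * 1000.0
        return {"device": device, "score": score, "latency_ms": elapsed_ms}

# A tight 5 ms budget on a GPU-equipped edge node selects the GPU path.
engine = EdgeInferenceEngine(latency_budget_ms=5.0, gpu_available=True)
result = engine.infer([0.2, 0.4, 0.6])
```

With a relaxed budget the same engine would stay on the CPU, which is the cheaper default for the platform-tier frameworks in items 2 and 3.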
In classical implementations, all Machine/Deep Learning is done on the cloud-hosted platform or enterprise tiers, and all real-time data is sent to the platform/cloud for training. All inference is also traditionally done in the cloud. This methodology has the following drawbacks:

1. Excessive cost of transporting data to the cloud.
2. High latency in obtaining results, so that inference on real-time data is no longer real-time.
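The transport-cost drawback can be made concrete with some back-of-the-envelope arithmetic. The sensor counts, sample rates, and summary sizes below are assumed purely for illustration, not measurements from any deployment.

```python
# Assumed figures: a plant with 1,000 sensors, each producing
# 100 samples/s of 8-byte readings, versus shipping only one
# 64-byte per-sensor summary per minute computed at the edge.
sensors = 1_000
sample_rate_hz = 100
bytes_per_sample = 8
seconds_per_day = 86_400

# Raw streaming to the cloud: every sample leaves the plant.
raw_bytes_per_day = sensors * sample_rate_hz * bytes_per_sample * seconds_per_day

# Edge aggregation: one 64-byte summary per sensor per minute.
agg_bytes_per_day = sensors * 64 * (seconds_per_day // 60)

reduction = raw_bytes_per_day / agg_bytes_per_day
# → roughly 69 GB/day of raw data versus ~92 MB/day of summaries,
#   a 750x reduction in transport volume under these assumptions.
```

The exact ratio depends entirely on the assumed rates, but the point stands: keeping raw data at the edge and forwarding only aggregates or inference results cuts transport cost by orders of magnitude.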
New Ideas
The three major innovations proposed are:

1. Move computation to the edge, creating a low-latency, distributed solution.
2. Implement the lambda architecture at the edge; i.e., handle both real-time and batch data.
3. Use two inference engines, one at the edge and one at the platform, to get two different views of the data: local and global.
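The second idea, a lambda architecture at the edge, combines a speed layer (incremental, real-time updates) with a batch layer (periodic recomputation over the full event log), merged at query time. The sketch below is a minimal illustration under assumed names; a real edge deployment would use a streaming framework rather than in-process dictionaries.

```python
class EdgeLambda:
    """Minimal sketch of a lambda architecture at the edge:
    a speed layer for real-time events plus a batch layer that is
    recomputed periodically. All names here are illustrative."""

    def __init__(self):
        self.master_log = []   # immutable record of all events
        self.batch_view = {}   # sensor_id -> (count, total) from last batch run
        self.speed_view = {}   # sensor_id -> (count, total) since last batch run

    def ingest(self, sensor_id, value):
        # Speed layer: append to the log and update the view immediately.
        self.master_log.append((sensor_id, value))
        count, total = self.speed_view.get(sensor_id, (0, 0.0))
        self.speed_view[sensor_id] = (count + 1, total + value)

    def run_batch(self):
        # Batch layer: recompute per-sensor aggregates from the full
        # log, then reset the speed layer.
        totals = {}
        for sensor_id, value in self.master_log:
            count, total = totals.get(sensor_id, (0, 0.0))
            totals[sensor_id] = (count + 1, total + value)
        self.batch_view = totals
        self.speed_view = {}

    def query(self, sensor_id):
        # Serving layer: merge the batch and real-time views into
        # one running mean.
        bc, bt = self.batch_view.get(sensor_id, (0, 0.0))
        sc, st = self.speed_view.get(sensor_id, (0, 0.0))
        if bc + sc == 0:
            return None
        return (bt + st) / (bc + sc)

lam = EdgeLambda()
lam.ingest("s1", 1.0)
lam.ingest("s1", 3.0)
mean_before_batch = lam.query("s1")   # served from the speed layer
lam.run_batch()
lam.ingest("s1", 5.0)
mean_after_batch = lam.query("s1")    # merged batch + speed views
```

The same pattern at the platform tier, fed with data from many edge nodes, yields the global view of idea 3, while the edge instance yields the local view.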
Benefits
The new ideas have the following benefits. Some benefit details are obtained from SDxCentral-Innovations-in-Edge-Computing-and-MEC-2017-Rev-A.

1. Latency: The edge can provide latency in milliseconds, while latency to the Platform tier, with its multiple hops and long transmission distances, is in the 50-150 ms range. Latency to centralized data centers and the public cloud is even greater.
2. High throughput: The throughput available to the user from the edge, served
Figure 8: Industrial IoT Implementation with Machine and Deep Learning
IIC Journal of Innovation
- 59 -