The cloud complicates this process even more.
Instead of sending the data back to the data center, it
is sent to a remote server that can be thousands of
miles away. To make matters worse, we send it over
the open Internet. However, considering the amount
of processing that needs to occur, the cloud may offer
the best bang for the buck.

Overcoming the Latency Challenge

The core benefit of edge computing is to reduce
latency, and, as a result, increase performance of the
complete system, end to end. Moreover, it lets you
respond to critical data points more quickly, such as
shutting down a jet engine that’s overheating, without
having to check in with a central process.
To address the latency problem, many suggest “computing
at the edge.” It’s not a new concept, but it’s
one that was recently modernized. Computing
at the edge pushes most of the data processing out to
the edge of the network, close to the source. Then it’s
a matter of dividing the work between data and processing
at the edge, versus data and processing in the
centralized system.
The concept is to process the data that needs to
quickly return to the device. In this case, it’s the
pass/fail data that indicates the success or failure of
the physical manufacturing of the auto part. However,
the data should also be centrally stored, and,
ultimately, all of the data sent back to the centralized
system, cloud or not, for permanent storage and
future processing.
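As a minimal sketch of that division of labor, assume a simple Python edge service for the auto-part example (all names here, such as PartReading and TOLERANCE_MM, are illustrative assumptions, not part of any real system): the edge node returns the pass/fail verdict immediately, with no round trip to a central system, and queues the full record for later upload to central storage.

```python
# Hypothetical edge-side processing for the auto-part pass/fail example.
# All names and the tolerance value are illustrative assumptions.
from dataclasses import dataclass
from typing import List

TOLERANCE_MM = 0.05  # assumed machining tolerance for the part


@dataclass
class PartReading:
    part_id: str
    deviation_mm: float  # measured deviation from the part's spec


class EdgeNode:
    def __init__(self) -> None:
        # Full records queued here until they are shipped to the central system.
        self.pending: List[dict] = []

    def inspect(self, reading: PartReading) -> bool:
        """Decide pass/fail locally so the machine gets an answer immediately."""
        passed = abs(reading.deviation_mm) <= TOLERANCE_MM
        # Queue the complete record for eventual central storage.
        self.pending.append({
            "part_id": reading.part_id,
            "deviation_mm": reading.deviation_mm,
            "passed": passed,
        })
        return passed


node = EdgeNode()
print(node.inspect(PartReading("p-001", 0.02)))  # True: within tolerance
print(node.inspect(PartReading("p-002", 0.09)))  # False: out of tolerance
print(len(node.pending))                         # 2 records await upload
```

Only the time-critical verdict is computed and returned at the edge; everything else is deferred to the centralized system.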
Edge processing means that we replicate processing
and data storage close to the source. But it’s
more of a master/slave type of architecture, where
the centralized system ultimately becomes the point
of storage for all of the data, and the edge processing
is merely a node of the centralized system.
To accommodate edge processing, we need to think a
bit harder about how to build our IoT systems. That
means more money and time must go into the design
and development stages. However, the performance
that well-designed IoT systems will provide to meet
the real-time needs of IoT will more than justify the
added complexity.
I suspect that computing-at-the-edge architectures will
become more common as IoT adoption grows.
We’ll get better at it, and purpose-built technologies
will start to appear. Computing at the edge of an IoT
architecture is something that should be on your
radar, if IoT is in your future.
A Few Key Points to Remember

Edge computing is about putting processing and data
near the end points. This avoids transmitting the
information from the point of consumption,
such as a robot on a factory floor, back to centralized
computing platforms, such as a public cloud.
Although this latency reduction can aid all types of
systems, it’s most applicable to remote data processing,
such as on IoT devices.
Edge computing is not about snapping off parts of
systems and placing them at the edge, but rather
about the ability to look at data processing as a set of
tiered components that interact, one to another, each
playing a specific role.
The data that’s processed and stored at the edge
typically only resides there temporarily. It’s ultimately
moved to centralized processing, such as a public
cloud, at certain intervals. That central location’s
copy becomes the data of record, or the single source
of truth.
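That interval-based handoff can be sketched in a few lines of Python (a hypothetical EdgeBuffer; the list standing in for the central store would be a cloud API in practice): records accumulate at the edge, then move wholesale to the central store, whose copy becomes the data of record.

```python
# Hypothetical sketch: edge data is held only until the next scheduled flush,
# after which the central store holds the copy of record.
import time


class EdgeBuffer:
    def __init__(self, flush_interval_s: float, central_store: list) -> None:
        self.flush_interval_s = flush_interval_s
        self.central_store = central_store  # stands in for a cloud storage API
        self.local: list = []               # transient edge-side copy
        self.last_flush = time.monotonic()

    def record(self, item: dict) -> None:
        self.local.append(item)
        # Flush to the central system once the interval has elapsed.
        if time.monotonic() - self.last_flush >= self.flush_interval_s:
            self.flush()

    def flush(self) -> None:
        # Move everything centrally; the edge copy is not kept.
        self.central_store.extend(self.local)
        self.local.clear()
        self.last_flush = time.monotonic()


central = []
# Interval of 0 flushes on every record, just to make the demo deterministic.
buf = EdgeBuffer(flush_interval_s=0.0, central_store=central)
buf.record({"part_id": "p-001", "passed": True})
print(central)    # the central copy is now the source of record
print(buf.local)  # the edge buffer is empty after the flush
```

The design choice mirrored here is the master/slave relationship the article describes: the edge node never becomes the long-term store, it only buffers.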
Don’t do edge computing unless you have a specific
need for it. Edge computing is a specialized approach
to solving specialized problems. Enterprises are often
guilty of adopting technology just because it’s
mentioned more than once in the tech press. Doing so
will cost you more money and add risk, and edge
computing is no exception.
So, What Does This All Mean?
Edge computing is a tactical way to solve the latency
issues, built upon many tried-and-true architectures
of the past. However, what’s new is the element of the
cloud, and the ability to leverage edge systems as if
they were centralized. The new cloud element is
bringing new relevance to edge computing.
FALL 2017 | THE DOPPLER | 43