IIC Journal of Innovation 17th Edition Applying Solutions at the Digital Edge | Page 9

Key Criteria to Move Cloud Workloads to the Edge
step is to photograph the crosswalk with a connected camera. That camera has a certain frame rate (or, in general, a sensor has a sample interval or integration time) that adds latency to the system. A 30 frame-per-second camera could introduce a latency of 33ms. Next, a frame of the sensor data must be packaged for transmission into the network, which could involve compression. This could add another frame (or possibly more) of latency, for an additional 33ms.
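The frame-interval arithmetic can be sketched in a few lines: a sensor sampling at f frames per second can hold an event for up to one frame interval (1/f) before it is even captured, and packaging or compression can add roughly another frame.

```python
def frame_latency_ms(fps: float) -> float:
    """Worst-case latency, in milliseconds, of one frame interval at a given frame rate."""
    return 1000.0 / fps

# A 30 FPS camera: up to ~33 ms to capture an event,
# plus ~33 ms more to package/compress that frame for transmission.
capture_ms = frame_latency_ms(30)
packaging_ms = frame_latency_ms(30)
```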
Next, the data is transmitted into a local access network and sent to an internet point of presence. This is relatively instantaneous if the connection is completed with metro optical fiber, but in the worst case, 4G/LTE cellular network connections can add up to 150ms round trip 6. When the data leaves the local access/wireless network and enters a long-haul fiber, it is routed on an inter-city network to the selected cloud data center (which can be thousands of km away). Light in optical fiber travels at approximately 68% of the speed of light in a vacuum, so each 1000km of distance between the IoT device and the cloud data center processing its data adds a round-trip delay of about 10ms 7.
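The ~10 ms per 1000 km figure follows directly from the 68% velocity factor; a minimal sketch of that propagation-delay calculation:

```python
C_KM_PER_S = 299_792.458      # speed of light in a vacuum, km/s
FIBER_VELOCITY_FACTOR = 0.68  # light in fiber travels ~68% of c

def fiber_round_trip_ms(distance_km: float) -> float:
    """Round-trip propagation delay over optical fiber, ignoring routing hops."""
    one_way_s = distance_km / (C_KM_PER_S * FIBER_VELOCITY_FACTOR)
    return 2.0 * one_way_s * 1000.0

# Each 1000 km between device and data center costs roughly 10 ms round trip.
rtt_1000km_ms = fiber_round_trip_ms(1000)
```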
Then, there are queuing and software scheduling and execution delays in the cloud data center's routers and servers, which are highly variable but could easily contribute an additional 50ms. After the cloud acts upon the data and decides which action must be taken with an oncoming vehicle, a message is constructed, sent back to the approaching vehicle, and its control computers apply the brakes as commanded (with minimal latency).
All told, this architecture could have a round-trip sense-compute-actuate latency of almost 300ms. A vehicle approaching a pedestrian at 100km/hour travels about 8m during this 300ms interval, getting that much closer to a collision, so one can appreciate the reduction in safety that the 300ms latency introduces for those pedestrians.
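Summing the component figures given above reproduces the near-300ms total and the roughly 8m of vehicle travel it implies (the individual budget values are the worst-case estimates from the text, not measurements):

```python
# Worst-case latency budget for the cloud-centric architecture (values from the text).
cloud_budget_ms = {
    "camera frame (30 FPS)":       33,
    "packaging/compression":       33,
    "4G/LTE access round trip":   150,
    "fiber round trip (1000 km)":  10,
    "cloud queueing/scheduling":   50,
}
total_ms = sum(cloud_budget_ms.values())  # 276 ms, i.e. "almost 300 ms"

def travel_during_latency_m(speed_kmh: float, latency_ms: float) -> float:
    """Distance a vehicle covers while the system is still deciding."""
    return speed_kmh / 3.6 * latency_ms / 1000.0

# At 100 km/h, a 300 ms decision loop lets the vehicle close about 8.3 m.
closed_m = travel_during_latency_m(100, 300)
```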
Let us explore how edge computing could improve this situation. Instead of sending compressed video to the cloud for analysis, incurring the 4G, fiber and cloud queueing delays, we can locate an edge computer right at the intersection, capable of performing the same analytics operations and pedestrian safety application. There is no need for compression, because the camera can be directly connected by a cable to the edge node.
We can increase the frame rate (perhaps to 240FPS) because the bandwidth on this direct cable is basically free. There is no need for the high-latency 4G or long-haul fiber connections. Cloud routing and queueing delays are transformed into edge node queueing delays, over which we can exercise much tighter control. A dedicated DSRC radio (which can have sub-millisecond latency) connects the edge node with the oncoming vehicle. Under this scenario the latency picture is much improved: about 4ms for video frame latency, basically zero for all the
6 3G/4G wireless network latency: Comparing Verizon, AT&T, Sprint and T-Mobile in February 2014 | FierceWireless
7 Calculating Optical Fiber Latency (m2optics.com)