Key Criteria to Move Cloud Workloads to the Edge
decisions. Finally, various measures of system lifetime cost (including purchase price, programming/configuration/evolution costs, energy, and ongoing operational costs) must be factored in to determine how edge solutions can optimize the overall deployment.
The flow in Figure 4 includes a set of decomposition steps, where specific requirements criteria (latency, bandwidth, data gravity, trustworthiness, energy, space/weight, environment, modularity, and lifetime cost) are split out for individual consideration in parallel. There is also an extra category for application-specific requirements that may not be covered by the aforementioned criteria but are nonetheless important to the success of a specific system.
Analysis and simulation tools are applied to each of the criteria individually, for example, to determine the performance or efficiency if that subfunction is implemented in the cloud, at some layer of the edge, or in the intelligent IoT device. Based upon this analysis and simulation, an optimal implementation layer is selected for the subfunction in each of the named criteria.
Often, a specific subfunction of a network will be optimized in one layer (cloud, edge, intelligent IoT device) based upon the analysis/simulation for one criterion, but in a different layer for another criterion. This is where the weighting shown in Figure 4 comes in. Weights (derived from the system requirements) indicate which of the criteria should receive higher emphasis, and where the criteria suggest different cloud-edge-device partitioning for the same subfunction, the weighting helps referee the discrepancy.
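The weighted refereeing step above can be sketched in a few lines. This is an illustrative assumption, not part of the source: the criteria names, per-layer scores, and weights below are invented placeholders standing in for the outputs of the analysis and simulation tools.

```python
# Hypothetical sketch: resolving conflicting per-criterion placement
# recommendations with requirement-derived weights. All numbers are
# illustrative assumptions, not values from the document.

LAYERS = ["cloud", "edge", "device"]

# Per-criterion score for placing one subfunction at each layer
# (higher is better), as might come from analysis or simulation.
scores = {
    "latency":       {"cloud": 0.2, "edge": 0.8, "device": 0.7},
    "bandwidth":     {"cloud": 0.3, "edge": 0.7, "device": 0.6},
    "lifetime_cost": {"cloud": 0.9, "edge": 0.5, "device": 0.4},
}

# Weights derived from the system requirements: here latency dominates.
weights = {"latency": 0.5, "bandwidth": 0.3, "lifetime_cost": 0.2}

def select_layer(scores, weights):
    """Pick the layer with the highest weighted score across criteria."""
    totals = {
        layer: sum(weights[c] * scores[c][layer] for c in weights)
        for layer in LAYERS
    }
    return max(totals, key=totals.get)

print(select_layer(scores, weights))  # prints "edge"
```

With these weights, latency's preference for the edge outweighs lifetime cost's preference for the cloud, so the subfunction lands at the edge; shifting the weights toward cost would flip the decision.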
Prototyping is a valuable way to understand application behaviors and adjust preliminary partitioning decisions accordingly. By prototyping key aspects of an application (inner processing loops, for example), one can determine whether they will operate adequately within the resources of edge nodes, whether the analysis and simulation steps yielded accurate results, and what sort of tradeoffs may be involved in moving subfunctions from cloud data centers to edge nodes.
Limited deployment of the final application is the best indicator of the validity of preliminary partitioning decisions. Deploying several different partitioning models of various elements of a complex application between the cloud and edge nodes allows you to experiment, analyze the performance differences, and make an informed decision on optimal partitioning before full-scale roll-out. This is also where the initial deployment and ongoing operational cost structures will be adequately understood.
A final check is made of the limited deployment to determine if all system requirements are met. If not, adjustments to the cloud-edge-IoT device algorithm partitioning can be made, and a subset of the previous steps in this process can be repeated. Once all requirements are satisfied, the architecture is ready for full-scale deployment.
Let's look at a concrete example applying the techniques in Figure 4: a video surveillance system for a medium-sized airport. Each gate has a number of intelligent cameras, interconnected to edge nodes and a set of cloud servers. The algorithm can be partitioned into a set of sequential subfunctions, including the steps of: contrast enhancement, feature extraction, object
- 14 - June 2021