IIC Journal of Innovation 17th Edition Applying Solutions at the Digital Edge | Page 19

Key Criteria to Move Cloud Workloads to the Edge
recognition, multi-camera correlation, threat detection, automated response, and supervision. The challenge is to partition each of those subfunctions optimally among the computational resources hosted in the cloud, at the edge, or on intelligent IoT devices. The requirements are decomposed into the ten criteria shown, and analysis or simulation is performed to measure the performance of each subfunction if implemented in the cloud, at the edge, or on an intelligent device.
We may discover, for example, that the feature-extraction subfunction has the best latency if performed at the edge, but a lower cost if implemented in the cloud. Weights are applied to the criteria, and a decision is made as to which layer of computational resources represents the best compromise for that subfunction. The process is then repeated for the remaining subfunctions. This generates a straw proposal for the full system partitioning, defining which subfunctions will reside in the cloud, in one or more layers of edge nodes, or on the intelligent IoT devices. At that point, the entire system is verified through prototyping and limited deployment, iterating and adjusting the subfunction partitioning as required until all system requirements are met and full-scale deployment can begin.
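The weighted per-subfunction decision described above can be sketched as a simple scoring step. This is a minimal illustration, not part of the source methodology: the criteria names, weights, and scores below are invented assumptions, and real analyses would use measured or simulated values across all ten criteria.

```python
# Hypothetical weights for three of the partitioning criteria
# (illustrative values; a real analysis would cover all ten criteria).
CRITERIA_WEIGHTS = {"latency": 0.5, "cost": 0.3, "bandwidth": 0.2}

# Assumed measured/simulated scores for one subfunction, normalized so
# that higher is better for every criterion at every layer.
feature_extraction_scores = {
    "cloud":  {"latency": 0.4, "cost": 0.9, "bandwidth": 0.5},
    "edge":   {"latency": 0.9, "cost": 0.6, "bandwidth": 0.8},
    "device": {"latency": 0.7, "cost": 0.5, "bandwidth": 0.9},
}

def best_layer(scores, weights):
    """Return the layer whose weighted criteria score is highest."""
    def weighted(layer):
        return sum(weights[c] * scores[layer][c] for c in weights)
    return max(scores, key=weighted)

print(best_layer(feature_extraction_scores, CRITERIA_WEIGHTS))  # prints "edge"
```

Repeating this scoring for every subfunction yields the straw proposal for the full system partitioning; the prototyping and limited-deployment phase then validates or adjusts those choices.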
Finally, the partitioning between cloud, edge and IoT device execution chosen for an initial deployment can be modified as more system experience is gained. Some edge orchestration systems package their workloads in container technologies such as Docker, managed by orchestration platforms such as Kubernetes, and these systems can dynamically move parts of the algorithms between levels of the network in response to changing load profiles or fault events. AI techniques are being applied to these edge orchestrators [14], so repartitioning in response to changing workloads can be at least partially automated and could potentially react on sub-second timescales as system loads change.
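The load-driven repartitioning idea can be sketched as a small control-loop policy. Everything here is an illustrative assumption (the threshold, the subfunction names, and the one-move-per-cycle rule); a production orchestrator would make this decision from richer telemetry.

```python
# Hypothetical load-driven repartitioning policy: when edge utilization
# crosses a threshold, shift one movable subfunction back to the cloud.
EDGE_LOAD_LIMIT = 0.85  # assumed fraction of edge CPU capacity

def replan(placement, edge_load, movable):
    """Return an updated placement map. If the edge is overloaded,
    move a single movable subfunction from 'edge' to 'cloud'."""
    placement = dict(placement)  # leave the caller's map untouched
    if edge_load > EDGE_LOAD_LIMIT:
        for fn in movable:
            if placement.get(fn) == "edge":
                placement[fn] = "cloud"
                break  # move one subfunction per control cycle
    return placement

placement = {"feature_extraction": "edge", "threat_detection": "edge"}
print(replan(placement, 0.92, ["threat_detection"]))
# prints {'feature_extraction': 'edge', 'threat_detection': 'cloud'}
```

Running such a policy on each orchestration cycle is one way the "sub-second" reaction mentioned above could be realized, since the decision itself is cheap; the dominant cost is migrating the container.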
CONCLUSIONS
By taking a fresh, focused look at the key performance indicators and system-level requirements of networks, it is possible to optimize the performance, trustworthiness, and lifecycle cost of applications by segmenting workloads between cloud data centers and edge nodes. If the partitioning of computational workloads and storage operations among cloud data centers, edge computing nodes and intelligent devices is carefully considered, IoT networks will be better able to service their critical applications.
[14] Y. Wu, "Cloud-Edge Orchestration for the Internet-of-Things: Architecture and AI-Powered Data Processing," IEEE Internet of Things Journal, doi: 10.1109/JIOT.2020.3014845.