usually implemented with lower-cost hardware and no redundancy, compromising reliability and resilience.
Considered together, these constraints mean that certain simple portions of an IoT workload can be run successfully on constrained IoT endpoint devices. However, as the requirements on these networks become more advanced, many IoT devices will be severely underpowered due to the above constraints, and a significant portion of their workload will need to move to layers of edge computing.
Using These Criteria to Partition Workloads to the Edge
The above discussion demonstrates that many IoT workloads will not perform adequately if run in cloud data centers or on intelligent IoT devices. An intelligent partitioning must be performed to decide which workloads, or sub-functions of workloads, are optimally served on edge nodes. This does not mean that all workloads or computational sub-functions must move to the edge; it means that each workload or sub-function should be located at the level of the cloud-edge-IoT device hierarchy where it is executed most effectively. Figure 4 shows a process flow that can assist in partitioning workloads between the cloud and edge computing.
(Figure 4 flowchart: derive sub-function requirements and key performance indicators from the system requirements; for each criterion, namely latency, bandwidth, data gravity, trust, energy, space/weight, environment, modularity, lifetime cost, and application-specific requirements, decompose the requirement, analyze or simulate it, select a candidate layer, and apply a weight; choose a layer for each sub-function; prototype key aspects and perform a limited deployment; if the system requirements are not met, adjust the partitioning and repeat; otherwise proceed to full-scale deployment.)

Fig. 4 - Process for Partitioning Workloads Between Cloud and Edge Computing.
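The per-criterion "Select Layer" and "Apply Weight" steps in Figure 4 amount to a weighted multi-criteria decision for each sub-function. The following is a minimal sketch of such a scoring step, assuming a simple linear weighting; the criterion weights, layer names, and suitability scores are illustrative assumptions, not values taken from this article.

```python
# Sketch of the "Apply Weight" / "Choose Layer" steps in Fig. 4 as a weighted
# scoring model. All weights and scores below are illustrative placeholders.

CRITERIA_WEIGHTS = {        # relative importance of each criterion ("Apply Weight")
    "latency": 0.25,
    "bandwidth": 0.15,
    "data_gravity": 0.15,
    "trust": 0.15,
    "energy": 0.10,
    "space_weight": 0.05,
    "environment": 0.05,
    "modularity": 0.05,
    "lifetime_cost": 0.05,
}

LAYERS = ["iot_device", "edge_node", "cloud"]

def choose_layer(criterion_scores: dict) -> str:
    """Choose the layer with the highest weighted score for one sub-function.

    criterion_scores maps each criterion to per-layer suitability scores (0..1)
    produced by the "Analyze / Simulate" step of the process flow.
    """
    totals = {layer: 0.0 for layer in LAYERS}
    for criterion, weight in CRITERIA_WEIGHTS.items():
        per_layer = criterion_scores[criterion]
        for layer in LAYERS:
            totals[layer] += weight * per_layer[layer]
    return max(totals, key=totals.get)      # "Choose Layer for Each Subfunction"

# Example: a sub-function whose latency and bandwidth needs favor the edge
# (scores are made up for illustration).
scores = {
    "latency":       {"iot_device": 0.9, "edge_node": 0.8, "cloud": 0.2},
    "bandwidth":     {"iot_device": 0.6, "edge_node": 0.9, "cloud": 0.3},
    "data_gravity":  {"iot_device": 0.4, "edge_node": 0.8, "cloud": 0.5},
    "trust":         {"iot_device": 0.5, "edge_node": 0.7, "cloud": 0.6},
    "energy":        {"iot_device": 0.3, "edge_node": 0.7, "cloud": 0.9},
    "space_weight":  {"iot_device": 0.4, "edge_node": 0.6, "cloud": 0.9},
    "environment":   {"iot_device": 0.5, "edge_node": 0.6, "cloud": 0.9},
    "modularity":    {"iot_device": 0.4, "edge_node": 0.7, "cloud": 0.8},
    "lifetime_cost": {"iot_device": 0.5, "edge_node": 0.6, "cloud": 0.8},
}
print(choose_layer(scores))   # prints "edge_node" for these example scores
```

In practice the weights and scores would come from the requirement decomposition and analysis or simulation steps shown in Figure 4, and the result would be validated through prototyping and limited deployment before full-scale rollout.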
The process for partitioning workloads begins with system requirements. Careful attention must be paid to performance-related requirements such as latency, bandwidth, throughput, capacity, etc. Trustworthiness-related requirements are important too, as the safety, security, reliability, resilience, and privacy needs of the applications will have a strong influence on these partitioning