Inference Time Optimization Using BranchyNet Partitioning

Tags: Deep Neural Network, DNN Partitioning, Edge Computing
Abstract:
Deep Neural Network (DNN) inference demands high computational power and is therefore usually offloaded to a cloud infrastructure. However, sending raw data to the cloud can increase the inference time due to the communication delay. To reduce this delay, the first DNN layers can be executed at an edge infrastructure and the remaining ones at the cloud. Depending on which layers are processed at the edge, the amount of data sent to the cloud can be greatly reduced; on the other hand, executing layers at the edge can increase the processing delay. The partitioning problem addresses this trade-off by choosing the set of layers to be executed at the edge so as to minimize the total inference time. In this work, we address the problem of partitioning a BranchyNet, a type of DNN in which inference can terminate early at intermediate layers. We show that this partitioning can be modeled as a shortest-path problem and thus solved in polynomial time.
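To make the shortest-path reduction concrete for the simpler case of a plain layer chain (no side branches), the sketch below is an illustration rather than the paper's construction: the build_graph and dijkstra helpers, the per-layer costs, and the assumption that a computation offloaded to the cloud never returns to the edge are all illustrative. Each graph node marks which layer has just finished and whether its output sits at the edge or at the cloud, and edge weights combine processing and transmission delays, so the shortest path from input to output gives a minimum-delay partition.

```python
import heapq

def build_graph(edge_time, cloud_time, out_size, input_size, bandwidth):
    """Build a DAG whose shortest src -> dst path encodes an optimal edge/cloud split."""
    n = len(edge_time)
    tx = lambda mb: mb / bandwidth  # uplink transmission delay for `mb` megabytes
    g = {}

    def add(u, v, w):
        g.setdefault(u, []).append((v, w))
        g.setdefault(v, [])

    # 'src' = raw input held by the edge device; ('edge', i) / ('cloud', i) =
    # output of layer i available at the edge or at the cloud, respectively.
    add('src', ('edge', 0), edge_time[0])                     # run layer 0 locally
    add('src', ('cloud', 0), tx(input_size) + cloud_time[0])  # offload the raw input
    for i in range(n - 1):
        add(('edge', i), ('edge', i + 1), edge_time[i + 1])
        add(('edge', i), ('cloud', i + 1), tx(out_size[i]) + cloud_time[i + 1])
        add(('cloud', i), ('cloud', i + 1), cloud_time[i + 1])  # once offloaded, stay at the cloud
    # Assumption: the final result must reach the cloud; use 0.0 if it is consumed at the edge.
    add(('edge', n - 1), 'dst', tx(out_size[n - 1]))
    add(('cloud', n - 1), 'dst', 0.0)
    return g

def dijkstra(g, src, dst):
    """Textbook Dijkstra over the adjacency dict; returns (total delay, node path)."""
    dist, prev, tie = {src: 0.0}, {}, 0
    pq = [(0.0, tie, src)]  # the counter breaks ties so heterogeneous nodes are never compared
    while pq:
        d, _, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float('inf')):
            continue
        for v, w in g[u]:
            if d + w < dist.get(v, float('inf')):
                dist[v], prev[v] = d + w, u
                tie += 1
                heapq.heappush(pq, (d + w, tie, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return dist[dst], path[::-1]

# Purely illustrative numbers: per-layer processing times (s), layer output sizes (MB),
# an 8 MB raw input, and a 10 MB/s uplink.
edge_time  = [0.010, 0.015, 0.060, 0.120, 0.200]
cloud_time = [0.001, 0.002, 0.006, 0.012, 0.020]
out_size   = [4.0, 1.0, 0.5, 0.3, 0.01]
g = build_graph(edge_time, cloud_time, out_size, input_size=8.0, bandwidth=10.0)
delay, path = dijkstra(g, 'src', 'dst')
print(f"minimum inference delay: {delay:.3f} s")
print("optimal plan:", path)  # the last ('edge', i) node marks the edge/cloud cut
```

With these made-up numbers the shortest path runs the first two layers at the edge and offloads the rest; changing the bandwidth or the per-layer costs shifts the cut point, which is exactly the trade-off the partitioning problem captures. Handling the side branches of a BranchyNet requires a richer graph than this sketch.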