Can Resource-Efficient Deep Subnetworks Adapt to Dynamic Resource Constraints?

Original title: REDS: Resource-Efficient Deep Subnetworks for Dynamic Resource Constraints

Authors: Francesco Corti, Balz Maag, Joachim Schauer, Ulrich Pferschy, Olga Saukh

This study addresses the challenge of adapting deep learning models to fluctuating resource availability on edge devices, where energy budgets and system priorities change over time. It introduces Resource-Efficient Deep Subnetworks (REDS), which diverge from existing adaptive models by exploiting structured sparsity to capitalize on hardware-specific optimizations. REDS achieve computational efficiency by skipping whole computational blocks identified as least important and by rearranging operations within the computational graph to improve data cache utilization. Evaluated across multiple benchmark architectures and datasets, REDS submodels achieve superior accuracy, even with simpler networks. Notably, on a range of mobile and embedded platforms, REDS adapt to a change in resource constraints in less than 40 microseconds. This work demonstrates the potential of adaptable, efficient deep learning models on edge devices and promises advances in resource-constrained environments.
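The core idea of switching between nested subnetworks via structured sparsity can be illustrated with a minimal sketch. This is a hypothetical toy example, not the authors' implementation: it assumes the weights have already been permuted offline so that the most important channels come first, which lets a submodel be selected at run time by simply slicing each layer's weight matrix, with no weight copying or re-layout.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer dense network (weights assumed pre-permuted so that the
# most important hidden channels come first; REDS arranges this offline).
W1 = rng.standard_normal((16, 8))   # 8 inputs  -> 16 hidden units
W2 = rng.standard_normal((4, 16))   # 16 hidden -> 4 outputs

def forward(x, hidden_width):
    """Run the subnetwork that keeps only the first `hidden_width` hidden channels."""
    h = np.maximum(W1[:hidden_width] @ x, 0.0)  # sliced first layer + ReLU
    return W2[:, :hidden_width] @ h             # matching input slice of the next layer

x = rng.standard_normal(8)
full = forward(x, 16)  # full model
half = forward(x, 8)   # 50% submodel: skips the last 8 hidden channels entirely
```

Because "adapting" to a new resource budget only changes the slice bound `hidden_width`, the switch is essentially instantaneous, which is consistent with the sub-40-microsecond adaptation times reported in the paper.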

Original article: https://arxiv.org/abs/2311.13349