
Deep learning systems require vast amounts of computational resources and specialized hardware accelerators to function effectively. Massive data centers are being built to meet the growing demand for processing power and storage capacity driven by large-scale applications such as machine learning and deep learning algorithms [4].

Decentralized model inference distributes the computation of a deep learning model across a network of loosely connected edge devices, whereas centralized model inference relies on a central data center equipped with specialized hardware accelerators. Decentralized inference offers potential benefits such as improved resilience, adaptability, and scalability, but it demands deep learning methods with greater robustness.
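To make the distinction concrete, the following minimal sketch (in Python with NumPy, using hypothetical names such as `EdgeDevice`) partitions a toy multilayer perceptron across two simulated edge devices and compares the result with running all layers on a single node. It illustrates only the placement idea under these assumptions, not any particular system's implementation.

```python
import numpy as np

# Hypothetical illustration: a small MLP whose layers are partitioned across
# simulated "edge devices". Each device holds only its own weights and passes
# activations on to the next device, mimicking decentralized (pipelined) inference.

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class EdgeDevice:
    """Holds one partition (a contiguous block of layers) of the model."""
    def __init__(self, name, layers):
        self.name = name
        self.layers = layers  # list of (weight, bias) tuples

    def forward(self, x):
        for w, b in self.layers:
            x = relu(x @ w + b)
        return x

# Build a toy 4-layer model: 16 -> 32 -> 32 -> 32 -> 8.
dims = [16, 32, 32, 32, 8]
layers = [(rng.standard_normal((dims[i], dims[i + 1])) * 0.1,
           np.zeros(dims[i + 1])) for i in range(len(dims) - 1)]

# Centralized inference: a single node holds every layer.
central = EdgeDevice("datacenter", layers)

# Decentralized inference: the same layers split across two edge devices.
devices = [EdgeDevice("edge-0", layers[:2]), EdgeDevice("edge-1", layers[2:])]

x = rng.standard_normal((1, 16))

y_central = central.forward(x)

# Activations hop from device to device (in practice, over a network link).
y_decentral = x
for dev in devices:
    y_decentral = dev.forward(y_decentral)

# Both strategies compute the same function; only the placement differs.
print("outputs match:", np.allclose(y_central, y_decentral))
```

The sketch makes the trade-off visible: the computation is identical in both cases, but in the decentralized setting intermediate activations must cross device boundaries, which is where resilience to unreliable links and heterogeneous hardware, and hence the need for more robust deep learning methods, comes in.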