Machine learning (ML) services are commonly deployed in centralized data centers. For cable network applications, however, such as identifying anomalies in the cable network from DOCSIS® network performance data, a centralized model can introduce processing and classification delays that slow problem identification for operators. Edge intelligence addresses this issue by combining centralized training with inference at the edge.
During the learning phase, large amounts of data are used to compute the weights and biases that train the model, which calls for the high-performance machines readily available in centralized data centers. Once the learning phase concludes and the ML network is trained, the model can be deployed for inference, executing on edge devices with far lower computational and memory capabilities. Models can be deployed on edge devices such as Cable Modems (CMs), gateway devices, or Access Points (APs), aided by techniques such as model compression and inference acceleration. While an edge device must still possess sufficient processing power for its specific task, embedding ML capabilities into such devices is achievable using certain classes of algorithms.
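To make the training/inference split concrete, the sketch below trains a tiny anomaly classifier "centrally" and quantizes its weights before handing them to an edge inference routine. The logistic-regression model, the synthetic telemetry features, and the int8 quantization scheme are illustrative assumptions only, not the approach this paper prescribes.

```python
"""Minimal sketch of the train-centrally / infer-at-the-edge split.

Assumptions (not from the paper): a toy logistic-regression anomaly
classifier over synthetic telemetry features, with post-training int8
weight quantization standing in for model compression.
"""
import numpy as np

rng = np.random.default_rng(0)

# --- Centralized training phase (data center) ---------------------------
# Synthetic telemetry: normal samples cluster near 0, anomalies near 2.
X = np.vstack([rng.normal(0.0, 1.0, (500, 4)),
               rng.normal(2.0, 1.0, (500, 4))])
y = np.concatenate([np.zeros(500), np.ones(500)])

w = np.zeros(4)
b = 0.0
lr = 0.1
for _ in range(200):                         # plain gradient descent
    z = np.clip(X @ w + b, -30.0, 30.0)      # clip to avoid exp overflow
    p = 1.0 / (1.0 + np.exp(-z))             # sigmoid
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

# --- Model compression before shipping to the edge ----------------------
# Post-training quantization: store weights as int8 plus one fp32 scale.
scale = np.abs(w).max() / 127.0
w_q = np.round(w / scale).astype(np.int8)

# --- Edge inference phase (CM / gateway / AP) ----------------------------
def edge_infer(x, w_q, scale, b):
    """Dequantize on the fly and classify one telemetry sample."""
    z = x @ (w_q.astype(np.float32) * scale) + b
    return 1.0 / (1.0 + np.exp(-z)) > 0.5    # True => flagged as anomaly

sample = rng.normal(2.0, 1.0, 4)             # anomalous-looking sample
print("anomaly:", edge_infer(sample, w_q, scale, b))
```

Only the int8 weights and a single scale factor cross to the edge device, which is why the inference step fits within the lower memory and compute budgets described above.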
This paper proposes a model to facilitate distributed edge intelligence on DOCSIS network equipment, including CMs and gateways, and potentially extending to other devices such as Distributed Access Architecture (DAA) nodes or amplifiers. It develops an architecture to support this deployment model in a DOCSIS network, detailing how ML models are downloaded to edge devices, the security mechanisms needed to protect that process, and the application programming interfaces (APIs) required to enable such functionality.
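As a preview of the download-and-verify flow such an architecture must support, the hypothetical sketch below authenticates a model manifest and checks the model image's digest before staging it on an edge device. The manifest fields, key provisioning, and model identifier are all assumptions made for illustration; the actual download process, security mechanisms, and APIs are what the remainder of this paper defines.

```python
"""Hypothetical sketch of a secure model-download path to an edge device.

Assumption: a symmetric key provisioned on the device out of band. The
pattern shown, authenticating a manifest and then checking the model
image's digest against it, is a common integrity-protection approach.
"""
import hashlib
import hmac
import json

# Assumption: a key provisioned on the edge device out of band.
PROVISIONED_KEY = b"hypothetical-edge-device-key"


def verify_manifest(manifest: dict, tag: bytes) -> bool:
    """Authenticate the manifest with an HMAC before trusting it."""
    msg = json.dumps(manifest, sort_keys=True).encode()
    expected = hmac.new(PROVISIONED_KEY, msg, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)


def stage_model(model_bytes: bytes, manifest: dict, tag: bytes) -> bool:
    """Accept the model only if the manifest and digest both check out."""
    if not verify_manifest(manifest, tag):
        return False
    return hashlib.sha256(model_bytes).hexdigest() == manifest["sha256"]


# Simulated download: a model image, its manifest, and the manifest's tag.
model_bytes = b"\x00\x01fake-quantized-model-weights"
manifest = {
    "model_id": "docsis-anomaly-v1",        # hypothetical identifier
    "sha256": hashlib.sha256(model_bytes).hexdigest(),
}
tag = hmac.new(PROVISIONED_KEY,
               json.dumps(manifest, sort_keys=True).encode(),
               hashlib.sha256).digest()

print("staged" if stage_model(model_bytes, manifest, tag) else "rejected")
```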