In traditional HFC deployments, the headend technician controls which services are delivered to a fiber node, because that control is exercised via RF combining in the headend. CMTSs and Edge QAMs output RF channels at configured frequencies. That output is split and combined so that the appropriate services reach the lasers feeding the node; once the wiring in the combining network is complete, it is rarely changed.
By contrast, as HFC infrastructure evolves toward a Distributed Access Architecture (DAA), the combining for an RPHY (Remote PHY) node is done virtually, in software. Each node must be virtually directed to the appropriate service “Cores,” expressed as a frequency plan per service and the video multiplexes to be joined. Service Cores span DOCSIS flows, linear and on-demand video, and legacy out-of-band information.
More specifically, each node must be software-configured to listen to its appropriate QAM broadcast and VOD feeds, as well as to connect to the correct CMTS and legacy Out-of-Band (OOB) components.
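In practice, this virtual combining amounts to a per-node data structure: for each service, the node needs to know which Core to attach to and which slice of the frequency plan to carry. The Python sketch below is illustrative only; the field names, addresses, and frequencies are hypothetical and are not drawn from the Remote PHY specifications or any vendor schema.

```python
# Illustrative only: field names and values are hypothetical, not taken from
# any CableLabs or vendor schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ServiceCore:
    """A Core that sources one service type for the node."""
    service: str        # e.g. "docsis", "broadcast-video", "vod", "oob"
    core_address: str   # address of the Core the node should attach to
    channels: List[int] = field(default_factory=list)  # center frequencies, MHz

@dataclass
class NodeServiceConfig:
    """Everything a single RPHY node needs to know about its Cores."""
    logical_node_id: str
    cores: List[ServiceCore] = field(default_factory=list)

# One node's virtual "combining plan": each service arrives from its own Core.
example = NodeServiceConfig(
    logical_node_id="node-0427",
    cores=[
        ServiceCore("docsis", "10.0.1.10", channels=[453, 459, 465]),
        ServiceCore("broadcast-video", "10.0.2.20", channels=[561, 567, 573]),
        ServiceCore("vod", "10.0.2.21", channels=[603, 609]),
        ServiceCore("oob", "10.0.3.30", channels=[75]),
    ],
)
```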
These advances mean that the industry’s technical workforce needs to be able to program the node, along with the Cores, which also have to be configured to provide the right data. Pre-planning which physical node (and MAC address) will be installed at a given fiber location is impractical, because line technicians typically carry many nodes. As a result, mechanisms are required to map logical nodes, with their intended service configurations, to their physical instantiations.
At the same time, an increasingly important design goal is to prevent “vendor lock-in.” To encourage a competitive environment for cost and innovation, operators prefer, and often require, multiple sources for components and the ability to pivot to new suppliers easily. In addition, operators must deal with the realities of supporting differing QAM video conditional access (CAS) systems throughout their footprints.
If an operator the size of Comcast used a CMTS to provide all services and manage every aspect of a DAA node, it could face as many as 18 permutations of nodes, CMTSs, and CAS systems to test and integrate. Clearly, this is not sustainable. This led to the notion of applying separate “Cores” for broadcast video, VOD, out-of-band (OOB), and high-speed data flows. Cores allow for best-of-breed product selection. Keeping video out of the HSD cores also simplifies CMTS operations; more importantly, any decision to pivot to a new CMTS, or to multiple CMTS providers, would sidestep the need to re-integrate video services across six permutations of nodes and CAS systems.
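The permutation counts above follow from simple multiplication. Assuming, purely for illustration, three node suppliers, three CMTS suppliers, and two CAS systems (a hypothetical breakdown consistent with the figures cited), the combinations work out as follows:

```python
from itertools import product

# Hypothetical supplier counts, chosen only to match the figures in the text.
node_vendors = ["node-A", "node-B", "node-C"]
cmts_vendors = ["cmts-X", "cmts-Y", "cmts-Z"]
cas_systems = ["cas-1", "cas-2"]

# If the CMTS carries video as well, every node/CMTS/CAS combination must be
# integrated and tested together.
all_in_one = list(product(node_vendors, cmts_vendors, cas_systems))
print(len(all_in_one))  # 18

# With video split into its own Cores, swapping a CMTS no longer touches the
# video path; only node/CAS combinations need video re-integration.
video_only = list(product(node_vendors, cas_systems))
print(len(video_only))  # 6
```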
The work of disaggregating service flows into multiple Cores presented the next dilemma: deciding which Core to make the “primary,” or lead coordination, Core. This led to the creation of a “primary core” that is, in essence, a vendor-independent orchestrator. It allows us to mix Cores and nodes at will, and to use our own internal software management processes and tools when needed. We call this software “GCPP,” which stands for “Generic Configuration Protocol Principle.” This internally developed software performs several functions, which are discussed later in this paper.
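GCPP’s actual interfaces are not detailed at this point, so the sketch below is only a conceptual illustration of the “primary core” idea: a coordinator that gathers per-service configuration from independent Cores and delivers one consolidated configuration to the node. Every class and method name is invented for the example.

```python
# Conceptual sketch only: every class and method name below is invented.
from typing import Dict, List


class ServiceCoreClient:
    """Stand-in for a connection to one service Core (DOCSIS, video, VOD, OOB)."""

    def __init__(self, service: str, address: str):
        self.service = service
        self.address = address

    def configuration_for(self, logical_node_id: str) -> Dict:
        # A real Core would return the channel lineup / flows it owns for this node.
        return {"service": self.service, "core": self.address, "node": logical_node_id}


class PrimaryCore:
    """Vendor-independent coordinator: gathers per-service configurations and
    assembles the consolidated configuration intended for the node."""

    def __init__(self, cores: List[ServiceCoreClient]):
        self.cores = cores

    def configure_node(self, logical_node_id: str) -> List[Dict]:
        configs = [core.configuration_for(logical_node_id) for core in self.cores]
        # In a real deployment this consolidated set would be delivered to the
        # node over the control protocol; here we simply return it.
        return configs


orchestrator = PrimaryCore([
    ServiceCoreClient("docsis", "10.0.1.10"),
    ServiceCoreClient("broadcast-video", "10.0.2.20"),
    ServiceCoreClient("oob", "10.0.3.30"),
])
print(orchestrator.configure_node("node-0427"))
```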
Yet another design goal was to do as little upfront design as possible, relying on auto-discovery rather than complex, pre-drawn wiring diagrams to connect nodes with switches and photonic muxes. This relates to the concept of the logical vs. the physical node. The logical node has a name or ID known to billing and GIS systems; it has a known channel map and frequency plan. The physical node is the hardware that hosts the logical node, and it can be replaced because of hardware failure or natural disaster.
Our system uses “late binding” of the physical-to-logical node mapping, meaning that the mapping happens at install time (via an app), allowing any node in inventory to host the logical node. (This alone vastly simplified construction processes.) In addition, we tried to ensure that the node’s connectivity to a CMTS would be detected, rather than designed. This makes capacity management easier, keeps the databases up to date, and gives technicians greater visibility into the network.
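Late binding reduces to keeping the logical node (ID, channel map, frequency plan) and the physical node (hardware, MAC address) as separate records and joining them only when the installer’s app reports which physical unit was placed. The following is a minimal sketch under that assumption; the record layouts and field names are invented.

```python
# Minimal sketch of late binding; record layouts and field names are invented.
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class LogicalNode:
    """Known to billing/GIS long before install."""
    node_id: str
    channel_map: str
    frequency_plan: str
    physical_mac: Optional[str] = None   # unbound until install

@dataclass
class NodeInventory:
    logical_nodes: Dict[str, LogicalNode] = field(default_factory=dict)

    def bind_at_install(self, node_id: str, physical_mac: str) -> LogicalNode:
        """Called from the installer's app: any unit on the truck can host the
        logical node, so the mapping is created only at install time."""
        node = self.logical_nodes[node_id]
        node.physical_mac = physical_mac
        return node

inventory = NodeInventory({
    "node-0427": LogicalNode("node-0427", "map-east-12", "plan-1.2GHz"),
})
inventory.bind_at_install("node-0427", "00:11:22:33:44:55")
```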
This paper describes the software infrastructure used to manage the thousands of nodes that will be transitioning to DAA. The solution makes use of software-defined networking (SDN), distributed cloud servers, and multiple Cores.