Cable operators face significant challenges in launching and improving services with agility and velocity within ever more complex service delivery architectures. The dilemma is clear: improvement requires change, but change drives performance incidents. So how can operators best test improvements and new services while minimizing unintended consequences? Evidence-based methodologies, such as A/B testing, that are transforming other industries provide a good answer to the question of what needs to be done. The more challenging question is how operators can meaningfully apply these techniques to their operations. Creating a formal evaluation program around every change occurring in MSO access networks would be both labor-intensive and cost-prohibitive, if even possible.
Operators can improve upon existing ways of tracking and maintaining customer satisfaction before and after changes in their network with the following two techniques:
Change-Driven Segment Identification. This is an automated technique for identifying populations and micro-populations of subscribers within the operator network whose service, customer premises equipment (CPE), or service delivery network has experienced the same change; a simple grouping sketch follows this list.
Change Scoring. Once groups with like changes are identified, the quality of a given upgrade needs to be evaluated. There are many ways to score changes, from direct customer experience measures to automatic feature selection models; an illustrative scoring sketch also appears after this list. Big data and machine learning provide operators with options for systematically evaluating potentially service-impacting change.
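To make the segment-identification idea concrete, the sketch below groups subscriber-level change records by a shared change signature (what changed, plus its before and after values), so that every subscriber experiencing the same change lands in the same population or micro-population. The ChangeEvent fields, change types, and sample data are hypothetical stand-ins for whatever change feeds an operator already collects; this is a minimal illustration under those assumptions, not a production implementation.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class ChangeEvent:
    subscriber_id: str
    change_type: str   # e.g. "cpe_firmware", "cmts_software", "node_split"
    old_value: str
    new_value: str
    change_date: str   # ISO date the change was observed

def identify_change_segments(events):
    """Group subscribers into segments that experienced the same change.

    The segment key is the change "signature": what changed plus its
    before/after values.  Subscribers sharing a signature form one
    (micro-)population that can later be scored as a unit.
    """
    segments = defaultdict(set)
    for ev in events:
        signature = (ev.change_type, ev.old_value, ev.new_value)
        segments[signature].add(ev.subscriber_id)
    return segments

# Hypothetical change feed: two modems moved to the same firmware build,
# so they form one segment; the node-split subscriber forms another.
events = [
    ChangeEvent("sub-001", "cpe_firmware", "v2.1", "v2.2", "2018-04-02"),
    ChangeEvent("sub-002", "cpe_firmware", "v2.1", "v2.2", "2018-04-03"),
    ChangeEvent("sub-003", "node_split", "node-17", "node-17a", "2018-04-05"),
]
for signature, subscribers in identify_change_segments(events).items():
    print(signature, sorted(subscribers))
```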
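Change scoring can take many forms; as one minimal sketch using a direct customer experience measure, the snippet below compares a per-subscriber KPI (here a hypothetical trouble-call rate) before and after a change for the affected segment, adjusted by the shift observed in an untouched control population over the same window (a difference-in-differences style comparison). The function name, KPI, and sample values are illustrative assumptions rather than the scoring model described in this paper.

```python
from statistics import mean

def score_change(segment_before, segment_after, control_before, control_after):
    """Score a change by comparing a per-subscriber KPI before and after it.

    Each argument is a list of KPI samples (e.g. daily trouble-call rate,
    where lower is better).  The control population saw no change over the
    same window, so subtracting its shift removes seasonal effects.  A
    negative score means the KPI improved more for the changed segment
    than for the control group.
    """
    segment_shift = mean(segment_after) - mean(segment_before)
    control_shift = mean(control_after) - mean(control_before)
    return segment_shift - control_shift

# Hypothetical trouble-call rates (calls per 100 subscribers per day).
score = score_change(
    segment_before=[1.8, 2.1, 1.9],   # segment, week before the firmware push
    segment_after=[1.2, 1.1, 1.3],    # segment, week after
    control_before=[1.7, 1.9, 1.8],   # untouched control group, same weeks
    control_after=[1.6, 1.8, 1.7],
)
print(f"change score: {score:+.2f}  (negative = improvement vs control)")
```

A negative score indicates the changed segment improved relative to its control; richer scoring approaches, such as statistical tests or machine-learned feature selection over many KPIs, can replace this simple comparison without changing the overall workflow.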
The goal of this paper is to describe ways that operators can use machine learning in combination with well-designed operational practices to quickly differentiate between upgrades that improve service quality and those that make it worse – thereby allowing operators to embrace change, not fear it.