Customization of support sites across self-service platforms can significantly enhance overall customer experience and satisfaction. At Charter Communications, one of our primary objectives in this area is to drive improvements to digital troubleshooting and support pages by optimizing digital support content, content “findability” and content rules. By customizing support content, we aim to reduce the time between our customers’ questions and their resolutions, ease the navigation process by surfacing relevant content and eliminate the need for a customer service phone call.

Machine learning offers powerful capabilities to customize content by leveraging vast amounts of data across several areas to predict support-seeking behaviors. These predictions enable us to proactively surface the right support articles to the right users at the right time.

Building these machine learning models is a large feat in and of itself, but it is not enough. Once a model is built and validated, its performance must be monitored over time to ensure the model continues to perform as expected. Traditional model performance metrics, such as precision, recall and area under the receiver operating characteristic curve (AUC-ROC), allow data science teams to assess model accuracy and detect model drift. In addition, a successful machine learning application must also have a positive and significant impact on key performance indicators (KPIs), which requires experimentation (i.e., A/B testing) and robust statistical analyses. To assess the full impact of machine learning-driven experiences, it is critical to implement a systematic process that rigorously and iteratively tests, validates and optimizes these applications. At Charter Communications, our Data Science teams have developed the Experimentation-Machine Learning Lifecycle Process, a standardized set of best practices for operationalizing machine learning-driven content rules.
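To make the monitoring metrics concrete, the following is a minimal, self-contained sketch (not the production implementation, and the labels and scores are invented example data) of how precision, recall and AUC-ROC can be computed for a batch of binary predictions:

```python
# Illustrative sketch only: computing precision, recall and AUC-ROC
# from scratch for a batch of model predictions. The example data
# below is hypothetical.

def precision_recall(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

def auc_roc(y_true, scores):
    # Mann-Whitney formulation: the probability that a randomly chosen
    # positive example is scored above a randomly chosen negative one
    # (ties count as 0.5).
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical batch: true labels, hard predictions and raw model scores.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
scores = [0.9, 0.2, 0.8, 0.4, 0.3, 0.6, 0.7, 0.1]

p, r = precision_recall(y_true, y_pred)
print(f"precision={p:.2f} recall={r:.2f} auc={auc_roc(y_true, scores):.2f}")
```

In practice these values would be tracked on recurring batches of live predictions, so that a sustained drop relative to the validation baseline flags potential drift.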
This process consists of five phases, which include 1) Model Discovery A/B, 2) Productionalization of the Model, 3) Model Implementation A/B, 4) Operationalization of the Machine Learning Application and 5) Continuous Improvement. In this paper, we walk through the Experimentation-Machine Learning Lifecycle Process, describe how to carry out each of the five phases for any machine learning application and provide examples for how these phases apply to our own teams’ work. In doing so, we provide a set of guidelines that can be implemented by others to evaluate and optimize their machine learning applications.
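The A/B phases above rest on standard significance testing. As a hedged illustration (the test choice and all numbers below are hypothetical, not Charter data), a two-proportion z-test can compare an outcome rate, such as the share of visitors who go on to call support, between a control group and a group shown machine learning-customized content:

```python
# Hypothetical sketch of an A/B significance check: a two-proportion
# z-test comparing call-in rates between control and treatment.
# Sample sizes and counts are invented for illustration.
from math import sqrt, erf

def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
    p_a, p_b = successes_a / n_a, successes_b / n_b
    # Pooled proportion under the null hypothesis of equal rates.
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Control: 540 of 10,000 visitors called support; treatment: 480 of 10,000.
z, p = two_proportion_z_test(540, 10_000, 480, 10_000)
print(f"z={z:.2f}, p={p:.4f}")  # p < 0.05 would suggest a real reduction
```

Real experiments layer more on top of this (power analysis to size the groups, guardrail metrics, multiple-comparison corrections), but the basic question, whether the observed KPI difference is distinguishable from noise, takes this form.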