Low latency service is becoming a sought-after feature for access networks to improve user experience in highly interactive applications such as gaming, video conferencing, virtual or augmented reality, and mission-critical computations. One of the most common issues users report with these applications is the latency of the internet connection. An example of this is when a gamer playing a multiplayer game is on a mission with co-players, and someone in their household starts a video streaming session. Another example is when a participant in a real-time conversation starts a video sharing or file downloading session. Addressing the market for these latency-sensitive applications may open revenue opportunities for network operators.
From an end-to-end view, latency is the time that elapses between a user request and the completion of that request. When a user requests information from a remote host through an application, the request is processed locally into Internet Protocol (IP) packets, which are then sent over the network to the remote host. There, the packets are processed and a response is formed, beginning the return trip. Along the way, in each direction, the packets pass through network components such as switches, routers, and protocol translators, as well as transport and media changes. At each step, delay is introduced as the packets are buffered, processed, and transmitted. These delays can add up to a discernible waiting time for the user.
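The accumulation of per-hop delays described above can be sketched as a simple sum: each hop contributes buffering, processing, and transmission delay in each direction, and the remote host adds its own processing time. The hop names and delay values below are purely illustrative assumptions, not measurements from any real network.

```python
def one_way_delay_ms(hops):
    """Sum the buffering, processing, and transmission delay at each hop."""
    return sum(h["buffer_ms"] + h["process_ms"] + h["transmit_ms"] for h in hops)

# Hypothetical per-hop delay budget for the forward path (values are examples).
forward_path = [
    {"name": "home router",    "buffer_ms": 2.0, "process_ms": 0.2, "transmit_ms": 0.5},
    {"name": "access network", "buffer_ms": 8.0, "process_ms": 0.5, "transmit_ms": 1.0},
    {"name": "core router",    "buffer_ms": 1.0, "process_ms": 0.1, "transmit_ms": 0.3},
]
return_path = list(reversed(forward_path))  # assume a symmetric path for simplicity

server_process_ms = 5.0  # time for the remote host to process the request and form a response

# End-to-end latency: forward trip + server processing + return trip.
rtt_ms = one_way_delay_ms(forward_path) + server_process_ms + one_way_delay_ms(return_path)
print(f"estimated round-trip latency: {rtt_ms:.1f} ms")
```

Note how the access-network buffering delay dominates this toy budget; this is why queue management in the access network is a natural target for reducing overall latency.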