WP1 – End systems and applications

Objectives

  • Get a deep understanding of the latency/throughput tradeoff
  • Make TCP more suitable to low-latency applications
  • Make non-TCP protocols more suitable to achieve low latency
  • Make TCP less aggressive against competing low-latency apps (low latency/bulk interactions)
  • Better exploit interface diversity to achieve low latency and resilience
  • Optimise multiple-session resource sharing for low latency

Description of work

End systems can contribute significant latency to a communication because of non-optimised protocol design, interactions within the protocol stack, or local buffering. In response, WP1 considers latency-improving mechanisms in end systems and applications. The WP is structured into four tasks. Task 1.1 analyses the sources of latency in end systems to locate the most important contributors. The results of this analysis will support and guide the design and implementation work in tasks 1.2 and 1.3, where task 1.2 focuses on buffer and transport protocol mechanisms and task 1.3 on application-layer and API techniques, including multipath issues. As the consortium already has extensive knowledge of end-system optimisations for latency, and initial data sets describing the selected use cases of online gaming and interactive video are already available, tasks 1.2 and 1.3 will start at the beginning of the project, in parallel with the in-depth analysis performed in task 1.1. As results from the analysis become available, they will be used to further refine and enhance the design work. Task 1.4 will also start at the beginning of the project and aims to bring the developed end-system mechanisms to the IETF for presentation and evaluation. This will provide important feedback for the work and help maximise its impact, as results on Internet transport need to be standardised to have global effect.

Task 1.1: Analysis of end system latency

Description: This task will focus on finding the sources of latency in end systems. This involves an analysis of buffer latency issues in end systems, of protocol and API latency issues, and of multipath latency issues.

  • We will analyse all buffers used for network transfer to identify where transmitted packets are delayed before leaving the host (a small measurement sketch follows this list). The analysis will identify where such waits can be avoided without sacrificing system stability or protocol functionality.
  • For reliable protocols, retransmission latency often becomes critically high for time-dependent applications. We will investigate where and why such extra delays occur for TCP and non-TCP transport protocols, analyse transport protocol functionality to identify where protocol-specific delays occur, and determine how they can be reduced. We will also analyse operating-system APIs for network transport to identify more latency-efficient userspace/OS interaction mechanisms.
  • We will investigate how multiple interfaces can best be used to lower the latency experienced by the end user. This analysis will be performed for both TCP and non-TCP transport protocols.
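
As an illustration of the buffer analysis above, the following minimal sketch queries how much data is still held in a socket's kernel send buffer on Linux. It assumes a Linux host where the SIOCOUTQ ioctl shares its numeric value with TIOCOUTQ; the helper name is ours and purely illustrative, not part of any RITE deliverable.

    import fcntl
    import socket
    import struct
    import termios

    # On Linux, SIOCOUTQ shares its value with TIOCOUTQ (0x5411); Python does
    # not export SIOCOUTQ directly, so fall back to that value.
    SIOCOUTQ = getattr(termios, "TIOCOUTQ", 0x5411)

    def bytes_waiting_in_send_buffer(sock: socket.socket) -> int:
        """Bytes still queued in the kernel send buffer for this socket.
        For TCP this can include data sent but not yet acknowledged."""
        result = fcntl.ioctl(sock.fileno(), SIOCOUTQ, struct.pack("i", 0))
        return struct.unpack("i", result)[0]

Sampling this value while an application writes bulk data gives a first indication of whether packets are waiting in the host rather than in the network.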

Task 1.2: Development of end system buffer- and transport protocol mechanisms for latency reduction

Description: A number of technical topics will be investigated and designed to reduce the experienced end-to-end latency by making changes to the end systems. The focus will be on host buffer optimisations for latency reduction, development of TCP enhancements that reduce latency during transmission and upon loss recovery, and protocol enhancements for low-latency traffic over non-TCP transport.

  • We will optimise host buffers to reduce the latency induced by data segments waiting for transmission. We will use queue management algorithms to create low-latency queue/buffer strategies and, where possible, remove queues that can be avoided. The interaction of queues, protocols and APIs must be considered as such mechanisms are implemented and tested.
  • Using the analysis of transport protocols (task 1.1), we will develop protocol improvements that reduce latency for TCP. Modifications need to be compatible with the TCP standard in order to be incrementally deployable; we will therefore focus on sender-side modifications (a sketch of sender-side options already available in Linux follows this list). The concepts developed will be evaluated with respect to preserving TCP fairness principles, and in the light of the network results from WP2. Interaction between TCP and network/home-gateway buffer modifications will be an important area of evaluation.
  • Using the results of the analysis in task 1.1, we will develop mechanisms for lowering latency in non-TCP transport protocols. We will investigate both retransmission mechanisms and general transmission mechanisms. For unreliable transport protocols, we will need to develop mechanisms that keep latency low while avoiding network congestion. User-space retransmission latency over unreliable protocols will also be investigated (a minimal sketch follows the phase description below).
  • By studying how bulk transfer streams sometimes dominate the performance of time-dependent streams, we expect to learn how to create systems that allow time-dependent traffic to coexist with bulk traffic while keeping latency as low as possible. Mechanisms for improving such interaction can be developed both for systems where all streams share one interface and for systems using multiple interfaces. We will evaluate such mechanisms in the context of optimisations developed in WP2. Interaction with the network mechanism results from task 2.3 is important.
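
To make the TCP-related points above concrete, the sketch below shows the kind of sender-side, standards-compatible options that already exist in Linux and that this task builds on: thin-stream retransmission modifications and a limit on unsent data queued below the socket. The numeric fallbacks are the Linux option values; this is an illustrative configuration under those assumptions, not the mechanisms RITE will deliver.

    import socket

    def low_latency_tcp_socket() -> socket.socket:
        """Create a TCP socket with sender-side options that reduce latency for
        thin, time-dependent streams (illustrative; Linux-specific options)."""
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        # Do not hold small writes back waiting to coalesce them into larger segments.
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        # Linux thin-stream modifications: linear (instead of exponential) RTO
        # back-off and fast retransmit after a single duplicate ACK.
        tcp_thin_linear_timeouts = getattr(socket, "TCP_THIN_LINEAR_TIMEOUTS", 16)
        tcp_thin_dupack = getattr(socket, "TCP_THIN_DUPACK", 17)
        for opt in (tcp_thin_linear_timeouts, tcp_thin_dupack):
            try:
                s.setsockopt(socket.IPPROTO_TCP, opt, 1)
            except OSError:
                pass  # kernel without thin-stream support
        # Cap the unsent data queued in the kernel so fresh, time-dependent
        # writes do not wait behind a long in-host backlog.
        tcp_notsent_lowat = getattr(socket, "TCP_NOTSENT_LOWAT", 25)
        try:
            s.setsockopt(socket.IPPROTO_TCP, tcp_notsent_lowat, 16 * 1024)
        except OSError:
            pass
        return s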

This task will be performed in two sequential phases: 1) Simulations and initial development of mechanisms. 2) Prototype development of mechanisms ready for deployment in real-life scenarios and thorough evaluation of the developed mechanisms.
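
Relating to the non-TCP point above, the following sketch shows the kind of user-space retransmission over an unreliable transport whose latency behaviour this task will study. The fixed timeout, retry budget and single b"ACK" reply convention are assumptions made for the example; a real mechanism must also avoid contributing to congestion, which is exactly the problem this task addresses.

    import socket

    def send_with_user_space_retransmit(sock: socket.socket, payload: bytes,
                                        addr: tuple, timeout: float = 0.05,
                                        retries: int = 3) -> bool:
        """Send one datagram over UDP and retransmit from user space until a
        b"ACK" reply arrives or the retry budget is spent (illustrative only)."""
        sock.settimeout(timeout)
        for _ in range(retries + 1):
            sock.sendto(payload, addr)
            try:
                reply, _ = sock.recvfrom(64)
                if reply == b"ACK":
                    return True
            except socket.timeout:
                continue  # retransmit; note this adds load without congestion control
        return False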

Task 1.3: Development of application layer- and API techniques for latency reduction

Description: A set of related technical topics will be investigated and designed to reduce the experienced end-to-end latency by making changes to operating-system APIs and application-layer techniques. The work includes API modifications for latency reduction, enabling of efficient resource sharing, multipath mechanisms for lower latency and increased resilience, and mechanisms for efficient single- and multipath resource sharing between latency-sensitive and non-latency-sensitive traffic (a traffic-marking sketch closes this task description).

  • We will update the transport APIs to enable low-latency single- and multipath techniques. API mechanisms that delay packets will be redesigned to reduce latency as much as possible while keeping protocol and application compatibility. The resulting APIs will remain backwards compatible with existing standards while reducing latency.
  • We will exploit the multi-interface environment present on many devices today to create mechanisms that allow seamless selection and use of the lowest-latency links and transport strategies (a simple interface-probing sketch follows this list). Based on the data from the analysis in task 1.1, we will develop mechanisms that provide both network resilience and lower latency. The work will include the possibility of multipath transmission.
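
As a simple illustration of selecting the lowest-latency interface, the sketch below probes the TCP connect time from each local interface address and picks the fastest. Using the handshake time as an RTT proxy, selecting the interface by binding to its local address, and the helper name itself are all assumptions made for this example.

    import socket
    import time

    def lowest_latency_source(local_addrs, dest_host, dest_port=443, timeout=1.0):
        """Return the local interface address with the fastest TCP handshake to
        dest_host, together with the measured handshake time (illustrative)."""
        best_addr, best_rtt = None, float("inf")
        for addr in local_addrs:
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.settimeout(timeout)
            try:
                s.bind((addr, 0))        # route traffic out via this interface
                start = time.monotonic()
                s.connect((dest_host, dest_port))
                rtt = time.monotonic() - start
                if rtt < best_rtt:
                    best_addr, best_rtt = addr, rtt
            except OSError:
                pass                      # interface unusable for this destination
            finally:
                s.close()
        return best_addr, best_rtt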

This task will be performed in two sequential phases: 1) Simulations and initial development of mechanisms. 2) Prototype development of mechanisms ready for deployment in real-life scenarios and thorough evaluation of the developed mechanisms.
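
One simple building block for the single-path resource sharing mentioned in this task's description is marking latency-sensitive sockets so that local queueing disciplines (and networks that honour DSCP) can keep them ahead of bulk flows. The sketch below uses DSCP EF (46) and a Linux SO_PRIORITY value purely as illustrative defaults; it is not the sharing mechanism RITE will develop.

    import socket

    def mark_latency_sensitive(sock: socket.socket, dscp: int = 46) -> None:
        """Mark a socket's traffic as latency-sensitive (illustrative defaults:
        DSCP EF = 46, a Linux priority value for interactive traffic)."""
        # The DSCP occupies the upper six bits of the IP TOS byte.
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)
        # On Linux, SO_PRIORITY can additionally steer packets into a
        # higher-priority band of priority-aware queueing disciplines.
        if hasattr(socket, "SO_PRIORITY"):
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_PRIORITY, 6)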

Task 1.4: IETF presentation and evaluation of end-system results from RITE

Description: By presenting the results from RITE to IRTF/IETF experts, we will gain valuable feedback and identify potential problems. Our group of IRTF/IETF experts will work with the standardisation body to create drafts, and ultimately standards, based on RITE work. Such standards will help industry implement the RITE results in existing systems, allowing lower latency for a wide range of users. These activities need to start as soon as well-proven results are available. The feedback from IRTF/IETF discussions will loop back into the work on the appropriate task and help improve the quality of the final research results.
