End-to-end connectivity

The end-to-end principle is a classic design principle in computer networking. In networks designed according to the principle, application-specific features reside in the communicating end nodes of the network, rather than in intermediary nodes, such as gateways and routers, that exist only to establish the network. The principle has its roots in Paul Baran's work in the 1960s on building reliable networks out of inherently unreliable components. It was first articulated explicitly in 1981 by Saltzer, Reed, and Clark,[1] and an updated version of their paper was published in ACM's Transactions on Computer Systems (TOCS) in 1984.

A basic premise of the principle is that the payoffs from adding features to a simple network quickly diminish, especially when the end hosts must implement those functions themselves anyway to guarantee completeness and correctness. The full quote from the Saltzer, Reed, and Clark paper states:

"In a system that includes communications, one usually draws a modular boundary around the communication subsystem and defines a firm interface between it and the rest of the system. When doing so, it becomes apparent that there is a list of functions each of which might be implemented in any of several ways: by the communication subsystem, by its client, as a joint venture, or perhaps redundantly, each doing its own version. In reasoning about this choice, the requirements of the application provide the basis for the following class of arguments: The function in question can completely and correctly be implemented only with the knowledge and help of the application standing at the endpoints of the communication system. Therefore, providing that questioned function as a feature of the communication system itself is not possible, and moreover, produces a performance penalty for all clients of the communication system. (Sometimes an incomplete version of the function provided by the communication system may be useful as a performance enhancement.) We call this line of reasoning against low-level function implementation the end-to-end argument." (p. 278)

Furthermore, because implementing any specific function in the network incurs resource costs whether or not the function is used, doing so spreads those costs across all clients, including those that never use it.
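The parenthetical remark in the quote, that an incomplete in-network version of a function can still help performance, is easy to see in miniature. The following sketch (in Python, with invented names and a deliberately contrived fault; the paper itself prescribes no code) models two links joined by a router. Link-level CRC checks catch corruption on each wire, but damage inside the router happens between link checks, so only a checksum computed and verified at the endpoints detects it:

    import zlib

    def link_send(data: bytes, corrupt_on_wire: bool) -> bytes:
        """Send data across one link, retransmitting until the link CRC matches."""
        sent_crc = zlib.crc32(data)
        while True:
            frame = data
            if corrupt_on_wire:
                frame = bytes([frame[0] ^ 0xFF]) + frame[1:]  # damage on the wire
                corrupt_on_wire = False                       # only corrupt once
            if zlib.crc32(frame) == sent_crc:                 # link-level check
                return frame                                  # else retransmit

    def route(data: bytes) -> bytes:
        """Two links joined by a router whose memory corrupts one byte."""
        data = link_send(data, corrupt_on_wire=True)          # caught and resent
        data = data[:-1] + bytes([data[-1] ^ 0xFF])           # fault inside the router
        return link_send(data, corrupt_on_wire=False)         # passes its link check

    payload = b"end-to-end argument"
    end_to_end_crc = zlib.crc32(payload)                      # taken at the source
    delivered = route(payload)
    print("OK" if zlib.crc32(delivered) == end_to_end_crc else "MISMATCH")

Every link-level check passes, yet the final comparison prints MISMATCH: the link checks usefully sped up recovery on the noisy first wire, but only the endpoint check could guarantee correctness.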

The canonical example of the end-to-end principle is arbitrarily reliable file transfer between two endpoints in a distributed network of nontrivial size: the only way the two endpoints can obtain a completely reliable transfer is by transmitting and acknowledging a checksum for the entire data stream. In such a setting, lesser checksum and acknowledgment (ACK/NACK) protocols are justified only as performance optimizations; they are useful to the vast majority of clients, but they cannot by themselves fulfill this application's reliability requirement. A thorough checksum is therefore best done at the endpoints, and the network maintains a relatively low level of complexity and reasonable performance for all clients.
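In code, this endpoint-level loop might look like the minimal sketch below (again Python; the names and the simulated network are illustrative, and a real transfer would run over sockets and carry the checksum in-band rather than sharing it out of band):

    import hashlib
    import random

    def unreliable_network(data: bytes, error_rate: float = 0.3) -> bytes:
        """Model the network as a simple pipe that sometimes corrupts a byte."""
        if data and random.random() < error_rate:
            damaged = bytearray(data)
            damaged[random.randrange(len(damaged))] ^= 0xFF
            return bytes(damaged)
        return data

    def reliable_transfer(payload: bytes, max_attempts: int = 10) -> bytes:
        """The endpoints achieve reliability themselves: the sender computes a
        checksum over the whole stream, and the receiver verifies it, asking
        for a retransmission (NACK) on any mismatch."""
        digest = hashlib.sha256(payload).digest()            # sending endpoint
        for attempt in range(1, max_attempts + 1):
            received = unreliable_network(payload)
            if hashlib.sha256(received).digest() == digest:  # receiving endpoint
                print(f"verified after {attempt} attempt(s)")
                return received
            print(f"attempt {attempt}: checksum mismatch, sending NACK")
        raise IOError("transfer failed after maximum retries")

    reliable_transfer(b"a file worth transferring" * 1000)

The network component stays deliberately simple; all of the reliability machinery lives at the two ends, which is exactly the placement the end-to-end argument recommends.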

The end-to-end principle is closely related to, and sometimes seen as a direct precursor of, the principle of net neutrality. The idea of net neutrality comes from Lawrence Lessig, who used to call the principle e2e, for end to end.[2]

References

  1. Saltzer, J. H., Reed, D. P., and Clark, D. D. (1981). "End-to-End Arguments in System Design." Proceedings of the Second International Conference on Distributed Computing Systems.
  2. Net Neutrality