Science Fair Project Encyclopedia
The end-to-end principle is one of the central design principles of the Internet Protocol (IP) that is the basis of the Internet. It states that, whenever possible, communications protocol operations should be defined to occur at the end-points of a communications system.
The concept first arose in a 1981 paper entitled End-to-end arguments in system design by Jerome H. Saltzer, David P. Reed, and David D. Clark. They argue that reliable systems tend to require end-to-end processing to operate correctly, in addition to any processing in intermediate systems. They then observe that end-to-end processing alone suffices to make the system operate correctly, which renders the intermediate processing stages largely redundant for that purpose. Given this, much intermediate processing can be kept simple, with the end-to-end processing relied upon to make the system work. This leads to the model of a "dumb network" with smart terminals, a completely different model from the previous paradigm of a smart network with dumb terminals.
For example, in the TCP/IP protocol stack, IP is a dumb, stateless protocol that simply moves datagrams across the network, while TCP is a smart end-to-end protocol that provides reliability between the communicating end hosts.
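The division of labour can be sketched in a few lines of Python. This is a minimal illustration, not any real protocol: the channel stands in for the dumb network (it just moves bytes and may corrupt them), while the checksum-and-retransmit loop stands in for the smart end-to-end layer. The names (`send_with_e2e_check`, `flaky_channel`) and the assumption that the sender's digest is known to the verifying endpoint are hypothetical simplifications.

```python
import hashlib

def send_with_e2e_check(payload: bytes, channel) -> bytes:
    """Deliver payload over an unreliable channel, verifying end to end.

    The channel (the "dumb network") makes no correctness guarantees;
    only the endpoints, via the checksum, decide whether delivery succeeded.
    """
    digest = hashlib.sha256(payload).digest()  # computed at the sending endpoint
    while True:
        received = channel(payload)            # network may corrupt data in transit
        if hashlib.sha256(received).digest() == digest:
            return received                    # verified at the receiving endpoint
        # mismatch: retransmit; no intermediate node needed to detect the error

# A hypothetical flaky channel: corrupts the first attempt, then behaves.
attempts = []
def flaky_channel(data: bytes) -> bytes:
    attempts.append(data)
    return b"corrupted!" if len(attempts) == 1 else data

result = send_with_e2e_check(b"hello, end to end", flaky_channel)
```

The point of the sketch is that correctness lives entirely at the endpoints: the channel can be replaced by any transport, however unreliable, without changing the guarantee.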
This paradigm was first made economically possible, and then economically inevitable, by the collapse in computer prices brought about by the microprocessor.
- Jerome H. Saltzer, David P. Reed, and David D. Clark. End-to-end arguments in system design. ACM Transactions on Computer Systems 2, 4 (November 1984) pages 277-288. An earlier version appeared in the Second International Conference on Distributed Computing Systems (April, 1981) pages 509-512.
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.