In 1981, Saltzer, Reed, and Clark identified the "end-to-end" principles underlying the design of modern layered protocols. The Internet today is not as transparent as [SALTZER81] envisioned. While most intelligence remains concentrated in end systems, users are now deploying more sophisticated processing within the network for a variety of reasons, including security, network management, e-commerce, and survivability. Applications and application-layer protocols have been found to interact in unexpected ways with this new intelligent software within the network, such as proxies, address translators, packet filters, intrusion detection systems, and differentiated-services functions. In this paper we survey examples of the problems caused by the introduction of this new processing within the network, which runs counter to the end-to-end Internet model proposed by [SALTZER81].

The conflict between the end-to-end Internet model and new processing within the network is currently being addressed on a case-by-case basis in each development effort. There is no indication that devices installed within the network (which break the end-to-end model) will disappear; in fact, their deployment has grown dramatically in response to recent denial-of-service attacks. The transition to IPv6 solves only a subset of these issues, and its deployment is proceeding slowly. Future work is clearly needed to create a consistent environment for protocol development that preserves the transparency provided by the end-to-end Internet model.