It is no secret that high performance and the Internet are often seen as contradictory terms. Even private IP networks face serious performance challenges once they extend beyond the local building and around the globe. As devices and their users become more mobile, it has become critical to design with high-speed wide area networks (WANs) in mind. Yet software and hardware designers have traditionally been given few choices and little control for ensuring high network performance.

At the application level, network communication is viewed through the interfaces to TCP/IP, the protocol that manages nearly all data transfer across IP networks. Its thirty-four-year-old data model is very simple and general: a heavyweight, full-duplex, internally buffered byte pipe. But this general-purpose model can a) be difficult to program against and b) carry a devastating performance cost.
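To see why the byte-pipe model can be awkward, consider a minimal sketch using Python's standard socket API over a loopback connection. TCP preserves byte order but not message boundaries, so the application must layer its own framing on top of the stream:

```python
# A minimal sketch of TCP's byte-pipe model, using Python's standard
# socket API on a loopback connection.
import socket
import threading

def server(listener):
    conn, _ = listener.accept()
    # TCP preserves byte order but not message boundaries: the two
    # separate sends below may arrive here as a single buffer, so the
    # application must invent its own framing on top of the stream.
    data = conn.recv(4096)
    print("received:", data)
    conn.close()

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))          # any free loopback port
listener.listen(1)
t = threading.Thread(target=server, args=(listener,))
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(listener.getsockname())   # three-way handshake happens here
client.sendall(b"request-1")             # two logical messages...
client.sendall(b"request-2")             # ...with no boundary on the wire
client.close()
t.join()
listener.close()
```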

Figure 1. Throughput testing of TCP versus MTP across 72 live Internet paths of varying capacities.

Designers looking for ways to work around TCP's well-known performance and scaling limitations have had few options. Compression and caching only take you so far. When networks stressed by high congestion, loss, and latency enter the picture, TCP's fundamental inefficiencies become intractable. The traditional bare-bones alternative, UDP/IP, offers only fire-and-hope packet service and is thus impractical for any serious data transfer.
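The limitation is easy to see in a minimal sketch of raw UDP use with Python's socket API (the destination below is a reserved documentation address and the "discard" port, chosen purely for illustration): the send call succeeds whether or not the datagram ever arrives.

```python
# A minimal sketch of UDP's fire-and-hope packet service. The
# destination is a reserved documentation address (RFC 5737) and the
# "discard" port, used here purely for illustration.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(b"payload", ("192.0.2.1", 9))
# sendto() returns as soon as the datagram is queued locally. There is
# no handshake, no acknowledgment, and no retransmission: if the packet
# is lost, duplicated, or reordered in transit, the application never
# finds out. Any reliability must be built on top.
sock.close()
```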

Enter the Multipurpose Transaction Protocol (MTP/IP). Developed in recent years to address the growing problems with TCP performance on WANs, MTP approaches the transport protocol with an entirely new data model. Taking cues from the high-volume, request-response pattern of modern network applications, Core MTP follows a simplified transaction data model: each core network operation consists of small request datagrams exchanged for a potentially huge collection of response datagrams. Combined with a more robust packet design, this simpler model eliminates overhead such as three-way handshakes and time-wait states. These core transactions can then be modularly combined to create more sophisticated data models, with only the minimum overhead appropriate to the task.
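MTP's actual wire protocol and API are not shown here, so the following is only a hypothetical sketch of the transaction pattern just described, rendered over plain UDP datagrams. The function name, single-datagram request, and end-of-response marker are invented purely for illustration:

```python
# A hypothetical sketch of the request/response transaction pattern
# described above, rendered over plain UDP. This is NOT MTP's actual
# protocol or API: the function name, single-datagram request, and
# b"<END>" terminator are invented purely for illustration.
import socket

def transact(sock, server_addr, request, timeout=1.0):
    """Send one small request datagram; collect the response datagrams."""
    sock.settimeout(timeout)
    sock.sendto(request, server_addr)   # no three-way handshake: one
                                        # datagram opens the transaction
    responses = []
    while True:
        try:
            data, _ = sock.recvfrom(65535)
        except socket.timeout:
            break                       # give up waiting for more data
        if data == b"<END>":            # invented end-of-response marker
            break
        responses.append(data)          # potentially a huge collection
    return responses                    # no time-wait state lingers afterward
```

A real transport would of course need sequencing, retransmission, and congestion control beneath this skeleton; the point of the sketch is only the shape of the exchange: one small request, many responses, and no connection state to set up or tear down.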