Which protocol is reliable?

On your own: you can code your own Transmission Control Protocol. This project provides a simulation of unreliable data transmission by the Internet Protocol. On the real Internet, it is also possible for a packet to arrive with erroneous data, so the real TCP has to check for errors and request re-transmission of those packets too.

Click the green flag to initialize the incoming transmission variables before each experiment. Click either character to enter a message for it to send to the other one. In this simulation, the complete message is a string of text that is divided into packets of one letter each. In reality, packet length is not so strictly limited and messages are usually much longer. Read the relevant pages of Blown to Bits, then build a simple TCP: resolve the unreliability so that messages are received reliably despite the limitations of IP packets.
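
As a rough illustration of what building a simple TCP on top of an unreliable channel involves, here is a minimal Python sketch, separate from the project environment itself. The channel model, the loss rate, and names such as unreliable_send are invented for illustration; the idea is simply to number each one-letter packet and keep re-sending it until it gets through.

import random

# Hypothetical lossy channel: each one-letter packet may simply be dropped,
# mimicking the simulation's unreliable IP layer.
def unreliable_send(packet, loss_rate=0.2):
    if random.random() < loss_rate:
        return None                      # packet lost in transit
    return packet                        # delivered intact

def send_message(message, loss_rate=0.2):
    # Stop-and-wait: send each one-letter packet with a sequence number,
    # and re-send it until it is confirmed as received.
    received = []
    for seq, letter in enumerate(message):
        while True:
            delivered = unreliable_send((seq, letter), loss_rate)
            if delivered is not None:    # receiver got it and acknowledges
                received.append(delivered[1])
                break                    # move on to the next packet
            # no acknowledgment: sender times out and retransmits the packet
    return "".join(received)

print(send_message("HELLO FROM TCP"))    # always prints the full message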

If a packet seems not to have arrived, the recipient waits a few moments to see whether it does arrive -- potentially right up to the moment when the viewer needs to see that block of video. If playback reaches the point where the missing packet should be, the application simply skips the missing data and carries on to the next packet, maintaining the time base of the video.

You may see a flicker or some artifacting, but the moment passes almost instantly, and more than likely your brain will fill the gap. If the same error happens under TCP, it can take TCP upward of 3 seconds to renegotiate the sequence and restart from the missing point, discarding all the subsequent data, which must be requeued to be sent again. Just one lost packet can cause an entire "window" of TCP data to be re-sent.
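
A small sketch may make the contrast concrete. The following Python fragment is only illustrative (the packet numbers and frame labels are made up): a UDP-style player walks its playout buffer by sequence number and skips a gap to keep the time base, whereas TCP would hold everything back until the missing packet had been retransmitted.

# UDP-style playout: packets are keyed by sequence number, and when playback
# reaches a gap the player skips it rather than stalling the whole stream.
def play(buffer, total_packets):
    for seq in range(total_packets):
        frame = buffer.get(seq)
        if frame is None:
            print(f"packet {seq} missing -- skip it and keep the time base")
            continue                     # brief glitch, playback stays live
        print(f"render {frame}")

# Packet 2 never arrived; under TCP the stream would instead wait for its
# retransmission, holding back (or re-sending) the packets that follow it.
play({0: "frame0", 1: "frame1", 3: "frame3"}, total_packets=4)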

All this adds overhead to the network and to the operations of both computers using that link, as the CPU and network card's processing units have to manage all the retransmission and sync between the applications and these components.

For this reason, HTTP -- which is always a TCP transfer -- generally introduces startup delays and playback latency, as media players need to buffer more than 3 seconds of playback to absorb any lost packets. Indeed, TCP is very sensitive to something called window size, and given that very few of you will ever have adjusted the window size of your contribution feeds when setting up a live Flash Streaming encode, I can estimate that all but those same very few have been wasting available capacity in your network links.
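
The reason window size matters so much is that a TCP sender can have at most one window of unacknowledged data in flight per round trip, so throughput is capped at roughly window size divided by round-trip time. A back-of-the-envelope calculation (with assumed, illustrative figures) shows how easily that cap falls below the capacity of the link:

# TCP cannot move more than one full window of unacknowledged data per round
# trip, so achievable throughput is roughly bounded by window / RTT.
def max_tcp_throughput_mbps(window_bytes, rtt_seconds):
    return window_bytes * 8 / rtt_seconds / 1_000_000

# Assumed figures: a default 64 KB window over an 80 ms contribution link
# tops out around 6.5 Mbps, no matter how fast the link itself is.
print(max_tcp_throughput_mbps(64 * 1024, 0.080))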

You may not care. The links you use are good enough to do whatever it is you are trying to do. In today's disposable culture of "use and discard" and "don't fix and reuse," it's no surprise that most streaming engineers just shrug and assume that the ability to get more bang for your buck out of your internet connection is beyond your control.

For example, did you know that if you set your maximum transmission unit (MTU) -- ultimately your video packet size -- too large, then the network has to break it in two in a process called fragmentation? Packet fragmentation has a negative impact on network performance for several reasons. First, a router has to perform the fragmentation -- an expensive operation.

Second, all the routers in the path between the router performing the fragmentation and the destination have to carry additional packets, with the requisite additional headers. Third, larger packets increase the amount of data you need to resend if a retransmission occurs.

Alternatively, if you set the MTU too small, then the amount of data you can transfer in any one packet is reduced, which relatively increases the signaling overhead (the data about the sending of the data, equivalent to the addresses and parcel-tracking services in real post).
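
To put some rough numbers on this, the sketch below counts how many IP fragments a single datagram becomes for a given path MTU. The header sizes are the standard 20-byte IPv4 and 8-byte UDP headers, but the packet sizes are made up, and the calculation ignores the finer rules of real fragmentation (such as 8-byte offset alignment):

import math

IP_HEADER, UDP_HEADER = 20, 8            # standard IPv4 and UDP header sizes

def fragments_needed(payload_bytes, mtu):
    # How many IP fragments one UDP datagram turns into on a path with this MTU.
    per_fragment = mtu - IP_HEADER       # payload space left in each fragment
    return math.ceil((payload_bytes + UDP_HEADER) / per_fragment)

# A 2,000-byte video packet on a 1,500-byte MTU path is broken in two, and each
# fragment carries its own IP header that every router on the path must relay.
print(fragments_needed(2000, 1500))      # 2
print(fragments_needed(1400, 1500))      # 1 -- fits without fragmentation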

Where you are trying to do large video file transfers, UDP should be a great help, but its lossy nature is rarely acceptable for stages in the workflow that require absolute file integrity. If a transfer to the LOVEFiLM or Netflix playout lost packets, then every single subscriber of those services would have to accept that degraded master copy as the best possible copy.

In fact, if UDP were used in these back-end workflows, the content would degrade the user's experience in the same way that tape-to-tape dubbing and other analog replication processes historically did. Digital media would lose the perfect-replica quality that has been central to its success.

Having video editors drinking coffee while videos transfer from one place to another is inefficient, even if the coffee is good. Given that they cannot operate in a lossy way, are these production facilities stuck with TCP and all the inherent inefficiencies that come with reliable transfer? Because TCP ensures all the data gets from point to point, it is called a "reliable" protocol.

In UDP's case, that reliability is "left to the user," so UDP in its native form is known as an "unreliable" protocol. The good news is that there are indeed options out there in the form of a variety of "reliable UDP" protocols, and we'll be looking at those in the rest of this article. One thing worth noting at the outset, though, is that if you want to optimize links in your workflow, you can either do it the little-bit-hard way and pay very little, or you can do it the easy way and pay a considerable amount to have a solution fitted for you.

Reliable UDP transports can offer the ideal situation for enterprise workflows -- one that has the benefit of high-capacity throughput, minimal overhead, and the highest possible "goodput" (a rarely used but useful term that refers to the part of the throughput that you can actually use for your application's data, excluding overheads such as signaling). In the Internet Engineering Task Force (IETF) world, from which the IP standards arise, there has been considerable work over nearly 30 years on developing reliable data transfer protocols.
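
As a simple illustration of goodput, the fragment below computes what fraction of each packet on the wire is actually application data, using assumed per-packet header sizes (Ethernet framing plus IPv4 and UDP headers); the exact figures vary by network, but the trend is the point:

ETH, IP, UDP = 18, 20, 8                 # assumed per-packet framing/header bytes

def goodput_ratio(payload_bytes):
    # Fraction of the bytes on the wire that are your application's data.
    return payload_bytes / (payload_bytes + ETH + IP + UDP)

print(round(goodput_ratio(1400), 3))     # ~0.97 -- large packets, little overhead
print(round(goodput_ratio(188), 3))      # ~0.80 -- tiny packets waste the link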

One early RFC, proposed decades ago, is a good example: it was put forward as an RFC (request for comment) but did not mature in its own right to become a standard. Probably because of the "task-specific" nature of RUDP implementations, RUDP hasn't become a formal standard, never progressing beyond "draft" status. One way to think about how RUDP-style transports work is to use a basic model in which all the data is sent in UDP format and each missing packet is indexed by the recipient. Once the main body of the transfer is done, the recipient sends the sender the index list, and the sender resends only those packets on the list.

As you can see, this simple model is much more efficient, because it avoids retransmitting whole windows of already-sent data that happen to follow a missed packet.
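
Here is a minimal Python sketch of that basic model, with the loss simulated in-process rather than over a real network; the structure -- blast everything once, then let the recipient report the missing indexes and repair only those -- is the part being illustrated:

import random

def blast_then_repair(packets, loss_rate=0.1):
    # Basic RUDP-style model: send the whole body over "UDP" without waiting,
    # then resend only the packets the recipient reports as missing.
    received = {}

    def udp_send(seq, data):
        if random.random() > loss_rate:          # most packets arrive first time
            received[seq] = data

    for seq, data in enumerate(packets):         # pass 1: the main body
        udp_send(seq, data)

    while True:                                  # repair passes
        missing = [seq for seq in range(len(packets)) if seq not in received]
        if not missing:
            break
        for seq in missing:                      # recipient's index list (NACKs)
            udp_send(seq, packets[seq])

    return "".join(received[seq] for seq in range(len(packets)))

print(blast_then_repair(list("RELIABLE ENOUGH UDP")))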

However, it couldn't work for live data, and even for archives a protocol must be agreed upon for sending the index and for how the sender responds to that re-request in a structured way -- which, if done badly, could result in a lot of random-seek disc access, for example.

There are many reasons the major vendor implementations are task-specific. For example, where one uses UDP to avoid TCP's retransmission behavior but the entire data set must still be faultlessly delivered to the application, one needs to actually understand the application.

If the application requires control data to be sent, it is important for the application to have all the data required to make that decision at any point. Suppose, for example, the RUDP system only looked for and re-requested missing packets every 5 minutes; this could break the key function of the application if a control decision needed to be made sooner than that.

On the other hand, if the data is a large archive of videos being sent overnight for precaching at CDN edges, then the retransmission requests could well be managed during the morning. Coherence's TCMP is one example of such a task-specific transport: it uses direct member-to-member ("point-to-point") communication, including messages, asynchronous acknowledgments (ACKs), asynchronous negative acknowledgments (NACKs), and peer-to-peer heartbeats.

Under some circumstances, a message may be sent through unicast even if the message is directed to multiple members. This is done to shape traffic flow and to reduce CPU load in very large clusters. The TCMP protocol provides fully reliable, in-order delivery of all messages. This is a key element in the scalability of Coherence, in that regardless of the number of servers, each node in the cluster can still communicate either point-to-point or with collections of cluster members without requiring additional network connections.

Coherence comes with a pre-set configuration.
