[Gambas-user] Socket Limitations

Doriano Blengino doriano.blengino at ...1909...
Sun Jan 3 09:05:53 CET 2010


Kadaitcha Man ha scritto:
> 2010/1/3 Doriano Blengino <doriano.blengino at ...1909...>:
>
>   
>> after a timeout occurs, one can only assume that
>> the entire write has failed, even if, in fact, some data has been
>> successfully written.
>>     
>
> Yes, you are quite correct, but the problem of partial data transfer
> due to a timeout is not a problem for the socket to sort out. It is
> for the server and client to sort out between themselves.
>
> If you look at the sample code I attached earlier, there is this line:
>
> Private Const MULTILINE_BLOCK_TERMINATOR As String = "\r\n.\r\n"
>
> A CRLF "." CRLF sequence is the RFC3977 "termination octet". The
> server knows it has not got the data that client said it is sending
> until it gets the termination octet. If the socket goes quiet and the
> termination block has not been received after a specific time interval
> then the server closes the connection and discards the transaction.
>
> Again, I fully agree with your thoughts up there, but I genuinely
> question if it is a problem that the socket itself needs to be
> concerned about. In my view, it does not and should not. The socket
> need only alert the programmer to a timeout and try to gracefully
> exit.
>
> The socket shouldn't be taking care of network communications problems
> where a properly signalling pair of entities should already have a set
> of agreed rules to abide by when a transmission is incomplete.
>   
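
(For reference, the termination check described in the quote is, at the
byte level, just a scan of the receive buffer for the five bytes CR LF
"." CR LF. A minimal C sketch, assuming a plain POSIX buffer rather
than the Gambas stream classes; the helper name is made up:)

#include <stddef.h>
#include <string.h>

/* Return 1 if buf (len bytes) contains the RFC 3977 terminator
   CRLF "." CRLF, else 0.  Hypothetical helper; a real reader must
   also undo dot-stuffing inside the block. */
static int block_terminated(const char *buf, size_t len)
{
    const char term[] = "\r\n.\r\n";
    const size_t tlen = sizeof(term) - 1;   /* 5 bytes */
    size_t i;

    if (len < tlen)
        return 0;
    for (i = 0; i + tlen <= len; i++)
        if (memcmp(buf + i, term, tlen) == 0)
            return 1;
    return 0;
}
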
I looked at your sample and I must say that, in this specific case, you
are right. But sockets are more general: things are not always as you
depict them. It is a difficult task to simplify things that are
themselves complicated. In particular, I/O under Unix (sockets
included) can be blocking or non-blocking (the default is blocking),
and when things go wrong the program is sometimes notified at a
different point than the one that caused the error. All of this, put
together, makes things complicated. You are right when you say that
blocking could be the default, for two reasons: the underlying OS
already does so, and this way the logic of a program is simpler. But
especially in the case of a server, where many connections are alive
at the same time, blocking mode works very badly. If, on top of this
mess, you add "automatic, behind the scenes" behaviours, like the one
suggested by Benoit ("maybe, when a big chunk of data has to be
transmitted, we should activate blocking mode automatically"), you add
a further layer of uncertainty.
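
To make that concrete, here is a rough POSIX-level sketch in plain C
(not Gambas; it assumes an already-connected TCP socket descriptor).
It shows how a write on a non-blocking socket may accept only part of
the data, or none at all, and how an error such as a dead peer can
surface only on a later call:

#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/* Switch an already-connected socket to non-blocking mode. */
static int set_nonblocking(int fd)
{
    int flags = fcntl(fd, F_GETFL, 0);
    if (flags < 0)
        return -1;
    return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
}

/* One write attempt on a non-blocking socket.  It may write
   everything, only a part, or nothing (EAGAIN/EWOULDBLOCK when the
   kernel send buffer is full).  A connection broken earlier may be
   reported only here, as EPIPE or ECONNRESET, that is, at a later
   point than the call that actually caused the trouble. */
static ssize_t try_write(int fd, const void *buf, size_t len)
{
    ssize_t n = write(fd, buf, len);
    if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
        return 0;              /* not a real error: retry later */
    return n;                  /* bytes accepted, or -1 on error */
}

An event-driven server keeps a per-connection output buffer and
retries when the socket becomes writable again; that is exactly the
bookkeeping a blocking call hides from the program.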

A few minutes after I suggested that a timeout could simplify things, I
changed my mind. It would not be a totally bad idea but, like most
other mechanisms, it has its problems. First, what is the right
timeout? Second, if a timeout occurs, how much data has been sent? And
we don't want to send data twice... so the problem is again what Benoit
said: at the moment you can't know how much data has been sent, and
this remains true as long as you use non-blocking mode. Personally, I
find that a timeout is handier when reading (at least you know how much
data you have read), but it is not perfect either.
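
As an illustration of both questions at once (which timeout, and how
much went out), here is a hedged C sketch of a "write it all or give up
after a deadline" loop built on select(); the function name is my own
and the timeout handling is deliberately simple:

#include <errno.h>
#include <stddef.h>
#include <sys/select.h>
#include <sys/time.h>
#include <unistd.h>

/* Try to write len bytes, waiting at most timeout_sec seconds for the
   socket to become writable before each attempt.  Returns the number
   of bytes actually handed to the kernel, so on a timeout the caller
   at least knows where to resume, or -1 on a hard error.  Sketch
   only: the deadline is per attempt, not for the whole transfer, and
   SIGPIPE is not handled. */
static ssize_t write_with_timeout(int fd, const char *buf, size_t len,
                                  int timeout_sec)
{
    size_t done = 0;

    while (done < len) {
        fd_set wfds;
        struct timeval tv = { timeout_sec, 0 };
        ssize_t n;

        FD_ZERO(&wfds);
        FD_SET(fd, &wfds);
        if (select(fd + 1, NULL, &wfds, NULL, &tv) <= 0)
            break;                       /* timeout or select error */

        n = write(fd, buf + done, len - done);
        if (n < 0) {
            if (errno == EAGAIN || errno == EWOULDBLOCK)
                continue;                /* spurious wakeup, retry */
            return -1;                   /* real error */
        }
        done += (size_t)n;
    }
    return (ssize_t)done;
}

Even then, "bytes handed to the kernel" is not "bytes received by the
peer", which is why an application-level terminator like the one in
your sample is still needed.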

A question comes to mind. It seems that non-blocking mode is the
default in Gambas, but no check is done for errors. In this situation,
simply avoiding non-blocking mode would solve the problem. There is no
speed penalty, because Gambas writes to the OS buffers, so the speed is
the same for blocking and non-blocking mode. When the data does not fit
in the buffers, with blocking mode there is a delay; with non-blocking
mode there is a serious problem... anyway, this is a truly complicated
matter.
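
Just to show what "the OS buffers" look like from user space, here is
a small, Linux-leaning C sketch that reads the size of the kernel send
buffer (portable, via SO_SNDBUF) and the number of bytes still queued
and not yet sent (the Linux-specific SIOCOUTQ ioctl); error handling
is kept to a minimum:

#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/sockios.h>   /* SIOCOUTQ, Linux only */

/* Report how big the kernel send buffer is and how much of it is
   still waiting to go out.  This is the buffer a blocking write
   sleeps on, and the one a non-blocking write bounces off with
   EAGAIN when it is full. */
static void report_send_queue(int fd)
{
    int sndbuf = 0;
    int pending = 0;
    socklen_t optlen = sizeof(sndbuf);

    if (getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sndbuf, &optlen) == 0)
        printf("kernel send buffer: %d bytes\n", sndbuf);

    if (ioctl(fd, SIOCOUTQ, &pending) == 0)
        printf("still queued, not yet sent: %d bytes\n", pending);
}

This only tells you what the local kernel still holds; whether the
peer has actually received the rest remains an application-level
question.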

Regards,

-- 
Doriano Blengino

"Listen twice before you speak.
This is why we have two ears, but only one mouth."




