[Gambas-user] Socket Limitations
Doriano Blengino
doriano.blengino at ...1909...
Sun Jan 3 12:05:37 CET 2010
Kadaitcha Man wrote:
> 2010/1/3 Doriano Blengino <doriano.blengino at ...1909...>:
>
>> A few minutes after I suggested that a timeout could simplify things, I
>> changed my mind. It would not be a totally bad idea but, like most other
>> mechanisms, it has its problems. First, what is the right timeout?
>>
>
> It is either 0 for no timeout or it is set by the application.
>
Uhm... I see the point. I meant that the timeout would be set by the
application. Nevertheless, timeouts are often stupid, and should only be
used to raise an error, not as part of the normal logic of a program.
For example, in your application: what is the right timeout? Maybe a
few seconds, but if there is something like 1 MiB to send to a very busy
server over a slow connection, some minutes would be required. Would you
set the timeout to some minutes? Ok, let's use some minutes (we don't
want the program to fail if there is no real reason; a slow connection is
not a good reason to fail, right?). Now improve your example
application: instead of sending a single message to a single host, it is
a proxy which accepts several incoming messages and deals them out to
several hosts. If at a certain point a remote host is very busy (or
down), your proxy stops working because it blocks for several
minutes. You don't want that, so you need non-blocking sockets. The
timeout of the socket is still there, because sooner or later the socket
will have to raise an error, but your application won't stop working.
This is how I see timeouts: used only for error recovery. But the
first time I suggested a timeout, it was related to the program logic.
That can slightly simplify things, but at a cost: possibly incorrect
behaviour. Using timeouts for communication can be a good idea only in
precise situations, but Gambas cannot know what the situation is; it is
a general-purpose programming language.
So, I think, Gambas could also implement timeouts, but it would be the
responsibility of the user to use them in the correct way.
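To make concrete what I mean by "only for error recovery", here is a small
C sketch (plain POSIX sockets, not the Gambas Socket class; the function
name is mine): the OS-level options you mention below simply make a stuck
call fail with an error, which the application then has to handle.

#include <sys/socket.h>
#include <sys/time.h>

/* Set send/receive timeouts (in seconds) on an already created socket.
   When they expire, recv()/send() fail with EAGAIN/EWOULDBLOCK instead
   of blocking forever; the timeout is only a way to surface an error. */
static int set_socket_timeouts(int fd, int seconds)
{
    struct timeval tv = { .tv_sec = seconds, .tv_usec = 0 };

    if (setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv)) < 0)
        return -1;
    return setsockopt(fd, SOL_SOCKET, SO_SNDTIMEO, &tv, sizeof(tv));
}

Note that the "right" number of seconds is still the application's problem,
which is exactly my point.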
> http://msdn.microsoft.com/en-us/library/system.net.sockets.socket.receivetimeout%28VS.80%29.aspx
> http://msdn.microsoft.com/en-us/library/system.net.sockets.socket.sendtimeout%28VS.80%29.aspx
>
> $ man socket
>
> SO_RCVTIMEO and SO_SNDTIMEO
> Specify the receiving or sending timeouts until reporting an
> error...
>
> perl:
> timeout([VAL])
> Set or get the timeout value associated with this socket. If called
> without any arguments then the current setting is returned. If called
> with an argument the current setting is changed and the previous value
> returned.
>
> As you can see, the idea of a timeout is not a strange one to many
> languages on either Unix, Linux or Windows. In fact, I'd say it is an
> absolute necessity. And if you are using the OS socket, which you seem
> to be doing, then why should Gambas hide a property that is already
> available to C/C++ and even script programmers?
>
>
>> Second, if a timeout occurs, how much data has been sent?
>>
>
> Again, that is not the business of the socket. The business of the
> socket is to alert the program that a problem exists, nothing more.
>
Here I would say a concise *no*. From the ground up, things work like this:
"send as much data as you can, and tell me how much you sent". In fact,
the lowest-level OS calls work like this: the return value is the number
of bytes written or read (not only for sockets, but also for files, standard
input and output, and so on). Of course you can use blocking mode and
rely on the fact that when the system call returns, either it failed or
it wrote all the data. But blocking mode is not always the best way
to go.
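In plain C (a sketch of mine, nothing Gambas-specific), the usual pattern
built on top of that return value looks like this:

#include <unistd.h>
#include <errno.h>

/* Keep calling write() until everything is out or a real error occurs.
   write() is free to accept fewer bytes than requested; its return
   value tells us how far we got. Returns 0 on success, -1 on error. */
static int write_all(int fd, const char *buf, size_t len)
{
    while (len > 0) {
        ssize_t n = write(fd, buf, len);
        if (n < 0) {
            if (errno == EINTR)
                continue;          /* interrupted, just retry */
            return -1;             /* real error (or EAGAIN when non-blocking) */
        }
        buf += n;
        len -= (size_t)n;
    }
    return 0;
}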
>
>> anyway, this is a truly complicated matter.
>>
>
> It is only complicated if you believe that the socket should poke its
> nose into business it shouldn't :)
>
> If the connection goes belly up, the socket can, at best, know how
> many bytes it sent into the ether, but it cannot ever know how many of
> those bytes went into hyperspace never to be seen again. How can it?
> It's not possible. That's why the client and server have to deal with
> the problem between themselves.
>
False. TCP/IP is a very robust transport, and a client socket knows
exactly how many bytes it launched into hyperspace, and how many of
them have been acknowledged by the other endpoint. When the remote side
acknowledges, part of the transmit buffer is freed, and some space is
gained for you to write additional data. TCP sockets ensure that
the data sent out arrives at the other end correctly (with checksums),
in sequence, and completely. This is the job of a stream socket, and it
works very well. If we were never in a hurry, there would be no need for
non-blocking mode and timeouts.
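As an aside, on Linux you can even ask the kernel how much of what you
wrote is still waiting for acknowledgement (this is OS-specific and not
something Gambas exposes; just an illustration):

#include <sys/ioctl.h>
#include <linux/sockios.h>

/* Returns the number of bytes written to a TCP socket that the peer
   has not yet acknowledged (still sitting in the send queue), or -1
   on error. Linux-specific (SIOCOUTQ). */
static int unacked_bytes(int fd)
{
    int pending = 0;

    if (ioctl(fd, SIOCOUTQ, &pending) < 0)
        return -1;
    return pending;
}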
> Really, there is nothing strange in having a timeout. It is up to the
> client and the server to work out what to do if the connection goes
> down. It is not up to the socket.
>
I think the two of us are talking from different points of view. I am
talking about a general point of view, where a socket is used in many
different situations, so one cannot make assumptions about data sizes
and timeouts. This is the point of view of an operating system or a
general-purpose language. In fact, most OSes and languages let you
specify buffer sizes and timeouts (and blocking or non-blocking
options, and many others). In most of your thoughts, you specifically
refer to a single, relatively simple situation. Why not! A single
situation is a nice thing to talk about, but there are many
different ones. I think that non-blocking sockets are good for the example
you sent to this list; but for your real application (a proxy, right?),
a non-blocking design, with no timeouts except for errors, would be
better suited. Just my thought.
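If it helps, this is roughly the non-blocking shape I have in mind for the
proxy, again as a plain C sketch at the OS level, with names of my own
choosing:

#include <fcntl.h>
#include <unistd.h>
#include <errno.h>

/* Put a socket in non-blocking mode: calls return immediately and
   report EAGAIN/EWOULDBLOCK when they would otherwise have to wait. */
static int make_nonblocking(int fd)
{
    int flags = fcntl(fd, F_GETFL, 0);

    if (flags < 0)
        return -1;
    return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
}

/* Push what we can right now; return the number of bytes accepted,
   0 when the send buffer is full (try again later, and serve the
   other connections in the meantime), or -1 on a real error. */
static ssize_t try_send(int fd, const char *buf, size_t len)
{
    ssize_t n = write(fd, buf, len);

    if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
        return 0;
    return n;
}

The timeout, if any, lives elsewhere, purely as an error condition.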
Regards,
--
Doriano Blengino
"Listen twice before you speak.
This is why we have two ears, but only one mouth."