[Gambas-user] Socket Limitations

Kadaitcha Man nospam.nospam.nospam at ...626...
Mon Jan 4 01:35:37 CET 2010


2010/1/3 Benoît Minisini <gambas at ...1...>:

> <Kadaicha irony mode>
>
> I'm not sure that the MSDN documentation is a good reference for socket
> programming:
>
> - The Microsoft documentation is often not connected to the reality it tries
> to describe.
>
> - The Windows socket implementation often behaves differently from the other
> OS implementations.

I wasn't suggesting it was. You took the information completely out of
context. The correct context is: "As you can see, the idea of a timeout
is not a strange one to many languages on either Unix, Linux or
Windows." I also included Linux and Perl information.

> My problem is: how can I have the easiest Socket interface in Gambas for all
> possible socket scenarios?
>
> That is the reason why I said that if the user asks for writing a big chunk of
> data to the socket, I should temporarily switch to blocking mode. Maybe he is
> doing something wrong (i.e. using a non-blocking socket, but not the Write
> event), but Gambas can't be sure.

Gambas should mind its own business and do as it's told. If the
program hangs or segfaults, then bad luck. Your argument here appears
to be the same as Doriano's: the programming language should know what
is happening, not the program. You are making program implementation
decisions at the language level. Anyway, that's how it comes across
when I read it, even if that's not what is meant.

I think you're on the wrong track. You do not need to turn on blocking
temporarily in Gambas for large chunks of data at all. You can keep
the Gambas socket in fully asynchronous mode if the maximum size of the
transmit buffer is specified by Gambas and can be made smaller by the
programmer, an event is implemented to allow the socket to request
more data from the application, and a timeout event is provided to manage
excessive idle time during a transmit/receive operation. The timeout
should operate in both synchronous and asynchronous mode, whereas the
data-request event should only be triggered in asynchronous mode.
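
Roughly the mechanism I have in mind, sketched at the POSIX level rather
than in Gambas (fill_buffer, the chunk size and the return codes are only
my illustration): the writability notification plays the part of the
"request more data" event, and the poll() timeout plays the part of the
idle-timeout event. In a real event-driven program the main loop would
drive these steps instead of a blocking loop:

#include <errno.h>
#include <poll.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Send an arbitrarily large stream through a non-blocking socket using
 * a bounded transmit buffer, a "give me more data" callback and an
 * idle timeout. Returns 0 on success, -1 on error, -2 on timeout. */
int send_async(int fd, size_t (*fill_buffer)(char *, size_t), int idle_ms)
{
    char chunk[16 * 1024];          /* bounded "transmit buffer" */
    size_t len = 0, off = 0;

    for (;;) {
        if (off == len) {           /* buffer drained: request more data */
            len = fill_buffer(chunk, sizeof chunk);
            off = 0;
            if (len == 0)
                return 0;           /* nothing left to send */
        }

        struct pollfd pfd = { .fd = fd, .events = POLLOUT };
        int r = poll(&pfd, 1, idle_ms);
        if (r == 0)
            return -2;              /* idle timeout: peer is not draining */
        if (r < 0)
            return -1;

        ssize_t n = send(fd, chunk + off, len - off, 0);
        if (n < 0) {
            if (errno == EAGAIN || errno == EWOULDBLOCK)
                continue;           /* not writable after all; wait again */
            return -1;
        }
        off += (size_t)n;
    }
}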

In that way, the programmer has a very powerful tool: the ability to
use both blocking and non-blocking sockets at will, for everything from
relatively simple to highly complex tasks.

Blocking sockets are a problem for Gambas because it is single-threaded,
so a blocking write can leave the application unresponsive when large
chunks of data are sent between systems over a slow connection. However,
by knowing the buffer size and having a "more data needed" event, the
programmer can keep the socket in asynchronous mode.
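
The buffer-size half of that already exists at the socket level, so it is
only a matter of exposing it. A minimal sketch, assuming POSIX
getsockopt()/setsockopt() with SO_SNDBUF:

#include <stdio.h>
#include <sys/socket.h>

/* Report the kernel's transmit buffer size and shrink it so a slow
 * peer is noticed sooner. The kernel may round or clamp the value
 * (Linux, for instance, doubles whatever you set). */
static void tune_send_buffer(int fd, int wanted_bytes)
{
    int cur = 0;
    socklen_t len = sizeof cur;

    if (getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &cur, &len) == 0)
        printf("current send buffer: %d bytes\n", cur);

    if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &wanted_bytes,
                   sizeof wanted_bytes) < 0)
        perror("setsockopt(SO_SNDBUF)");
}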

> Gambas 2 behaves this way (not my decision), and I removed this behaviour in
> Gambas 3, but now I don't remember why. I should test all the scenarios...

There is no need to remember why it was turned off in Gambas 3. It makes
sense to turn it off if you want an asynchronous socket; however, turning
it off in Gambas 3 without also implementing a known buffer size, a
request for additional data and a timeout has broken it. Well, that's my
opinion.



