[Gambas-user] Socket Limitations
Doriano Blengino
doriano.blengino at ...1909...
Sun Jan 3 18:38:55 CET 2010
Kadaitcha Man wrote:
> 2010/1/3 Doriano Blengino <doriano.blengino at ...1909...>:
>>>> anyway, this is a truly complicated matter.
>>>>
>>>>
>>> It is only complicated if you believe that the socket should poke its
>>> nose into business it shouldn't :)
>>>
>>> If the connection goes belly up, the socket can, at best, know how
>>> many bytes it sent into the ether, but it cannot ever know how many of
>>> those bytes went into hyperspace never to be seen again. How can it?
>>> It's not possible. That's why the client and server have to deal with
>>> the problem between themselves.
>>>
>>>
>> False. TCP/IP is a very robust transport,
>>
>
> False. TCP/IP is a network protocol. TCP = Transmission Control
> Protocol, and IP = Internet Protocol. The transport layer is contained
> within the protocol.
>
> Protocols define how two systems communicate with each other and deal
> with success or failure. If you take another look at the code I
> attached earlier, it is using a protocol, a defined RFC protocol (RFC
> 3977), and TCP/IP is also a defined RFC protocol (RFC 1122). Protocols
> are the whole reason that the gambas socket should not make decisions
> that the programmer should be making. Protocols define how
> conversations take place between systems; protocols are the reason
> that timeouts are necessary.
>
I used the word "transport" precisely to stress that, from the
programmer's point of view, a TCP connection is far from being a
protocol; HTTP and FTP are protocols, but TCP is not. I say so because,
some years ago, I wrote from scratch a TCP/IP stack, an FTP server and
an HTTP client. Anyway, you are right as far as the acronym goes.
>
>> I think we two are talking from two different points of view. I am
>> talking about a general point of view, where a socket is used in many
>> different situations, so one cannot make assumptions about data size
>> and timeouts.
>>
>
> One does not need to make assumptions. One tests and verifies, then
> one sets appropriate timeouts based on empirical proof.
>
> To be honest, and no insult intended, the only time I could ever
> understand not having a timeout is if one is blindly sending and
> receiving data with no protocols to define what is being sent or
> received. Now that's mad. Neither the client nor the server knows for
> sure what the other one sent or received.
>
This is the TCP scenario: you send data and don't know what the hell
happened to it; you only know that it arrived. Have you ever tried to
speak FTP to a server which talks HTTP? Or HTTP to a CIFS server? There
must be a higher-level protocol, like your example below, which copes
with requests and replies.
> Client: [HEY, SERVER]
> Server: [WHAT?]
> Client: [I HAVE SOME DATA FOR YOU!]
> Server: [OH HUM! OK, SEND IT, BUT TERMINATE IT WITH XYZ SO I KNOW I'VE GOT IT!]
> Client: Sends data and [XYZ]
> Server: [OH HUM! OK, I GOT IT!]
> Client: [BYE]
> Server: <hangs up the phone>
>
> That is a protocol, as daft as it looks.
>
>
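To make the exchange concrete: the client side of such a daft protocol
could look like the C sketch below. The "XYZ" terminator, the "OK"
reply and the function name are just conventions invented for the
example; the point is that it is this layer, not TCP, that gives the
bytes a meaning.

#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Client side of the "daft" protocol above: send the payload, then
   the agreed terminator, then wait for the server's acknowledgement.
   Returns 0 on success, -1 on any failure. */
int send_with_terminator(int fd, const char *data)
{
    char reply[64];
    ssize_t n;

    if (send(fd, data, strlen(data), 0) < 0)
        return -1;
    if (send(fd, "XYZ", 3, 0) < 0)      /* the agreed terminator */
        return -1;

    /* It is this layer, not TCP, that knows a reply must follow. */
    n = recv(fd, reply, sizeof(reply) - 1, 0);
    if (n <= 0)
        return -1;                      /* connection closed, or error */
    reply[n] = '\0';
    return strncmp(reply, "OK", 2) == 0 ? 0 : -1;
}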
>> This is the point of view of an operating system or a
>> general purpose language.
>>
>
> The point of view of the general purpose language is irrelevant
> because it should have no point of view whatsoever about how long it
> should take to transmit or receive some data, whereas the gb3 socket
> takes the view that transferring a single byte may take an infinite
> amount of time.
>
>
>> In fact, most OSes and languages let you
>> specify buffer dimensions and timeouts (and blocking or non-blocking
>> options, and many others). In most of your thoughts, you specifically
>> refer to a single, relatively simple situation.
>>
>
> That's only for now. The proxy sits between unknown clients and
> unknown servers that have defaults for the numbers of sockets they
> will create or accept. Without a timeout I cannot have a client create
> 4 sockets to a remote server if the remote server only accepts two
> connections from any one IP address, unless the remote server sends an
> explicit rejection message for that socket, and since the remote
> server is unknown, I cannot even guarantee that the remote server will
> do such a thing because, even if the protocol says the remote server
> must send a rejection message, I have no way of knowing that the
> remote server is fully protocol compliant.
>
>
>> Why not! A single
>> situation is a nice one to talk about, but there are many different
>> ones. I think that non-blocking sockets are good for the example
>> you sent in this list; but for your real application (a proxy, right?),
>> a non-blocking system, with no timeouts except for errors, would be
>> better suited. Just my thought.
>>
>
> Without a timeout I cannot create more than a single socket and be
> certain that the remote server will accept it. I would be very happy
> if the socket classes (Socket and ServerSocket) accepted timeouts and
> raised errors when the connection timed out, where timeout means a
> period of time with no activity while a send or receive is in
> progress. That, btw, is why the Linux socket API implements both a
> send and a receive timeout.
>
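True; on Linux those are SO_SNDTIMEO and SO_RCVTIMEO, set with
setsockopt(). A minimal C sketch (the 5-second value and the function
name are arbitrary, just for illustration):

#include <sys/socket.h>
#include <sys/time.h>

/* Bound both directions of an existing socket to 5 seconds of
   inactivity.  Afterwards, a stalled recv() or send() fails with
   EAGAIN/EWOULDBLOCK instead of blocking forever.
   Returns 0 on success, -1 on error. */
int set_socket_timeouts(int fd)
{
    struct timeval tv = { 5, 0 };   /* 5 seconds, 0 microseconds */

    if (setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv)) < 0)
        return -1;
    return setsockopt(fd, SOL_SOCKET, SO_SNDTIMEO, &tv, sizeof(tv));
}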
Mmm, I think that trying to establish a connection is different from
trying to send data over an already established one. I don't remember
well, but a server which accepts only, say, two connections from the
same IP should either RESET the exceeding connections or put them in a
queue. In the first case it is an error (error connecting... connection
reset by peer); in the second case it would be reasonable to wait. But I
agree that some form of timeout must be in effect, at least as a
precaution.
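For the connection phase, by the way, a timeout can be built even where
the API offers none, using the standard non-blocking connect idiom:
start the connect on a non-blocking socket and wait with select(). A
rough C sketch, with error handling trimmed and the function name
invented for the example:

#include <errno.h>
#include <fcntl.h>
#include <sys/select.h>
#include <sys/socket.h>

/* Try to connect within 'seconds'.  Returns 0 on success, -1 on
   failure or timeout.  The socket is left in non-blocking mode. */
int connect_with_timeout(int fd, const struct sockaddr *sa,
                         socklen_t len, int seconds)
{
    fd_set wset;
    struct timeval tv = { seconds, 0 };
    int err = 0;
    socklen_t elen = sizeof(err);

    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);
    if (connect(fd, sa, len) == 0)
        return 0;                 /* connected immediately */
    if (errno != EINPROGRESS)
        return -1;                /* immediate local failure */

    FD_ZERO(&wset);
    FD_SET(fd, &wset);
    if (select(fd + 1, NULL, &wset, NULL, &tv) <= 0)
        return -1;                /* no answer in time: the queued case */

    /* Writable means the connect finished; SO_ERROR says how. */
    getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &elen);
    return err == 0 ? 0 : -1;
}

If select() reports the descriptor writable, SO_ERROR tells whether the
connect actually succeeded; the RESET case surfaces there as an error,
while the queued case is the one that eats the whole timeout.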
Regards,
--
Doriano Blengino
"Listen twice before you speak.
This is why we have two ears, but only one mouth."