[Gambas-user] odd gb3 issue

Kevin Fishburne kevinfishburne at ...1887...
Wed Jun 1 08:06:55 CEST 2011


On 05/30/2011 03:31 PM, Benoît Minisini wrote:
>> Any reason a compiled app would behave differently than an app run in
>> the IDE? My app stalls half-way through receiving network packets from
>> the server when compiled, but when both are running in the IDE it works
>> fine. Didn't have this problem earlier tonight. This is new to me. I'm
>> using build 3866.
>
> Mmm. Very hard to guess as soon as you do network programming.
>
> When your program is stalled, do that in a terminal:
>
> $ gdb /usr/bin/gbx3 <pid of your application>
> ...
> (gdb) bt
> ...
>
> And send me the result of the 'bt' command, which should tell where the
> program is stalled.
>
> Note: replace /usr/bin/gbx3 with the path where the Gambas interpreter
> is actually installed.

When I said "stalled" I meant that the program continued executing but 
stopped processing the expected events (it didn't freeze or raise an 
error). I'm sorry for not being more specific. I've also discovered it 
has nothing to do with whether the program was compiled; that was a red 
herring. This is what is happening:

The client sends its username and password to the server. The server 
tells the client that it has authenticated, then begins to send it 
multiple transactions (packets) with the information it needs to start 
the game. All the transactions use the same procedures (client and 
server share the transaction-processing code), so the theory is that if 
one works they should all work. It has worked in the past, so I think a 
change in program logic (it now sends multiple medium-sized packets in 
sequence) may have exposed a bug.
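
To make the pattern concrete, here is a rough sketch of the server-side 
logic in Python rather than Gambas (the real code is Gambas; every name, 
address, and size below is invented purely for illustration):

import socket

# Sketch only: the real code is Gambas 3. The client address, header
# layout, and payload sizes below are all made up.
CLIENT = ("192.0.2.10", 15555)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_transaction(seq, payload):
    # Client and server share this framing: a 4-byte sequence number
    # followed by the transaction payload.
    sock.sendto(seq.to_bytes(4, "big") + payload, CLIENT)

# Once the client is authenticated, the start-up transactions go out
# back to back: several medium-sized datagrams in a burst.
transactions = [bytes(25_000) for _ in range(9)]
for seq, payload in enumerate(transactions):
    send_transaction(seq, payload)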

The weird thing that started happening is that a series of similar 
transactions sent by the server began to be received irregularly by the 
client. The number of transactions that arrive successfully frequently 
changes between consecutive runs. I usually expect something to fail 
consistently, not work sometimes and fail other times, so it's very 
confusing. It also seems that the smaller the packet, the less often it 
fails.

For example, if the server sends a series of nine packets of around 25K 
each, the client receives fewer than nine of them 100% of the time. If 
the server sends nine 15K packets, the client fails to receive them all 
about 75% of the time. If the packets are under 10K, it fails about 25% 
of the time.
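
For what it's worth, those rates line up roughly with IP fragmentation 
arithmetic: on a 1500-byte-MTU link a 25K datagram is split into about 
17 fragments (25000 / 1480, rounded up), a 15K datagram into 11, and a 
10K one into 7, and losing any one fragment silently discards the whole 
datagram. The bigger the datagram, the more fragments have to survive, 
so the more often it dies. That's just back-of-the-envelope math on my 
part, though, not something I've confirmed.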

Sometimes the client also receives the sequence with packets missing 
from the middle. For example, it may receive packets 0 - 8, miss packets 
9 - 11, and then receive packet 12.
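
To take Gambas out of the picture entirely, here's a minimal standalone 
sketch in Python over loopback (port, count, and sizes are arbitrary). 
It sends a burst of sequence-numbered datagrams and reports which ones 
the receiver actually saw, so any mid-sequence gaps show up directly:

import socket

# Standalone repro sketch (Python over loopback, nothing Gambas in it).
# Sends COUNT sequence-numbered datagrams of SIZE bytes in a burst and
# reports which ones the receiver actually saw.
COUNT, SIZE, PORT = 9, 25_000, 15555

rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", PORT))
rx.settimeout(0.5)

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for seq in range(COUNT):
    # 4-byte sequence header so the receiver can tell exactly which
    # datagrams went missing, including gaps in the middle.
    tx.sendto(seq.to_bytes(4, "big") + bytes(SIZE), ("127.0.0.1", PORT))

received = set()
try:
    while True:
        data, addr = rx.recvfrom(65535)
        received.add(int.from_bytes(data[:4], "big"))
except socket.timeout:
    pass

missing = sorted(set(range(COUNT)) - received)
print("received", len(received), "of", COUNT, "- missing:", missing)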

The server is actually sending all the UDP data properly. I've 
step-traced the client, and the UDP_Read event is simply never triggered 
for the missing packets. It's like they're lost on the wire...like they 
just disappeared.
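
One theory I haven't ruled out yet is the kernel silently dropping 
datagrams when the socket's receive buffer fills during a burst. That 
would look exactly like what I'm seeing: the server's sends succeed, 
nothing raises an error, and the Read event just never fires for the 
dropped packets. On Linux the UDP receive-error counters from 
'netstat -su' should climb if that's happening. Here's a quick Python 
sketch of checking and enlarging the buffer; whether Gambas exposes an 
equivalent setting is something I'd still have to dig into:

import socket

# If the kernel drops a datagram because the socket's receive buffer is
# full, the send succeeds, no error is raised, and the receiver's Read
# event simply never fires, which matches the symptom.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
print("default SO_RCVBUF:", s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))

# Ask for a 1 MB buffer. Linux caps the request at net.core.rmem_max,
# so the value read back may be smaller than what was requested.
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1 << 20)
print("enlarged SO_RCVBUF:", s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))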

I thought it might be a hardware problem, so I tried it with different 
combinations of bare metal and VMs. I even upgraded my kernel and used 
more recent NIC firmware. No effect at all, so I don't think it's 
hardware or my code.

Anyone have any insight?

-- 
Kevin Fishburne
Eight Virtues
www: http://sales.eightvirtues.com
e-mail: sales at ...1887...
phone: (770) 853-6271




