[Gambas-user] sdl Draw event overhead is killing frame rate

Benoît Minisini gambas at ...1...
Mon Jan 27 13:43:27 CET 2014


On 27/01/2014 02:55, Kevin Fishburne wrote:
> On 01/26/2014 06:49 PM, Benoît Minisini wrote:
>> On 20/01/2014 05:28, Kevin Fishburne wrote:
>>> It must provoke some acid reflux deep within the bowels of SDL. :) I
>>> don't know...it's damn strange for sure. I also find it strange that the
>>> FPS is around 500, but when you minimize the window it jumps to over
>>> 2000. Even if it's just refreshing the window itself, you'd think on a
>>> modern system with hardware acceleration it would be faster than that.
>>>
>>> I attached my test app. It has all the OpenGL commands and variable
>>> declarations commented out, so it's running just an empty SDL loop with
>>> the frame rate calculation and console printout. Feel free to test the
>>> two revisions yourself to see the difference.
>>>
>>> Just thought of something... how can you change the current font SDL
>>> is using? I wonder if changing it from the bitmap font to an arbitrary
>>> TTF (even though no text is being rendered) would make a difference?
>>>
>> I solved my FPS problem on an Intel GPU with the driconf program
>> (which is buggy) and this link:
>>
>> https://wiki.archlinux.org/index.php/Intel_Graphics
>>
>> A flag in the "~/.drirc" file allowed me to disable automatic VSYNC,
>> and now my SDL programs run at full speed.
>>
>> I don't know exactly which graphics driver you are using, but I
>> suggest you look in that direction.
>>
>> Regards,
>>
>
> I installed driconf and created that config file, though it had no
> effect. I'm using an NVIDIA graphics card and driver, and I think that
> solution only applies to Intel chipsets.
>
> I've tried every binary NVIDIA driver in the Kubuntu 13.10 repositories,
> as well as the open source NVIDIA driver. Performance is virtually the
> same on my main workstation with two different NVIDIA cards (the second
> one brand new), in a VM with hardware acceleration enabled, and on my
> server (also with several versions of the binary driver), which uses a
> much older NVIDIA card.
>
> With the binary driver there is a GUI (NVIDIA X Server Settings) that
> allows you to change vsync, page flipping, full screen anti-aliasing,
> anisotropic filtering, etc. I've run my test program using all
> variations of these settings. The only one that makes a difference is
> vsync, which predictably either caps the FPS at 60 or allows it to max
> out around 238. The window size is also largely irrelevant. Even setting
> it to 1x1 pixels gives nearly the same frame rate as a 1280x720 window.
>
> So, in the immortal words of Sherlock Holmes, "when you have eliminated
> the impossible, whatever remains, however improbable, must be the
> truth". With all other possibilities eliminated, I have to wonder what
> exactly Gambas and SDL are -doing- while executing that Draw event loop.
> My test app maxes out one core of my four-core, 3.5 GHz AMD Phenom II X4
> 970 CPU, with a $100+ new video card, vsync disabled and a 1x1 pixel
> render target. It's not giving any idle time back to the system; it's
> using every bit of that CPU core to do -something-. So, what is it
> doing? That's a lot of burned watts to increment a Long datatype 238
> times per second.
>

It's difficult for me to test, as we have different cards.
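
For reference, the vsync flag in "~/.drirc" is Mesa's "vblank_mode"
option, so it only affects Mesa drivers such as the Intel one; it will
not help with the NVIDIA binary driver. For anyone else hitting this,
the file looks roughly like this (based on the vblank_mode option
documented on that Arch wiki page; the driver name may differ on your
system):

<driconf>
    <device screen="0" driver="dri2">
        <application name="Default">
            <option name="vblank_mode" value="0"/>
        </application>
    </device>
</driconf>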

But as soon as we are sure that the CPU core running the Gambas program 
is 100% busy (i.e. there is no vsync), we can run the program under 
valgrind and use kcachegrind to see which functions consume the CPU.

I will do that with my Intel card. Please do the same with your program, 
or with the glxgears example.

To run valgrind:

$ cd /my/gambas/project
$ valgrind --tool=callgrind --num-callers=50 gbx3
...
^C
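$ ls callgrind.out.*    # find the <pid> of the dump callgrind just wrote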
$ kcachegrind callgrind.out.<pid>
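
As a point of comparison outside Gambas, the same "empty loop with an 
FPS counter" only takes a few lines of plain C against SDL 1.2 (just a 
sketch of the idea, not gb.sdl code). If it reaches a much higher frame 
rate than the Gambas version on the same machine, the overhead is on 
our side:

/* empty-loop FPS test in plain C / SDL 1.2. Compile with:
   gcc fps.c -o fps $(sdl-config --cflags --libs) */
#include <SDL.h>
#include <stdio.h>

int main(void)
{
    SDL_Event event;
    Uint32 last;
    long frames = 0;        /* same idea as the Long counter in the test app */
    int running = 1;

    if (SDL_Init(SDL_INIT_VIDEO) != 0)
        return 1;
    SDL_SetVideoMode(1280, 720, 0, SDL_SWSURFACE);
    last = SDL_GetTicks();

    while (running) {
        while (SDL_PollEvent(&event))       /* drain the event queue */
            if (event.type == SDL_QUIT)
                running = 0;

        frames++;                           /* one "frame", drawing nothing */
        if (SDL_GetTicks() - last >= 1000) {
            printf("FPS: %ld\n", frames);
            frames = 0;
            last = SDL_GetTicks();
        }
    }

    SDL_Quit();
    return 0;
}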

Regards,

-- 
Benoît Minisini



