[Gambas-user] sdl Draw event overhead is killing frame rate
Kevin Fishburne
kevinfishburne at ...1887...
Mon Jan 27 02:55:34 CET 2014
On 01/26/2014 06:49 PM, Benoît Minisini wrote:
> On 20/01/2014 05:28, Kevin Fishburne wrote:
>> It must provoke some acid reflux deep within the bowels of SDL. :) I
>> don't know...it's damn strange for sure. I also find it strange that the
>> FPS is around 500, but when you minimize the window it jumps to over
>> 2000. Even if it's just refreshing the window itself, you'd think on a
>> modern system with hardware acceleration it would be faster than that.
>>
>> I attached my test app. It has all the OpenGL commands and variable
>> declarations commented out, so it's running just an empty SDL loop with
>> the frame rate calculation and console printout. Feel free to test the
>> two revisions yourself to see the difference.
>>
>> Just thought of something... How can you change the current font SDL is
>> using? I wonder if changing it from the built-in bitmap font to an
>> arbitrary TTF (even though no text is being rendered) would make a
>> difference?
>>
> I solved my FPS problem on an Intel GPU with the driconf program (which
> is buggy) and this link:
>
> https://wiki.archlinux.org/index.php/Intel_Graphics
>
> A flag in the "~/.drirc" file allowed me to disable automatic VSYNC, and
> now my SDL programs run at maximum speed.
>
> I don't know which graphics driver you use exactly, but I suggest you
> look in that direction.
>
> Regards,
>
I installed driconf and created that config file, though it had no
effect. I'm using an NVIDIA graphics card and driver, and I think that
solution applies only to Intel chipsets.
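In case it helps anyone else reading this later, the relevant "~/.drirc"
entry described on that Arch wiki page looks roughly like the following.
This is reconstructed from that page rather than copied from my machine,
so the driver/screen attributes may need adjusting for a given setup:

<driconf>
    <device screen="0" driver="dri2">
        <application name="Default">
            <option name="vblank_mode" value="0"/>
        </application>
    </device>
</driconf>

Setting vblank_mode to 0 is what disables the automatic VSYNC on the
Intel/Mesa drivers; as noted above, it had no effect on my NVIDIA setup.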
I've tried every binary NVIDIA driver in the Kubuntu 13.10 repositories,
as well as the open source NVIDIA driver. Performance is virtually
identical on my main workstation using two different NVIDIA cards (the
second one brand new), in a VM with hardware acceleration enabled, and on
my server, which uses a much older NVIDIA card (again trying several
versions of the binary driver there).
With the binary driver there is a GUI (NVIDIA X Server Settings) that
allows you to change vsync, page flipping, full screen anti-aliasing,
anisotropic filtering, etc. I've run my test program using all
variations of these settings. The only one that makes a difference is
vsync, which predictably either caps the FPS at 60 or allows it to max
out around 238. The window size is also largely irrelevant. Even setting
it to 1x1 pixels gives nearly the same frame rate as a 1280x720 window.
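(For completeness: with the binary driver the same vsync toggle can also
be flipped per run from the shell instead of through the GUI. If I'm
remembering the variable name correctly, it's something like

    __GL_SYNC_TO_VBLANK=0 ./MyTestApp.gambas

where "MyTestApp.gambas" stands in for whatever the test executable is
called. It behaves just like the switch in NVIDIA X Server Settings:
capped at 60 with it on, around 238 with it off.)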
So, in the immortal words of Sherlock Holmes, "when you have eliminated
the impossible, whatever remains, however improbable, must be the
truth". So with all other possibilities eliminated I have to wonder what
exactly Gambas and SDL are -doing- executing that Draw event loop. My
test app maxes out one core of my four-core, 3.5 GHz AMD Phenom II X4
970 CPU, with a $100+ new video card with vsync disabled and a 1x1 pixel
render target. So it's not giving any time back to an idle process; it's
using every bit of that CPU core to do -something-. So, what is it
doing? That's a lot of burned watts to increment a Long datatype 238
times per second.
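For reference, the skeleton of the test app is nothing more than this. It's
a from-memory sketch rather than the attached project itself, so the exact
gb.sdl property and method names may be slightly off, but it shows the
entire workload:

' MMain.module
Private hWindow As Window   ' gb.sdl window whose Draw event fires every frame
Private lFrames As Long     ' the Long being incremented
Private fLast As Float      ' Timer() value at the last printout

Public Sub Main()

  hWindow = New Window As "Window"
  hWindow.Width = 1280
  hWindow.Height = 720
  hWindow.Show

End

Public Sub Window_Draw()

  ' No OpenGL calls at all: just count frames and print the rate once per second.
  Inc lFrames
  If Timer - fLast >= 1 Then
    Print "FPS: "; lFrames
    lFrames = 0
    fLast = Timer
  Endif

End

That's all the per-frame work that's pegging a core.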
--
Kevin Fishburne
Eight Virtues
www: http://sales.eightvirtues.com
e-mail: sales at ...1887...
phone: (770) 853-6271