driedfruit
SDL_RenderCopy clips dstrect against the viewport, then adjusts srcrect by an "appropriate" number of pixels. This amount is actually wrong, often by quite a lot, because of the rounding errors introduced by the "* factor / factor" scaling:

real_srcrect.x += (deltax * real_srcrect.w) / dstrect->w;
real_srcrect.w += (deltaw * real_srcrect.w) / dstrect->w;
For example: I have a 32 x 32 srcrect and a 64 x 64 dstrect. So far the stretching is done perfectly, by a factor of 2.
Now consider dstrect being clipped against the viewport so that it becomes 56 x 64. The factor becomes 1.75! The adjustment to srcrect can't handle this, because srcrect is in integers.
And thus we now have an incorrect mapping, with dstrect no longer in the right proportion to srcrect.
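To see the truncation concretely, here is a minimal standalone sketch of the same arithmetic, using a hypothetical 7-pixel clip (rather than 8) so the integer division does not come out evenly:

#include <stdio.h>

int main(void)
{
    int src_w = 32;   /* srcrect width */
    int dst_w = 64;   /* dstrect width before clipping: factor 2.0 */
    int deltaw = -7;  /* the viewport clips 7 pixels off the right edge */

    /* The adjustment SDL_RenderCopy applies, in integer math: */
    int adjusted_src_w = src_w + (deltaw * src_w) / dst_w; /* -224/64 truncates to -3, giving 29 */
    int clipped_dst_w = dst_w + deltaw;                    /* 57 */

    printf("factor before: %.3f, after: %.3f\n",
           (double)dst_w / src_w,                   /* 2.000 */
           (double)clipped_dst_w / adjusted_src_w); /* 1.966, no longer 2 */
    return 0;
}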
The problem is most evident when upscaling, like displaying an 8x8 texture with a zoom of 64 or more and moving it beyond the corners of the screen. It *looks* really, really bad.
Note: SDL_RenderCopyEx does no such clipping, and is right to do so. The fix would be to remove any such clipping from SDL_RenderCopy too, and then fix the software renderer, because it has the same fault independently of SDL_RenderCopy.
[attached patch]
This leaves the software renderer buggy, as it does its own clipping later on.
Mason Wheeler
The SDL_RenderGetLogicalSize function should always return the number of pixels currently available for rendering to. But after updating to the latest SDL2, I started getting crashes because it was returning (0,0) as the logical size! After a bit of debugging, I tracked it down to the following code in SDL_SetRenderTarget:
if (texture) {
    renderer->viewport.x = 0;
    renderer->viewport.y = 0;
    renderer->viewport.w = texture->w;
    renderer->viewport.h = texture->h;
    renderer->scale.x = 1.0f;
    renderer->scale.y = 1.0f;
    renderer->logical_w = 0;
    renderer->logical_h = 0;
}
This is obviously wrong; 0 is never the correct value for a valid renderer. Those last two lines should read:
    renderer->logical_w = texture->w;
    renderer->logical_h = texture->h;
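A minimal sketch of how the crash surfaces, assuming an already-created renderer with SDL_RENDERER_TARGETTEXTURE support (the texture size is illustrative):

SDL_Texture *target = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_RGBA8888,
                                        SDL_TEXTUREACCESS_TARGET, 256, 256);
SDL_SetRenderTarget(renderer, target);

int w, h;
SDL_RenderGetLogicalSize(renderer, &w, &h);
/* With the buggy assignment above, w == 0 and h == 0 here, so any
 * caller that divides by the logical size crashes. */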
Charles Huber
If SDL_CreateTexture() takes the !IsSupportedFormat() path, it will return an SDL_Texture* with a NULL driverdata member.
If you then call SDL_GL_BindTexture() on it, you get a segfault in GL_BindTexture() when it unconditionally dereferences driverdata.
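A hedged sketch of the crash path, assuming an already-created OpenGL renderer (the pixel format is an assumption; any format the driver does not support natively will do):

/* INDEX8 is not natively supported by the OpenGL renderer, so
 * SDL_CreateTexture() takes the !IsSupportedFormat() path: */
SDL_Texture *t = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_INDEX8,
                                   SDL_TEXTUREACCESS_STATIC, 64, 64);

float texw, texh;
SDL_GL_BindTexture(t, &texw, &texh); /* dereferences t->driverdata, which is NULL */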
This resolves a lot of confusion around resizable windows. Most people don't expect a viewport to be implicitly set when the renderer is created, and then not be reset to the window size if the window is resized.
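For illustration, a sketch of the expectation described above, assuming an already-created renderer and a typical event loop (the reset-on-resize behavior is what this change provides):

SDL_Event e;
while (SDL_PollEvent(&e)) {
    if (e.type == SDL_WINDOWEVENT &&
        e.window.event == SDL_WINDOWEVENT_SIZE_CHANGED) {
        SDL_Rect vp;
        SDL_RenderGetViewport(renderer, &vp);
        /* The viewport now tracks the new window size; previously it
         * kept the size the window had when the renderer was created. */
    }
}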
Added common test command line parameters --logical WxH and --scale N to test the render logical size and scaling APIs.
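Example invocation (testsprite2 is one of the SDL test programs built on the common framework; the particular program and values are illustrative):

./testsprite2 --logical 320x240 --scale 2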
Marcus von Appen
If one wants to disable the SDL render subsystem, the build breaks on several platforms due to an empty render_drivers array in SDL_render.c.
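For reference, a build configured that way (the option name is assumed from SDL's configure script) would be:

./configure --disable-render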
This lets us change things like this...

if (Failed) {
    SDL_SetError("We failed");
    return -1;
}

...into this...

if (Failed) {
    return SDL_SetError("We failed");
}
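This pattern relies on SDL_SetError() itself returning -1 unconditionally; a simplified sketch of the declaration:

/* SDL_error.h (simplified): always returns -1, so it can be used
 * directly in an error-return statement. */
extern int SDL_SetError(const char *fmt, ...);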
Fixes Bugzilla #1778.
Alexander Hirsch 2012-08-25 20:01:29 PDT
When creating an SDL_Texture with an unsupported format (I'll now refer to it as texture A), SDL_CreateTexture will call SDL_CreateTexture again with GetClosestSupportedFormat to set texture->native (which I will now refer to as texture B).
This causes texture B to be put before A in renderer->textures.
If texture A is explicitly destroyed, everything is fine. Otherwise, upon SDL_DestroyRenderer, the loop will first encounter texture B and destroy it, then encounter texture A and destroy that, which will try to destroy texture->native; since that is already destroyed, it sets an error.
The solution could be as simple as swapping texture A with texture B after texture->native gets set in SDL_CreateTexture.
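A hedged sketch of the scenario, assuming an already-created renderer (the YV12 format is an assumption; any format the active driver emulates through texture->native triggers it):

/* Texture A; SDL internally creates texture B (a->native) right after,
 * so B ends up before A in renderer->textures. */
SDL_Texture *a = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_YV12,
                                   SDL_TEXTUREACCESS_STATIC, 64, 64);

/* An explicit SDL_DestroyTexture(a) here would be fine. Without it: */
SDL_DestroyRenderer(renderer); /* destroys B first, then A; destroying A
                                  tries to destroy its already-freed
                                  native texture and sets an error */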