Fix gray alpha png #2234
base: dev
Conversation
Is there a test for this in cpp-tests?
I don't quite understand how you are not able to reproduce the result, because no other changes were done when I did the test, and definitely no change to […]. Also, please always add screenshots of the actual and expected output. Aside from that, while I am not sure if the existing code in […]

The only way you would still be seeing a problem is if you forgot to pass the correct format of […]. Without that, it would still be treated as RGBA, in which case the modification you made to […]

All this does is allow the existing […]

This is because in […]

You should confirm this in your own test with a breakpoint on the above section of code. So, please double-check your code. Re-test after reverting the change to […]. I think that the change to […]
I think this custom gray+alpha shader solution is probably the best one.
Btw, I think the reason why convertRG8ToRGBA8() expands RG into this "weird" RGRG format is for backwards compatibility with GLES2. In GLES2 and old OpenGL versions there was only the GL_LUMINANCE_ALPHA pixel format for two-channel textures, which loaded the RGBA channels as LLLA, so in shaders you would usually read the ra channels. That format was removed in favor of RG8, and now you read rg. So expanding RG into RGRG is backwards compatible with both GLES2 and current graphics API formats. Though, not very efficient. :)
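To make that compatibility point concrete, here is a minimal standalone fragment shader. It is an illustration only, not engine code; the uniform and varying names simply mirror the snippet below, and the plain location numbers are chosen just to keep it self-contained. It shows that once two-channel data has been expanded to RGRG, the old GLES2-style .ra read and the modern .rg read return the same (gray, alpha) pair.

#version 310 es
precision highp float;

layout(binding = 0) uniform sampler2D u_tex0;  // illustration: assumed to hold RG data expanded to RGRG
layout(location = 0) in vec2 v_texCoord;
layout(location = 0) out vec4 FragColor;

void main()
{
    // With RGRG expansion, rgba = (gray, alpha, gray, alpha).
    vec4 c = texture(u_tex0, v_texCoord);

    vec2 legacyRead = c.ra; // GLES2-era habit: GL_LUMINANCE_ALPHA put luminance in rgb, alpha in a
    vec2 modernRead = c.rg; // RG8 habit: first channel in r, second channel in g

    // Both reads produce (gray, alpha), which is the backwards compatibility described above.
    FragColor = vec4(legacyRead.x, legacyRead.x, legacyRead.x, modernRead.y);
}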
void main()
{
    vec4 c = texture(u_tex0, v_texCoord);
    FragColor = v_color * vec4(c.r, c.r, c.r, c.g);
This will fail on GLES2, because for two-channel textures it uses the GL_LUMINANCE_ALPHA pixel format, which loads data into the shader's rgba channels as LLLA. So for GLES2 you need to use (c.r, c.r, c.r, c.a). You either need an #if defined(GLES2), or use the compatibility macro RG8_CHANNEL defined in base.glsl. Check the videoTextureNV12.frag shader for an example.
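For illustration, here is a sketch of what that compatibility switch could look like. The GLES2 define and the GRAY_ALPHA_CHANNEL macro name are assumptions made up for this example; the real mechanism is the RG8_CHANNEL macro in base.glsl, which may be defined differently, and the plain location numbers stand in for the engine's COLOR0/TEXCOORD0 macros to keep the snippet standalone.

#version 310 es
precision highp float;

// Hypothetical macro for this sketch only; base.glsl's RG8_CHANNEL is the real mechanism.
#if defined(GLES2)
    #define GRAY_ALPHA_CHANNEL a   // GL_LUMINANCE_ALPHA uploads two-channel data as LLLA
#else
    #define GRAY_ALPHA_CHANNEL g   // RG8 keeps the second channel in g
#endif

layout(binding = 0) uniform sampler2D u_tex0;
layout(location = 0) in vec4 v_color;
layout(location = 1) in vec2 v_texCoord;
layout(location = 0) out vec4 FragColor;

void main()
{
    vec4 c = texture(u_tex0, v_texCoord);
    // Gray goes to rgb; the second channel (alpha) is read through the macro.
    FragColor = v_color * vec4(c.r, c.r, c.r, c.GRAY_ALPHA_CHANNEL);
}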
Would (c.r, c.r, c.r, c.a) work for both cases? For RG8, it's being loaded as RGRG into the RGBA channels, so instead of using R and G, why not just use R and A, since A has the same value as G?
@@ -0,0 +1,16 @@
#version 310 es
precision highp float;
precision highp int;
Use lowp precision as default for both float and int. That's enough for simple color processing.
This shader was copied from positionTextureColor.frag, with the only section changed being what is in the main() function. The existing shaders use the same settings for precision, etc.
precision highp int;

layout(location = COLOR0) in vec4 v_color;
layout(location = TEXCOORD0) in vec2 v_texCoord;
Use highp precision for texture coordinates.
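Putting both precision suggestions together, here is a sketch of what the declarations could look like. This is only an illustration under the same inputs as the PR's shader, with plain location numbers instead of the engine's COLOR0/TEXCOORD0 macros so the snippet stays standalone.

#version 310 es
precision lowp float;   // lowp default is enough for simple color processing
precision lowp int;

layout(binding = 0) uniform sampler2D u_tex0;
layout(location = 0) in vec4 v_color;
layout(location = 1) in highp vec2 v_texCoord;  // texture coordinates explicitly highp
layout(location = 0) out vec4 FragColor;

void main()
{
    vec4 c = texture(u_tex0, v_texCoord);
    FragColor = v_color * vec4(c.r, c.r, c.r, c.g);
}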
I've tracked the initialization of the […]

Since the format change in […], I don't see the point of the conversion from RG8 to RGRG, because the pixel format, in the end, still claims it's RGBA! It means that no code can rely on the actual data when […]
Sorry, I'm not clear on the purpose of your post, or what it is you're actually asking. I am also not overly familiar with that section of code, or why the conversions happen, so perhaps the person who can answer it for you is @halx99.
I think it's a bug if you request an RG texture and the actual image is RG, yet it still gets converted to RGBA (loaded as RGRG). But as I've said, it might have been done for backwards compatibility. Maybe it's OK to break it in this case, since it behaves like a bug: if you request RG and supply RG, then you should get RG. Now, if you request RGBA and the image is RG, then it's another question whether it should be loaded as RGRG or RRRG. Maybe it could be a separate feature, where you could request channel swizzles before loading the data into the requested format.
This is mostly what is described by @rh101 in #2230, with the addition of the following:
- Changed convertRG8ToRGBA8 to copy the pixels as RRRA instead of RARA (as introduced in "Remove deprecated pixel formats L8, A8, LA8" #1839).

I could not reproduce the output shown by @rh101 in their comment without the point above. From my understanding, the PNG is loaded into an Image with the correct pixel format (RG8), then this image is passed to a Texture2D alongside a render pixel format set to RGBA. The Texture2D subsequently calls updateWithMipmaps, which may convert the pixels into the render format. By the time the Texture2D reaches the Sprite, the pixel format of the texture is RGBA.

Tested with cpp-tests' Texture2D/5:PNG Test (Linux) and in my app (Linux and Android).
I hereby invoke @smilediver to double-check the loading of two-channel images, and @halx99 for the convertRG8ToRGBA8 part :)