[Kde-graphics-devel] Quasar
Zack Rusin
zack at kde.org
Wed Jun 11 00:37:30 CEST 2008
On Tuesday 10 June 2008 05:34:52 pm Matthias Kretz wrote:
> Ah, then Composition::setOutput(QWidget *) would check for
> qobject_cast<QGLWidget *> and render the textures directly, or if it's not
> a GL widget convert the texture to a pixmap and draw that, right?
Yes (except that I'd like RenderOutputNode to do that selection, based on
what was passed to setOutput) :)
> Things to consider:
> - a composition should allow more than one output
Definitely.
> - split processing and drawing between Composition::execute() and
> Composition::paint(QWidget / QImage / QPixmap / QGLWidget / QPainter). Then
> the app could call execute once and paint the result as often as it wants
> to.
I think this ties in with the one above, right? One would basically create
multiple RenderOutputNodes with different targets as output.
> Sidenote: One could try to make the vertex shader use less instructions.
I don't think we should worry about that. The vertex shader constructed for
the "famous" GL gears example has 15 instructions (due to lighting) and it
runs at several hundred fps even with software GL implementations.
> Anyway. Here's what I had in mind for rotation:
> glMatrixMode(GL_MODELVIEW);
> glPushMatrix();
> glTranslatef(width * 0.5, height * 0.5, 0);
> glRotatef(angle, 0, 0, 1);
> glTranslatef(-width * 0.5, -height * 0.5, 0);
> mesh->render();
> glPopMatrix();
>
> And that's what's currently not easily possible with a FilterNode subclass.
Yea, I'll implement a geometric transformations filter sometime this week.
Hopefully it'll be clearer by then.
> I didn't mean the input texture, but the resulting texture. I.e. the one
> that is written into when rendered onto the fbo. The problem was visible
> with recent Quasar when you had an image that was greater than the
> QGLWidget. Then only the lower left part of that image got processed, the
> rest of the texture was a dark grey.
Ah, interesting, we'll need a test for that. I haven't seen it. The resulting
texture should be the size of the FBO.
Off the top of my head I don't remember anything in the EXT_framebuffer_object
spec that would make it impossible to have FBOs bigger than the surface the
direct rendering context was constructed for. I'll look into it. If you have
a simple example, that would help as well :)
> > > Alternatively the projection could also be glOrtho(0, 1, 0, 1, -1, 1).
> > > Depending on the node this might make sense, so it would be good if the
> > > node can override the projection matrix easily.
> >
> > Any particular reason for it?
>
> Say you want to do a translation in the vertex shader. If the projection
> matrix is glOrtho(0, width, 0, height, ...) then the vertex shader doesn't
> know how far it has to move the vertex to do e.g. a 50% translation to the
> left.
Yea, I'm going to pass the dimensions as a vec4 to the shader. So then the
shader would simply do
uniform vec4 dimensions;
gl_Position = gl_ModelViewProjectionMatrix *
              (gl_Vertex + vec4(dimensions.xy * 0.5, 0.0, 0.0));
or such.
> The shader would need to know the image dimensions - those could be
> passed as uniforms. But if the vertices are "normalized" to the 1x1
> rectangle the uniforms become unnecessary.
But then the vertices in the mesh have to be mapped within the 0-1 viewport
for all the textures.
So it's really a question of where you want to do the math. I also wanted to
keep the more natural 0-width/0-height coordinate mode. I never liked the
fact that Qt puts 0,0 in the upper left, but I guess at some point we'll
need to switch to the coordinate system Qt uses by default to integrate
better.
> Probably QGraphicsView would be the best framework to integrate with?
Yea, I think so.
z