QML Scene Graph in Master

Published Tuesday May 31st, 2011

Earlier this week, we integrated the QML Scene Graph into qtdeclarative-staging.git#master. Grab the modularized Qt 5 repositories and start hacking!

The primary way to make use of it is to run qmlscene, or to instantiate a QSGView and feed it your own .qml file. The import name has been upgraded to QtQuick 2.0, so update your .qml files accordingly. If you do not, the qmlscene binary will still try to load QtQuick 1.0 files as 2.0, but QDeclarativeItem-based plugins will not be loaded.
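As a minimal illustration (the file contents are hypothetical), for many files the only mandatory change is the import line:

```qml
// Hypothetical minimal file for qmlscene.
// Under QtQuick 1.x the first line would have read: import QtQuick 1.0
import QtQuick 2.0

Rectangle {
    width: 320; height: 240
    color: "steelblue"
}
```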

For a quick tour of what the QML Scene Graph is all about, we’ve compiled this video:

The source code for the video is available here. It uses the QML presentation system.

I’ll answer the question of why we are not using an existing system, to preempt the comment: We wanted something small and lightweight to solve the use cases for QML. The scene graph core is less than 10K lines of code (including class documentation) and tailored to our use case. Everything else is integration with QML and Qt, code we would have had to write for any system. We could not have achieved something so lean for QML based on existing technologies.

Disclaimer: Do not take this blog post as the final documentation. I’m explaining the current state of things. They may change in the time to come.

Primary Features

The scene graph is not so much about offering new features as it is about changing the core infrastructure of our graphics stack to ensure that Qt and QML work their best. Some of my colleagues and I did a series of posts outlining our existing graphics stack and some of the issues with it. In the initial scene graph blog I explained how we intend to address these issues.

The QML team is working on additions to QML, like the new particle system, but I’ll let them comment on that when they feel they are ready.

  • Kim already talked about the ShaderEffectItem in his post The Convenient Power of QML Scene Graph. The main idea of the shader effect item is to open up the floodgates and let creativity run loose.
  • We’re using a new method of text drawing. The default is now based on distance fields, which gives us scalable glyphs from just a single texture. This technique supports floating-point positioning, and when GPU processing power allows, we can also use it to do sub-pixel anti-aliasing. This effectively makes Text elements in QML faster, nicer and more flexible than before. We also have a different font rendering mechanism in place that is similar to native font rendering (how we draw glyphs with QPainter today), but it is not enabled at the moment.
  • We have changed some of the internals of Qt’s animation framework to be able to drive animations based on the vertical blank signal. I talked about the concept in my post Velvet and the QML Scene Graph.
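To give a taste of what the ShaderEffectItem mentioned above opens up, here is a hedged sketch of a simple grayscale effect. The element, property and uniform names follow the tech-preview API discussed in this post and may differ in later releases:

```qml
import QtQuick 2.0

Item {
    width: 300; height: 300

    // The item we want to run the effect over; hidden because only the
    // shaded version should be visible.
    Image { id: photo; source: "photo.png"; visible: false }

    ShaderEffectItem {
        anchors.fill: parent
        property variant source: ShaderEffectSource { sourceItem: photo }
        fragmentShader: "
            varying highp vec2 qt_TexCoord0;
            uniform sampler2D source;
            uniform lowp float qt_Opacity;
            void main() {
                lowp vec4 c = texture2D(source, qt_TexCoord0);
                lowp float g = dot(c.rgb, vec3(0.299, 0.587, 0.114));
                gl_FragColor = vec4(vec3(g), c.a) * qt_Opacity;
            }"
    }
}
```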

Public API and Back-end API

We have split the API into two parts: the public API, which we expect application developers to use, and the back-end API, which we expect system integrators to use. The public API contains all the classes needed to render graphics in QML, to introduce new primitives and custom shading, plus some convenience on top of the low-level API. All files that are visible in the generated documentation are to be considered public API. I wish I could point to the public docs, but we don’t have automatic documentation generation for the modularized repositories yet, so that will have to come later.

The back-end API includes things like the renderer and texture implementations. The idea is that we can optimize certain parts of the system on a per-hardware basis where needed. The back-end API is in private headers. Some of it might move into the public API over time, but we are not comfortable with locking down this API just yet.

Rendering Model

The scene graph is fundamentally a tree of predefined nodes. Rendering is done by populating the tree with geometry nodes. The geometry node consists of a geometry which defines the vertices or mesh to be rendered and a material which defines what to do with that geometry. The material is essentially a QGLShaderProgram with some added logic.
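A sketch (not compiled here) of how an item subclass might populate the tree with a geometry node and a material. Class names such as QSGGeometryNode and QSGFlatColorMaterial follow the tech-preview API described in this post; `ColoredQuad` is a hypothetical item:

```cpp
// Called on the rendering thread during the "sync" phase; returns the
// node subtree representing this item.
QSGNode *ColoredQuad::updatePaintNode(QSGNode *oldNode, UpdatePaintNodeData *)
{
    QSGGeometryNode *node = static_cast<QSGGeometryNode *>(oldNode);
    if (!node) {
        node = new QSGGeometryNode;

        // Four vertices describing a quad, drawn as a triangle strip.
        QSGGeometry *geometry =
            new QSGGeometry(QSGGeometry::defaultAttributes_Point2D(), 4);
        geometry->setDrawingMode(GL_TRIANGLE_STRIP);
        node->setGeometry(geometry);
        node->setFlag(QSGNode::OwnsGeometry);

        // The material decides what to do with the geometry; here a
        // solid color, essentially a prepackaged shader program.
        QSGFlatColorMaterial *material = new QSGFlatColorMaterial;
        material->setColor(QColor("orange"));
        node->setMaterial(material);
        node->setFlag(QSGNode::OwnsMaterial);
    }

    // Update the vertex data to match the item's current size.
    QSGGeometry::Point2D *v = node->geometry()->vertexDataAsPoint2D();
    v[0].set(0, 0);
    v[1].set(width(), 0);
    v[2].set(0, height());
    v[3].set(width(), height());
    node->markDirty(QSGNode::DirtyGeometry);
    return node;
}
```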

When it is time to draw the nodes, a renderer will take the tree and render it. The renderer is free to reorganize the tree as it sees fit to improve performance, as long as the visual result stays the same. The default renderer separates opaque geometry from translucent geometry. The opaque geometry is rendered first, ordered by material to minimize state changes, and is written to both the color buffer and the depth buffer. The depth of items is decided by their original ordering in the graph. When drawing the translucent geometry, depth testing is enabled, so if opaque geometry is covering transparent geometry, the GPU will not do any work for the transparent pixels. The default renderer also has a switch for rendering the opaque geometry strictly front-to-back (right now enabled by passing --opaque-front-to-back to qmlscene). Custom renderers using different algorithms to render the tree can be implemented through the back-end API.

The scene graph operates on a single OpenGL context, but does not create or own it. The context can come from anywhere, and in the case of QSGView, the context comes from the QGLWidget base-class. With the graphics stack inversion which is expected to land in master later this summer, the OpenGL context will come from the window itself.

Threading Model

The QML Scene Graph is thread agnostic: it can run on any thread that has an OpenGL context bound. However, once the scene graph is set up in that context / thread, it cannot be moved. Initially we wanted to run QML animations and all the OpenGL calls in a dedicated rendering thread, but because of how QML works, this turned out not to be possible. QML animations are bound to QML properties, which live on QObjects and can trigger C++ code. If we were to process animations in the rendering thread and these called into C++, we would have a synchronization mess, so instead we came up with a different approach.

The OpenGL context and all scene graph related code runs on the rendering thread, unless explicitly stated otherwise in the documentation. Animations are run in the GUI thread, but driven by an event sent from the rendering thread. Before a frame rendering starts, there is a short period where we block the GUI thread to copy the QML tree and changes in it into the scene graph. The scene graph thus represents a snapshot of the QML tree at that point in time. In terms of user API, this happens during QSGItem::updatePaintNode(). I tried to visualize this in a diagram.

The benefit of this model is that during animations, we can advance the animations and evaluate the JavaScript bindings for the upcoming frame on the GUI thread while the current frame is being rendered on the render thread. Advancing the animations and evaluating the JavaScript typically takes quite a bit longer than the rendering itself, so it continues to happen while the render thread blocks for the next vsync signal. So even on a single-core CPU there is a benefit, in that animations are advanced while the render thread is idly waiting for the vsync signal, typically inside swapBuffers().

Integration with QPainter

The QML Scene Graph does not use QPainter itself, but there are different ways to integrate it. The most obvious way is to make use of the QSGPaintedItem class. This class will open a painter on a QImage or an FBO, depending on what the user requests, and has a virtual paint() function which is called when the user has requested an update() on the item. This is the primary porting class when converting a QDeclarativeItem to work with the scene graph. The two classes are API equivalent, but their internal workings are somewhat different. The paint() function is by default called during the “sync” phase, while the GUI thread is blocked, to avoid threading issues, but it can be toggled to instead run on the rendering thread, decoupled from the GUI thread.

Another option is to manually render to an FBO or a QImage and add the result to a QSGSimpleTextureNode or similar.
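A sketch of the porting path described above, assuming the tech-preview QSGPaintedItem API; the setRenderTarget() call for selecting FBO-backed painting is an assumption based on this post, and `MinuteHand` is a hypothetical item:

```cpp
// A QDeclarativeItem-style painted item ported to the scene graph.
class MinuteHand : public QSGPaintedItem
{
    Q_OBJECT
public:
    MinuteHand(QSGItem *parent = 0) : QSGPaintedItem(parent)
    {
        // Paint into an FBO instead of a QImage; both are supported.
        setRenderTarget(QSGPaintedItem::FramebufferObject);
    }

    // Called when update() has been requested; by default during the
    // "sync" phase while the GUI thread is blocked.
    void paint(QPainter *painter)
    {
        painter->setRenderHint(QPainter::Antialiasing);
        painter->setPen(QPen(Qt::black, 3));
        painter->drawLine(boundingRect().center(),
                          QPointF(width() / 2, 4));
    }
};
```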

Integration with OpenGL

We have three primary ways of integrating with OpenGL.

  • The QSGEngine::beforeRendering() signal is emitted on the rendering thread after the rendering context is bound. This signal can be used to execute GL code in the background of the scene graph, with the QML UI rendered on top. Naturally, when using this mode we must not clear the background when rendering the scene graph, and there are properties on QSGEngine to help with that. A typical use case here would be to have a 3D game engine render in the background with a QML UI on top.
  • The QSGEngine::afterRendering() signal is emitted on the rendering thread after the scene graph has completed its rendering, but before swapping happens. This can be used to render for instance 3D content on top of a QML UI.
  • Render to an FBO and compose the texture inside the scene graph. This is the preferred way to embed content inside QML that should conform to QML state like opacity, clipping and transformation. Below is an example of Ogre3D embedded into the QML Scene Graph using an offscreen texture. An easy way to do this is to subclass QSGPaintedItem, use FBO-based rendering and call QPainter::beginNativePainting() in the paint() function.
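A hedged sketch of the first integration point. The exact QSGEngine setter for suppressing the clear (here setClearBeforeRendering()) is an assumption, and `gameRenderer` is a hypothetical QObject; Qt::DirectConnection is used so the slot runs on the rendering thread, where the context is bound:

```cpp
// Custom GL rendered behind the QML scene.
QSGView view;
view.engine()->setClearBeforeRendering(false); // assumed property setter
QObject::connect(view.engine(), SIGNAL(beforeRendering()),
                 &gameRenderer, SLOT(renderBackground()),
                 Qt::DirectConnection);
view.setSource(QUrl::fromLocalFile("hud.qml"));
view.show();
```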

 

Debug Modes

Right now we offer a few environment variables to help track down runaway animations.

  • QML_TRANSLUCENT_MODE=1: Setting this environment variable makes all geometry nodes render with an opacity of 0.5. Some materials may choose to ignore opacity completely, in which case the variable has no effect for them, but these should be few. This is helpful if you have expensive QML elements which are completely obscured by something else.
  • QML_FLASH_MODE=1: Setting this environment variable is similar to the QT_FLUSH_UPDATE we have in Qt. Any QML element that has any kind of graphical update happening to it will get a flash rectangle on top of it for one frame. This is helpful in tracking down runaway animations.

Together these two can for instance be used to track down runaway animations behind the current view.
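For example, both modes can be enabled for a single run (assuming qmlscene is in your PATH):

```shell
# Render everything half-transparent and flash elements as they update
QML_TRANSLUCENT_MODE=1 QML_FLASH_MODE=1 qmlscene app.qml
```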

Where to find us?

When you start using the new stuff, you might find issues to report, features that are missing, or suggestions for improvements. The relevant places to contact us are:

  • Bugs and features: http://bugreports.qt.nokia.com. For scene graph related topics, meaning the rendering APIs, there is a “SceneGraph” component.
  • IRC: #qt-graphics on freenode.net is where most of the graphics people are.
  • Mail: There is the qt5-feedback mailing list, which was set up a few weeks back.
  • We will also have a discussion on QML Scene Graph during the Qt Contributor Summit.

Some Numbers

We thought it would be nice to share a few numbers on where we are right now. Below are the numbers from running the photoviewer demo under demos/declarative, with QML 1 using the raster engine, OpenGL, and Mesa software rendering via LLVMpipe, and QML 2 using OpenGL and Mesa/LLVMpipe. It is run on an Intel Sandy Bridge i7-2600K using the on-die Intel HD Graphics 3000 GPU, on Linux, with Qt 5 HEAD using the XCB back-end (equivalent to X11 in 4.8 for the raster and OpenGL paint engines).

As you can see, the QML Scene Graph gives an overall 2.5x speed-up of an arbitrary QML example compared to the graphics stack we have in QML 1. The other interesting part is that LLVMpipe is in the same range as our software raster engine; in fact it is a little bit faster. This is not too surprising, given that they are essentially doing the exact same thing. With QML 2, the multi-threaded LLVMpipe version is in fact faster than the OpenGL-based QML 1. I hope this helps reduce some of the concerns that have been raised about Qt 5’s dependency on OpenGL.


Posted in OpenGL, Painting, Performance, Qt, Qt Quick

80 comments to QML Scene Graph in Master

João Barbosa says:

Great news!

However, I wonder about the support for blend modes like Photoshop’s (for instance Overlay).

Also, how is the support for masking?

Cheers

João Barbosa says:

Sorry for the double post…

I think the best way to use a QPainter to update a texture is to use a mapped PBO as the data for a QImage. Then updating a texture is very fast. Correct me if I’m wrong!

Do you generate mipmaps? Because when an item’s scale changes too much, the filtering won’t look good without mipmaps. How do you overcome that?

Best regards

gunnar says:

Joao: Everything is possible with shader items :) Right now, we don’t have a suite of default effects, but we expect to build up a few over time, either inside the declarative module or as a standalone module. Masking is done by creating a ShaderEffectItem which combines the alpha from one source with the colors from another. The sources can be QML items, QML subtrees or Image elements if they are static, so you could use a Text for the alpha source and an Image for the color source.

As for updating using a QPainter, both QImage and FBO are available to you. What you use depends on what you want to achieve. QImage typically has higher rasterization quality, with for instance antialiasing, but comes with the extra cost of having to upload the image into a texture afterwards. FBOs are generally very fast and do not require the upload step, but they need to live on the rendering thread and they don’t support the full QPainter feature set. Automatic mipmap generation can be enabled on QSGPaintedItem via setMipmap(true).
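The masking recipe described above could look roughly like this in QML; the item ids and uniform names are illustrative, and the element names follow the tech-preview API discussed in this post:

```qml
// Combine the alpha of one source (a Text item `label`) with the colors
// of another (an Image item `photo`).
ShaderEffectItem {
    anchors.fill: parent
    property variant colorSource: ShaderEffectSource { sourceItem: photo }
    property variant maskSource: ShaderEffectSource { sourceItem: label }
    fragmentShader: "
        varying highp vec2 qt_TexCoord0;
        uniform sampler2D colorSource;
        uniform sampler2D maskSource;
        uniform lowp float qt_Opacity;
        void main() {
            lowp vec4 color = texture2D(colorSource, qt_TexCoord0);
            lowp float mask = texture2D(maskSource, qt_TexCoord0).a;
            gl_FragColor = color * mask * qt_Opacity;
        }"
}
```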

João Barbosa says:

@gunnar

But non-trivial composition like overlay blending requires the currently composited screen, right? So the shader needs to receive the item texture and the background texture. And the rendering order depends not only on opacity but also on content and composition mode.

For instance: the window decorations in Windows 7 blur the windows beneath. Is that possible?

Stephen Chu says:

What does this mean to C++/QWidget programmers?

Will QWidgets be mapped through QSGPaintedItem or will they go through a different internal path? If they go through QSGPaintedItem, will there be one item per widget, or one item per widget hierarchy?

gunnar says:

Joao: Access to the target pixels will not be possible, so it will only work for in-scene content. Some tweaks can be achieved by using GL blend modes, which we currently don’t offer in the Effect API, but you would not be able to composite the screen in such a way, no. I know that certain chips allow reading the target pixel in the fragment shader, in which case this would be possible (which is probably what Windows does internally), but then you are writing hardware-specific shaders. Which is fine, it just locks you down a bit.

Stephen: It does not mean anything. If you don’t intend to use QML, you won’t get the benefits the scene graph has to offer.

Giovanni: That part is not implemented yet, but the most likely candidate is that a widget subtree gets its own window surface, which is rendered with raster and then uploaded to GL. That being said, the biggest performance problem we had with QGraphicsView was QGraphicsProxyWidget. Because of its convenience it has been severely misused and was a major cause of slowdown. There is little to indicate that an easily accessible ProxyWidget scene graph class would perform any better, so we’re looking at alternatives that work and don’t promote misuse. Ideas?

mja says:

Great news; in fact the best news for an otherwise miserable Nokia-news day.

João Barbosa says:

@gunnar

So custom composition won’t be possible at all in this architecture? This is a must-have for developers who implement designs from the Photoshop guys! :)

So how do you determine if an item is transparent or not?

Robin says:

Assuming you are synchronised to vblank as you mention, how is it that you are drawing at what is (presumably) far over the hardware refresh rate for that benchmark?

gunnar says:

Joao: The only use case which is not supported is reading the destination pixel in the fragment shader. This covers all but a few very extreme use cases, which are pretty much never present in the UI specs that I have seen. I suggest you try it out.

Robin: We cheated and disabled vblank syncing when running the benchmark :)

Matt says:

Is the code for these examples available? I’d be very interested to see the code for the OGRE3D example, as I had been planning on doing something like this myself.

Yoann Lopes says:

@Matt
The code for the Ogre3D example is on gitorious at https://qt.gitorious.org/qt-labs/qmlogre
Please note that it currently works only on Linux and Windows.

gunnar says:

Joao: I realize that I wrote “access to source pixels is not possible”. This should of course be target pixels. Without access to source pixels, it would indeed be quite useless… Comment has been updated.

Benjamin says:

Really impressive,

I’m currently building a QML application with a QGraphicsWebView integrating a Youtube video.

Is Webkit + Flash going to be usable with the scenegraph ?

B.A.

gunnar says:

Benjamin: WebKit is not supported today, but I expect the WebKit team will start making things work now that the API is a bit more stable. I don’t think a Qt 5 without a web view for QML would be feasible. What their plans are for Flash, I don’t know.

Michael says:

Will it be possible with the new QPainter / QSGPaintedItem stuff to paint directly in QML? What I mean is something like a simple paint program with the possibility to later select single strokes and change their color, delete them, move them and so on?
By the way … I am excited about the progress!!! :D

Sylv says:

Great news!
Will it be possible to access the depth component using a sampler2DShadow (for GLSL 1.2)? It would be nice to have this feature for at least QtQuick3D (multi-pass shaders on 3D models, shadow maps…)
Any idea of a release date / technical preview for Qt 5?

mario says:

Holy crap! I really hope we’ll enjoy this great stuff in some product

qtnext says:

Good :) … Reading this, I am not sure about one thing: if for example I have a flipping book (Qt3D or OpenGL; imagine for example the curl-page example in Qt3D), can I now render a 2D QML item on each page?

minimoog77 says:

Is it possible to use, for example, DirectX backend for rendering?

And, what I am more interested in: is it possible to render directly to an FBO?

gunnar says:

Michael: How you implement such an application is up to you. One way would be to create custom QSGItems to represent the strokes and use QML to interact with them. Your QSGItem subclass would instantiate geometry nodes to render the strokes with their color. Alternatively, you could implement this canvas idea using plain QPainter in a QSGPaintedItem subclass. Either way would be doable. The QSGItem approach probably requires a bit more low-level code to get the strokes right, but would also run faster in the end.

Sylv: What we are using in the end are just plain OpenGL shader programs, so if you want to use a particular instruction or sampler type, it will be available to you. I don’t think sampler2DShadow is part of the ES spec though, so you might not end up with portable code. Nor am I familiar with that particular sampler, so I don’t know how it would interact with the rest of the scene graph. Bear in mind that this is first and foremost a 2D rendering graph, not a full 3D thing. Lars indicates a beta towards the end of 2011 and a release in 2012 in his Qt 5 blog: http://labs.qt.nokia.com/2011/05/09/thoughts-about-qt-5/

qtnext: yes, you could have a 2D QML item on each page, but with no Qt3D or OpenGL required. You could just implement the page curl as a shader effect item in plain QML 2.

gunnar says:

minimoog77: DirectX is not possible and is out of scope for us. Since we are OpenGL ES based, we should be able to render GL content to DirectX 9 using ANGLE (http://code.google.com/p/angleproject/), which gets you effectively the same thing. This has not been tested though. As for rendering QML to an FBO, the QML scene is currently closely tied to the widget it renders to and can only render there. The scene graph could in theory render to offscreen buffers if the QSGView class were structured a bit differently. I think there is a task for this in the public bugtracker already…

minimoog77 says:

gunnar: I was asking about writing a custom rendering back-end. The question was not related directly to DirectX. :) About the second question, I was more interested in the possibility of rendering a QML item to an FBO (let’s say with some flag) and then using that FBO as the source texture in another QML item. Maybe I want too much. :)

Jens Bache-Wiig says:

Michael: I already did a port of the HTML5 canvas API to Qt Quick 1.0. There is an optimized QtQuick 2.0 version in the making as well. I am not sure if it will be supported officially, but if you are willing to compile the plugin yourself, it already works quite well.
See https://qt.gitorious.org/qt-labs/qmlcanvas

gunnar says:

minimoog77: As OpenGL and GLSL are part of the public API, I don’t see how a DirectX renderer could be made that worked together with the rest of the system. Writing a custom back-end to take advantage of some special GL extensions on a given piece of hardware is fully possible using the back-end API. As for rendering a QML item into an FBO and using it as input to an effect, or just as a cache, that is what the shader effects are all about. There is no C++ API to control this though, only declarative elements.

jens: I think the HTML canvas is already in master. There is at least a qsgcontext2d.cpp in the declarative source tree…

rikrd says:

There’s something I don’t know how to do.

My goal is to have several ListView items and connect a delegate item of one to a delegate item of another with lines (or curves).

I first thought I could inherit from QSGPaintedItem, add a few properties (startx, starty, endx, endy) and reimplement the painting function to draw a line between those two points. But if in QML I want to bind the startx and starty properties to the x and y properties of a given delegate item of a list, they would not be notified when I move the ListView item, since the coordinates are local to the list view and they don’t really change.

I must be looking at this from the wrong perspective, but I have been trying this for quite a while with no success.

I have to say that this blog post was great: a great description of how things work with the new QML Scene Graph. Can’t wait to get my hands on it!

qtnext says:

@gunnar: perhaps the curl page is not a good example, because we can now use shaders … but imagine I want to render one QML scene on each face of a 3D cube, then redraw this in a QML scene using an FBO? In short, is it possible to render a QML scene into an FBO to use as a texture in a 3D scene? I know that it implies managing mouse and touch events in a complex way.

João barbosa says:

@gunnar

So how would you implement a ripple effect like when a widget is added to the Mac dashboard?

minimoog77 says:

@gunnar: Thanks for the answer. About text rendering: you mention that you are using distance fields. I know what they are, but a blog post about how it’s done would be awesome!

Well, after studying JavaScript and seeing how Qt is progressing, especially with Lighthouse, my objections to these major direction shifts are melting away. I feel really good about what you are doing with QML and Qt in general. The changes you have made to get here are amazing. We now have tables in QML and shader effects in OpenGL. Keep up the good work.

gunnar says:

rikrd: If you registered for x/y changes on the complete parent chains of the items you are connecting to and from, you would get notified and could calculate the positions based on that.

qtnext: As I mentioned in a previous comment, we don’t have a function to render the QML scene to an FBO that can be used externally, but we use FBOs extensively internally among other things to do the shader effects. We might add such a function in the future.

Joao: I would have the root frame of the windowing system as one item and the item that gets dropped in as another. This is a compositor effect, and the compositor has access to both the background and the other rendered items, so it can do that.
– When the animation starts, grab the toplevel item into a ShaderEffectSource
– Feed the source to a ShaderEffectItem implementing the ripple
– Add the item that gets dropped as a sibling to the ShaderEffectItem
– Let the ripple fade out
– Kill the effect, return the toplevel item to normal
– Reparent the item that got dropped into its “proper” hierarchy

qtnext says:

Adding a function to render the QML scene into an FBO is a very important feature … and as you said, you use FBOs extensively internally, so I suppose it’s quite easy to add this feature.

sroedal says:

João: A ShaderEffectItem can have multiple sources. So like Gunnar says, for any effect where you need the background you just need to make sure the background items are in a single sub-tree and make a ShaderEffectSource item with that sub-tree as source. Then you can do overlay blending and similar effects that require the background pixels by having one source for the background and one for the foreground. ShaderEffectSource lets you specify a source rectangle too so that the whole sub-tree doesn’t need to be rendered if you’re only interested in a small part.

Max says:

Hello!

I have a short question. Am I right that I need graphics acceleration to use these effects? Do I need graphics acceleration to use QtQuick 2? I have a very small embedded system with limited hardware resources. At the moment QML is no problem. What will change with Qt 5?

Regards
Max

Gunnar Sletta says:

Max: Qt 5 will require OpenGL support, either in hardware or in software using a solution like Mesa with LLVMpipe. The numbers at the bottom of the blog show what the performance of that is.

João Barbosa says:

@gunnar @sroedal

Ok, I understand. But if a sub-tree is a source to an effect, is it then rendered twice?

sroedal says:

@João: you can choose to hide the source tree used for generating a ShaderEffectSource (which is really an FBO behind the scenes). Thus you have the option of rendering both the source tree and the shader effect result, or just the shader effect result.

André says:

Really, really impressive. The text rendering looks awesome and so smooth, and I like the demo integrating custom OpenGL with QML and the scene graph. Also nice to see that porting from QML 1 has been considered by introducing QSGPaintedItem.

João Barbosa says:

@sroedal @gunnar

IMHO the scene graph should support composition effects, so that a source could be an item or the currently composited content. For the developer it would be much simpler to do custom blend modes or things like blurring the background before painting the item.

Do you benchmark with lots of (large) shader items? And with items with size animations? I ask this because switching FBOs and resizing FBOs cause a huge bottleneck…

Gunnar Sletta says:

Joao: It is fairly easy to redirect a given QML tree into a shader item. Redirecting the current render output is generally not supported on graphics hardware and would require partial rendering, which would be horrendously slow, so this is not going to happen. Resizing FBOs can be costly, so it’s better to structure the code so that this does not need to happen. I don’t think I have written a shader yet where resizing the FBO was necessary.

João Barbosa says:

@gunnar

But if an item is resizing, and that item is a source of a shader item, then the internal FBO will resize, right?

Tom Cooksey says:

Awesome. :-)

rikrd says:

@gunnar

Ok, thanks, that seems like a solution. I was just wondering if QSGItem will have an equivalent flag to QGraphicsItem::ItemSendsScenePositionChanges and an equivalent method to QGraphicsItem::mapToItem(), which would make all this much easier.

Now I just have to figure out how to do drag and drop between QSGItems and I will be good to go!

hate-engine says:

Numbers are nice, but, well, an i7 is very modern hardware. May I ask you to do the benchmarking on something less awesome, like a Core 2 or a Pentium 4?

Gunnar Sletta says:

Joao: indeed it would, but you could just as well use a fixed-size parent for the FBO to avoid it, or use a fixed FBO size for a varying-sized item. There are plenty of solutions, IF this should turn out to be a problem in practice. I suggest you try it out.

rikrd: that sounds like a suggestion for QML 2, please file it under bugreports.qt.nokia.com

hate-engine: The point is that QML2+LLVMpipe > QML1+raster. If we ran it on a slower CPU, the numbers for the software implementations would be correspondingly worse across the board. Since most people have claimed that raster is sufficient, and QML 2 + LLVMpipe performs better than raster, it is a gain regardless.

hate-engine says:

@Gunnar Sletta
>The point is that QML2+LLVMpipe > QML1+raster.
Thanks! I’ve missed that :)

Aris says:

Hello !
Nice work! What does this mean for Qt-based CAD systems?
I noticed that when adding a large number (e.g. 200,000) of QGraphicsItems to a QGraphicsScene, the system becomes very slow.

Cliff Yapp says:

Remarkable! That Ogre/Qt demo in particular is impressive: there have been a variety of efforts to integrate Qt and Ogre over the years, and a clean, supported way to do that is very, very exciting.

Two questions: is the Ogre/Qt demo licensed the same way Qt itself is, and what is the current issue keeping it from working on the Mac?

Cheers, and thanks for the awesome work!

CY

Rems says:

Hello,
I have a quite similar concern: I’m currently using QGV for dynamic geographic vector maps (100,000s of items, on desktop). There’s no eye candy, but the performance is acceptable. What should I do when Qt 5 is out? Should I consider using QGS (and QML then?) or keep on with QGV (though almost deprecated) for this use case?
Anyway, it’s a very nice presentation, and I’m sure I will try this at home…
Anyway, it’s a very nice presentation, I’m sure I will try this at home…

Yoann Lopes says:

@Cliff Yapp
The Ogre/QML demo is under the BSD license.
It should not be considered an official and supported way of integrating QML with Ogre, but rather a proof of concept.
I haven’t investigated the problems on Mac, all I know is that the rendering is messed up. It is probably caused by OpenGL being left in a wrong state by Ogre, preventing Qt to render properly.

Yoann Lopes says:

Instead of “It is probably caused by OpenGL being left in a wrong state by Ogre, preventing Qt to render properly.”
Read rather : It is probably caused by OpenGL being left by Ogre in an unexpected state for Qt to render properly.

Yoann Lopes says:

@minimoog77
I have planned for a long time to write a blog post about the new text rendering in the QML Scenegraph (distance fields), but I couldn’t find the time to do it… Don’t worry, it will come shortly ;-)

Gunnar Sletta says:

Aris / Rems: You are touching on a use case that the scene graph is not intended to solve. For a simple implementation that supports up to a few hundred thousand items, mostly static and with only a small subset of them visible on screen, I would stick with QGraphicsView.

The QML Scene Graph is first and foremost implemented to render QML as fast as possible. One thing the scene graph does not do is clip away items whose bounding boxes are outside the screen. The scene graph will sort, reorder and render everything it is given. It does so in a more optimal manner than QPainter / QGraphicsView would, but if it is told to draw too much, it will overload.

Some of the QML items, such as the list view, manage their children and discard the elements that are outside the view, but that logic sits one level up in the stack from the scene graph. The result is that lists are fast, because there is no need to decide on bounding-box intersection in device space at the rendering level; on the ListView level, this is a fairly simple check. So, if you just put 200,000 items into a scene, I doubt it would run faster than QGraphicsView does today.

However, say that you implemented a Node in the graph that could swap items in and out depending on where in the scene the view is, based on zoom factor and so on. Then I have no doubt that rendering such a scene through the scene graph would beat QGraphicsView / QPainter, but it does require more work from the user.

So, in conclusion. Feel free to poke around and try it out and if you think it is worth it, try to implement it. If you don’t think it is worth the effort and you’re happy with the way QGraphicsView works today, then stay with QGraphicsView. The classes are not going away :)
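A rough illustration of the swap-in/swap-out idea Gunnar describes, in plain C++ with no Qt dependencies. The `Item`, `Rect` and `visibleItems` names below are hypothetical (a real implementation would live inside a custom scene graph node), but the core bookkeeping is just an intersection test against a viewport expanded by a margin:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical item record: an axis-aligned bounding box in scene coordinates.
struct Item {
    float x, y, w, h;
};

struct Rect {
    float x, y, w, h;
    bool intersects(const Item &it) const {
        return it.x < x + w && it.x + it.w > x &&
               it.y < y + h && it.y + it.h > y;
    }
};

// Return the indices of the items whose bounding boxes touch the viewport,
// expanded by a margin so items are swapped in slightly before they scroll
// into view. Only these would be kept as live nodes in the scene graph;
// everything else stays out of the graph entirely.
std::vector<std::size_t> visibleItems(const std::vector<Item> &items,
                                      Rect viewport, float margin)
{
    viewport.x -= margin;
    viewport.y -= margin;
    viewport.w += 2 * margin;
    viewport.h += 2 * margin;

    std::vector<std::size_t> out;
    for (std::size_t i = 0; i < items.size(); ++i)
        if (viewport.intersects(items[i]))
            out.push_back(i);
    return out;
}
```

A GIS or CAD application would typically replace the linear scan with a spatial index (quadtree, R-tree) once the item count grows, but the contract stays the same: the scene graph only ever sees the survivors.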

Tom Cooksey says:

What was the CPU load when running on the GPU? Presumably QSG has significantly lower overhead?

Aris says:

Gunnar Sletta :
“…Say that you implemented a Node in the graph that could swap in and out items depending on where in the scene the view is…”

Current GIS technology has a lot of tools for fast background item clipping, generalization / simplification, etc. I am very excited to read here about animations and rendering threads; these would be very big for a custom GIS application.

So, maybe this tool was not designed with GIS in mind, BUT if swapping graphical objects in and out of the scene is really fast (what about 5000 dynamically painted polygons?), then the Scene Graph will also be a nice base for CAD/GIS app development.

Thank you.

Gunnar Sletta says:

Tom: I’m pretty sure we’re not GPU bound in the photoviewer example, so you could get the relative CPU load from the framerates. The multithreaded LLVMpipe one might behave slightly differently, but for that particular example, if we clamped it to 60 FPS, the QML 2 one would run at about 2.5x lower CPU load. This fits with what we normally observe: QML 2 uses less CPU when clamped to 60 FPS.

Aris: The cost of adding and removing nodes is certainly lower than in a QGraphicsScene, but 5000 polygons that are tessellated on the fly might be on the costly side. If they are pre-tessellated, it might be a match made in heaven. If you get a chance to try, I would love to hear about the results.

qtnext says:

@gunnar: sorry to insist, but, for example, if I want to render a whole QML scene into an FBO and then reuse it to do some 3D stuff, would it work to do it like this:

signal beforeRendering() -> set OpenGL to render into my FBO
---- QML rendering (I suppose multiple FBOs are used inside QML ...)
signal afterRendering() -> stop OpenGL rendering into my FBO
-> then draw the FBO in my OpenGL view and do what I want (you said that I can use a shader, but that’s not enough for me; I’d like to have more control)

Does that make sense?

Gunnar Sletta says:

qtnext: Your suggested method would work (I think) unless you have shader effects or offscreen render targets used inside QML, like shader items or QSGPaintedItems. In that case they will rebind to the default FBO (normally 0) and thus unbind your FBO. This is simply not supported at the moment. There is a task for it on the bugtracker. Vote it up if you need the feature :)

Stefan says:

I’m currently concerned about a very specific use case: would it be feasible (and performant) to implement a plotting widget on the QML Scene Graph? It should draw mostly lines and scatter points (think little crosses at the data points) with few material changes, but for some 10,000 or so data points, and should preferably do that blazingly fast. :-D

shartte says:

@Stefan: Since you can use the full extent of OpenGL 2 to render your custom items, you should be able to do this with full hardware acceleration in “native” OpenGL, and embed it into a QML scene.
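For what it’s worth, the expensive part of such a plot item is usually getting all the points to the GPU in one batch rather than drawing them one by one. A minimal, Qt-free sketch of that step (the `packVertices` name is made up; in a real item the resulting array would be uploaded to a VBO, or copied into a scene graph geometry node’s vertex buffer):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Pack (x, y) samples into one interleaved float array suitable for a single
// GL_POINTS / GL_LINE_STRIP draw call. One allocation, no per-point state
// changes: for 10,000 points the per-frame cost is essentially a memcpy
// into the vertex buffer.
std::vector<float> packVertices(const std::vector<double> &xs,
                                const std::vector<double> &ys)
{
    const std::size_t n = std::min(xs.size(), ys.size());
    std::vector<float> verts;
    verts.reserve(2 * n);
    for (std::size_t i = 0; i < n; ++i) {
        verts.push_back(static_cast<float>(xs[i]));
        verts.push_back(static_cast<float>(ys[i]));
    }
    return verts;
}
```

With the data batched like this, 10,000 points is a single draw call with one material, which is exactly the shape of workload the scene graph renders well.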

Anojan S. says:

Does the QML scenegraph integrate with Qt3D’s QGLView? I have a custom QGLView class and would like to use the QML scenegraph for the UI.

Brynjard says:

Yes, finally we got vsync’ed animations and desktop rendering!
Microsoft, take notice (can’t believe I’m still patiently waiting for them to “fix”, i.e. rediscover vsync, their desktop rendering engine).

qtnext says:

I am trying to grab the qt5 git on Windows… I ran git clone git://gitorious.org/qt/qt5.git, then executed the perl script init-repository. It works, but I don’t see the scenegraph in the declarative module; it’s graphicsitem based… Do I need to do something?

Marcel Wiesweg says:

Reading your answer that the scene graph does not clip items outside the screen: what about the case of one very large image item clipped in a Flickable? The pixel size of that item may easily exceed memory.
Will there be a way to paint only the currently visible part of that item, or would I need to implement that logic in a custom class?

Rems says:

@gunnar thanks a lot for your answer

JubiluM says:

@gunnar:
“Stephen: It does not mean anything. If you don’t intend to use QML, you won’t get the benefits the scene graph has to offer.”

Ok, so is it a decision of a political nature not to give the benefits of the scene graph system to C++ programmers?

After all, it’s C++ behind the curtains, so it would be technically possible to bring this within C++ programmers’ reach as well, am I right?

I highly like and appreciate your effort in invention and research, but the new atmospheric change concerning C++ users is just… well, better not say anything…

SumWon says:

@Anojan S.: There isn’t a way to use Qt3D with SG yet. The Qt3D guys have it in their TODO list…

@qtnext: You need to use the declarative from staging if you want SG. git://gitorious.org/+qt-developers/qt/qtdeclarative-staging.git

@Gunnar @qtnext: I have SG rendering into an FBO using Lighthouse. There are some (small) patches that need to be done to QtOpenGL, but nothing has to change in SG.

Gunnar Sletta says:

qtnext: https://qt.gitorious.org/+qt-developers/qt/qtdeclarative-staging/trees/master/src/declarative/scenegraph. I just grab the staging repos of qtbase, qtsvg, qtxmlpatterns, qtscript and qtdeclarative, and compile them in that order.

Marcel Wiesweg: If the image does not fit in graphics memory, it will fail. We are not doing anything special to handle that particular case, but a custom user class could.

JubiluM: The scene graph is usable from C++, but only in the context of QML based applications. The reason for not applying the scene graph to the QWidget based stack is that it requires declarative graphics, and the widgets are very imperative due to their use of QPainter in the paintEvent() function. There would be no benefit in using the scene graph approach for the graphics there. The reason is purely technical.
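Gunnar’s earlier suggestion of a custom class for Marcel’s oversized-image case boils down to tiling: keep only the tiles that intersect the Flickable’s visible rect resident as textures, and evict the rest. A small, Qt-free sketch of the index math (all names here are illustrative, not part of any Qt API):

```cpp
// For an image far larger than graphics memory, only the tiles overlapping
// the visible rect need to be resident as textures. This helper computes
// that tile range; uploading and evicting the actual textures is left to
// the custom item. Assumes a non-negative viewport inside the image.
struct TileRange {
    int x0, y0, x1, y1; // inclusive tile indices
};

TileRange visibleTiles(double viewX, double viewY,
                       double viewW, double viewH, int tileSize)
{
    TileRange r;
    r.x0 = static_cast<int>(viewX) / tileSize;
    r.y0 = static_cast<int>(viewY) / tileSize;
    r.x1 = static_cast<int>(viewX + viewW - 1) / tileSize;
    r.y1 = static_cast<int>(viewY + viewH - 1) / tileSize;
    return r;
}
```

Each frame the item compares the new range against the previous one, uploads tiles that entered it and frees tiles that left, so resident texture memory stays proportional to the viewport, not the image.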

Rupert says:

Looks great, I hope this is powerful enough to swim against the stream:

http://www.engrish.com/2009/09/browntooth-enabled/

salmon style

JubiluM says:

Gunnar Sletta:
Ok… so let’s think about it for a second. The scene graph requires declarative graphics. But the whole declarative stuff is C++ underneath, and QML is just an upper-level abstraction over all this, am I right? So in principle one could make some convenient class(es) to use all this scenegraph/QML niceness in a C++/.ui based application?

Gunnar Sletta says:

JubiluM: C++ yes, but not for QWidgets and .ui files for reasons already explained.

JubiluM says:

Ok… so would it be impossible to construct a .ui/QWidget based interface, embed, say, a QDeclarativeView into it, and use that to display scenegraph stuff? QDeclarativeView can be used to display QML content in a widget based application, right?

Sorry about these simple questions; I haven’t really had enough time to get a picture (by getting into the Qt source) of what really can and cannot be done.

The imperative and/or declarative nature of the code is not in itself a showstopper, as it all really turns into C++/QObjects behind the QML abstraction. You would open a lot of doors and possibilities for designers by letting users mix .ui based and QML based content (it is already possible). These scenegraph based demos look especially interesting.

Thanks.

JubiluM says:

“Grab the modularized Qt 5 repositories and start hacking!”

I’d love to, but I’m in pain trying to grab it :) !

Ok, if you try to download master as a .tar.gz, you get a few megabytes of… something. The instructions in the readme tell you to start with ./init-repository. Well, ok, I get:

+ git submodule init
fatal: Not a git repository (or any of the parent directories): .git
git submodule init exited with status 32768 at ./init-repository line 254
Qt::InitRepository::exe(‘Qt::InitRepository=HASH(0x100804d20)’, ‘git’, ‘submodule’, ‘init’) called at ./init-repository line 355
Qt::InitRepository::git_submodule_init(‘Qt::InitRepository=HASH(0x100804d20)’) called at ./init-repository line 573
Qt::InitRepository::run(‘Qt::InitRepository=HASH(0x100804d20)’) called at ./init-repository line 592

Since I am no git expert, I cannot really tell what the heck is going on. It would serve non-git-users if one could really “grab it” and start hacking. Most repos let you download the branch as a zipped file, but somehow this particular thing is a differently configured beast. Why, oh why, and what should I do to get started with hacking?

Thanks in advance, again :) !

P.S. I’m on mac

babraq says:

JubiluM: Instead of downloading a snapshot from gitorious.org, try to _clone_ the qt5 repo:

git clone git://gitorious.org/qt/qt5.git
cd qt5
./init-repository

However, my attempts to build the modularized Qt always fail because of the WebKit tests, even with the “-nomake tests” and “-no-webkit” arguments given to configure.

Is there a way to build only the modules I specify?

JubiluM says:

babraq:

Thanks for the info. Obviously there is a way to grab it and build it, since this nice demo has seen the light of day :) !

But really, it would help a lot if there were some info on the matter, something more than “grab it and start hacking”, if it is really more complicated than that.

Thanks.

Michele says:

@Gunnar Sletta, @Max: Regarding performance on embedded platforms, I think that a comparison between software and hardware rendering done on such a powerful PC doesn’t reflect the difference you find on an embedded platform.
On the platform I’m working on (a dual-core 400 MHz MIPS), using DirectFB acceleration and avoiding operations that fall back to software rendering, we were able to build QML interfaces that do fullscreen updates at 50/60 fps (synchronized with vblank) at 720p resolution, and the same frame rates for partial updates at 1080p resolution.
Using software rendering, the results are acceptable for small update areas but unusable for larger areas or fullscreen updates. The numbers vary a lot depending on the operations, but just to give one, I can say we are at around 4 fps.

So I think this choice to depend so tightly on OpenGL 2.0 can be a really big change for embedded platforms that don’t have OpenGL 2.0 support, and can make the difference between a good system and an unusable one.

Generally speaking, even for desktop platforms, for a framework as generic and broad as Qt, this dependency seems a bit forced to me. Just my opinion.
Anyway, I understand that Qt is part of a company that has its own focus, targets and constraints, and that it is free to make the choices it thinks are best.

Regards


Commenting closed.