Qt Quick 2.0

Qt Weekly #8: Custom Bezier easing curves in Qt Quick 2

Published Thursday May 1st, 2014

Qt Quick 2 comes with a large number of predefined easing curves for animations. It is, however, also possible to define custom easing curves using cubic Bézier curves.

Bézier curves are typically used in computer graphics to model smooth curves, but they can also be used in the time domain to define easing curves. As a result, users of Qt Quick 2 are not limited by the built-in easing curves and can instead define their own custom ones. This allows users to mimic very specific physics of an animation, for example.

Whether it is the transition between screens or the behavior applied to UI controls that can be interacted with, motion has become a significant element of mobile design. Therefore it is also important to control more subtle elements of animations like easing curves.

The custom Bézier easing curve is defined as a JavaScript array of control points.

PropertyAnimation {
    easing.bezierCurve: [0.43, 0.0025, 0.65, 1, 1, 1]
}
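The six values are read in pairs as (x, y) control points: with an implicit start point at (0, 0), the array above describes one cubic segment with control points (0.43, 0.0025) and (0.65, 1), ending at (1, 1), where x is time and y is animation progress. As a rough illustration of the math involved (a self-contained sketch, not Qt's actual implementation; `bezierEase` is a hypothetical helper), the easing value for a given time can be found by solving the curve's x component for the Bézier parameter and then evaluating y:

```cpp
#include <cmath>

// One coordinate of a cubic Bézier from 0 to 1 with inner control
// values p1 and p2, evaluated at parameter t in [0, 1].
// P0 = 0 and P3 = 1, so those terms simplify away.
static double bezier(double t, double p1, double p2)
{
    const double u = 1.0 - t;
    return 3.0 * u * u * t * p1 + 3.0 * u * t * t * p2 + t * t * t;
}

// Easing: given time x in [0, 1], find t with bezier_x(t) == x by
// bisection (x(t) is monotonic when the control points stay inside
// the unit square), then return the curve's y at that t.
double bezierEase(double x, double x1, double y1, double x2, double y2)
{
    (void)y1; // y1/y2 are only used on the y axis below
    double lo = 0.0, hi = 1.0;
    for (int i = 0; i < 50; ++i) {      // bisect until t is precise enough
        const double mid = 0.5 * (lo + hi);
        if (bezier(mid, x1, x2) < x)
            lo = mid;
        else
            hi = mid;
    }
    const double t = 0.5 * (lo + hi);
    return bezier(t, y2 == y2 ? y1 : y1, y2) == 0.0 && false
           ? 0.0
           : bezier(t, y1, y2);
}
```

For the curve above, `bezierEase(0.5, 0.43, 0.0025, 0.65, 1.0)` gives the animation progress at the halfway point of the animation's duration.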




Posted in Declarative UI, Graphics, Learning, Mobile, Qt in depth, Qt Quick 2, Qt Quick 2.0

Implementing in-app purchase on Android

Published Thursday December 12th, 2013

In-app purchase is an increasingly popular form of application monetization, especially in saturated markets such as mobile app stores, since it enables users to download applications for free and try them out before they pay. When the only thing you have to lose is time, the threshold for installing an unknown application is likely a lot lower than if you had to pay up front.

But you already know this, because you have been asking us regularly how to do it in Qt for the past few months. Our short answer is that Qt does not have a cross-platform API for this, at least not yet, so you will have to add some platform-specific code to your application.

This blog post is the long answer. Using a simple game as an example, I’ll go through each of the steps to enable in-app purchases in an Android application. The application source is also available, so you can take a look at it before doing your own implementation.

So what is it?
For those of you who have not been asking us about this, and therefore cannot be proven to know what in-app purchases are, I’ll give a very quick overview.

In brief, in-app purchase means this: instead of paying to download and run an application, certain features of the application are available for purchase while you are already running it. Some examples of the types of purchases the application can provide are:

  • Subscription to content, like in an online magazine with monthly content updates.
  • Consumable items, e.g. a magic potion in a game, of which you can buy an unlimited amount.
  • Or a permanent purchase to unlock vital features of the application. For instance, the Save function in a photo editing application might be disabled until you’ve paid an in-app fee, so that users can test the application before they decide if it’s worth the price.

The way this all works in Google Play is that you add one or more “In-app Products” to the store listing for your application. You give each item a product ID (SKU), and add code in your application to launch the purchase process. The process itself is out of your hands. The user will be asked to confirm the purchase, and your application will be informed about the status of the transaction asynchronously. I will not go into the details of the APIs in this blog, so if you’re planning to implement something like this, make sure you read the Android documentation on the subject first.

The example
As my example, I’ve written a simple Hangman game in QML. Consonants are free (except for the traditional non-monetary penalty of wrong guesses), but you have to pay a minimal fee for vowels. The application is available for download, so you can quickly run it and see it in action, but rather than buy any vowels in the downloaded game, you can compile it yourself from source, upload it as a draft to your personal Google Play Developer Console and run it unpublished. As long as your personal version of the application remains unpublished, you can test the full purchase process without actually being charged. Just make sure you add your test account in the Settings page of your Developer Console.

I should note that if this were a proper game, I would probably have users pay for packs of, say, 50 vowels, both to avoid overpricing and to avoid going through the purchase steps every time they want to guess a vowel. But for this example, that would only make the code more complex, so I’ve kept it as simple as possible.

The game
First, a quick run-through of how the game works: I started by developing everything in the game on my desktop machine, leaving out the platform-specific purchase code for now. This approach has several benefits over writing directly for device:

  • It makes iterative testing faster, as the application does not need to be deployed before it can be run.
  • It makes it easy to test that the application adapts to different screen sizes, as I can easily resize my main window and see it adapt in real time.
  • And it forces me to write the application in a cross-platform way, so that it can be ported to other platforms later with relatively little effort.

Regardless of what type of application I’m writing, I’ll usually try to take this approach, and if the application depends on inputs or features that are only available on the target platform, I’ll make an effort to simulate them in the desktop build.

Qt Hangman Screenshot

Click the letters to guess

The game itself is just a scene in QML, hooked up to a Data object written in C++. The Data object is used to manage the current solution word and the guesses. You play the game by selecting letters at the bottom. If you select a vowel, you will be asked to pay before it is tested against the solution. You can also wager a guess for the full solution if you think you’ve got it. That part is free ;)

The word list is based on the public domain ENABLE dictionary by Alan Beale and M. Cooper. This initially contained a lot of words we had never heard before, but Paul Olav Tvete limited it to words used in the Qt documentation or on the Qt Development mailing list, so the game should be familiar to avid Qt users :)

As you make guesses, they will either be displayed as part of the solution if they are contained in the word, or one line of the hangman picture will be added to the square in the middle of the screen. When the entire drawing is finished, the game will be over. If you manage to guess the solution before this happens, you win the game.

Qt Hangman Game Over screenshot

If you make too many wrong guesses, it’s game over.

You can click on the Reset button in the top left corner to get a new puzzle, picked randomly from the ENABLE dictionary, or you can hit the Reveal button to give up and show the solution.

I won’t go into any more detail about the game itself. The interesting part here is the in-app purchasing, so I’ll spend the rest of the blog post on that.

On iOS
Before I continue with the technical details, I’ll briefly mention iOS as well: As I said, while the example code is not 100% cross-platform, it is structured to be easily adaptable to other platforms, attempting to minimize the amount of extra code you have to write to port it. And in fact, Andy Nichols has already written some code to move the game to iOS; it is nearly done, but not quite ready for release just yet. He will blog about this later on, but the code we have so far is already in the source repository for you to look at.

In-app purchase: Structure
So, I wanted to finish the desktop version of the game before implementing the Android-specific part. As an abstraction of the platform-specific code I would have to add in later, I identified the need for the following function to take care of actually purchasing a vowel from the market place:

void buyVowel(const QChar &vowel);

Since the Android in-app purchase APIs are asynchronous, I also added a signal which is emitted when the purchase succeeds:

signals:
    void vowelBought(const QChar &vowel);

When this signal is emitted, it means the transaction has been made, and I add the selected vowel to the list of current guesses. In order to run the application on desktop (and other platforms as well), I add a file named data_default.cpp with the following default implementation of the buyVowel() function:

void Data::buyVowel(const QChar &vowel)
{
    emit vowelBought(vowel);
}

This code will never be compiled on Android, but for other platforms, it will imitate a successful purchase. To avoid compiling the code on Android, I add the following to my .pro file:

android: SOURCES += data_android.cpp
else: SOURCES += data_default.cpp

Now it’s quite easy for me to add an Android-specific implementation of buyVowel() in data_android.cpp, and also to add implementations for other platforms down the road.

The Java code
Since the Android APIs are based in Java, I wrote the bulk of my implementation in Java and later accessed it through JNI from my C++ application. I won’t go into detail about the actual usage of the Android APIs, since that’s already thoroughly documented in the official documentation. I will, however, highlight a few areas of particular interest in the Java code for my game.

First of all, I needed to add the Android-specific files to my project. I started by adding an AndroidManifest.xml file to my project using Qt Creator.

Screenshot of "Create Manifest" button in Creator

Qt Creator gives you the option to quickly add a default manifest to your app.

I chose to put the manifest in the subdirectory android-source. After adding this directory, all my Android-specific files can go into it. In general, it should contain the files you want to add to the Android project and the directory structure should follow the regular Android project directory structure. The contents of the directory will later be merged with a template from Qt, so you should only put your additions and modifications here. There is no need to include an entire Android project.

Next, I added a new Activity subclass. Java sources need to go in the src directory to be recognized correctly when building the package, and in a subpath which matches the package namespace of the class. In my case I placed the class in android-source/src/org/qtproject/example/hangman/.

To make sure Qt is loaded correctly, I had to subclass Qt’s default Activity class. This is very important.

import org.qtproject.qt5.android.bindings.QtActivity;
public class HangmanActivity extends QtActivity

I also had to make sure to call into the super class from all reimplementations of methods. Like here:

@Override
public void onCreate(Bundle savedInstanceState)
{
    super.onCreate(savedInstanceState);

    bindService(new Intent("com.android.vending.billing.InAppBillingService.BIND"),
                m_serviceConnection, Context.BIND_AUTO_CREATE);
}

In my game, the Activity is a singleton, so I store a static reference to the object in the constructor:

private static HangmanActivity m_instance;
public HangmanActivity()
{
    m_instance = this;
}

(My C++ Data class has the same logic. I’m doing this so that I can facilitate communication between the Java and the C++ code using static methods. For a more complex example, it is also possible to store, in each C++ and Java object, a reference or pointer that maps it to its equivalent in the other language, but that is not necessary in this game.)
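The C++ side of this pattern looks much the same. A minimal sketch of the static-instance idiom, stripped of Qt (the real Data class is a QObject and emits a signal where this sketch just records the vowel; `onVowelBought` stands in for the JNI callback described later):

```cpp
// Minimal sketch of the static-instance pattern used on both the Java
// and the C++ side: the constructor records the single instance so that
// free functions (such as JNI callbacks) can reach it without a pointer.
class Data
{
public:
    Data() { s_instance = this; }
    static Data *instance() { return s_instance; }

    // Stand-in for the real slot that would emit vowelBought().
    void vowelBought(char vowel) { m_lastVowel = vowel; }
    char lastVowel() const { return m_lastVowel; }

private:
    static Data *s_instance;
    char m_lastVowel = 0;
};

Data *Data::s_instance = nullptr;

// A JNI-style callback can then be written as a free function:
void onVowelBought(char c)
{
    Data::instance()->vowelBought(c);
}
```

This works here because the game only ever creates one Data object; with multiple instances you would need the reference-per-object mapping mentioned above.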

In my Activity class in Java, I implemented a method to handle the request for purchasing a vowel:

public static void buyVowel(char vowel)

And I also added a native callback method which I can call when I’ve received the asynchronous message that the vowel has been purchased:

private static native void vowelBought(char vowel);

The native keyword indicates that the method is implemented in native code. I’ll come back to the implementation of that later.

My buyVowel() method follows the documentation closely. The main part to note is the following snippet:

Bundle buyIntentBundle = m_instance.m_service.getBuyIntent(3,
                                                           m_instance.getPackageName(),
                                                           "vowel",
                                                           "inapp",
                                                           "" + vowel);

This code will create a Buy Intent for API version 3, the package name of my application (note that this is the application package in the Google Play store and AndroidManifest.xml, not the package namespace of the Java class), and the in-app product identified as “vowel”. In addition, I’m passing “inapp” as the type and I’m passing the actual vowel requested as the developer payload. The latter will be returned back to me along with the message of a successful purchase, so that I can easily inform my application of which letter was actually purchased.

The message informing my application whether the purchase was successful or not is delivered in the method onActivityResult(). In this method I can retrieve several pieces of information about the purchase in JSON format:

JSONObject jo = new JSONObject(purchaseData);
String sku = jo.getString("productId");
int purchaseState = jo.getInt("purchaseState");
String vowel = jo.getString("developerPayload");
String purchaseToken = jo.getString("purchaseToken");

I quickly verify that it’s the correct product and that it was successfully purchased (purchaseState == 0). If this is the case, I inform my native code of the purchase, and I immediately consume it:

if (sku.equals("vowel") && purchaseState == 0) {
    vowelBought(vowel.charAt(0));

    // Make sure we can buy a vowel again
    m_service.consumePurchase(3, getPackageName(), purchaseToken);
    return;
}

Consuming the purchase is very important in this case, as you will not be able to purchase the same product again later unless it has been consumed. So for consumable items, like these vowels, which you should be able to purchase an unlimited number of times, we must consume them immediately after they have been registered in the application. For permanent purchases (imagine if I also had a slightly more expensive product called “Free vowels forever”), you would skip this step. You could then later query Google Play for the product and it would tell you that it has already been purchased.

Finally, in order to be able to access the billing API, I need to add its interface to my project, as explained in the Android documentation. I copy the file IInAppBillingService.aidl into the subdirectory android-source/src/com/android/vending/billing.

AndroidManifest.xml
A few modifications are necessary to the default AndroidManifest.xml file. This is the file which describes your application to the device that is running it, and also to the Google Play store which needs the information to properly advertise it to the correct devices.

Like for all Android applications, I need to set an icon, a name, a package name, etc. In the source tree, I’ve left the package name empty. This is intentional, as you will need a unique package name for your instance of the application in order to register it in Google Play. Make sure you set this before building. I’ve also locked the screen orientation to “portrait”, because that’s how the application was designed.

Specifically for using the in-app purchase API, I need to declare that I am using the “BILLING” permission:

<uses-permission android:name="com.android.vending.BILLING"/>

If I neglect to add this, then my application will get an exception when trying to access the APIs.

In addition, I need to set my own Activity class as the main activity rather than Qt’s class:

<activity ... android:name="org.qtproject.example.hangman.HangmanActivity">

This ensures that the HangmanActivity class will be instantiated when the device launches the application.

The native code
All the Android-specific native code is in the data_android.cpp file. As mentioned, it needs a platform-specific implementation of the buyVowel() function:

void Data::buyVowel(const QChar &vowel)
{
    QAndroidJniObject::callStaticMethod<void>("org/qtproject/example/hangman/HangmanActivity",
                                              "buyVowel",
                                              "(C)V",
                                              jchar(vowel.unicode()));
}

The only thing this code does is issue a call to the Java method described in the previous section. It launches an asynchronous purchase request for the vowel; nothing more happens until the payment goes through and Java calls back into the native code.

In addition, we need to implement the native vowelBought() method, which will be called from Java when the purchase is successful:

static void vowelBought(JNIEnv *, jclass /*clazz*/, jchar c)
{
    Data::instance()->vowelBought(QChar(c));
}

This is just a regular C++ function, with the exception that it will always get a JNIEnv pointer as its first argument, and a jclass argument which is a reference to the declaring class in Java (since this is a static method). As you can see, it simply accesses the Data singleton and emits the vowelBought signal to register the purchased vowel.

Finally, the native method is registered using standard boilerplate when the library is loaded. Check the code to see the full thing.

Putting it in the store
Then we’ve reached the final step, which was to actually upload the APK to the Google Play market and add the products for purchase there. Note that you do not have to publish the application in order to test the in-app purchases: You can keep the APK in Draft mode and mark the products as “To be activated”, in which case you have to handle the distribution of the application yourself, but the Google accounts listed in the “LICENSE TESTING” section of your Developer Console Settings will be able to make test purchases once they have installed it. You can also publish a Beta version of your application in the store, in which case you can manage who will be able to download it and make test purchases using Google Groups.

I started by registering a listing for my application. Once this was done, and I’d filled out the necessary pieces of information, I had to upload the APK. You cannot register any products in the listing before you’ve uploaded an APK. (Make sure you sign the package with your private key before uploading it. The whole process of generating a key and signing the package can be done from inside Qt Creator, in the Project -> Run -> Deploy configurations.)

Once this has been done, I can add a product. I click on the “In-app products” tab, and select to create a new product. Then I fill out the necessary information:

Screenshot of registration process in Google Play

Google Play registration of my “vowel” product

I had to make sure I picked “Managed product” here, as this is the only type supported by API version 3. It means that Google Play will remember that the product was purchased for you, and you will need to explicitly consume it before you can purchase the same product again.

When the product has been added, I can add some more details:

Setup for the vowel product

You can add descriptions and pricing information to your products in the developer console.

I’ve added a short description of the item, and set the price to the minimum possible price (6 NOK which is approximately 1 USD). I also make sure to mark the product “To be activated” so that it can be purchased. When the application is published, the product will become activated automatically.

Done
And that’s it. I can now run the application on my devices and buy as many vowels as I want. Until the application is published into “Production”, no transactions will actually be carried through, so you can test your local build without fearing that you’ll run out of money.

But do note that if you decide to use Digia’s version of the application, then purchases are real, since Google Play has a set minimum price.

Good luck!





Posted in Android, C++, Qt, Qt in use, Qt Quick, Qt Quick 2, Qt Quick 2.0, Tutorial

New Scene Graph Renderer

Published Monday September 2nd, 2013

Few phrases are more misused in software, whether whispered in social channels, promised over the negotiating table, or shouted out loud in blogs, than the words “The next release will make everything better!”

So… I won’t say that.

But…

Qt 5.2 introduces a new renderer to the Qt Quick Scene Graph.

When we set out to do the scene graph some three years ago, one of my visions was that we would be able to really take advantage of OpenGL and have game-like performance. The renderer we have in Qt 5.0 went a long way towards that goal. For opaque content, we sort based on state and render content front-to-back to minimize GPU overdraw. In the playground/scenegraph repository, we have a different renderer which batches similar primitives together to reduce the number of draw calls (which implicitly also reduces state changes).
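The batching idea is simple to illustrate: if primitives are drawn in scene order, every material change costs a state change and a draw call, but sorting compatible primitives together lets runs of equal state collapse into one batch. A toy sketch of the counting involved (hypothetical material IDs, nothing from the real scene graph API; the real renderer must additionally respect stacking order and overlap for blended content):

```cpp
#include <algorithm>
#include <vector>

// Each primitive carries an ID for the GL state (shader, texture, ...)
// it needs. Consecutive primitives with the same ID can share a batch.
static int countBatches(const std::vector<int> &materialIds)
{
    int batches = 0;
    int current = -1;                  // sentinel: no state bound yet
    for (int id : materialIds) {
        if (id != current) {           // state change -> new draw call
            ++batches;
            current = id;
        }
    }
    return batches;
}

// Batching: reorder compatible primitives first, so equal states become
// adjacent and the naive count above collapses to one batch per state.
static int countBatchesSorted(std::vector<int> materialIds)
{
    std::sort(materialIds.begin(), materialIds.end());
    return countBatches(materialIds);
}
```

A scene drawn as {0, 1, 0, 1, 0, 1} needs six draw calls in order, but only two once sorted, which is the kind of reduction the green numbers below reflect.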

The new renderer in Qt 5.2 combines both of these techniques and also tries to identify non-changing parts of the scene so they can be retained on the GPU. Qt 5.2 also adds the texture atlas support which was previously located in the playground/scenegraph’s customcontext. This greatly helps batching of textured content. I think that with Qt 5.2, we are now pretty close to that goal.

A new doc article explaining the renderer in more detail has been added for the more curious, though a deep understanding of the renderer is not needed to write well-performing applications. However, I suspect many people will still find it interesting.

http://doc-snapshot.qt-project.org/qt5-stable/qtquick-visualcanvas-scenegraph-renderer.html


There are still a lot of other ideas that could be employed for those who want to have a stab at it. If you have ideas, ping “sletta” on IRC or clone and start coding.

Now some numbers:

Three of the benchmarks are available here: https://github.com/qtproject/playground-scenegraph/tree/master/benchmarks

  • Extreme Table contains a large static table with some animated content on top. It shows the benefit of the GPU retention.
  • List Bench shows a number of simultaneously scrolling lists with an icon, alternating background color, and two texts per cell.
  • Flying Icons contains over 3000 images which are being animated with a unique animation per item.

I also included the front-page of the Qt Quick Controls gallery example and Samegame. Samegame is played in “Zen” mode while clicking to the rhythm of “Where Eagles Dare” by Iron Maiden. (I know… very scientific)

The number of OpenGL draw calls and amount of traffic is measured using apitrace, an awesome tool if you’re debugging OpenGL. As can be seen by the green numbers, we’ve managed to cut down the number of glXxx calls quite a bit, especially for the cases where we have plenty of re-occurrence, meaning lists, tables and grids.

The amount of traffic per frame is also significantly improved. The best examples are the “ExtremeTable” and “ListBench” which are sampled at a frame where no new delegates were being added or removed. I’m quite happy that the “ListBench” comes out as a zero-transfer frame. There is of course some traffic, the draw calls themselves and a couple of uniforms; but no vertex data and no texture data, which is what the tool measures. “FlyingIcons” changes the entire scene every frame so nothing can be retained, so minimal difference is expected. Controls Gallery is mostly static, but has an animated progress bar which needs a new texture every frame. This is the majority of the 20+kb transfer. Samegame comes out pretty high, primarily because of its extensive use of particle systems. A lesson learned is that if you are on a tiny memory bus, limit your particles.

These are theoretical numbers, but they give a good idea that the new renderer is on the right track. Let us look at numbers from running on hardware. I’ve created tables out of the cases where the new renderer had the most impact. The full table of numbers is at the very bottom.

Note, these benchmarks are run without the new V4-based QML engine. This is because the V4 engine also affects the results and I wanted to focus on rendering only.

The rendering time is measured with vsync enabled, but excluding the swap. So when we see that the MacBook spends 16 ms rendering, it is actually being throttled while we are issuing OpenGL commands. When I looked at this in Mac OS X’s Instruments, I saw that the driver was spending a lot of time in synchronous waiting, i.e. we had stalling in the pipeline. With the Qt 5.2 renderer, the synchronous waiting while drawing is gone. This is good news, as lists and list-like content are quite common in UIs.

On both the MacBook with integrated graphics and the Nexus, the new renderer drastically reduces the time spent issuing OpenGL commands. It should also be said that the only reason the render times for the “ExtremeTable” did not go all the way to 0 is because of https://bugreports.qt-project.org/browse/QTBUG-32997

The high CPU load of “FlyingIcons” is primarily due to it running 3000+ animations in parallel, but as we can see from the time spent in the renderer, there is still a significant improvement.

Here are the rest of the numbers:

So this matches up pretty well with the theoretical results at the beginning. For reoccurring content, such as lists and grids, Qt 5.2 is quite an improvement over Qt 5.1. For those use cases we didn’t radically improve, we at least didn’t make them any worse.

Enjoy!



Posted in Graphics, OpenGL, Qt Quick 2.0

Qt Creator for Qt Enterprise users

Published Tuesday July 9th, 2013

Qt Creator 2.8 provides a couple of new value additions for Qt Enterprise users. We would like to emphasize that this does not restrict the usage of the tools in our open source offering. In this release the focus is on QML and Qt Quick support.

The new extension plugins to the Qt Quick Designer and the QML Profiler add new features on top of the existing plugins.

Qt Quick Designer including enterprise differentiators in action.

This means smoother Qt Quick application development in the designer by:

  • Inline editing of properties like text, color, and source in the form editor by double-clicking.
  • A visual editor for paths.
  • An editor for connections, bindings, and dynamic object properties.


Inline Editing in the Form Editor

The user can double-click objects in the form editor to edit their text, color, or source properties. This is quicker and more convenient than using the property editor.
Because some QML types, such as TextEdit, have several of these properties, you can also right-click objects to open the inline editors from a context menu. This feature also works on custom components if the properties follow the naming conventions of the Qt Quick items.


Path Editor

The visual path editor in Qt Quick Designer allows changing the path in a PathView much faster than in the text editor. You can move the control points to change the cubic Bézier curve. This speeds up the process of creating a path by an order of magnitude compared to writing it as text or importing it from other tools. It is also possible to add and remove segments from the curve, and PathAttribute elements defined in the text are preserved.

Connection View
The Connection View is a new pane in the sidebar, like the property editor and navigator. It allows the user to manage connections, bindings, and dynamic properties through a graphical interface. Each of these three functions has its own separate tab.


Connections
The connection tab.

The connections tab handles Connections elements: it binds them to a target item and executes an action by defining a signal handler. No need to edit QML code anymore; just trigger state changes based on clicked signals.


Bindings
The bindings tab.
Add, edit, and remove bindings in the bindings tab. You also get a nice overview of the existing bindings, which might have an impact on the layout or even the performance.


Dynamic Properties
The dynamic properties tab.

The tab for dynamic properties allows you to add additional properties to an item.
This makes it possible to follow good practice and define values like margins in one place, reusing them in other places through bindings instead of hard-coding them. Another use case is creating reusable components; in this case the properties of the root item define the interface.


QML Profiler
The QML Profiler in action

The QML Profiler has been extended with the ability to trace scene graph events and to profile the size of the pixmap cache used for images in QML. This is very valuable information, since memory consumption is typically related to image resources.
The SceneGraph row shows the events of the Qt Quick 2 scene graph renderer, split into GUI thread and render thread parts. This allows a detailed analysis of the scene graph’s behavior for a given application, helping you fine-tune your QML code to achieve the best performance.
The pixmap cache row shows the loading, adding, and removing of pixmaps from Qt Quick’s internal pixmap cache over time. This is especially helpful for Qt Quick applications in a resource-constrained setup with slow disk loading.




Posted in Customers, Declarative UI, Qt, Qt in use, Qt Project, Qt Quick, Qt Quick 2, Qt Quick 2.0, QtCreator, Releases

Preview of Qt 5 for Android

Published Wednesday March 13th, 2013

The first commit in the effort to port Qt 4 to Android was on Christmas Day, 2009: “Android mkspecs and semaphore” by BogDan Vatra.

On January 22nd, 2010, he committed “A small step for Qt, a giant leap for android” with a working graphics system plugin and could actually run Qt applications on an Android device. He uploaded a video to celebrate.

On February 20th, 2011, he announced the first usable release of Qt 4 for Android, under the moniker of Necessitas.

For the past 3+ years, BogDan and others have been (and still are) developing Necessitas in their spare time, and on November 8th last year, BogDan agreed to take his work into Qt 5 and submit the port to the Qt Project.

He pushed the first version of Qt 5 for Android to a WIP branch on January 4th, and recently it was integrated into the “dev” branch, meaning that it will become part of Qt 5.1 when it is released.

For this preliminary release, we are focusing on the developer experience, working to enable Qt developers to easily run and test their applications on Android devices. While there’s nothing preventing you from deploying your app to an app store with Qt 5.1, we recommend waiting until Qt 5.2 before you do that, as we’d like to put some more work into improving that experience: adding more options for how your app is deployed, more polish in general, and more support for Android APIs, both by allowing you to extend your app with Java code and by mapping them to C++ APIs, whichever makes the most sense.

On to the demos!

To start off, here’s a video of the Qt 5 Cinematic Experience demo running on (from left to right): a Nexus 4, an Asus Transformer Pad TF300T and a Nexus 7. The Cinematic Experience demo has quickly become our demo of choice at events, because it nicely shows a lot of the new graphical capabilities in Qt Quick 2, such as shader effects, particle effects, the new PathAnimation as well as the hardware-accelerated SceneGraph architecture underneath, which makes it possible to run all this at 60 fps.

Click to see video

In addition to the core parts of Qt, we also support the QML media player APIs in QtMultimedia. Here’s a nice video player written by Andy in QML, with fragment shader effects on top of the video, running on an Asus Transformer TF300:

Click to see video

To show off multi-touch support, here’s a simple hand painting demo running on a Nexus 4. This also shows the support for native menus:

Click to view video

The lowest Android API level supported by Qt 5 is API level 10, aka Android version 2.3.3. This means we can also have Qt apps running on reasonably priced devices, such as this Huawei Y100:

Click to view video

Here’s the overview of what we have right now:

  • Support for creating Qt Widgets and Qt Quick apps that run on Android devices.
  • Support for Android API level 10 (version 2.3.3) and up.
  • QML media player functionality in QtMultimedia.
  • A set of commonly used sensors in QtSensors.
  • Cross-platform features of Qt of course (including Qt Quick controls and QtGraphicalEffects.)
  • Developing and configuring apps in Qt Creator 2.7.
  • Deploying a test build to a device directly from Qt Creator.

In addition, we plan to soon support distributing the Qt libraries through the Ministro distribution tool, which lets several apps on a device share a set of Qt libraries, and which will be the primary way of deploying apps with Qt 5.1. Other than that, everything listed above is already available: just check out the wiki for instructions. Let us know if anything goes horribly wrong. We can usually be found in the #necessitas channel on the Freenode IRC servers.

What’s next, you ask? You can in fact help us decide! Both by reporting your bug findings and feature expectations to us, and by contributing your code. We will be working steadily on improving Qt 5 for Android, and would benefit greatly from your feedback. In the wiki, we are also compiling a list of devices where Qt has been verified to run. If you take the time to add devices you have tested to the list there (as well as any issues you have found), it would very much be appreciated :)

Finally: A big thanks to BogDan Vatra, Ray Donnelly and everyone else who has been contributing to the Necessitas project for the past years, as well as Qt 5 for Android in the past months. And thanks to everyone who will contribute in the future.

55 Comments


Posted in Qt, Qt Quick 2.0

Introducing QWidget::createWindowContainer()

Published Tuesday February 19th, 2013 | by

Qt 5 introduced a new set of OpenGL classes in Qt Gui and a new rendering pipeline for Qt Quick with the scenegraph. As awesome as these are, they were based on the newly introduced QWindow, making it very hard to use them in existing applications.

To remedy this problem, Qt 5.1 introduces the function QWidget::createWindowContainer(), which creates a QWidget wrapper for an existing QWindow, allowing it to live inside a QWidget-based application. Using QQuickView or QOpenGLContext together with widgets is now possible.

How to use

QQuickView *view = new QQuickView();
...

// Wrap the QWindow-based view in a widget so it can live in a layout.
QWidget *container = QWidget::createWindowContainer(view);
container->setMinimumSize(...);
container->setMaximumSize(...);
container->setFocusPolicy(Qt::TabFocus);

widgetLayout->addWidget(container);

How it works

The window container works by forcing the use of native child widgets inside the widgets hierarchy and will reparent the window in the windowing system. After that, the container will manage the window’s geometry and visibility. The rendering of the window happens directly in the window without any interference from the widgets, resulting in optimal performance.

As can be seen from the code-snippet above, the container can also receive focus.

Embedding the “Other Way”

This feature covers the use case where an application wants to port an existing view to either Qt Quick or the new OpenGL classes in Qt Gui. What about the opposite case, where an application’s main view is written with widgets, say QGraphicsView, and the application wants to keep it and rewrite the surrounding UI with Qt Quick? This is doable by keeping the main application QWidget-based and making each of the big UI blocks a QQuickView embedded in its own container.

Enjoy!

22 Comments


Posted in OpenGL, Qt, Qt Quick 2.0

Native-looking text in QML 2

Published Wednesday August 8th, 2012 | by

One of the comments in Morten’s blog post about desktop components in QML 2 was that the text looks out of place on Windows. This is because QML 2 uses a custom rendering mechanism called “distance fields” for the text which allows hardware accelerated drawing of transformable text. Yoann blogged about this a year back.

This is what the distance field text looks like in the desktop components:

Qt 5 desktop components rendered with distance field text

Qt 5 desktop components rendered with distance field text

While distance field text is perfect for UI components for which you want smooth animated movement, rotations and zooms, it will look different from the text rendered by native applications on Windows. This is especially true because the distance field text renderer does not apply grid-fitting hints to the glyphs, whereas this is the standard on Windows. Note that on e.g. Mac OS X, the standard is to draw the text without applying grid-fitting, so the distance field text will not look as out-of-place on this platform.

Grid-fitting means that the shapes of the glyphs are modified for the current font size and device resolution in order to make them look sharper. This is done by moving control points around e.g. to avoid features of the glyph being positioned between two pixels, which would cause the anti-aliasing to fill both those pixels, partially blending the font color with the current background in order to give the visual impression that the glyph touches half of each pixel. A sharper image can be made if the feature fills exactly one full pixel instead. The mechanism is especially effective for low pixel densities and small font sizes.

The downside of grid-fitting, in addition to certain typographical consequences, is that the text and its layout becomes unzoomable. This is because the modifications to the glyphs happen based on the target size of the glyph on screen. When you scale it, the shapes of the glyphs will change, making the shapes wobble, and the metrics of the text will also change, requiring a relayout of the text. If you zoom on a web page, for instance, the paragraph you wanted to see up close might have moved around significantly by the time you get to a text size you can comfortably read, and then you will have to pan around to find it again.

When using distance fields, we will render the glyphs without grid-fitting to make them scalable. An additional convenience of this is that we only have to cache the shape of each glyph once, whereas grid-fitted glyphs have to be cached per size. This saves some memory and also makes effects like animated zooms a lot faster, since the glyphs can be redrawn at every zoom level using the same distance field cache.

Here’s what the previous screen shot looks like if you scale the push button by a factor of two:

Distance field rendered text with zoomed button

Distance field rendered text with zoomed button

But while the distance fields have many advantages, they don’t cover the need for applications that look like they belong together with other Windows applications. For an application using desktop components, you can imagine this being a more important goal than having smoothly transformable text items. On a desktop machine running Windows, the memory cost of a glyph cache might not be the primary concern either.

I’ve been meaning to fix this for a while. The code for drawing text through the system back-end instead of the distance field renderer has been in the scene graph since the text nodes were originally written, but so far there has not been any convenient way for developers to choose it over the default. With change 6a16f63df4a51edee03556f841d34aad573870f2 to Qt Declarative, this option has now been added. On any text component in QML, you can now set the renderType property to NativeRendering in order to use the system back-end to rasterize the glyphs instead of Qt. This applies on any platform, although it has the largest effect on Windows, or on Linux when hinting is turned on in the system settings.

Text {
    text: "Some text"
    renderType: Text.NativeRendering
}

I’ve added the branch qt5-nativetext to the desktop components repository where the effect can be tested. The components in this branch will use the distance field renderer by default, but by setting the DESKTOPCOMPONENTS_USE_NATIVE_TEXT environment variable to “1”, you will be able to see the difference. Running qmlscene on Gallery.qml with this environment variable set yields the following result:

QML 2 widgets with native-looking text

QML 2 widgets with native-looking text

So the appearance is crisper and the text fits in on the platform, but scaling the text does not give the nice results we had before, and will instead look pixelated. See the following screen shot for comparison:

The effect of zooming native-looking text

The effect of zooming native-looking text

So the choice is yours. If you’re targeting Windows and you want your application to look like a standard, native Windows application, you can use the renderType property to achieve this. If you want the lean, flexible and transformable text of Qt, leave the property unaltered.

22 Comments


Posted in Qt, Qt Quick, Qt Quick 2, Qt Quick 2.0, Windows

Scene Graph Adaptation Layer

Published Wednesday August 1st, 2012 | by

Both the public documentation for the scene graph and some of my previous posts on the subject have spoken of a backend or adaptation API which makes it possible to adapt the scene graph to various hardware. This is an undocumented plugin API which will remain undocumented, but I will try to go through it here, so others know where to start and what to look for. This post is more about the concepts and ideas we have tried to address than the actual code, as I believe the code and the API will most likely change over time, while the problems we are trying to solve and the ideas on how to solve them will remain.

Some of these things will probably make their way into the default Qt Quick 2.0 implementation as the code matures and the APIs stabilize, but for now they have been developed in a separate repo to freely play around with ideas while not destabilizing the overall Qt project.

The code is available in the customcontext directory of ssh://codereview.qt-project.org:29418/playground/scenegraph.git

Renderer

When we started the scene graph project roughly two years ago, one of the things we wanted to enable was to make sure we could make optimal use of the underlying hardware. For instance, based on how the hardware worked and which features it supports, we would traverse the graph differently and organize the OpenGL draw calls accordingly. The part of the scene graph that is responsible for how the graph gets turned into OpenGL calls is the renderer, so being able to replace it would be crucial.

One idea we had early on was to have a default renderer in the source tree that would be good for most use cases, and which would serve as a baseline for other implementations. Today this is the QSGDefaultRenderer. Other renderers would then copy this code, subclass it or completely replace it (by reimplementing QSGRenderer instead) depending on how the hardware worked.

Example. On my MacBook Pro at the time (Nvidia 8600M GT), I found that if I did the following:

  1. Clear to transparent,
  2. render all objects with some opaque pixels front to back with blending disabled, while doing “discard” on any non-opaque pixel in the fragment shader, but writing the stacking order to the z-buffer,
  3. then render all objects with translucency again with z-testing enabled, this time without the discard,

I got a significant speedup for scenes with a lot of overlapping elements, as the time spent blending was greatly reduced and a vast amount of pixels could be ignored during the fragment processing. In the end, it turned out (perhaps not surprisingly) that “discard” in the fragment shader on both the Tegra and the SGX is a performance killer, so even though this would have been a good solution for my MacBook, it would not have been a good solution for the embedded hardware (which was the overall goal at the time).

On other hardware we have seen that the overhead of each individual glDrawXxx call is quite significant, so there the strategy has been to find different geometries that should be rendered with the same material and batch them together while still maintaining the visual stacking order. This is the approach taken by the “overlap renderer” in the playground repository. Kudos to Glenn Watson in the Brisbane office for the implementation.

Some other things that the overlap renderer does is that it has some compile-time options that can be used to speed things up:

  • Geometry sorting – based on materials, QSGGeometryNodes are sorted and batched together so that state changes during the rendering are minimal and also draw calls are kept low. Take for instance a list with background, icon and text. The list is drawn with 3 draw calls, regardless of how many items there are in it.
  • glMapBuffer – By letting the driver allocate the vertex buffer for us, we potentially remove one vertex buffer allocation when we want to move our geometry from the scene graph geometry to the GPU. glVertexAttribPointer (which is all we have on stock OpenGL ES 2.0) mandates that the driver takes a deep copy, which is more costly.
  • Half-floats – The renderer does CPU-side vertex transformation and transfers the vertex data to the GPU in half-floats to reduce the memory bandwidth. Since the vertex data is already in device space when transferred, the loss of precision can be neglected.
  • Neon assembly – to speed up the CPU-side vertex transformation for ARM.

If you are curious about this, one thing I would really like to see is the ability to detect the parts of a scene graph that are completely unchanged for a longer period of time and store that geometry entirely on the GPU as vertex buffer objects (VBOs), removing the vertex transfer altogether. I hereby dare you to crack that nut :)

And if you have hardware with different performance profiles, or if you know how to code directly in the language of the GPU, then the possibility is there to implement a custom renderer to make QML fly even better.

Texture implementation

The default implementation of textures in the QtQuick 2.0 library is rather straightforward. It uses an OpenGL texture with the GL_RGBA format. If supported, it tries to use the GL_BGRA format, which saves us one RGBA to BGRA conversion. The GL_BGRA format is available on desktop GL, but is often not available on embedded graphics hardware. In addition to the conversion, which takes time, we also use the glTexImage2D function to upload the texture data, and it takes a deep copy of the bits, which costs time as well.

Faster pixel transfer

The scene graph adaptation makes it possible to customize how the default textures, used by the Image and BorderImage elements, are created and managed. This opens up possibilities such as:

  • On Mac OS X, we can make use of the “GL_APPLE_client_storage” extension which tells the driver that OpenGL does not need to store a CPU-side copy of the pixel data. This effectively makes glTexImage2D a no-op and the copying of pixels from the CPU side to the GPU happens as an asynchronous DMA transfer. The only requirement is that the app (scene graph in this case) needs to retain the pixel bits until the frame is rendered. As the scene graph is already retained this solves itself. The scene graph actually had this implemented some time ago, but as I didn’t want to maintain a lot of stuff while the API was constantly changing, it got removed. I hope to bring it back at some point :)
  • On X11, we can make use of the GLX_EXT_texture_from_pixmap where available and feasible to directly map a QImage to an XPixmap and then map the XPixmap to a texture. On a shared memory architecture, this can (depending on the rest of the graphics stack) result in zero-copy textures. A potential hurdle here is that XPixmap bits need to be in linear form while GPUs tend to prefer a hardware specific non-linear layout of the pixels, so this might result in slower rendering times.
  • Use of hardware specific EGLImage based extensions to directly convert pixel bits into textures. This also has the benefit that the EGLImage (as it is thread unaware) can be prepared completely in QML’s image decoding thread. Mapping it to OpenGL later will then have zero impact on the rendering.
  • Pixel buffer objects can also be used to speed up the transfer where available

Texture Atlas

Another thing the texture customization enables is the use of texture atlases. The QSGTexture class has some virtual functions which allow it to map to a sub-region of a texture rather than the whole texture, and the internal consumers of textures respect these sub-regions. The scene graph adaptation in the playground repo implements a texture atlas so that a single texture id can be used for all icons and image resources. If we combine this with the “overlap renderer”, which can batch multiple geometries with identical material state together, it means that most Image and BorderImage elements in QML will point to the same texture and will therefore have the same material state.

Implementation of QML elements

The renderer can tweak and change the geometry it is given, but in some cases more aggressive changes are needed for certain hardware. For instance, when we wrote the scene graph, we started out using vertex coloring for rectangle nodes. This had the benefit that we could represent gradients, solid fills and the rectangle outline using the same material. However, on the N900 and the N9 (which we used at the time) the performance dropped significantly when we added a “varying lowp vec4” to the fragment shader. So we figured that for this hardware we would want to use textures for the color tables instead.

When looking at desktops and newer embedded graphics chips, vertex coloring adds no penalty and is the favorable approach, and also what we use in the code today, but the ability to adapt the implementation is there. Also, if we consider batching possibilities in the renderer, then using vertex coloring means we no longer store color information in the material and all rectangles, regardless of fill style or border can be batched together.

The adaptation also allows customization of glyph nodes, and currently has the option of choosing between distance fields based glyph rendering (supports sub pixel positioning, scaling and free transformation) and the traditional bitmap based glyph rendering (similar to what QPainter uses). This can then also be used to hook into system glyph caches, should these exist.

Animation Driver

The animation driver is an implementation of QAnimationDriver which hooks into the QAbstractAnimation based system in QtCore. The reason for doing this is to be able to tie animations more closely to the screen’s vertical blank. In Qt 4, the animation system is fully driven by a QTimer which by default ticks every 16 milliseconds. Since we know that desktop and mobile displays usually update at 60 Hz these days, this might sound ok, but as has been pointed out before, this is not really the case. The problem with timer based animations is that they will drift compared to the actual vertical blank and the result is either:

  • The animation advances faster than the screen updates leading to the animation occasionally running twice before a frame is presented. The visual result is that the animation jumps ahead in time, which is very unpleasant on the eyes.
  • The animation advances slower than the screen updates leading to the animation occasionally not running before a frame is presented. The visual result is that the animation stops for a frame, which again is very unpleasant on the eyes.
  • One might be extremely lucky and the two could line up perfectly, and if they did that is great. However, if you are constantly animating, you would need very high accuracy for a drift to not occur over time. In addition, the vertical blank delta tends to vary slightly over time depending on factors like temperature, so chances are that even if we get lucky, it will not last.

I try to illustrate:

Timer-driven animations

The image tries to illustrate how advancing animations based on timers alone will almost certainly result in non-smooth animation

The scene graph took an alternative approach to this by introducing the animation driver, which instead of using a timer, introduces an explicit QAnimationDriver::advance() which allows exact control over when the animation is advanced. The threaded renderer we currently use on Mac and EGLFS (and other plugins that specify BufferQueueing and ThreadedOpenGL as capabilities), uses the animation driver to tick exactly once, and only once, per frame. For a long time, I was very happy with this approach, but there is one problem still remaining…

Even though animations are advanced once per frame, they are still advanced based on the current clock time when the animation is run. This leads to very subtle errors which are in many cases not visible, but if we keep in mind that QML loaders, event processing and timers all fire on the same thread as the animations, it should be easy to see that the clock time can vary greatly from frame to frame. This can result in an object that should move 10 pixels per frame moving, for instance, 8, 12, 7 and 13 pixels over 4 frames. As the frames are still presented to screen at the fixed intervals of the vertical blank, this means that every time we present a new frame, the speed will seem different. Typically this happens when flicking a ListView: every time a new delegate is created on screen, that animation advance is delayed by a few milliseconds, causing the following frame to feel like it skips a bit, even though the rendering performance is perfect.

I try to illustrate:

Animations using predictive times vs clock times

Animations using predictive times vs clock times

So some time ago, we added a “time” argument to QAnimationDriver::advance(), allowing the driver to predict when the frame will be presented to screen and advance the animations accordingly. The result is that even though the animations are advanced at the wrong clock time, they are calculated for the time they get displayed, and the result is velvet motion.

A simple solution to the problem of advancing with a fixed time would be to increment the time with a fixed delta regardless, and Qt also implements this option already. This is doable by setting

QML_FIXED_ANIMATION_STEP=1

in the environment. However, the problem with this approach is that some frames take more than the vsync delta to render. This can be because there is a lot of content to show, because the app hooks in a GL underlay that renders a complex scene, because a large texture needed to be uploaded, because a shader needed to be compiled, or a number of other scenarios. Some applications manage to avoid this, but at the framework level, recovery in this situation needs to be handled gracefully. So when rendering a frame takes too much time, we need to adapt, otherwise we slow down the animation. For most applications on a desktop system, one could get away with skipping a frame and then continuing slightly delayed, but if every frame takes long to render, animations will simply not work.

So the perfect solution is a hybrid: something that advances with a fixed interval while at the same time keeping track of the exact time when frames reach the screen, and adapting when the two are out of sync. This requires a very accurate vsync delta, though, which is why it is not implemented in any of our standard plugins, and why this logic is pluggable via the adaptation layer. (The animation driver in the playground repo implements this based on QScreen::refreshRate().) That way, on any given hardware, you can plug in the right values and do the right thing.

And last, despite all the “right” things that Qt may or may not do, this still requires support from the underlying system. Both the OpenGL driver and the windowing system may impose their own buffering schemes and delays which may turn our velvet into sandpaper. We’ve come to distinguish between:

  • Non blocking – This leads to low latency with tearing and uneven animation, but you can render as fast as possible and current time is as good as anything. In fact, since nothing is throttling your rendering, you probably want to drive the animation based on a timer, as you would otherwise be spinning at 100% CPU (a problem Qt 5.0 has had on several setups over the last year). Qt 5.0 on Linux and Windows currently assumes this mode of rendering as it is the most common default setting from the driver side.
  • Double buffered w/blocking swap – This leads to fairly good results, and for a long time I believed this was the holy grail for driving animations. Event processing typically happens just after we return from the swap, and as long as we advance animations once per frame, they end up being advanced with current clock time at an almost fixed delta, which is usually good enough. However, because you need to fully prepare one buffer and present it before you can start the next, you have only one vsync interval to do animations and GL calls AND let the chip render the frame. The threaded render loop makes it possible to at least do animations while the chip is rendering (CPU blocked inside swapBuffers), but it is still cutting it a bit short.
  • 3 or more buffers w/blocking – Combined with predictive animation deltas and adaptive catch-up for slow frames, this gives perfect results. It has the added benefit that if the rendering is faster than the vsync delta, we can queue up ready frames. Having a queue of prepared frames means we are much more tolerant towards single frames being slow, and we can usually survive a couple of frames that take long to render, as long as the average rendering time is less than the vsync delta. The downside of this approach is that it increases touch latency.

So, we did not manage to come up with a perfect catch-all solution, but the scene graph does offer hooks to make sure that a UI stack on a given platform can make the best possible call and implement the solution that works best there.

Render Loop

The implementation inside the library contains two different render loops, one called QQuickRenderThreadSingleContextWindowManager and another called QQuickTrivialWindowManager. These rather long and strange names grew out of the need to support multiple windows using the QtQuick.Window module, and they were named window managers for that reason, but what they really are is render loops. They control when and how the scene graph does its rendering, how the OpenGL context is managed and when animations should run.

The QQuickRenderThreadSingleContextWindowManager (what a mouthful) advances animations on the GUI thread while all rendering and other OpenGL related activities happen on the rendering thread. The QQuickTrivialWindowManager does everything on the GUI thread as we did face a number of problems with using a dedicated render thread, particularly on X11. Via the adaptation layer, it is possible to completely rewrite the render loop to fit a given system.

One problem that QML has not solved is that all animations must happen on the GUI thread. The scene graph has no problem updating itself in another thread, for instance using the QSGNode::preprocess() function, but QObject based bindings need to have sane threading behavior, so these need to happen on the GUI thread. As a result, the threaded render loop is at the mercy of the GUI thread and its ability to stay responsive. Guaranteeing execution on the GUI thread every couple of milliseconds is hard to do, so jerky animations are still very much possible. The traditional approach has been to promote the use of threading to offload heavy operations from the GUI thread, but once an application reaches a certain complexity, forking everything off to other threads, including semi-complex JavaScript, becomes hard and at times unbearable. Some enablers allowing certain elements to animate regardless of what goes on in the application’s main thread are therefore very much needed.

To remedy this, I started playing with the idea that the GUI thread could instead be a slave of the render thread and that some animations could run in the render loop. The render loop in the playground repo implements the enablers for this, opening up for a potential animation system that runs there regardless of what the GUI thread is doing.

Conclusion

There are a lot of ideas here and a lot of work still to be done, and much of this does not “just work” as Qt normally does. Partially this is because we have been very focused on the embedded side of things recently, but also because graphics is hard and making the most of some hardware requires tweaks on many levels. The good news is that this API makes it at least possible to harness some of the ability on the lower levels when they are available, and it is all transparent to the application programmer writing QML and C++ using the public APIs.

Thank you for reading!

18 Comments


Posted in Performance, Qt Quick 2.0

New QML demos for Qt 5

Published Tuesday July 17th, 2012 | by

The QML core team down under gained a new friend for Qt 5. We finally have the privilege of working with a designer in our team and he’s been helping us in our quest to make QML the most designer friendly UI language. We have created a small video to show you the latest apps and games from our team. But remember that videos aren’t the main point of Qt demos, the source code for these demos is all available from qt-project.org and it’s BSD licensed. I’ll even sprinkle links throughout this post to show the code behind all the new features that have me excited.

First have a look at our new calculator demo, Calqlatr, which has been restyled like the other demos. From its humble few hundred LoC beginnings, SameGame is also looking like a real application now. The new particle effects look even better, and you have your choice of four game modes. This includes a mode that loads preset levels, also written in QML.

QML demos are really easy to modify and play with; SameGame selects a new theme per game mode already, so you can try out your own samegame designs with ease. We also have a new game, “Maroon in Trouble”, with an underwater theme inspired by the rich life at the Great Barrier Reef. The game is highly customizable with the towers implemented in QML, both for their appearance and their gameplay attributes.

When you watch the video of “Maroon in Trouble”, you’ll notice that it avails itself of some of the new visual embellishments available in QtQuick 2. The particle system allows bubbles and clouds to float all over the screen, while the new sprite elements made the game piece animations trivial to implement. As a bonus, the game has sound effects if you have the QtMultimedia module installed – a benefit of the modularization work in Qt 5.

The new visual effects aren’t just for games though. The completely redesigned Twitter search app, TweetSearch, uses sprites and shaders for much more subtle designs. The central logo has a touch of animation to keep the main screen from being static, and the rotating bars use a custom ShaderEffect to get that three-dimensional look (although you could use the Qt3D module instead). It also uses view transitions to populate in a more fluid fashion.

If you still can’t get the beautiful effects you want with all these new visual elements, QtQuick 2 also introduces the Canvas element for imperative painting. Especially good for graphing data, see it in action with the new StocQt stock chart viewer example.

The QML core team is really proud of how well QtQuick 2 has developed. This is especially true now that modules from the other Brisbane teams, like the team behind QtMultimedia, are finally inside Qt after the modularization efforts. I hope that our demos help other developers write even better games and apps.

14 Comments


Posted in Qt Quick 2.0