[FOSDEM 2014] Utilizing GPUs to accelerate 2D content
Speaker: Bas Schouten
Over the last 15 years, GPUs have gone from being a piece of hardware found almost exclusively on the machines of gamers to being present in almost every single desktop and laptop computer. This hardware presents opportunities to greatly improve power usage and performance for graphics applications. Over the last 5 years GPU utilization in the desktop application world for accelerating 2D graphics has slowly moved forward, however their intended use for video games also presents us with a number of limitations.
In this presentation I will talk about what GPUs are, why we want to use them, in what different ways they can be put to use, and some of the challenges we've encountered when using them at Mozilla. I will also try and touch on some of the technical details on the different tradeoffs that the most common algorithms present.
By anonymous 2017-09-20
Why it is hard
Popular font formats like TrueType and OpenType are vector outline formats: they use Bézier curves to define the boundary of each letter.
Transforming those outlines into arrays of pixels (rasterization) is too specific a task to be in OpenGL's scope, especially because OpenGL has no non-straight primitives (e.g. see Why is there no circle or ellipse primitive in OpenGL?)
The easiest approach is to rasterize the fonts ourselves on the CPU, then hand the resulting array of pixels to OpenGL as a texture.
OpenGL knows very well how to deal with arrays of pixels through textures.
We could rasterize the characters every frame and re-create the textures, but that is not very efficient, especially if the characters have a fixed size.
The more efficient approach is to rasterize all the characters you plan on using and cram them into a single texture,
then transfer that to the GPU once, and sample the texture with custom UV coordinates to pick the right character.
This approach is called a https://en.wikipedia.org/wiki/Texture_atlas and it can be used not only for fonts but also for other repeatedly used images, like tiles in a 2D game or web UI icons.
The Wikipedia picture of the full texture, which is itself taken from freetype-gl, illustrates this well:
I suspect that placing the characters into the smallest possible texture is an NP-hard problem, see: What algorithm can be used for packing rectangles of different sizes into the smallest rectangle possible in a fairly optimal way?
The same technique is used in web development to transmit several small images (like icons) at once, where it is called "CSS sprites": https://css-tricks.com/css-sprites/ There it hides the latency of the network instead of that of the CPU / GPU communication.
Non-CPU raster methods
There are also methods that do not rasterize to textures on the CPU.
CPU rasterization is simple because it uses the GPU as little as possible, but it is natural to ask whether the GPU's efficiency could be exploited further.
This FOSDEM 2014 video https://youtu.be/LZis03DXWjE?t=886 explains other existing techniques:
- tessellation: convert the font outlines into tiny triangles, which the GPU is then really good at drawing. Downsides:
- generates a large number of triangles
- O(n log n) CPU computation of the triangulation
- calculate the curves in shaders. A 2005 paper by Loop and Blinn put this method on the map. Downside: complex. See: Resolution independent cubic bezier drawing on GPU (Blinn/Loop)
- direct hardware implementations like OpenVG https://en.wikipedia.org/wiki/OpenVG. Downside: never became widely implemented, for some reason.
Fonts inside of the 3D geometry with perspective
Rendering fonts inside 3D geometry with perspective (as opposed to an orthogonal HUD) is much more complicated, because perspective can make one part of a character much closer to the screen and larger than the rest, making a uniform CPU discretization (e.g. raster, tessellation) look bad on the close part. This is actually an active research topic:
- What is state-of-the-art for text rendering in OpenGL as of version 4.1?
Distance fields are one of the popular techniques now.
The examples that follow were all tested on Ubuntu 15.10.
Because this is a complex problem as discussed previously, most examples are large, and would blow up the 30k char limit of this answer, so just clone the respective Git repositories to compile.
They are all fully open source however, so you can just RTFS.
FreeType looks like the dominant open source font rasterization library; it would allow us to use TrueType and OpenType fonts, making it the most elegant solution.
freetype-gl
Started as a set of examples combining OpenGL and FreeType, but is more or less evolving into a library that does the job and exposes a decent API.
In any case, it should already be possible to integrate it on your project by copy pasting some source code.
It provides both texture atlas and distance field techniques out of the box.
Does not have a Debian package, and is a pain to compile on Ubuntu 15.10: https://github.com/rougier/freetype-gl/issues/82#issuecomment-216025527 (packaging issues, some upstream), but it got better as of 16.10.
Does not have a nice installation method: https://github.com/rougier/freetype-gl/issues/115
Generates beautiful outputs like this demo:
Examples / tutorials:
- a NEHE tutorial: http://nehe.gamedev.net/tutorial/freetype_fonts_in_opengl/24001/
- http://learnopengl.com/#!In-Practice/Text-Rendering mentions it, but I could not find runnable source code
- SO questions:
Other font rasterizers
These seem less capable than FreeType, but may be more lightweight:
Anton's OpenGL 4 Tutorials example 26 "Bitmap fonts"
- tutorial: http://antongerdelan.net/opengl/
- source: https://github.com/capnramses/antons_opengl_tutorials_book/blob/9a117a649ae4d21d68d2b75af5232021f5957aac/26_bitmap_fonts/main.cpp
The font was created by the author manually and stored in a single .png file. Letters are stored in array form inside the image.
This method is of course not very general, and you would have difficulties with internationalization.
make -f Makefile.linux64
opengl-tutorial chapter 11 "2D fonts"
- tutorial: http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-11-2d-text/
- source: https://github.com/opengl-tutorials/ogl/blob/71cad106cefef671907ba7791b28b19fa2cc034d/tutorial11_2d_fonts/tutorial11.cpp
Textures are generated from DDS files.
For some reason Suzanne is missing for me, but the time counter works fine: https://github.com/opengl-tutorials/ogl/issues/15
glutStrokeCharacter
FreeGLUT is open source, so you can inspect how it works...
TrueType rasterization. Written by an NVIDIA employee. Aims for reusability. I haven't tried it yet.
ARM Mali GLES SDK Sample
http://malideveloper.arm.com/resources/sample-code/simple-text-rendering/ seems to encode all characters on a PNG, and cut them from there.
Lives in a separate tree from SDL, and integrates easily.
Does not provide a texture atlas implementation however, so performance will be limited: Rendering fonts and text with SDL2 efficiently
- How to do OpenGL live text-rendering for a GUI?
- Text/font rendering in OpenGLES 2 (iOS - CoreText?) - options and best practice?